• 0 Posts
  • 46 Comments
Joined 2 years ago
Cake day: July 6th, 2023

  • If we can’t say whether something is intelligent, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal underpinnings of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we’re developing on a so-far unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don’t know if we’re a few steps away from massive AI breakthroughs, and we don’t know whether we already have pieces of algorithms that closely resemble our brains’ own. Our experience of reality could very well break down into the simple inputs and outputs of an algorithmic loop; it’s our hubris that elevates it to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we’ve been down this road with animals before, claiming they don’t have souls or aren’t conscious beings, that because they don’t clearly match our intelligence in every aspect (even though they clearly feel, bond, dream, remember, and learn), theirs is somehow an inferior or less valid existence.

    You’re describing very fixable limitations of ChatGPT and other LLMs, limitations that exist mostly because of cost and hardware constraints, not algorithmic ones. On the subject of change, it’s already incredibly taxing to train a model, so of course continuous, uninterrupted training that more closely mimics our brains is currently out of the question, but it sounds like a trivial mechanism to put in place once the hardware or the training processes improve. I say trivial only in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task in itself. The fact that we can even compare a delusional model to a person with severe mental illness is already a big win for the technology, even though it’s meant as an insult.

    I’m not saying LLMs are alive, and they clearly don’t experience the reality we experience, but to say there’s no intelligence there because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations…is kind of stupid. My point is, intelligence might be hard to define, but it might not be as hard to crack algorithmically if it’s an emergent property, and enforcing this “intelligence” separation only hinders our ability to recognize whether we’re on the right path to achieving a completely artificial being that can experience reality. We clearly are, LLMs and other models are clearly a step in the right direction, and we mustn’t let our hubris cloud that judgment.


  • What I never understood about this argument is: why are we fighting over whether something that speaks like us, knows more than us, bullshits and gets things wrong like us, loses its mind like us, and sometimes seemingly seeks self-preservation like us isn’t enough to fit the very self-explanatory term “artificial…intelligence”? That name does not claim the entity is having an experience of the world as valid as other living beings’, it does not proclaim absolute excellence in all things done by said entity, and it doesn’t even really say what kind of intelligence it is. It simply says something has an intelligence of some sort, and that it’s artificial. We’ve had AI in games for decades; it’s not the sci-fi AI, but it’s still code taking in multiple inputs and producing a behavior as an outcome of those inputs, alongside whatever historical data it may or may not have. This fits LLMs perfectly. As far as I understand, LLMs are essentially at least part of the algorithm we ourselves use in our brains to interpret written or spoken inputs and produce an output. They bullshit all the time and don’t know when they’re lying, so what? Has nobody here run into a compulsive liar or a sociopath? People sometimes have no idea where a random factoid they’re repeating came from, or that it’s even a factoid, so why is it so crazy when the machine does it?
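    To make the game-AI comparison concrete: a classic enemy “AI” really is just a small state machine mapping inputs (plus prior state) to a behavior. A minimal sketch, with all names and thresholds invented for illustration:

```python
# Toy finite-state-machine "game AI": inputs + prior state -> behavior.
# The states and numeric thresholds here are made up for illustration.
class EnemyAI:
    def __init__(self):
        self.state = "patrol"  # current behavior

    def update(self, player_distance, health):
        # Transition rules: a pure function of the inputs.
        if health < 20:
            self.state = "flee"
        elif player_distance < 10:
            self.state = "attack"
        elif player_distance < 30:
            self.state = "chase"
        else:
            self.state = "patrol"
        return self.state

ai = EnemyAI()
print(ai.update(player_distance=50, health=100))  # patrol
print(ai.update(player_distance=8, health=100))   # attack
print(ai.update(player_distance=8, health=15))    # flee
```

    Nobody calls this “intelligent” in the sci-fi sense, but it has always fit comfortably under the label “game AI”, which is the point being made above.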

    I keep hearing the word “anthropomorphize” thrown around a lot, as if we can’t be bringing others up into our domain, all while refusing to even consider that maybe the underlying mechanisms that make us tick are not that special, certainly not special enough to grant us a whole degree of separation from other beings and entities, and that maybe we should instead bring ourselves down to the same domain as the rest of reality. The cold hard truth is, we don’t know that consciousness isn’t just an emergent property of various large models working together to present a cohesive image. If it is, would that be so bad? Hell, we don’t even really know whether we actually have free will or live in a superdeterministic world, where every single particle has followed a predetermined path since the very beginning of everything. What makes us think we’re so much better than other beings, to the point where we decide whether their existence is even recognizable?




  • I saw a brilliant explanation some time ago that I’m about to butcher back into a terrible one, so bear with me:

    Think about 2 particles traveling together. When one gets tugged, it in turn tugs the other one along. This tug takes some time, since one particle essentially “tells” the other particle to come with it, meaning there’s some level of information exchange happening between the two, and that exchange happens at the speed of light. The travel distance between the two particles is short and basically a straight line, so you essentially never notice this effect; it’s that fast.

    Now think about what happens when those 2 particles speed up. The information exchange still happens, and it still happens at the speed of light, but now that the particles are moving in some direction, the exchange seems (to the particles) to still go straight from particle A to particle B, while in reality it travels “diagonally”, since it has to cover the extra distance added by the particles’ motion. This is the crucial part: what happens when those particles approach the speed of light? The information exchange has to cover the very small distance between the particles, plus the added distance from traveling at near light speed. At first it’s easy to cover this distance, but eventually you’re having to cover the entire distance light travels in a given moment, PLUS the distance between the two particles, which can’t happen, since nothing can go faster than that speed.

    That’s essentially why you can never reach the speed of light, and why the more massive an object, the less speed it can achieve: all those particles have to communicate with each other, and that takes longer and longer the closer to the speed of light the whole object moves.

    See, this also answers what you’re asking: from the frame of reference of the particles, the information goes to them in a straight line, so time acts normally for them. From an external perspective, though, that information moves along a vector, taking a long time to reach the other particle since it has to cover the distance of near-light-speed travel in one direction, plus the distance between the two particles in another, for a total vector distance that is enormous rather than negligible. At some point you essentially never see the information reach the other particle; in other words, time for that whole object has slowed to a near halt. This explains why time feels normal for the party traveling fast: they can’t know they’re slowed down, since the information exchange is essentially the telling of time. The external observer, however, sees that slowdown happen, and in fact gets a compounded effect, since the particles also communicate their state to the observer at the speed of light, and the distance between the observer and the particles keeps changing.

    This also explains why the particles might see everything around them happening a lot faster than it should: not only does it take longer for updates to pass between them, but they’re also running head-on into the information from everything around them, essentially receiving information from external sources faster than from themselves, which causes the effect of seeing everything happen faster and faster, until it all seems to happen at once at the speed of light.
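    For anyone who wants the numbers: the “diagonal” picture above is exactly the textbook light-clock derivation, and it yields the precise slowdown factor. With the two particles separated by a distance $L$ perpendicular to the motion, the signal takes $\Delta t' = L/c$ in their own frame, while the external observer sees it cross the hypotenuse:

```latex
(c\,\Delta t)^2 = L^2 + (v\,\Delta t)^2
\quad\Longrightarrow\quad
\Delta t = \frac{L/c}{\sqrt{1 - v^2/c^2}}
         = \frac{\Delta t'}{\sqrt{1 - v^2/c^2}}
```

    As $v \to c$ the denominator goes to zero, so the external observer sees the exchange take arbitrarily long, which is the near-halt described above.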

    Here’s the guy who made it all click for me, since I’m pretty sure I tangled more than one of you up with this long read: https://youtu.be/Vitf8YaVXhc


  • I don’t hate AI, I hate the system that’s using AI for purely profit-driven, capitalism-founded purposes. I hate the marketers, the CEOs, the bought lawmakers, and the people with only a shallow understanding of this whole system and its implications who become part of it and defend it. You see the pattern here? Take AI out of the equation and the problematic system remains. AI should’ve been either the beginning of the end for humanity, in a Terminator sort of way, or the beginning of a new era of enlightenment and technological advancement. Instead we got a fast-tracked late-stage capitalism doubling down on dooming us all, for text we don’t have to think about writing, while burning entire ecosystems to achieve it.

    I use AI on a near daily basis and find it useful; it’s helped me solve a lot of issues and it’s a splendid rubber ducky for bouncing ideas around. I know people will disagree with me here, but there are clear steps towards AGI here which cannot be ignored: we absolutely have systems in our brains which operate in a very similar fashion to LLMs, we just have more systems doing other things too. Does anyone here actually think about every single word that comes out of their mouth? Has nobody ever said something they immediately had to backtrack on because they were lying for some inexplicable reason, or skipped too many words, slurred their speech, or simply didn’t arrive anywhere with what they were saying? Dismissing LLMs as advanced autocomplete ignores the fact that we’re doing exactly the same thing ourselves, just with some more systems in place to guide our yapping.
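    The “advanced autocomplete” framing can at least be illustrated. At its very simplest, next-token prediction is just sampling from counts of what followed each word before; a toy bigram sketch (nothing like a real LLM, which uses a neural network over tokens, but the same input-to-next-token shape):

```python
import random
from collections import defaultdict

# Toy bigram "autocomplete": record which word follows which, then sample.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def next_token(word, rng=random):
    # Pick a continuation seen in the training text; fall back to "the".
    options = follows.get(word)
    return rng.choice(options) if options else "the"

print(next_token("sat"))  # → "on" (the only continuation ever seen)
```

    Scale the counts up to a neural network over billions of documents and you get something that holds a conversation, which is roughly why “it’s just autocomplete” both is and isn’t a useful description.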


  • JGrffn@lemmy.world to No Stupid Questions@lemmy.world · Selling BTC or not..? · 2 months ago

    Bitcoin went from under like 5k in 2020 to over 100k in 2024. The problem isn’t Bitcoin, it’s people thinking they can easily outperform Bitcoin by buying into shitcoins that are clearly Ponzi schemes and thinking they’ll know when to get out. Ask me how I know.

    Meanwhile, a friend wasn’t tempted by the shitcoins, simply bought BTC and ETH and held, now he’s easily 4x’d his money by doing as close to nothing as possible and most importantly, not touching fucking shitcoins.

    Bitcoin isn’t inherently a gamble; a gamble is a gamble. Just like you can treat the S&P 500 as a retirement fund, or as the source of your next options gamble.
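    A quick back-of-the-envelope check of the figures above (the dates are approximate, so take the exact rate with a grain of salt):

```python
# Compound annual growth rate implied by "under 5k in 2020 to over 100k in 2024".
start, end, years = 5_000, 100_000, 5  # rough endpoints and span assumed

total_multiple = end / start              # 20x overall
cagr = total_multiple ** (1 / years) - 1  # annualized growth rate
print(f"{total_multiple:.0f}x total, ~{cagr:.0%} per year")
# → 20x total, ~82% per year
```

    That annualized figure is the benchmark the shitcoin buyers thought they could beat.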




  • I can give you my experience so far, seeing as the common criticisms of Linux usually boil down to unwillingness to try it, kernel-level anticheat, and Adobe products, and I…honestly don’t miss any of it. But I’m mostly a dev and a single-player games enjoyer, so there’s not much to miss, really.

    The speakers on my Razer blade laptop (running EndeavourOS, btw) stopped working randomly, but I’m not convinced it wasn’t my fault since I did have to work on the laptop internals for unrelated reasons and might have screwed something up.

    My webcam on my desktop, a Logitech Brio, has been acting up for a couple of weeks on Bazzite: the microphone keeps sort of dying, and I have to unplug and re-plug the webcam to get a working mic. Also, the audio quality on my Sony XM5s keeps dropping to a low-quality codec, mostly when I re-plug the webcam, though it’s happened at random times before. I have to go change the codec in the audio settings every now and then because of it.

    Monitor brightness can sometimes behave weirdly, not going back to a brighter setting after auto-dimming.

    Games with kernel anticheat don’t let me play online.

    That has mostly been it, to be honest. There’s a microscopic learning curve with Bazzite since it’s immutable, so I have Flatpaks for most things and “figure it out” for anything else, but other than that, it’s just better than Windows ever was. If you run into an issue, you’re most likely going to solve it with a quick online search, or by consulting the eldritch hallucinations of OpenAI or whichever model you prefer.




  • JGrffn@lemmy.world to Fediverse@lemmy.world · *Permanently Deleted* · edited · 2 years ago

    Usually a React dev, have been some other stuff, but generally yeah, websites. Anything from resort chain websites to complex internal applications. Unit tests were optional at best in most jobs I’ve been at. I’ve heard of jobs where they’re pulled off, but from what I’ve seen, those are the exception and not the rule.

    Edit: given the downvotes on my other comment, I should add that this is both anecdotal and unopinionated on my behalf. My opinion on unit testing is “meh”: if I’m asked to do tests, I’ll do tests; if not, I won’t. I wouldn’t go out of my way to implement them on, say, a personal project or most work projects, but if I were tasked with leading a project that could clearly benefit from them (e.g. fintech, data security, high-availability operation-critical tools), I wouldn’t think twice about it. Most of what I’ve worked on, however, has not been that operation-critical. The few things that were critical in my work experience would sometimes be the only code being unit tested.
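    For the operation-critical cases mentioned above, fintech-style money handling is the classic example of where a unit test earns its keep. A minimal sketch with a hypothetical rounding helper (the function and its behavior are illustrative, not from any real codebase):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_cents(amount: str) -> Decimal:
    """Hypothetical money-rounding helper: 2 places, half-up, no float drift."""
    return Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# pytest-style tests: each pins down a behavior a naive float version gets wrong.
def test_half_up_rounding():
    # float round(2.675, 2) yields 2.67, because 2.675 isn't exact in binary.
    assert to_cents("2.675") == Decimal("2.68")

def test_addition_is_exact():
    # 0.1 + 0.2 != 0.3 in floats; Decimal keeps cents exact.
    assert to_cents("0.10") + to_cents("0.20") == Decimal("0.30")

test_half_up_rounding()
test_addition_is_exact()
```

    Two tiny tests like these are cheap, and they catch exactly the class of bug that matters in that kind of code, which is the difference between testing as a chore and testing with a point.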


  • JGrffn@lemmy.world to Fediverse@lemmy.world · *Permanently Deleted* · 2 years ago

    To be fair, I’ve yet to have a job that actually pulls off unit testing. Most either don’t bother or just do the bare-minimum grunt work to force tests to pass. Most friends in my field have had pretty much the same experience. Unit tests can be just a chore with little to no real benefit. Maybe an open-source project that actually cares about its code can pull it off, but I wouldn’t bat an eye if they never got to it.




  • There’s also what another comment pointed out. It’s not so much that most of us are stupid, but that we’re not really equipped for the internet as a species. We get bombarded with too much crap from all directions, get stuck in echo chambers, and don’t really fact-check; even when we do, we can’t fact-check everything that’s thrown at us 24/7. It’s a lot easier not to care, or to care too much without substantiating your beliefs.

    For example, Covid wasn’t the first time the anti-mask, anti-vax, conspiracy-theorist, all-around crazy movement popped its head out. It wasn’t the first time money beat forethought. It wasn’t the first for much of the negative stuff we saw, and yet for me it marked the moment I lost hope for the future of our species. After all, how can we hope to deal with something as huge and hard to see as climate change if we can’t even believe in the existence of a virus that’s actively killing us? Are they all stupid for not putting in some effort to prevent the virus from spreading and killing millions? Am I stupid for thinking they would? Am I stupid for losing hope from listening to all these stories of people fighting masks and vaccines? How many people worldwide actually fought back and resisted?

    You see it in my own words: I’m sort of convinced the crazies got riled up, and for sure in some parts of the world they did, but the scope of the internet spreads every sentiment on the matter to every corner of our interconnectedness before we’re even aware it’s happening. All of a sudden we’re seeing conclusions from all sides without checking how they got there or how many people actually believe them; we pick one side, maybe skim another, and decry the rest as insane and sometimes even malevolent. These Republicans surely want their voters dead, or at the very least are too stupid to understand the dangers of the virus; this Bill Gates guy surely wants everyone microchipped, or at the very least wants the medical world in his hands; these Chinese fellows surely developed and released the virus, or at the very least let it slip through their fingers. How am I supposed to know, or care, about all of it? How is any of us? Is it our personal responsibility to know and verify every fact we can? Spread awareness and fact-check everything? Just shut up and not get involved? What the fuck do we do, what can we do? Do we fight dissenting voices online? Do we march in the streets over beliefs we might not fully grasp, nor could we?

    We’re just a bit too overloaded with everything to do a good job as a species about anything. At least that’s what I think, at least for the individuals that make up our species. Whatever you choose to believe, whatever actions you choose to take in response, someone somewhere will see you and think you’re an absolute idiot… And, I think, there’s not much to do about it.


  • JGrffn@lemmy.world to Microblog Memes@lemmy.world · like the old days · edited · 2 years ago

    I’m gonna add a thing here about psychedelics being amazing and at the same time horrifying. Don’t go looking for them thinking ego death will be anything less than death itself. Even though you come back, the you that goes in doesn’t really come back out. There are also lower doses, where the worst-case scenario is still a bad trip and potentially months of PTSD.

    Still the most positively life-changing experience you could ever have on earth IMO, but not necessarily a fun one, at least not always. Now go, meet the light entities… Hopefully with a trip sitter.



  • You’re absolutely right and it is something I sometimes fail to account for since it nourishes hopelessness in me. I do, however, believe that such empathy is developed and not something you’re born with. You see it in varying degrees by how much someone cares for their families, friends, their community, and really even themselves. Some just care about themselves, some care about their peers but not their communities, and some don’t care about themselves but would bend over backwards for others. Empathy for lives beyond our own species is something that would be nurtured just like empathy for other humans is.

    When I talk about our options as a species, I’m inclined to believe that most of our species leans towards empathic feelings for lives beyond our own kind. It may just be hope that I’m reflecting in my comments, but it is also an evolutionary advantage for us to develop such empathy as we further develop our ability to morph this world to our needs and wants, since we depend on other species for almost everything.

    Maybe I’m intertwining the necessities of our species with our individual feelings about those necessities, but I’d believe this moral conflict would surface for most of us, with its intensity varying greatly from person to person. My previous comment mostly wonders about the possibility that a great number of us start to develop such moral conflict over more than just domesticated species or cute mammals.

    With regards to the trolley problem, you’re right. By profiting from the atrocities of others, I’m a part of those atrocities. That’s a fact of more than just harvesting farm animals; it touches our economies, our climate, our biodiversity, our social norms and behaviors towards outsiders and minorities, as well as our digital lives. It’s a cop-out to wash my hands of such actions and only hold myself responsible for my direct actions regardless of those of others that may benefit me, and that’s why I said it was a cognitive dissonance, one that I just have to live with by my own choosing.