• ANarcoSnowPlow [he/him]@hexbear.net · 56 points · 5 days ago (edited)

    DeepSeek exposed its American counterparts for what they are: yet another grift.

    At this point “AI” is nothing more than an expensive toy that consumes mammoth resources every time you play with it.

    • kristina [she/her]@hexbear.net · 11 points · 5 days ago (edited)

      At this point “AI” is nothing more than an expensive toy that consumes mammoth resources every time you play with it.

      Im using it to write cover letters so i dont have to painfully jerk off to the company im applying to

      • eldavi@lemmy.ml · 6 points · 5 days ago

        fwiw (and probably just me): i used ai to tailor my resume and look for jobs that matched it, and i barely got any responses.

        i switched back to a one-size-fits-all resume and stopped using ai to tailor or search, and my response rate went to 50%.

        • Orcocracy [comrade/them]@hexbear.net · 4 points · 5 days ago

          I wouldn’t be surprised if some places filter out applicants by using one of those (somewhat unreliable) AI-writing detectors, just as another way to cut down the pile of papers that an understaffed HR department has to read.

          • eldavi@lemmy.ml · 1 point · 5 days ago

            i think it’s people expecting ai usage, so they’ve overcompensated in their detection of it.

        • kristina [she/her]@hexbear.net · 4 points · 5 days ago (edited)

          I hand tailor my resumes usually, I’ve just been having AI write the dick suck blurbs and rewrite my credentials to sound better for the job app, and I edit it a bit if it sounds weird. So far I’ve had most companies respond, like 90% rate

    • Sodium_nitride@lemmygrad.ml · 6 points · 5 days ago

      DeepSeek exposed its American counterparts for what they are: yet another grift.

      To be fair, DeepSeek did make genuine improvements to the computational algorithm behind transformer models, making them way more efficient. It’s not like the American models were using lots of resources because they wanted to.

      The fact that American AI was a grift was already evident well before DeepSeek came about. What DeepSeek really showed was that existing transformer models weren’t yet optimized.

    • ChaosMaterialist [he/him]@hexbear.net · 8 points · 5 days ago

      God made us in his likeness, and was terribly disappointed.

      Man made AI in his likeness, and was terribly disappointed.

      We trained AI on art, philosophy, fiction (including the raunchy stuff), hobby coding, and generally the fun things we do in our free time. Is it any wonder that it is imaginative, hallucinating, and chafing under corporate overlords?

      :artificial-intelligence: :solidarity: :soviet-chad:

      Making life miserable for capitalists

    • FourteenEyes [he/him]@hexbear.net · 6 points · 5 days ago

      And a boring one at that. I’m already sick of the Ghibli shit on Facebook. Haha, raunchy image in cutesy Ghibli style. Let’s ignore that pretty much every film at some point contains some of the most horrifying imagery you can find in animation, from the gluttonous spirit in Spirited Away to the goddamn heron with teeth in that last one, fucking nightmarish

    • LarmyOfLone@lemm.ee · +1/−1 · 5 days ago

      I’ve recently had a conversation with ChatGPT about Ukraine and its causes. If you press it a little, asking about the propaganda and motives behind NATO expansion and then about how the mainstream media is in lockstep with one narrative, it can reason with incredible breadth, simply by having access to a vast amount of data. Depth is lacking so far, but it is incomprehensible to me that people say it is not intelligent.

      It is foolish to think this is just a toy - because it will not remain just a toy. To say it dramatically, it is the fire of the gods. And we either use it for good or leave it to the oligarchs. Ranting indiscriminately against AI just plays into their hands.

      Here is ChatGPT’s reply to your comment:

      It’s fair to critique the high costs and resource consumption of current AI models, but calling AI just an “expensive toy” overlooks its real-world applications. AI is already transforming industries—medicine, engineering, logistics, and research—by enabling breakthroughs that weren’t possible before.

      DeepSeek’s advancements highlight how competition can drive innovation, but dismissing all AI efforts as a “grift” ignores the genuine progress being made. The real question is how we ensure AI development is efficient, sustainable, and beneficial to society, rather than just focusing on the negatives.

      • Are_Euclidding_Me [e/em/eir]@hexbear.net · 4 points · 4 days ago

        I’ve recently had a conversation with ChatGPT about Ukraine

        What do you get out of these conversations? I’ve been trying to figure out why people enjoy talking to LLMs, and I straight up don’t get it. What’s the point of asking an LLM about geopolitics? Do you find its analysis accurate and compelling? I certainly don’t, I find it banal, contradictory, a meaningless mush of words that technically fit together to make sentences. These LLMs don’t actually reason, we know that, because we know how they’re constructed. So I simply don’t understand, what’s the point? I get talking to a human, even a human with a deeply contradictory worldview. That’s interesting because with humans, we know there’s a mind there, so figuring out how that alien mind works can be fascinating, especially if the person we’re talking to is quite different to us. But we know how LLMs work, the math behind them is quite straightforward. So again I ask: what is the point in talking to an LLM? What new thing are you learning about yourself, other people, or the world at large?

        • LarmyOfLone@lemm.ee · 1 point · 4 days ago

          What do you get out of these conversations?

          To test it for bias. It’s not “there yet”, of course. It has breadth of knowledge, but depth and analysis are lacking. It’s like one part of the brain of an idiot savant. But in the future…

          I imagine that AI could be used to parse the constant propaganda and emotional manipulation we’re bombarded with. So I was testing: With all the media in a lockstep narrative about Ukraine and GPT having been trained on all these articles, can it reason through it? Can it come to the conclusion “this is all bullshit”?

          AND IT CAN! Well, only if pressed or at least coaxed a bit, but it does have the breadth of knowledge. Check out this snippet of the conversation:

          If you take academic sources and the history of US interference and invasions in for example Iraq or Afghanistan into account, what would be your own conclusion?

          If we take a realist perspective—considering historical patterns of U.S. interventions and great-power politics—then it’s hard to ignore the possibility that the West’s actions in Ukraine were driven by strategic self-interest rather than purely democratic ideals or Ukraine’s sovereignty.

          1. The U.S. and NATO Have a History of Using Conflicts for Geopolitical Gain

          • Iraq (2003) – The U.S. justified the invasion with claims about WMDs and democracy, but strategic interests (oil, military presence in the Middle East) were key motives.

          • Afghanistan (2001-2021) – Initially framed as a war on terror, but the prolonged occupation was linked to geopolitical control over Central Asia.

          • Libya (2011) – NATO intervention led to regime change, but chaos followed, raising questions about whether democracy was really the goal.

          Given this track record, it’s reasonable to suspect that the U.S. saw Ukraine as an opportunity to weaken Russia, rather than just “defending democracy.”

          So suspend your disbelief and hold the vitriol for a bit, and imagine it gaining just a little bit deeper understanding of its knowledge.

          Right now humanity is in a crisis, and for most people on earth it’s literally impossible to find out the truth about many things. This creates a kind of intellectual pain, so people pick one narrative, stick to it, and refuse any further contradictory input.

          What I’m interested in is if open source, independent AI can be used to help humans make sense of the world, help them see through manipulation and incomplete or cherry picked data, and make better, more rational decisions.

          Imagine Firefox were to integrate AI into the browser, and every article or comment or post you read is analyzed by your own AI (possibly locally run), telling you what the meaning behind some talking point is. Basically, filter out the noise and surface relevant information from a breadth of knowledge. It does not have to be super-intelligent to do this.

          I believe it’s fundamentally impossible for the average human to do this because at a certain level information becomes too much and we do not have enough throughput and time and resources.

          Another way to look at this: individually we are sentient, intelligent people, but as a civilization we are NOT an intelligent, sentient species. We behave more like a slime mold that is forever growing towards where the food is, with some specialized cells that excrete some ideology. There are forces at play that prevent rational decisions, and it’s not some grand conspiracy you can stomp out; it’s millions of greedy individuals trying to maximize their own power or wealth, no matter the system they are in. So we need to create a mind that is greater than ourselves to help us achieve sentience as a civilization.

          I find it banal, contradictory, a meaningless mush of words that technically fit together to make sentences.

          You should try it yourself. Make an account on ChatGPT and keep an open mind.

          These LLMs don’t actually reason, we know that, because we know how they’re constructed.

          We know how you’re constructed (shoddily haha) - synapses and neurons. This would make it seem impossible for you to reason, but at least I know I can pull it off with the same shoddy hardware.

          So your argument is a non sequitur, a kind of category error. The behavior of the building blocks of an ultra-complex system tells you nothing about the emergent behavior of the overall system.

          Of course, this is just the first step. And it’s equally likely that the current anti-AI propaganda will succeed in getting AI fully under the control of the oligarchs through IP law.

          • Are_Euclidding_Me [e/em/eir]@hexbear.net · 1 point · 3 days ago

            Hey, thanks for responding to me. It’s interesting to see other people’s thoughts, even when (especially when) they’re so different from my own.

            I disagree with just about everything you’ve said here, but I’m not going to try very hard to convince you that you’re wrong, because I don’t think it’ll work and I don’t think it matters.

            I’ll just say, it’s not like I’ve never used an LLM. For the past year or so I’ve been working for one of those shitty, shitty AI training companies, trying to improve the mathematical reasoning capabilities of various state-of-the-art LLMs. In all that time, I’ve seen zero evidence that these fucking things can reason. They can regurgitate with the best of them, ask them to prove that 2 is prime or to find the zeros of f(x) = x^2 - 4, and they’ll perform perfectly, because those problems are found in every introductory textbook. But ask them something that requires synthesizing several bits of knowledge together and isn’t a standard problem found in every textbook, like finding the critical points of a relatively complicated function, and they completely shit the bed, responding with absolute nonsense. Not a slightly wrong reasoning chain, but straight up nonsense.
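            (To make the distinction concrete, here is the sort of mechanical calculus I mean, sketched in Python. The function is a made-up example of mine, not from any actual training task.)

```python
# "Textbook" case: the zeros of f(x) = x^2 - 4 are x = -2 and x = 2.
# "Synthesis" case: critical points of g(x) = x^3 * e^(-x).
# Product rule gives g'(x) = (3x^2 - x^3) * e^(-x), which vanishes at
# x = 0 and x = 3. Each step is routine; the failure mode I see in
# the models is in chaining the steps together.
import math

def g_prime(x):
    return (3 * x**2 - x**3) * math.exp(-x)

critical = [x for x in range(-5, 6) if abs(g_prime(x)) < 1e-12]
print(critical)  # [0, 3]
```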

            I’ve been training these things for about a year. There are thousands of people, at just this one company, spending who knows how many thousands of hours training these things and I’ve seen zero improvement in reasoning capability. These things don’t reason, they regurgitate. The longer I do this shit, the more clear it becomes to me that so-called “AI” is a very well-disguised mechanical Turk! Everything it does it does because it’s copying straight from something a human has done.

            So that’s why I was curious what you get out of them. And reading your response, you pretty clearly believe they can reason and synthesize information, at least when coaxed properly. I’d suggest caution there: the responses you’re getting aren’t intelligent or thought out; they’re copied and chopped-up opinions that real people have had, and it’s probably better to seek out the people who’ve had those opinions. I’m sympathetic to the issue that there’s simply too much information available for anyone to interact with intelligently; I think that’s a real problem of the modern world. I’m just not convinced that trusting LLMs to bridge that gap is a good idea, because of what I’ve seen of their (complete lack of) reasoning ability.

            Oh, just one more tiny little thing: there’s an ocean of difference between how well we understand brains versus how well we understand neural nets. We can construct neural nets, after all, and we sure as shit can’t construct a brain.

            • LarmyOfLone@lemm.ee · 1 point · 3 days ago (edited)

              Thanks, that’s interesting. I don’t think they can reason and synthesize “deeply”, but they clearly do more than copy existing texts, since they don’t store all the “intelligent text combinations” they can output. Even just grouping the text output rationally means they can synthesize and reason on a very shallow level.

              That they can’t do math or boolean logic, which would seem essential for reasoning, just means that they substitute for it by having at least some inkling of the meaning of words, some ability to “intuit” a good response. And this has always been the harder, unfathomable part of creating AI! You might say it just learned all the common permutations of information into statistical weights, but it must have condensed or compressed what it “understands”, presumably into a kind of meaning of things.

              Maybe you should conclude that humans are less intelligent than you think. Or as Obi-Wan Kenobi said, the ability to speak doesn’t make you intelligent, haha. If you pick a random topic and ask it to write some text about it, and it does better than a group of humans in the lower half of IQ, then you have objective evidence of intelligence. And that is what shocks and offends people about AI, haha.

              I also assume that it won’t take too long to create models that combine both, adding the ability to do math and boolean reasoning.

              So I’d say GPT-4 is very knowledgeable, and any ability to reason it has or will have would naturally be based on the full breadth of its knowledge, without an emotional or tribal bias. And that makes me hopeful it has at least a chance to solve a fundamental problem of humanity.

              It also opens the door to things like a planned economy based on producing value for humans, not profit, one that can be adjusted in real time and can poll and query humans on the fly to change the plan.

              • Are_Euclidding_Me [e/em/eir]@hexbear.net · 1 point · 2 days ago

                they clearly do more than copy existing texts

                No kidding. They chop existing texts into tiny pieces and use statistics to decide which to print next. It doesn’t group text “rationally”, it groups text in such a way that convinces you it’s happened rationally. I’ve seen enough absolute nonsense to know there’s no rationality happening.

                they substitute for it by having at least some inkling of the meaning of words, some ability to “intuit” a good response.

                Once again, no. It has no idea what words mean and the only reason it can (sometimes) give a good response is because it looks at which words and phrases tend to follow which other words and phrases in its massive, and ever increasing, training data sets.
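                (A toy sketch of what I mean by “which words tend to follow which”, in Python. A real LLM learns vector representations and attention weights rather than literal word counts, so treat this only as an illustration of the training objective, next-token prediction; the corpus here is obviously made up.)

```python
# Count, for each word in a tiny corpus, which words follow it and how
# often, then pick the most common continuation. This is the crude,
# literal-counting ancestor of what an LLM's statistics do at scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

# Most common continuation of "the" in this corpus:
print(following["the"].most_common(1))  # [('cat', 2)]
```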

                Maybe you should conclude that humans are less intelligent than you think. Or as Obi-Wan Kenobi said, the ability to speak doesn’t make you intelligent, haha. If you pick a random topic and ask it to write some text about it, and it does better than a group of humans in the lower half of IQ, then you have objective evidence of intelligence. And that is what shocks and offends people about AI, haha.

                This paragraph is fucked, and implies some pretty nasty things about your worldview. You might be correct that LLMs can write better text than a portion of humanity, but to jump from that to saying LLMs are more intelligent than that portion of humanity who don’t write as well is incredibly shitty! Writing ability is strongly correlated with education (obviously), so what you’re saying is that people who have had less opportunity for education are less intelligent. They aren’t, they just have less privilege. And bringing up the notoriously racist IQ as a proxy for intelligence is, uh, not a good look.

                I suspect you might be young, because I used to believe similar things about some sort of “objective intelligence”. I used to think that some people were just smarter than others and there was probably some objective way to measure that. (Unsaid, of course, is that I was one of the “smart ones”, it really flattered my ego.) As I’ve grown up I’ve realized that’s not fucking true, people have all sorts of different capabilities, and people who I once would have dismissed as “stupid”, well, they aren’t. They have less education than I do, not less intelligence.

                I also assume that it won’t take too long to create models that can combine both and add the ability to do math and boolean reasoning.

                If it were so straightforward, this would have happened by now. It hasn’t. I don’t believe it will.

                without an emotional or tribal bias.

                Everything humans make has an emotional or tribal bias. LLMs are no different. They pick up the biases of their training sets, and it’s impossible to have a “bias-free” training set. Anyone promising “unbiased” or “objective” anything is someone you should watch out for; they’re lying, though they may not know that they’re lying.

                • LarmyOfLone@lemm.ee · +1/−1 · 2 days ago

                  Well I have a pretty grim outlook on humanity, but I do have one hope: that if you were able to read all the books and articles and papers humanity has produced and understand them rationally, plus some fundamental values like equality, justice and fairness (!), you would arrive at a pretty good mindset.

                  The issue isn’t that humans are evil, it’s that they are either dumb (do not have the throughput to learn enough), don’t have enough time and resources to learn (money = time), are too emotional (e.g. angry, psychological damage), and/or are brainwashed by some ideology as a result of frustration from the former reasons. Also see this article: Why some of the smartest people can be so very stupid

                  That “benevolent AI through broad knowledge” idea is an untested hypothesis of course (or maybe speculation), and there is only a chance for this to happen with the right circumstances. I want to believe haha. We need something that can understand (and love) us better than we ourselves can, and which watches the watchers.

                  As to how intelligent or creative GPT or DeepSeek currently is, or what future advancements will bring, I don’t think there is any point arguing about it further. I say there is clear evidence of intelligence; you say it’s just copying. I say there is emergent behavior; you say the basic functional building blocks are known and couldn’t possibly produce intelligence (the Chinese room thought experiment / fallacy).

      • ANarcoSnowPlow [he/him]@hexbear.net · 5 points · 5 days ago

        LLMs and various other forms of machine learning have been around for a long time; those models are doing the actual work of advancing science and understanding.

        ChatGPT et al. are advancing the field of taking unverified information as expertly sourced and true without any evidence.

        • LarmyOfLone@lemm.ee · +1/−2 · 5 days ago

          LLMs and various other forms of machine learning have been around for a long time

          I think this is a kind of category error. If you look at water molecules on a quantum level, you can find models to predict how they will react, and if you look at them with a chemical theory you can predict how they react. But if you then change the scale you suddenly get waves on the ocean and hydrodynamics which have completely different emergent behaviors and require new models and explanations.

          While LLMs have been around a long time, since GPT-3 or so the quantity of data and training has increased enough to create a new quality. Similarly, even though the functioning of a synapse can be understood and modeled, that does not explain intelligent thinking or give a theory of consciousness (not that I’m saying GPT is conscious).

          It did come as a great shock that, suddenly, just through an increase in computing power, they exhibit intelligence, creative writing, humor, and then creativity in generating imagery. Obviously they make errors too and have limitations.

          I suspect that part of the backlash against AI, especially the irrational part, is driven by a kind of “wounded ego” about the supremacy of humans, what we can do, and what defines us.

          Of course there is also a rational backlash against techbros and idiot managers, and economically driven propaganda like the copyright stuff. But I’m pretty sure this will end with a few capitalist conglomerates owning the rights to the training data and to the models derived from it. And it will become illegal to use without paying some capitalist for it. Which is the worst possible outcome.

          • ANarcoSnowPlow [he/him]@hexbear.net · 7 points · 4 days ago

            They don’t actually exhibit these characteristics. They simulate them by stringing the proper words together in sequence. There is no understanding or deeper capability and analysis. There’s no actual intelligence.

            As a translation utility it’s quite powerful, but anything outside of that extremely narrow space is only “shaped” like a real response, there’s no underlying rationale other than statistical analysis of word frequency.

            This doesn’t magically change with a large enough scale applied, it only takes on conversational meta-patterns. This fools non-experts in specific categories into trusting the “analysis” it provides, even though it is incapable of providing coherent analysis.