• chunes@lemmy.world · +9/−11 · 1 day ago

    Again with this idea of the ever-worsening ai models. It just isn’t happening in reality.

    • EldritchFemininity@lemmy.blahaj.zone · +3/−1 · 11 hours ago

      It has been proven over and over that this is exactly what happens. I don’t know if it’s still the case, but ChatGPT was strictly limited to training data from before a certain date because the amount of AI content after that date had negative effects on the output.

      This is very easy to see because an AI is simply regurgitating algorithms built from its training data. Any biases or flaws in that data become ingrained in the model, so it outputs more flawed data; that output then gets scraped into the next model's training set, which ingrains the flaws even deeper, and so on until the results are bad enough that nobody wants to use them.
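      The feedback loop described above can be sketched with a toy simulation (this is an illustration of the general idea, not anyone's actual training pipeline): treat "training" as fitting a mean and standard deviation to samples, then let each generation train only on samples drawn from the previous generation's fit. Over generations the estimate drifts away from the original distribution purely from compounding sampling error.

      ```python
      import random
      import statistics

      # Toy model-collapse loop: generation 0 is the "real" data
      # distribution; every later generation fits itself only to
      # output sampled from the previous generation's model.
      random.seed(0)

      mu, sigma = 0.0, 1.0  # the original data distribution
      for gen in range(1, 6):
          # "train" on 200 samples of the previous model's output
          samples = [random.gauss(mu, sigma) for _ in range(200)]
          mu = statistics.fmean(samples)
          sigma = statistics.stdev(samples)
          print(f"gen {gen}: mean={mu:+.3f} stdev={sigma:.3f}")
      ```

      With real LLMs the state is billions of parameters rather than two numbers, but the mechanism is the same: each generation can only preserve, never recover, information about the original data it was trained away from.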

      Did you ever hear that story about the researchers who had 2 LLMs talk to each other and they eventually began speaking in a language that nobody else could understand? What really happened was that their conversation started to turn more and more into gibberish until they were just passing random letters and numbers back and forth. That’s exactly what happens when you train AI on the output of AI. The “AI created their own language” thing was just marketing.

    • cley_faye@lemmy.world · +8/−1 · 16 hours ago

      Not only is it actually happening, it's well researched and mathematically proven.

    • pulsewidth@lemmy.world · +18 · 24 hours ago

      The same reality where GPT5’s launch a couple months back was a massive failure with users and showed a lot of regression to less reliable output than GPT4? Or perhaps the reality where most corporations that adopted AI reported this year that they found no benefit and have given up?

      LLMs are good tools for some uses, but those uses are quite limited and niche. They are, however, a square peg being crammed into the round hole of ‘AGI’ by Altman etc. while they put their hands out for another $10bil - or, more accurately, while they make trade swap deals with MS or Nvidia or any of the other AI ouroboros trade partners that hype up the bubble for self-benefit.

    • theneverfox@pawb.social · +2/−3 · 24 hours ago

      People really latched onto the idea, which was shared with the media by people actively working on how to solve the problem.