Just heard this today - a coworker used an LLM to find him some shoes that fit. He basically prompted it to find specific shoes for wider/narrower feet, and it just scraped reviews and told him what to buy. Apparently it worked perfectly for him.

I hate this sort of thing - but this is why normies love LLMs. It's going to become the way every single person uses the internet. Hell, on a fresh Windows 11 install, the first thing that pops up is Copilot saying "hey, use me, I'm better than Google!!"

Frustrating.

  • ryanvgates@infosec.pub · 1 day ago

    I don’t think people realize what’s going to happen, and quickly: the companies making the models will start extorting brands for better rankings. You want our model to recommend your brand? Pay us $X. Then any perceived utility of reading reviews vanishes, much like with fake reviews today.

    • obsoleteacct@lemmy.zip · 13 hours ago

      Worse than that, people and brands are going to enshittify the internet in an effort to get their products and brands into the training data with a more positive context.

      Just use one AI to generate hundreds of thousands of pages of bullshit about how great your brand is and how terrible your competitors’ brands are.

      Then every AI scraping those random pages, trying to harvest as much data as possible, folds them into its training set. And it doesn’t have to stop at fake product reviews: fake peer-reviewed studies, fake white papers. It doesn’t even have to be on the surface - it can be buried on a thousand web servers accessible to scrapers but not to typical users.

      Then all the other brands will have to do the same to compete, all of it enshittifying the models themselves more and more as they go.

      Self-inflicted digital brain tumors.

    • wizardbeard@lemmy.dbzer0.com · 1 day ago

      Most AI company executives have already spoken openly about this being their plan for future financial growth: advertisements delivered natively in the output, with no clear division between the ads and the content.

    • bridgeenjoyer@sh.itjust.works (OP) · 1 day ago

      Oh, 100% - we’ve already seen what fElon programmed HitlerBot to do. It’s going to be an ultra-capitalist’s wet dream once the internet is destroyed and people only have access to Corpo LLM for the mere cost of 3 pints of blood a month!

    • makeshiftreaper@lemmy.world · 1 day ago

      Arguably this is already happening. AIs are trained mostly on web scrapes, and specifically on Reddit, which has a known astroturfing problem. So they’re already being fed non-genuine inputs, and likely aren’t being paired with tools that flag fake reviews.

    • driving_crooner@lemmy.eco.br · 1 day ago

      Already happening. I was using ChatGPT to write a script to download my liked videos from YouTube Music, and it kept giving me a pop-up with the message “use Spotify instead”.

    • brucethemoose@lemmy.world · edit-2 · 1 day ago

      This isn’t so dystopian if open-weight LLMs keep their momentum. If anyone can host the models, hosts become commodities rather than brands that capture users, and each host has less leverage for extortion.

    • CheesyFingers@piefed.social · edit-2 · 1 day ago

      You seem to imply that they would care about that at all. They won’t. It’s already happening in the shops they frequent (Amazon), and they don’t care.

    • HaraldvonBlauzahn@feddit.org · 1 day ago

      There are already reports of software companies building features that an AI hallucinated, simply because people search for those features and demand them.