• ignirtoq@fedia.io · 19 hours ago

    The first statement is not even wholly true. While training does take more power, executing the model (called “inference”) still takes much, much more power than non-AI search algorithms, or really any traditional computational algorithm besides bogosort.

    Big Tech weren’t doing the best they possibly could to transition to green energy, but they were making substantial progress before LLMs exploded onto the scene, because the value proposition was there: traditional algorithms were efficient enough that the PR gain from the green energy transition offset the cost.

    Now Big Tech have for some reason decided that LLMs represent the biggest game of gambling ever: the first to find the breakthrough to AGI will win it all and completely take over all IT markets, so they need to consume as much power as they can get away with to maximize the probability that that breakthrough happens by their engineers.

    • ch00f@lemmy.world · 18 hours ago

      Yeah, I ran some image generators on my RTX 2070, and it took a solid minute at full power to generate one image. Sure, it’s not a crazy amount of energy, but it’s not like it’s running on your iPhone.

        • lime!@feddit.nu · 10 hours ago

          there aren’t many games that tax your gpu like inference, where it’s pegged for the entire time. i have a power usage tracker on my desktop because my gpu is stupidly power-hungry, and it uses way more power doing inference than playing a modern graphics-intensive game.
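          The kind of power tracking described above can be sketched with a small polling script. This is a hypothetical example, assuming an NVIDIA GPU with `nvidia-smi` available on PATH; the function names (`parse_power_draw`, `sample_gpu_power`) and the sampling parameters are illustrative, not from the comment.

```python
# Hypothetical sketch: poll GPU power draw while inference (or a game) runs,
# assuming an NVIDIA GPU with `nvidia-smi` on PATH.
import subprocess
import time


def parse_power_draw(csv_line: str) -> float:
    """Parse a watts value from `nvidia-smi --format=csv,noheader,nounits` output."""
    return float(csv_line.strip())


def sample_gpu_power(samples: int = 10, interval_s: float = 1.0) -> float:
    """Poll instantaneous power draw and return the average in watts."""
    readings = []
    for _ in range(samples):
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        readings.append(parse_power_draw(out))
        time.sleep(interval_s)
    return sum(readings) / len(readings)


if __name__ == "__main__":
    # Run this while your workload is active to compare inference vs. gaming.
    print(f"average draw: {sample_gpu_power():.1f} W")
```

          Running it once during inference and once during a game would give the comparison the commenter describes.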

      • Match!!@pawb.social · 11 hours ago

        you can get a very small generator running on a modern phone if you want a grainy 400x400 piece of anime trash.