Archive link for anyone who hits the paywall: https://archive.is/09ZtS

(I didn’t hit the paywall myself, but The Verge is in my NoScript blacklist)

They boast of having hired five slop specialists who chose the least-bad shots out of 70,000 prompts.

They have something like $50 billion in yearly revenue; they can’t pay real people to make an ad? That’s literally peanuts for them.

  • Naz@sh.itjust.works · 3 days ago

    Hemlo. Am AI “expert.”

    It takes around 307.2 Wh to generate 8 seconds of AI video.

    (8 s × 24 fps = 192 frames; 192 frames × 12 s of compute per frame × 480 W ÷ 3,600 s/h = 307.2 Wh)

    70,000 prompts is therefore about 21.5 MWh.
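
    A quick sanity check of that arithmetic (a sketch; the 12 s/frame and 480 W figures are my own local numbers, not anything published):

    ```python
    # Back-of-envelope energy estimate for one 8-second AI video clip.
    seconds_of_video = 8
    fps = 24.0
    compute_seconds_per_frame = 12   # assumed GPU time per frame
    watts = 480                      # assumed sustained GPU draw

    frames = seconds_of_video * fps                          # 192 frames
    wh_per_clip = frames * compute_seconds_per_frame * watts / 3600
    print(wh_per_clip)               # 307.2 Wh per clip

    total_mwh = 70_000 * wh_per_clip / 1_000_000
    print(round(total_mwh, 1))       # 21.5 MWh for 70,000 prompts
    ```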

    They are very likely not saving any money or time by doing this.

    • NoneOfUrBusiness@fedia.io · 3 days ago

      But are they paying for this energy, or are the AI companies paying for it while waiting for the golden goose that will make it all worth it?

      • danielton1@lemmy.world · 3 days ago

        No, the ones who are paying for the energy are the people who live near whatever data centers received the slop requests.

      • mushroommunk@lemmy.today · 3 days ago

        Well, the Microsoft filings hint that OpenAI lost $11.5 billion last quarter, so I’m going with “they’re waiting for the golden goose.”

    • Ulrich@feddit.org · 3 days ago

      It takes around 307.2 Wh to generate 8 seconds of AI video.

      That’s fucking insane. Do you have a source for that you can share?

      • Naz@sh.itjust.works · 3 days ago

        Try running a local video generation model like DALL-E 3 on your machine, or just generate a few frames sequentially, say 96, in FLUX at 1024×1024.

        Video generation is a lot harder on GPUs than single image gen.

        My own local draw is 480 W on consumer-level hardware. Enterprise-grade gear can be roughly 5× more efficient (see: Nvidia H200 at 600 W), depending on optimizations, load balancing, and the chipset, but overall it’s still a pretty gigantic compute task to generate even a five-minute video from scratch.

    • TheTechnician27@lemmy.world · 2 days ago

      Okay, so:

      • Getting your figure from your specific home setup. Cool methodology you fail to mention until later.
      • Admitting later on that modern datacenter GPUs get much higher compute efficiency than your home setup (where you pull “5x” out of your ass, but we’ll run with it).
      • Assuming the videos generated on average were 12 seconds, which is a wild-ass assumption if you see how often the actual ad cuts.
      • Implicitly failing to account for economies of scale on the electricity itself.
      • Assuming specifically DALL-E 3, which even within its own line has been superseded since this March by GPT-4o.

      Like I know you didn’t list a final price, but if you’re suggesting this runs at 5× the compute efficiency of your home setup, that’s about 4.5 MWh (again, your assumptions, and we’ll even keep that baseless 12-second prompt output average). Then, assuming a comically high rate of 15¢ per kWh at this scale (that’s more like a household consumer rate), that’s 4,500 kWh × $0.15, or six hundred seventy-five dollars, to render. If you think $700-ish is even a drop in Coca-Cola’s advertising budget, you might be delusional.
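
      Spelling that out (a sketch under your own assumptions: the 21.5 MWh total, the claimed 5× efficiency, and the 15¢/kWh rate):

      ```python
      # Cost check under the above assumptions.
      total_mwh = 21.5 / 5          # claimed 5x efficiency -> 4.3 MWh
      kwh = 4_500                   # rounded up to 4.5 MWh, as in the text
      price_per_kwh = 0.15          # deliberately high household-style rate
      print(total_mwh)              # 4.3
      print(kwh * price_per_kwh)    # 675.0 -> about $675 to render everything
      ```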

      Did you learn to become an expert in basic arithmetic before you got your expert degree in AI-ology? I’m not a fan of the proliferation of genAI, so it hurts me to point out how ridiculous what you’re saying is.

      • Naz@sh.itjust.works · 2 days ago

        Yeah, $700 isn’t even a drop in their budget, I agree; the issue is that they could have just used a team to render an actual commercial.

        Sifting through 70,000 generations likely cost them labor-hours regardless, and throwing away 5 MWh (to 21 MWh at the very worst end of the estimates) on top of that seems like a waste of time and energy.

        If they’re hiring people to make a CGI advert, why not just … have CGI people make the advert?

        I’m not sure of the difference in hourly pay between a CGI artist and a “routine AI video sifter/rater”, but on sheer guesswork I’d say the process is likely a net negative (in terms of quality for the money spent on the project).

        Alternatively, I could just be completely wrong and the future of advertising is everyone just shooting out AI-genned adverts at Mach 10.

        • TheTechnician27@lemmy.world · 2 days ago

          Don’t forget that manual review is not inherently required for all 70,000 prompts. GenAI also classifies images, and it’s probably a reasonable workflow within this studio’s capabilities to, say, mindlessly create a bunch of minor variations of the same thing, feed a sample of the first N frames of each video into a classifier model, and tell it to mark anomalous-looking ones as probably bad and set them aside.

          Obviously a GPT classifier has zero understanding of what that actually means, but you can probably filter out a sizable chunk of prompts this way – heuristic reason being that an AI-generated video tends to decohere over time as it feeds on itself, so the initial frames are probably the best you’re going to get.

          You’ll have tons of false positives and won’t filter out all of the bad ones, but who cares about false positives when you can generate another one with a change of the prompt? This method I made up just now in five seconds is probably well-surpassed by a rapidly developing knowledge base about how to microoptimize the shit out of GenAI.
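
          A minimal sketch of that kind of triage pass (hypothetical names throughout; `score_frames` stands in for whatever classifier such a studio might actually use):

          ```python
          # Hypothetical pre-filter: cheaply set aside obviously-bad generations
          # before any human review. False positives are fine; just re-roll.
          def score_frames(frames) -> float:
              """Stand-in for a real anomaly/quality classifier; returns 0..1."""
              raise NotImplementedError

          def triage(videos, n_frames=16, threshold=0.5):
              keep, set_aside = [], []
              for video in videos:
                  sample = video[:n_frames]      # early frames decohere least
                  if score_frames(sample) >= threshold:
                      keep.append(video)
                  else:
                      set_aside.append(video)    # marked "probably bad"
              return keep, set_aside
          ```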

      • yetAnotherUser@discuss.tchncs.de · 2 days ago

        There are a couple more assumptions that may make the estimated costs worse than reality:

        • It assumes each frame is independently generated. In reality, the model may use keyframes and interpolation, which would reduce computation costs.
        • It assumes all 70,000 prompts were fully generated. But since AI generation is deterministic, taking a low-resolution, low-framerate sample of each prompt to discard the 99% that is trash would be an easy way to save a lot of resources.

        I am neither an AI expert nor in charge of creating AI videos, though, so both of these suggestions may not reflect reality.
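
        For what it’s worth, a preview-then-rerender pass based on the second point might look like this (purely hypothetical API, and it assumes, as above, that a cheap low-resolution sample is predictive of the full render):

        ```python
        # Hypothetical two-pass workflow: cheap previews for all prompts,
        # full-quality renders only for the survivors.
        def generate(prompt, seed, width, height, fps):
            """Stand-in for a real video-generation call."""
            raise NotImplementedError

        def preview(prompt, seed):
            return generate(prompt, seed, width=256, height=144, fps=6)

        def full_render(prompt, seed):
            return generate(prompt, seed, width=1920, height=1080, fps=24)
        ```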

        • ZDL@lazysoci.al · 1 day ago

          Degenerative AI is the precise opposite of deterministic. It’s stochastic.

          • yetAnotherUser@discuss.tchncs.de · 1 day ago

            No, it’s deterministic. If you control the source of random numbers (such as with a seed), you will always get the same result from the same prompt.

            Computers are mathematically incapable of true randomness. Even stochastic sampling draws from a source of random numbers, and that source is deterministic if you control it.
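
            A minimal illustration in plain Python (the same principle applies to any model sampler that accepts a seed):

            ```python
            import random

            def sample(seed):
                rng = random.Random(seed)   # controlled source of randomness
                # "Stochastic" sampling: a weighted random choice of tokens.
                return rng.choices(["a", "b", "c"], weights=[5, 3, 2], k=8)

            # Same seed -> identical output, every single run.
            assert sample(42) == sample(42)
            ```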

            • ZDL@lazysoci.al · 18 hours ago

              So let me translate this from technobabble to English.

              If you explicitly make it non-random, it’s non-random.

              Duh.