Archive for who gets paywall https://archive.is/09ZtS
(I didn’t get paywall but the verge is in my noscript blacklist)
They boast of having hired 5 slop specialists who chose the least bad shots out of 70,000 prompts
They have something like $50 billion yearly revenue, can’t pay real people for an ad? Literally peanuts for them.



Okay, so:
Like I know you didn’t list a final price, but if you’re suggesting this runs at 5x the compute efficiency of your home setup, that’s about 4.5 MWh (again, your assumptions, and we’ll even keep that baseless 12-second prompt output average). Assuming a comically high rate of 15¢ per kWh at this scale (that’s more like a household consumer rate), that’s 4,500 kWh × $0.15, or six hundred seventy-five dollars, to render. If you think $700-ish is even a drop in Coca-Cola’s advertising budget, you might be delusional.
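The arithmetic above is easy to sanity-check (using the same assumptions from the comment: 4.5 MWh total, 15¢/kWh consumer rate; neither figure is mine):

```python
# Back-of-envelope check of the $675 figure.
# Assumptions are from the comment above, not measured values:
#   - 4.5 MWh total render energy (5x the hypothetical home-setup efficiency)
#   - $0.15 per kWh (a household consumer rate, high for datacenter scale)
energy_mwh = 4.5
energy_kwh = energy_mwh * 1000        # 4,500 kWh
rate_usd_per_kwh = 0.15
cost = energy_kwh * rate_usd_per_kwh
print(f"${cost:,.2f}")                # → $675.00
```

Even at the 21 MWh worst-end estimate mentioned further down, the same math only gets you to $3,150 in electricity.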
Did you learn to become an expert in basic arithmetic before you got your expert degree in AI-ology? I’m not a fan of the proliferation of genAI, so it hurts me to point out how ridiculous what you’re saying is.
Yeah, $700 isn’t even a drop in their budget, I agree; the issue is with just using a team to render an actual commercial.
Sifting through 70,000 generations likely cost them labor-hours regardless, and throwing away 5 MWh (up to 21 MWh at the very worst end of the estimates) on top of that seems like a waste of time and energy.
If they’re hiring people to make a CGI advert, why not just … have CGI people make the advert?
I’m not sure of the difference in hourly pay between a CGI artist and a “routine AI video sifter/rater”, but on sheer guesswork, I’d have to say the process likely comes out a net negative (in terms of quality for money spent on the project).
Alternatively, I could just be completely wrong and the future of advertising is everyone just shooting out AI-genned adverts at Mach 10.
Don’t forget that manual review is not inherently required for all 70,000 prompts. GenAI also classifies images, and it’s probably a reasonable workflow within this studio’s capabilities to, say, mindlessly create a bunch of minor variations of the same thing, feed a sample of the first N frames of each video into a classifier model, and tell it to mark them as probably bad and put them to the side if they look anomalous.
Obviously a GPT classifier has zero understanding of what that actually means, but you can probably filter out a sizable chunk of prompts this way – heuristic reason being that an AI-generated video tends to decohere over time as it feeds on itself, so the initial frames are probably the best you’re going to get.
You’ll have tons of false positives and won’t filter out all of the bad ones, but who cares about false positives when you can generate another one with a change of the prompt? This method I made up just now in five seconds is probably well-surpassed by a rapidly developing knowledge base about how to microoptimize the shit out of GenAI.
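The triage workflow described above could be sketched something like this. To be clear, this is my own made-up illustration, not anything the studio has described: `looks_anomalous` is a hypothetical stand-in for a real vision-classifier call, and the 60% rejection rate is invented.

```python
import random

def looks_anomalous(frames):
    """Hypothetical stand-in for a real image-classifier call.

    A real pipeline would send the frames to a vision model; here we
    just fake a verdict (invented 60% rejection rate for illustration)."""
    return random.random() < 0.6

def triage(video_ids, sample_frames=4):
    """Cheaply pre-filter generations before any human looks at them.

    Heuristic from the comment above: AI-generated video tends to
    decohere over time as it feeds on itself, so the first frames are
    the best you're going to get. If even those look anomalous, set the
    generation aside without human review. False positives are cheap:
    you can always generate another one with a tweaked prompt."""
    kept, discarded = [], []
    for vid in video_ids:
        frames = [f"{vid}:frame{i}" for i in range(sample_frames)]
        (discarded if looks_anomalous(frames) else kept).append(vid)
    return kept, discarded

kept, discarded = triage([f"gen{i:05d}" for i in range(70_000)])
print(len(kept) + len(discarded))  # all 70,000 accounted for
```

The point isn’t the specific filter, just that only the survivors of a pass like this would ever reach the five human sifters.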
There are a couple more assumptions that may make the estimated costs worse than reality:
Though I am neither an AI expert nor in charge of creating AI videos so both of these suggestions may not reflect reality.
Degenerative AI is the precise opposite of deterministic. It’s stochastic.
No, it’s deterministic. If you control the source of random numbers (such as with a seed), you will always get the same result from the same prompt.
Computers are mathematically incapable of true randomness. Even stochastic sampling requires a source of random numbers, and that source is deterministic if you control it.
So let me translate this from technobabble to English.
If you explicitly make it non-random it’s non-random.
Duh.