

“AI model unlearning” is the equivalent of saying “removing a specific feature from a compiled binary executable”. So, yeah, basically not feasible.
But the solution is painfully easy: you remove the data from your training set (i.e., the source code) and re-train your model (recompile the executable).
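As a minimal sketch of that "exclude, then retrain" idea (all names here — `is_infringing`, `retrain`, the toy corpus — are hypothetical illustrations, not any real training pipeline):

```python
# Hypothetical sketch: "unlearning" by filtering the training set and
# retraining from scratch, rather than editing the trained model.

def is_infringing(example, blocked_sources):
    """Flag an example whose source must be removed."""
    return example["source"] in blocked_sources

def retrain(corpus):
    """Stand-in for a real training run: a trivial word-count 'model'."""
    model = {}
    for example in corpus:
        for word in example["text"].split():
            model[word] = model.get(word, 0) + 1
    return model

# Toy corpus; in reality this is the full training dataset.
corpus = [
    {"source": "licensed", "text": "public domain text"},
    {"source": "scraped_news", "text": "copyrighted article body"},
]
blocked = {"scraped_news"}

# Remove the offending data, then rebuild the model from what remains.
clean_corpus = [ex for ex in corpus if not is_infringing(ex, blocked)]
model = retrain(clean_corpus)
```

The retrained toy model contains no trace of the removed text, which is the point: the only reliable "unlearning" happens at the dataset, not inside the finished artifact.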
Yes, it may cost you a lot of time and money to accomplish this, but such are the consequences of breaking the law. Maybe be extra careful about obeying laws going forward, eh?
I’m a little unclear what they’re saying gives it away as being AI? Occasional choppiness?
The tweet link posted elsewhere in this thread doesn’t give much to go on, but the “choppiness”, while noticeable if I was really looking for it, didn’t stand out to me. What did stand out was the clarity of the audio. Every “on the phone with Trump” interview I’ve heard (which is few, but enough) has had really horrible, standard phone-line quality. This had a “computer microphone picking up a computer speaker” quality, which I suppose lends credence to the theory that they themselves are doing the duping; otherwise the faked audio would have come through the phone lines. It probably would have been more believable that way, like how grainy pictures of Bigfoot are more believable.