Just listened to Naomi Brockwell talk about how AI is basically the perfect surveillance tool now.
Her take is very interesting: what if we could actually use AI against that?
Like instead of trying to stay hidden (which honestly feels impossible these days), what if AI could generate tons of fake, realistic data about us? Flood the system with so much artificial nonsense that our real profiles basically disappear in the noise.
Imagine thousands of AI versions of me browsing random sites, faking interests, triggering ads, making fake patterns. Wouldn’t that mess with the profiling systems?
How could this be achieved?
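At a small scale, the idea is just "generate plausible-looking browsing events and mix them in with the real ones." A minimal sketch of that flooding idea (all category names, sites, and numbers here are invented for illustration, not from any real tool):

```python
import random

# Hypothetical pool of interest categories and example sites; purely illustrative.
CATEGORIES = {
    "cooking":   ["recipes.example", "bakingblog.example"],
    "finance":   ["stocktips.example", "cryptonews.example"],
    "sports":    ["footballdaily.example", "tennisweekly.example"],
    "gardening": ["plantcare.example", "seedswap.example"],
}

def fake_session(rng, length=10):
    """Generate one synthetic browsing session: a list of (category, site) visits."""
    session = []
    for _ in range(length):
        category = rng.choice(list(CATEGORIES))
        site = rng.choice(CATEGORIES[category])
        session.append((category, site))
    return session

def flood(real_visits, n_fake_sessions=100, rng=None):
    """Mix real visits with many fake sessions so the real signal drowns in noise."""
    rng = rng or random.Random()
    noise = [v for _ in range(n_fake_sessions) for v in fake_session(rng)]
    return real_visits + noise

rng = random.Random(0)
history = flood([("finance", "mybank.example")] * 5, n_fake_sessions=50, rng=rng)
# 5 real visits now sit inside 500 fake ones spread evenly across categories,
# so the "finance" signal no longer dominates what a profiler would see.
```

A real version would of course have to issue actual requests (or simulated ad clicks) from the user's own browser to be credible, which is roughly what extensions in this space try to do.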
As far as I know, none of them had random false data so I’m not sure why you would think that?
I feel like you’re greatly exaggerating the level of intelligence at work here. It’s not hard to figure out people’s political affiliations with something as simple as their browsing history, and it’s not hard to manipulate them with propaganda accordingly. They did not have an “exact customized lie” for every individual, they just grouped individuals into categories (AKA profiling) and showed them a select few forms of disinformation accordingly.
Good input, thank you.
You can use topic B as an illustration for topic A, even if topic B does not directly contain topic A. For example: (during a chess game analysis) “Moving the knight in front of the bishop is like a punch in the face from Mike Tyson.”

There are probably better examples of more complex algorithms that work on data collected online for various goals. When developing those, a problem that naturally comes up is filtering out garbage. Do you think it is absolutely infeasible to implement one that would detect AdNauseam specifically?
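For what it's worth, one naive heuristic such a filter might try: real users click only a tiny fraction of the ads they see, while an all-clicking extension sits near 100%, so a sustained click-through rate close to 1 over enough impressions is an anomaly signal. A sketch, with made-up thresholds (this is my guess at an approach, not how any ad network actually does it):

```python
def looks_like_adnauseam(impressions, clicks, min_impressions=50, ctr_threshold=0.9):
    """Naive heuristic: flag users whose ad click-through rate is implausibly high.

    Thresholds are illustrative guesses, not values from any real system.
    """
    if impressions < min_impressions:
        return False  # not enough data to judge
    return clicks / impressions >= ctr_threshold

# Typical human CTRs are well under 1%; an all-clicking extension is near 100%.
print(looks_like_adnauseam(impressions=200, clicks=2))    # False: ordinary user
print(looks_like_adnauseam(impressions=200, clicks=198))  # True: flagged
```

Whether this works in practice depends on whether the extension actually registers clicks with the network at all, and a countermeasure as simple as clicking only a random subset of ads would weaken it.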
Sometimes yes. In this case, no.
I think the number of users of such products is so low (especially since it was removed from the Chrome Web Store) that it wouldn’t be worth their time.
But no, I don’t think they could either. It’s just an automation script that runs actions the same way you would.
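The "same actions you would take" point can be sketched: an automation script doesn't need anything exotic, just plausible pacing, since firing actions at a fixed interval would be an easy tell. A minimal illustration (the distribution and parameters are invented, not from any real extension):

```python
import random

def human_like_delays(n_actions, rng=None):
    """Draw per-action pauses from a wide, skewed distribution.

    A log-normal gives mostly short pauses with occasional long ones,
    roughly resembling human behavior; the parameters are made up.
    """
    rng = rng or random.Random()
    return [rng.lognormvariate(0.5, 0.8) for _ in range(n_actions)]

delays = human_like_delays(5, rng=random.Random(1))
# Each delay is a positive number of seconds, and no two runs look identical,
# which is what makes timing-based detection of such scripts hard.
```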