- People are duplicitous and contentious in public forums
- Train an LLM on data harvested from public forums
- LLM becomes duplicitous and contentious
- <surprised Pikachu face>
Yeah, let’s fix a faulty, unreliable system with a faulty, unpredictable technology that is also a programmable black box. And we’ll end up with zero accountability for the errors that will inevitably happen.
It’s the plot of innumerable books: give an AI a bunch of laws and guidelines, then watch it misinterpret them with catastrophic consequences. Even today nobody really knows how LLMs work, and yet they’re going to give them control over sensitive areas.
They will say, “You misunderstood me. I really meant what I said, but if that makes me a bad person, then you misunderstood.”