Record numbers of people are turning to AI chatbots for therapy, reports Anthony Cuthbertson. But recent incidents have uncovered some deeply worrying blind spots of a technology out of control
So what is the correct usage?
It will give you whatever you want. Just like social media and Google searches. The information exists.
When you tell it to give you information in a way you want to hear it, you’re just talking to yourself at that point.
People should probably avoid giving it prompts that fuel their mania. However, since mania is totally subjective and the topics range broadly, what does an actual regulation look like?
What do you use AI for?
Yeah, because someone in a manic state can definitely be trusted to carefully analyze their prompts to minimize bias.
I don’t.
So… you have no clue at all what any of this is. Got it. I’ll bet you blame video games for violence.
Yeah, because video games are definitely the same as a search engine that tells you all your delusions are real.
Tell me more about how you’ve never ever used it and how everything you’re saying is influenced by the media and other anti-AI user comments.
Let’s see what happens when I google for UFOs or chemtrails or deep state or anti-vaccine, etc. How much user-created delusional content will I find?
Lmao, I never said I’ve never touched AI. I just don’t use it for anything because it doesn’t do anything useful for me.
Yes, delusional content on Google is also a problem. But you have to understand that delusional content tailored uniquely to the individual, fed to them by a sycophantic chatbot, is several times more dangerous.
Oh, well that explains everything, you are using it wrong.
A lot of people think that you’re supposed to argue with it and talk about things that are subjective or opinion-based. But that’s ridiculous because it’s a computer program.
ChatGPT and others like it are calculators. They help you brainstorm problems. Ultimately, you are responsible for the outcome.
There’s a phrase I like to use at work when junior developers complain the outcome isn’t what they wanted: shit in, shit out.
So next time you use AI, perhaps consider: are you feeding it shit? Because if you’re getting it, you are.