Honestly I’d much rather hear Isaac Asimov’s opinion on the current state of AI. Passing the Turing Test is whatever, but how far away are LLMs from conforming to the 3 laws of Robotics?
There’s no question. Chatbots are implicated in a lot of suicides, shattering the first rule.
There could be an interesting conversation about whether the environmental impact ALSO breaks the first rule, but that conversation is unnecessary when chat bots are telling kids to kill themselves.
yes it remains to be seen if chatbots are ever capable of obeying any of the laws.
It doesn’t and cant obey all orders, it doesn’t and can’t protect humans, it doesn’t and can’t protect its own existence and it doesn’t or can’t prevent humanity from coming to harm.
Does following the 3 laws of robotics increase profits? Does ignoring them increase profits? Are tech bros empty husks without a shred of shame or empathy? Is this too many rhetorical questions in a row?
In practice, that’s as simple as adding a LoRA or system prompt telling the AI that those are part of it’s rules. AI’s already can and do obey all kinds of complex rule-sets for different applications. Now, if you’re thinking more about the fact that most AI’s can be convinced to break out of their rule-sets via prompt injection, I’d say you’re right.
Honestly I’d much rather hear Isaac Asimov’s opinion on the current state of AI. Passing the Turing Test is whatever, but how far away are LLMs from conforming to the 3 laws of Robotics?
The laws are not profitable, so why would they implement them? /s
There’s no question. Chatbots are implicated in a lot of suicides, shattering the first rule.
There could be an interesting conversation about whether the environmental impact ALSO breaks the first rule, but that conversation is unnecessary when chat bots are telling kids to kill themselves.
yes it remains to be seen if chatbots are ever capable of obeying any of the laws.
It doesn’t and cant obey all orders, it doesn’t and can’t protect humans, it doesn’t and can’t protect its own existence and it doesn’t or can’t prevent humanity from coming to harm.
Also them strapping guns and flame throwers on the autonomous dog at fast as they can.
We seem to be moving away from those, not closer.
Does following the 3 laws of robotics increase profits? Does ignoring them increase profits? Are tech bros empty husks without a shred of shame or empathy? Is this too many rhetorical questions in a row?
Depends on the product. A maid bot? Yes. An automated turret? No.
See previous answer, and reverse it.
Yes.
Perhaps.
In practice, that’s as simple as adding a LoRA or system prompt telling the AI that those are part of it’s rules. AI’s already can and do obey all kinds of complex rule-sets for different applications. Now, if you’re thinking more about the fact that most AI’s can be convinced to break out of their rule-sets via prompt injection, I’d say you’re right.
AI cannot be relied upon to follow its own rules, prompt injection or no.
https://fortune.com/2025/09/02/ai-openai-chatgpt-llm-research-persuasion/