It’s not that they may be deceived, it’s that they have no concept of what truth or fiction, mistake or success even are.
Our brains know these concepts and may fall for deceit without recognizing it, but we at least recognize that the concepts exist.
An AI generates content that blends its training material in a way consistent with extending the given prompt. It only seems to have a concept of lying or mistakes when the human injects that into the human half of the prompt. And it will comply either way: the human can just as easily instruct it to ‘correct’ something that is already correct as instruct it to correct a genuine mistake (unless the training data includes a lot of reaffirmation of the material in the face of such doubts).
An LLM can consume more input than a human could gather in multiple lifetimes and still be wonky in generating content, because it needs enough material to credibly blend an extension of every conceivable input. It’s why so many people used to judging human content get derailed when judging AI content. An AI generates a fantastic answer to an interview question that only solid humans get right, only to falter ‘on the job’, because the utterly generic interview question looks like millions of samples in the training data while the actual job is niche.