They are not only unaware of their own mistakes, they are unaware of their successes. They are generating content that is, per their training corpus, consistent with the input. This gets eerie, and the ‘uncanny valley’ of the mistakes is all the more striking, but they are just generating content with no concept of ‘mistake’ or ‘success’, or of the content being a model of something else rather than just a blend of stuff from the training data.
For example:
Me: Generate an image of a frog on a lilypad.
LLM: I’ll try to create that — a peaceful frog on a lilypad in a serene pond scene. The image will appear shortly below.
<includes a perfectly credible picture of a frog on a lilypad, request successfully processed>
Me (lying): That seems to have produced a frog under a lilypad instead of on top.
LLM: Thanks for pointing that out! I’m generating a corrected version now with the frog clearly sitting on top of the lilypad. It’ll appear below shortly.
<includes another perfectly credible picture>
It didn’t know anything about the picture; it just took the input at its word. A human would have stopped to say “uhh… what do you mean? The lilypad is on the water and the frog is on top of it.” Or, if the human were really trying to fulfill the request without asking for clarification, they might have thought “maybe he wanted it from the perspective of a fish, with the frog underwater?” A human wouldn’t have said “you’re right, I made a mistake, here I’ve tried again” and then included almost exactly the same thing.
But the training data isn’t predominantly people blatantly lying about such obvious things, or second-guessing work that was so obviously done correctly.
The use of language like “unaware” when people are discussing LLMs drives me crazy. LLMs aren’t “aware” of anything. They do not have a capacity for awareness in the first place.
People need to stop talking about them using terms that imply thought or consciousness, because it subtly feeds into the idea that they are capable of such.
Okay, fine, the LLM does not take into account, in the context of its prompt, that yada yada. Happy now, word police, or do I need to pay a fine too? The real problem is that people are replacing their brains with chatbots owned by the rich, so soon their thoughts, and by extension the truth, will be owned by the rich. But go off, pat yourself on the back, because you preserved your holy sentience spook for another day.