Thanks for sharing this! I really think that when people see LLM failures and say such failures demonstrate how fundamentally different LLMs are from human cognition, they tend to overlook how humans actually do exhibit remarkably similar failure modes. Obviously dementia isn’t really analogous to generating text while lacking the ability to “see” a rendering based on that text. But it’s still pretty interesting that whatever feedback loops got corrupted in these patients led to such a variety of failure modes.
As an example of what I’m talking about, I appreciated and generally agreed with this recent Octomind post, but I disagree with the list of problems that “wouldn’t trip up a human dev”; these are all things I’ve seen real humans do, or could imagine a human doing.
What I find interesting is that in both cases there is a certain consistency in the mistakes too - basically every dementia patient still understands the clock is something with a circle and numbers, not a square with letters, for example. LLMs can tell you complete bullshit, but still understand it has to be delivered with perfect grammar in a consistent language. So much so that they struggle to respond outside of this box - ask one to insert spelling errors to look human, for example.
the ability to “see”
This might be the true problem in both cases: neither the patient nor the model can comprehend the bigger picture (a circle is divided into 12 segments because that is how we deconstructed the time it takes for the Earth to spin around its axis). Things that seem logical to us are logical because of these kinds of connections with other things we know and comprehend.