Key points
- The hippocampus enables abstract reasoning; LLMs mirror this through pattern-based language prediction.
- Future AI could emulate human inference by integrating multimodal learning and reinforcement methods.
- AI’s evolution hinges on bridging prediction and reasoning, moving toward deeper, human-like understanding.
“Can LLMs think like us?”
No.
"Can LLMs think—?”
No.
“Can LLMs—?”
No.
No.
Not like us, but maybe like OP 🤣
Facts, reasoning, ethics, etc. are outside the scope of an LLM. Expecting otherwise is like expecting a stand mixer to bake a cake: it's helpful for a decent part of the process, but it can't handle the part where heat turns batter into a tasty dessert. An AI like one from the movies would require many more pieces than an LLM can provide, and saying otherwise is a category mistake*.
That isn’t to say that something won’t be developed eventually, but it would be FAR beyond an LLM, if it is even possible.
(* See also: https://plato.stanford.edu/entries/category-mistakes/)
“Can LLMs think?” YES. “Like us?” NO … not right now, anyway.
The fear in here is palpable.