If Turing were alive, he would say that LLMs are wasting computing power to do something a human should be able to do on their own, and thus that we shouldn't waste time studying them.
Which is what he said about compilers and high-level languages (in this instance, "high-level" means something like Fortran, not like Python).
Humans are able to do it but it takes us weeks instead of seconds.
Many, many tasks that would have taken hours or days to learn are just instant now. I don't know why people don't appreciate that technology. Is it because it's sometimes wrong? Even with the time spent fixing errors, it's many, many times faster than doing the task manually.
Maybe the difference in opinions is because people talk about very different tasks, and the LLM just sucks at some of them while being excellent at others.
I don’t like it because people don’t shut up about it and insist everyone should use it when it’s clearly stupid.
LLMs are language models; they don't actually reason (not even the reasoning models). When they nail a piece of reasoning, it's by chance, not by design. Anything that isn't language processing shouldn't be done by an LLM. Vice versa, they are pretty good with language.
We already had automated reasoning tools. They are used for industrial optimization (e.g. finding optimal routes, deciding how to allocate production), and no one cared about those.
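For a concrete sense of what those tools look like, here's a minimal sketch of a made-up production-allocation problem solved with linear programming via scipy (the products, profits, and capacity numbers are invented for illustration):

    # Choose quantities of two hypothetical products to maximize profit,
    # subject to machine-hour and labor-hour capacity limits.
    from scipy.optimize import linprog

    # Profit per unit: product A = 40, product B = 30 (made-up numbers).
    # linprog minimizes, so we negate the objective to maximize.
    c = [-40, -30]

    # Capacity constraints, in the form A_ub @ x <= b_ub:
    #   machine hours: 2*A + 1*B <= 100
    #   labor hours:   1*A + 3*B <= 90
    A_ub = [[2, 1],
            [1, 3]]
    b_ub = [100, 90]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x)     # optimal units of A and B
    print(-res.fun)  # maximum profit

The solver can prove its answer is optimal, which is exactly the kind of guarantee an LLM can't give you.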
As if that wasn't enough, the internet is now full of slop. Hardware companies are stoking an arms race that is fueling an economic bubble. And people are being fired to be replaced by something that will not actually work in the long run, because it does not reason.
Yeah, I totally agree about the slop and how it's destroying what the web was supposed to be. It makes sense that people would hate it for that.
I don't really use them for reasoning; I just use them for helping me with code, or for finding facts faster.
But I know these things are the beginning of a very dystopian society as well. Once all the data centers are built, each person is going to be watched forever by AI.
Where did he say that about compilers and high-level languages? He died before Fortran was released and probably programmed on punch cards or tape.
I'll try to find it later; I read that he said it in a book by Martin Davis. He didn't speak about Fortran, I just used it as an analogy.
Wasn’t his ideal to simulate a brain?
Neural networks don't simulate a brain; that's a misconception caused by their name. They have nothing to do with brain neurons.
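To make that concrete: an artificial "neuron" is just a weighted sum pushed through a simple squashing function. A minimal sketch (the weights and inputs are arbitrary example values):

    import math

    def artificial_neuron(inputs, weights, bias):
        # An artificial "neuron": a weighted sum of the inputs plus a
        # bias, squashed by an activation function (here, a sigmoid).
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-z))

    # Arbitrary example values. No membrane potentials, no spike
    # timing, no neurotransmitters: just arithmetic.
    print(artificial_neuron([0.5, -1.0, 2.0], [0.1, 0.4, -0.2], 0.3))

That's the whole unit; the brain metaphor stops at the name.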
That's not what I meant. What I mean is: this could be the path he would go for, since his desire was to make a simulated person (AI).
LLMs are not the path forward to simulating a person, and that's a fact. By design they cannot reason; it's not a matter of advancement, it's literally how they work in principle. It's a statistical trick for generating random text that looks like thought-out phrases, with no reasoning involved.
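The "statistical trick" is next-token prediction: sample the next word from a probability distribution conditioned on what came before, then repeat. Here is a toy sketch with a hand-written bigram table (the table is made up, and a real LLM conditions on the whole context with a huge neural network over tens of thousands of tokens, but the generation loop is the same idea):

    import random

    # Made-up bigram probabilities: P(next word | current word).
    bigrams = {
        "the": {"cat": 0.5, "dog": 0.5},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"sat": 0.4, "ran": 0.6},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
    }

    def generate(word, max_len=5):
        # Repeatedly sample the next word from the conditional
        # distribution: no goals, no world model, just sampling.
        out = [word]
        for _ in range(max_len):
            dist = bigrams.get(word)
            if dist is None:
                break
            words, probs = zip(*dist.items())
            word = random.choices(words, weights=probs)[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat down"

Every output is locally plausible, but nothing in the loop checks whether it is true or follows logically.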
If someone tells you they might be the way forward to simulating a human, they are scamming you. No one who actually knows how they work says that, unless they are the CEO of a trillion-dollar company selling AI.