Humans are able to do it but it takes us weeks instead of seconds.
Many, many tasks that would have taken hours or days to learn are instant now. I don't know why people don't appreciate that technology. Is it because it's sometimes wrong? Even with the time spent fixing errors, it's many times faster than doing the task manually.
Maybe the difference in opinions is that people are talking about very different tasks, and the LLM just sucks at some of them while being excellent at others.
I don’t like it because people don’t shut up about it and insist everyone should use it when it’s clearly stupid.
LLMs are language models; they don't actually reason (not even the "reasoning" models). When they nail a piece of reasoning, it's by chance, not by design. Anything that is not language processing shouldn't be done by an LLM. Vice versa, they are pretty good with language.
We already had automated reasoning tools. They are used for industrial optimization (e.g. finding optimal routes, deciding how to allocate production), and no one cared about those.
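To make the comparison concrete: the "automated reasoning" tools mentioned here are typically linear/integer programming solvers. A minimal sketch of a production-allocation problem, with made-up profits and capacities (real tools use dedicated LP solvers; a brute-force search over integer plans is used here just to keep it self-contained):

```python
def best_plan(profit_a=40, profit_b=30, hours=100, material=80):
    """Maximize profit subject to two capacity constraints:
    product A takes 2 machine-hours, product B takes 1 (hours budget);
    each product consumes 1 unit of raw material (material budget).
    All numbers are illustrative, not from any real factory."""
    best = (0, 0, 0)  # (profit, units_of_a, units_of_b)
    for a in range(hours // 2 + 1):
        for b in range(material + 1):
            if 2 * a + b <= hours and a + b <= material:
                p = profit_a * a + profit_b * b
                if p > best[0]:
                    best = (p, a, b)
    return best

print(best_plan())  # → (2600, 20, 60)
```

Unlike an LLM, a solver like this gives the provably optimal answer every time, by construction; that is the kind of guaranteed reasoning the comment is contrasting with language models.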
As if that weren't enough: the internet is now full of slop. Hardware companies are stoking an arms race that is fueling an economic bubble. And people are being fired to be replaced by something that will not actually work in the long run, because it does not reason.
Yeah, I totally agree about the slop and how it's destroying what the web was supposed to be. It does make sense that people would hate it on those grounds.
I don't really use them for reasoning; I just use them to help me with code or to find facts faster.
But I know these things are the beginning of a very dystopian society as well. Once all the data centers are built, each person is going to be watched forever by AI.