• @Barbarian772 also, I never demanded a definition of intelligence that explicitly excluded “AI”. I asked for one that excluded simple calculators but included human beings. The Wikipedia one is good enough for this conversation, and it just so happens that neither ChatGPT nor any other LLM meets it.

          • @lloram239

            > But human sensory inputs aren’t special

            It’s not about sensory inputs, it’s about having a model of the world and objects in it and ability to make predictions.

            > The important part is that the AI can figure out the pattern in the data it does get and so far AI systems are doing very well.

            GPT cannot “figure” anything out. That’s the point. It only probabilistically generates text. That’s what it does; there is no model of the world behind it, no predictions, no “figuring out”.

              • @lloram239 ah, so you’re down to throwing epithets like “idiotic” around. Clearly a mark of thoughtful and well-reasoned argument.

                > Predictions about the world are probabilistic by nature, since the future hasn’t happened yet.

                Thing is: GPT doesn’t make predictions about the world; it makes predictions about what the next word, phrase, or sentence should be in a text, based on the prompt and the corpus it was “trained” on.
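To make the distinction concrete, here is a deliberately oversimplified sketch of “predicting the next word from text statistics”. It is a toy bigram model, not how GPT actually works (GPT uses a trained neural network over subword tokens), but it illustrates the point being argued: the “prediction” is a prediction about text continuation, derived entirely from the training text.

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus" of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the corpus.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def next_word(word):
    # Sample a continuation in proportion to how often it
    # followed `word` in the corpus -- a "prediction" about
    # text, grounded in nothing but the text itself.
    counts = successors[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # one of: "cat", "mat", "fish"
```

The model assigns “cat” twice the probability of “mat” after “the”, purely because of corpus frequencies; nothing in it represents cats, mats, or the world they inhabit.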

                  • @lloram239 that’s really akin to claiming that a mannequin is a human being because it really, really looks like one.

                    The “predictions about the world” you refer to here are actually predictions about the text. They are not based on a model of the world; they are based on the loads and loads of text the model was trained on.

                    I don’t have to prove ChatGPT is not intelligent. That would be proving a negative. The burden of proof is on those claiming that it is intelligent.