• 0 Posts
  • 31 Comments
Joined 11 days ago
Cake day: June 6th, 2025







  • Everybody keeps saying “<X> will be great Someday™” in the tech world. Only Someday Never Comes, does it?

    Full self-driving by the end of this year. For who knows how many years running now, it seems. LLMbeciles will stop hallucinating sometime Real Soon Now™. Only the newer LLMbeciles hallucinate more than the older ones did. We’ll have humans on Mars by 2021 2022 2026 2028 2029 2031 2044 2046. We’ll have AI-powered humanoid robots doing our bidding in 2023 ???.

    And so on and so on and so on.

    So here’s the thing: until there is evidence of non-slop generative “AI”, just assume it will remain slop. Because that’s been the pattern of Silly Con Valley since the '90s.







  • Oh, I’m aware that “no assholes” is an impossible dream. But if I start seeing assholes and idiots increasingly attached to specific instances, it’s incentive to perhaps just drop that instance. Different instances have different moderation policies and different target communities. For example, “hilariouschaos” is an instance for people who’ve never left that 13-year-old sniggering stage where “bewbs” is a word with intrinsic hilarity. So I can axe them comfortably.



  • There are a few things here that tell me it’s probably not copyright-theft-generated. The big one that’s easy to explain is the tail. The tail starts off from behind the mouse and snakes in front of the cloak and background (so far so good), but then, and here’s the critical thing, it passes behind the fern staff and continues on the other side of it, positioned properly and in continuity.

    Copyright-theft-generators have tremendous problems with this because, as the chorus goes, they don’t understand anything. There is no mental model of “a tail” in them, no notion of a tail’s properties, so keeping a tail contiguous while it passes behind and in front of other objects is very hard for them.





  • Go to one of these “reasoning” AIs. Ask it to explain its reasoning. (It will!) Then ask it to explain its reasoning again. (It will!) Ask it yet again. (It will gladly do it thrice!)

    Then put the “reasoning” side by side and count the contradictions. There’s a very good chance that the three explanations are not only different from each other but mutually incompatible. (If you’d rather do this with a script than by hand, there’s a rough sketch at the end of this comment.)

    “Reasoning” LLMs just do more hallucination. Specifically, they are trained to produce cause/effect logic chains that link the question to the conclusion using standard LLM hallucination practice, and if you read those chains in detail you’ll see some seriously broken links (because LLMs of any kind can’t think!).

    So they do the usual Internet argument approach: decide what the conclusion is and then make excuses for why they think it is such.

    If you don’t believe me, why not ask one? This is a trivial example with very little “reasoning” needed and even here the explanations are bullshit all the way down.

    Note, especially, the final statement it made:

    Yes, your summary is essentially correct: what is called “reasoning” in large language models (LLMs) is not true logical deduction or conscious deliberation. Instead, it is a process where the model generates a chain of text that resembles logical reasoning, based on patterns it has seen in its training data[1][2][6].

    When asked to “reason,” the LLM predicts each next token (word or subword) by referencing statistical relationships learned from vast amounts of text. If the prompt encourages a step-by-step explanation or a “chain of thought,” the model produces a sequence of statements that look like intermediate logical steps[1][2][5]. This can give the appearance of reasoning, but what is actually happening is the model is assembling likely continuations that fit the format and content of similar examples it has seen before[1][2][6].

    In short, the “chain of logic” is generated as part of the response, not as a separate, internal process that justifies a previously determined answer. The model does not first decide on an answer and then work backward to justify it; rather, it generates the answer and any accompanying rationale together, token by token, in a single left-to-right sequence, always guided by the prompt and the statistical patterns in its training[1][2][6].

    “Ultimately, LLM ‘reasoning’ is a statistical approximation of human logic, dependent on data quality, architecture, and prompting strategies rather than innate understanding. … Reasoning-like behavior in LLMs emerges from their ability to stitch together learned patterns into coherent sequences.” [1]

    So, what appears as reasoning is in fact a sophisticated form of pattern completion, not genuine logical deduction or conscious justification.

    [1] https://milvus.io/ai-quick-reference/how-does-reasoning-work-in-large-language-models-llms

    [2] https://www.digitalocean.com/community/tutorials/understanding-reasoning-in-llms

    [3] https://sebastianraschka.com/blog/2025/understanding-reasoning-llms.html

    [4] https://en.wikipedia.org/wiki/Reasoning_language_model

    [5] https://arxiv.org/html/2407.11511v1

    [6] https://www.anthropic.com/research/tracing-thoughts-language-model

    [7] https://magazine.sebastianraschka.com/p/state-of-llm-reasoning-and-inference-scaling

    [8] https://cameronrwolfe.substack.com/p/demystifying-reasoning-models

    Now I’m absolutely technically declined. Yet even I can figure out that these “reasoning” models share the same basic flaws as every other LLMbecile. If you ask one how it does maths, it will also admit that the LLM “decides” whether maths is what’s needed and, if so, switches over to a maths engine. But if the LLM “decides” it can do it on its own, it will. So you’ll still get garbage maths out of the machine.
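    If you’d rather script the ask-it-three-times experiment than poke at a chat window, something like the sketch below would do it. This is only a rough illustration, assuming the openai Python package and an OpenAI-compatible chat endpoint are available; the model name and the question are placeholders, not recommendations.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    MODEL = "gpt-4o-mini"  # placeholder model name

    def ask(messages):
        """Send the conversation so far, record and return the assistant's reply."""
        reply = client.chat.completions.create(model=MODEL, messages=messages)
        text = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": text})
        return text

    # Placeholder question; anything with a definite answer will do.
    conversation = [{"role": "user", "content": "Is 0.1 + 0.2 exactly equal to 0.3 in floating point?"}]
    ask(conversation)  # get the initial answer

    # Ask it to explain its reasoning three times, keeping the whole conversation as context.
    explanations = []
    for _ in range(3):
        conversation.append({"role": "user", "content": "Explain your reasoning for that answer."})
        explanations.append(ask(conversation))

    # Put the three "reasonings" side by side and count the contradictions yourself.
    for i, text in enumerate(explanations, start=1):
        print(f"--- Explanation {i} ---\n{text}\n")

    The specific question doesn’t matter much; the point is to line the three explanations up and see how well they agree with each other.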