• 0 Posts
  • 76 Comments
Joined 2 years ago
Cake day: March 3rd, 2024

  • “Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly,” the researcher wrote.

    While there are “established methods for quantifying uncertainty,” AI models could end up requiring “significantly more computation than today’s approach,” he argued, “as they must evaluate multiple possible responses and estimate confidence levels.”

    “For a system processing millions of queries daily, this translates to dramatically higher operational costs,” Xing wrote.

    1. They already require substantially more computation than search engines.
    2. They already cost substantially more than search engines.
    3. Their hallucinations make them unusable for any application beyond novelty.

    If removing hallucinations means Joe Shmoe isn’t interested in asking it questions a search engine could already answer, but it brings even 1% of the capability promised by all the hype, they would finally actually have a product. The good long-term business move is absolutely to remove hallucinations and add uncertainty. Let’s see if any of them actually do it.
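    For a concrete sense of what “evaluate multiple possible responses and estimate confidence levels” can look like, here’s a rough sketch of one common sampling-based approach; the generate function, the sample count, and the agreement threshold are placeholders for illustration, not anything a specific vendor ships:

    ```python
    from collections import Counter

    def answer_with_confidence(generate, prompt, n_samples=10, min_agreement=0.7):
        """Sample the model several times and only answer when the samples agree.

        `generate` stands in for whatever produces a single model response.
        The agreement threshold is an illustrative number, not a tuned value.
        """
        samples = [generate(prompt) for _ in range(n_samples)]
        answer, count = Counter(samples).most_common(1)[0]
        confidence = count / n_samples
        if confidence < min_agreement:
            return None, confidence  # abstain instead of guessing
        return answer, confidence
    ```

    The catch is in the first line of the function body: a confident answer now costs roughly n_samples inference calls instead of one, which is where the “dramatically higher operational costs” come from.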


  • As far as I’ve ever been paying attention, conservatives only argue in bad faith. It’s always been about elevating their own speech and suppressing speech that counters theirs. They just couch it in terms that sound vaguely reasonable or logical in the moment if you don’t know their history and don’t think about it more deeply than very surface-level.

    Before, platforms were suppressing their speech, so they were promoters of free speech. Now platforms are not suppressing speech counter to them, so it’s all about content moderation to protect the children, or whatever. But their policies always betray their true motive: they never implement what research shows supports their claimed position of the moment. They always create policies that hurt their out-groups and may sometimes help their in-groups (helping people is optional).


  • Can we be so sure such a stock market dip is due to the ongoing daytime TV drama that is AI?

    There’s also the undercurrent of the Trump administration steamrolling over decades- or century-old precedents daily, putting our country, and thus the economy, in new territory. Basic assumptions about the foundations of our economy are crumbling, and the only thing keeping it from collapsing outright is inertia. But inertia will only last so long. This is affecting every aspect of the real economy, goods and services that are moving around right now, as opposed to the speculative facets like the AI bubble.

    I’m waiting for the other shoe to drop and for Wall Street to realize Trump has really screwed over vast swaths of supply chains all across the economy.


  • My understanding is that digital computers rose to dominance not through any superiority in capability but basically just error tolerance. When the intended values can only be “on” or “off,” your circuit can be really poor due to age, wear, or other factors, but as long as it’s within 40% of the expected “on” or “off” state, it will function basically the same as a perfect one. Analog computers don’t have anywhere near that kind of tolerance, which makes them more fragile, more expensive, and harder to scale in production.

    I’m really curious if the researchers address any of those considerations.
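    To make the tolerance point concrete, here’s a toy illustration; the ±0.4 drift is just the 40% figure from above, and all the values are made up:

    ```python
    import random

    def noisy(value, noise=0.4):
        """Simulate a degraded circuit: the signal drifts by up to +/- noise."""
        return value + random.uniform(-noise, noise)

    # Digital: anything above the midpoint reads as "on", below as "off".
    # Even with 40% drift, a 0/1 signal is recovered exactly every time.
    digital_readings = [round(noisy(bit)) for bit in (0, 1, 1, 0)]

    # Analog: the value itself is the information, so the same drift is an
    # error that propagates into every downstream computation.
    analog_readings = [noisy(x) for x in (0.0, 0.25, 0.5, 0.75)]

    print(digital_readings)  # [0, 1, 1, 0] despite the noise
    print(analog_readings)   # e.g. [0.31, 0.12, 0.68, 0.91] -- errors baked in
    ```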


  • Vibe coding anything more complicated than the most trivial example toy app creates a mountain of security vulnerabilities. Every company that fires human software developers and actually deploys applications entirely written by AI will have their systems hacked immediately. They will either close up shop, hire more software security experts than the number of developers they fired just to keep up with the garbage AI-generated code, or try to hire all of the software developers back.
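    As an illustration of the kind of thing that slips through, here’s the classic injection pattern that generated code keeps reproducing, next to its fix; the table and function names are hypothetical:

    ```python
    import sqlite3

    def find_user_unsafe(db: sqlite3.Connection, username: str):
        # User input pasted straight into SQL: a username like "x' OR '1'='1"
        # returns every row in the (hypothetical) users table.
        return db.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()

    def find_user_safe(db: sqlite3.Connection, username: str):
        # Parameterized query: the driver handles escaping, so the input
        # can't change the structure of the SQL statement.
        return db.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
    ```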




  • Several years ago I created a Slack bot that ran something like Jupyter notebook in a container, and it would execute Python code that you sent to it and respond with the results. It worked in channels you invited it to as well as private messages, and if you edited your message with your code, it would edit its response to always match the latest input. It was a fun exercise to learn the Slack API, as well as create something non-trivial and marginally useful in that Slack environment. I knew the horrible security implications of such a bot, even with the Python environment containerized, and never considered opening it up outside of my own personal use.

    Looks like the AI companies have decided that exact architecture is perfectly safe and secure as long as you obfuscate the input pathway by routing it through a chatbot. Brilliant.
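    For the curious, the general shape of that bot looks something like the stripped-down sketch below, assuming the slack_bolt library in Socket Mode; the containerization and the edit-tracking are omitted, and running user-supplied code like this is exactly the security problem described above:

    ```python
    import os
    import subprocess

    from slack_bolt import App
    from slack_bolt.adapter.socket_mode import SocketModeHandler

    app = App(token=os.environ["SLACK_BOT_TOKEN"])

    @app.event("app_mention")
    def run_code(event, say):
        # Treat everything after the @-mention as Python source.
        code = event["text"].split(">", 1)[-1].strip()
        # The real bot ran this inside a container; even then, executing
        # arbitrary user code is the "horrible security implication".
        result = subprocess.run(
            ["python", "-c", code], capture_output=True, text=True, timeout=10
        )
        say(result.stdout or result.stderr or "(no output)")

    if __name__ == "__main__":
        SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
    ```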


  • A balloon full of helium has more mass than a balloon without helium, but less weight

    That’s not true. A balloon full of helium has more mass and more weight than a balloon without helium. Weight depends only on the mass of the balloon plus the helium and on the local gravitational pull of the planet (Earth).

    The balloon full of helium displaces far more air than the balloon without helium, since it is inflated. The air displaced by the inflated balloon weighs more than the balloon and the helium inside it combined, so it floats due to buoyancy from the atmosphere. Its weight is the same regardless of the medium it’s in, but the net force it experiences is not.
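    To put rough numbers on it (the balloon size, envelope mass, and gas densities below are typical ballpark values, not measurements):

    ```python
    import math

    g = 9.81           # m/s^2, gravitational acceleration
    rho_air = 1.2      # kg/m^3, air near sea level
    rho_helium = 0.18  # kg/m^3, helium at the same conditions

    radius = 0.15       # m, a ~30 cm party balloon (assumed)
    volume = 4 / 3 * math.pi * radius**3
    m_envelope = 0.002  # kg, assumed mass of the rubber itself

    m_helium = rho_helium * volume
    weight = (m_envelope + m_helium) * g  # pulls down everywhere, medium or not
    buoyancy = rho_air * volume * g       # only exists because air is displaced

    print(f"weight    = {weight:.3f} N")    # ~0.045 N
    print(f"buoyancy  = {buoyancy:.3f} N")  # ~0.166 N
    print(f"net force = {buoyancy - weight:.3f} N upward, so it floats")
    ```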



  • ignirtoq@fedia.io to Technology@beehaw.org · “The rise of Whatever” · 3 months ago

    The thing is it’s been like that forever. Good products made by small- to medium-sized businesses have always attracted buyouts where the new owner basically converts the good reputation of the original into money through cutting corners, laying off critical workers, and other strategies that slowly (or quickly) make the product worse. Eventually the formerly good product gets bad enough there’s space in the market for an entrepreneur to introduce a new good product, and the cycle repeats.

    I think what’s different now is, since this has gone on unabated for 70+ years, economic inequality means the people with good ideas for products can’t afford to become entrepreneurs anymore. The market openings are there, but the people that made everything so bad now have all the money. So the cycle is broken not by good products staying good, but by bad products having no replacements.


  • The technological progress LLMs represent has come to completion. They’re a technological dead end. They have no practical application because of hallucinations, and hallucinations are baked into the very core of how they work. Any further progress will come from experts learning from the successes and failures of LLMs, abandoning them, and building entirely new AI systems.

    AI as a general field is not a dead end, and it will continue to improve. But we’re nowhere near the AGI that tech CEOs are promising LLMs are so close to.


  • Oppenheimer was already really long, and I feel like it portrayed the complexity of the moral struggle Oppenheimer faced pretty well, as well as showing him as the very fallible human being he was. You can’t make a movie that talks about every aspect of such an historical event as the development and use of the first atomic bombs. There’s just too much. It would have to be a documentary, and even then it would be days long. Just because it wasn’t the story James Cameron considers the most compelling/important about the development of the atomic bomb doesn’t mean it’s not a compelling/important story.


  • The first statement is not even wholly true. While training does take more power, executing the model (called “inference”) takes much, much more power than non-AI search algorithms, or really any traditional computational algorithm besides bogosort.

    Big Tech weren’t doing the best they possibly could transitioning to green energy, but they were making substantial progress before LLMs exploded on the scene because the value proposition was there: traditional algorithms were efficient enough that the PR gain from doing the green energy transition offset the cost.

    Now Big Tech have for some reason decided that LLMs represent the biggest game of gambling ever. The first to find the breakthrough to AGI will win it all and completely take over all IT markets, so they need to consume as much as they can get away with to maximize the probability that that breakthrough is made by their engineers.




  • My point is that this kind of pseudo-intelligence has never existed on Earth before, so evolution has had free rein to use language sophistication as a proxy for humanity and intelligence without encountering anything that would put selective pressure against this heuristic.

    Human language is old. Way older than the written word. Our brains have evolved specialized regions for language processing, so evolution has clearly had time to operate while language has existed.

    And LLMs are not the first sophisticated AI that’s been around. We’ve had AI for decades, and really good AI for a while. But people don’t anthropomorphize other kinds of AI nearly as much as LLMs. Sure, they ascribe some human-like intelligence to any sophisticated technology, and some people in history have claimed some technology or another is alive or sentient. But with LLMs we’re seeing a larger portion of the population believing this than we’ve ever seen before.


  • My running theory is that human evolution developed a heuristic in our brains that associates language sophistication with general intelligence, and especially with humanity. The very fact that LLMs are so good at composing sophisticated sentences triggers this heuristic and makes people anthropomorphize them far more than other kinds of AI, so they ascribe more capability to them than evidence justifies.

    I actually think this may explain some earlier reporting of weird behavior by AI researchers as well. I seem to recall reports of Google researchers believing they had created sentient AI (a quick search produced this article). The researcher was fooled by his own AI not because he drank the Kool-Aid, but because he fell prey to this neural heuristic that’s in all of us.