• 0 Posts
  • 65 Comments
Joined 1 year ago
Cake day: March 3rd, 2024

  • The technological progress LLMs represent has come to completion. They’re a technological dead end. They have no practical application because of hallucinations, and hallucinations are baked into the very core of how they work. Any further progress will come from experts learning from the successes and failures of LLMs, abandoning them, and building entirely new AI systems.
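
    To make the “baked in” point concrete, here’s a toy sketch of the next-token step every LLM is built on (the tokens and logits are made-up illustrative numbers, not from any real model): the model scores continuations by plausibility and samples from them, with no term anywhere for truth.

    ```python
    import numpy as np

    # Toy next-token step for the prompt "The capital of France is ...".
    # Made-up illustrative scores; a real model ranks tens of thousands
    # of candidate tokens, but the mechanics are the same.
    tokens = ["Paris", "Lyon", "Berlin"]
    logits = np.array([2.0, 1.5, 0.5])   # plausibility scores, no notion of truth

    probs = np.exp(logits) / np.exp(logits).sum()      # softmax -> probabilities
    print(dict(zip(tokens, probs.round(2).tolist())))  # ~{'Paris': 0.55, 'Lyon': 0.33, 'Berlin': 0.12}

    # Sampling is plausibility-weighted, so the wrong answers still get
    # emitted a meaningful fraction of the time -- fluently and confidently.
    print(np.random.choice(tokens, p=probs))
    ```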

    AI as a general field is not a dead end, and it will continue to improve. But we’re nowhere near the AGI that tech CEOs keep promising LLMs are on the verge of becoming.


  • Oppenheimer was already really long, and I feel like it portrayed the complexity of the moral struggle Oppenheimer faced pretty well, as well as showing him as the very fallible human being he was. You can’t make a movie that covers every aspect of a historical event as big as the development and use of the first atomic bombs. There’s just too much; it would have to be a documentary, and even then it would be days long. Just because it isn’t the story James Cameron considers the most compelling or important about the development of the atomic bomb doesn’t mean it’s not a compelling, important story.


  • The first statement is not even wholly true. While training does take more power, executing the model (called “inference”) still takes much, much more power than non-AI search algorithms, or really any traditional computational algorithm besides bogosort.

    Big Tech weren’t doing the best they possibly could at transitioning to green energy, but they were making substantial progress before LLMs exploded on the scene, because the value proposition was there: traditional algorithms were efficient enough that the PR gain from the green energy transition offset the cost.

    Now Big Tech have for some reason decided that LLMs represent the biggest gamble in tech history. The first to find the breakthrough to AGI will win it all and completely take over all IT markets, so each needs to consume as much energy as it can get away with to maximize the probability that the breakthrough happens in its own labs.
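
    For a rough sense of scale, here’s a back-of-envelope comparison using per-query figures that circulate in reporting. Both numbers are assumptions for illustration, not measurements; actual figures vary by model and datacenter.

    ```python
    # Back-of-envelope inference-vs-search energy comparison.
    # Both figures are commonly cited estimates, treated here as assumptions:
    SEARCH_WH = 0.3   # oft-quoted energy for one traditional web search
    LLM_WH = 3.0      # oft-quoted energy for one LLM chat response

    print(f"One LLM query ~ {LLM_WH / SEARCH_WH:.0f}x a traditional search")

    # At a billion queries a day, the difference alone is grid-scale:
    extra_mwh_per_day = 1e9 * (LLM_WH - SEARCH_WH) / 1e6  # Wh -> MWh
    print(f"Extra energy at 1B queries/day: {extra_mwh_per_day:,.0f} MWh")
    ```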




  • My point is that this kind of pseudo-intelligence has never existed on Earth before, so evolution has had free rein to use language sophistication as a proxy for humanity and intelligence without encountering anything that would put selective pressure against this heuristic.

    Human language is old. Way older than the written word. Our brains have evolved specialized regions for language processing, so evolution has clearly had time to operate while language has existed.

    And LLMs are not the first sophisticated AI that’s been around. We’ve had AI for decades, and really good AI for a while. But people don’t anthropomorphize other kinds of AI nearly as much as LLMs. Sure, people ascribe some human-like intelligence to any sophisticated technology, and some people in history have claimed this or that technology is alive or sentient. But with LLMs we’re seeing a larger portion of the population believing it than we’ve ever seen before.


  • My running theory is that human evolution developed a heuristic in our brains that associates language sophistication with general intelligence, and especially with humanity. The very fact that LLMs are so good at composing sophisticated sentences triggers this heuristic and makes people anthropomorphize them far more than other kinds of AI, so they ascribe more capability to them than evidence justifies.

    I actually think this may explain some earlier reports of weird behavior from AI researchers as well. I seem to recall reports of Google researchers believing they had created sentient AI (a quick search produced this article). The researcher was fooled by his own AI not because he drank the Kool-Aid, but because he fell prey to this neural heuristic that’s in all of us.





  • Even more surprising: the droplets didn’t evaporate quickly, as thermodynamics would predict.

    “According to the curvature and size of the droplets, they should have been evaporating,” says Patel. “But they were not; they remained stable for extended periods.”

    With a material that could potentially defy the laws of physics on their hands, Lee and Patel sent their design off to a collaborator to see if their results were replicable.

    I really don’t like the repeated use of the phrase “defy the laws of physics.” That’s an extraordinary claim, and it needs extraordinary proof; the researchers already propose a mechanism by which the droplets remained stable under existing physical laws, namely that they were being replenished from the nanopores inside the material as fast as evaporation was pulling water out of the droplets.
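
    For reference, the “should have been evaporating” expectation is standard droplet physics rather than an exotic law. The Kelvin equation gives the elevated equilibrium vapor pressure over a convex droplet of radius $r$:

    $$\ln\frac{p}{p_{\text{sat}}} = \frac{2\gamma V_m}{r R T}$$

    where $\gamma$ is water’s surface tension, $V_m$ its molar volume, $R$ the gas constant, and $T$ the temperature. Smaller droplets mean higher vapor pressure, so they should shrink away unless something, like the proposed nanopore feed, replenishes them at least as fast.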

    I recognize the researchers themselves aren’t using the phrase; it’s Penn’s press office trying to further drum up interest in the research. But it’s a bad framing. You can make the work sound interesting without resorting to clickbait like “did our awesome engineers just break the laws of physics??” Hell, the research is interesting enough on its own; passive water collection from the air is revolutionary! No need for editorializing!



  • The main issue is that nobody is going to want to create new content when they get paid nothing or almost nothing for doing so.

    This is the whole reason copyright is supposed to exist. Content creators get exclusive control over the content they create for the duration of the copyright, so they can make a living off of work that then enriches society. And for the further benefit of society, after 14 years this copyright ends and the works become public domain, where anyone can create derivative works that will have copyright on them going to their own creators and the cycle continues, further enriching society.

    Large companies first perverted this by getting Congress to extend the duration of copyright to truly absurd levels so they could continue to extract wealth from works they had to spend very little to maintain (mostly lawyers to enforce their copyrights). Since only they could create derivative works for 100(!) years, they did not have to compete with other creators in society, giving themselves a monopoly on what became cultural icons. Now corporate America has found a way to subvert creation itself, but it requires access to effectively all copyrighted works everywhere, simultaneously. So now they just ignore copyright, since it is impeding their wealth accumulation.

    And so now the creative engine copyright is supposed to foster dies, taking the social enrichment it was designed to facilitate with it. People won’t stop making art or producing what’s supposed to be copyrighted work, but when they can’t make a living on it, they have to turn it into a hobby and spend the bulk of their time and energy on work that will put food on the table.


  • The criminal networks will just immediately switch to VPNs and using end-to-end encryption services hosted in another country. VPN technology for phones is already available and has been for a while. On day one this legislation will be useless for its primary (purported) purpose. No exceptions or winner-choosing necessary.

    Then they’ll go after VPNs, arguing that criminals are using the technology to skirt the law-enforcement backdoor requirements in end-to-end encryption.
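
    To underline how available the tech already is, here’s a minimal end-to-end encryption sketch using the open-source PyNaCl library (my choice for illustration; any libsodium binding works the same way). Nothing in it depends on a provider a national law could compel.

    ```python
    # Minimal end-to-end encryption sketch with PyNaCl (libsodium bindings).
    # Illustrative only: real messengers add key verification, forward secrecy, etc.
    from nacl.public import PrivateKey, Box

    alice_key = PrivateKey.generate()   # Alice's keypair
    bob_key = PrivateKey.generate()     # Bob's keypair

    # Alice encrypts with her private key and Bob's public key;
    # encrypt() picks a random nonce and bundles it with the ciphertext.
    sealed = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

    # Only Bob's private key (plus Alice's public key) can open it;
    # any server in the middle relays opaque bytes.
    assert Box(bob_key, alice_key.public_key).decrypt(sealed) == b"meet at noon"
    ```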




  • People are making fun of the waffling and the apparent indecision and are missing the point. Trump isn’t flailing and trying to figure out how to actually make things work. He’s doing exactly what he intended: he’s holding the US economy for ransom and building a power base among the billionaires.

    He used the poor and ignorant to get control of the public institutions, and now he’s using that power to get control over the private institutions (for-profit companies). He’s building a carbon copy of Russia with himself in the role of Putin. He’s almost there, and it’s taken him 2 months to do it.


  • The author hits on exactly what’s happening with the comparison to carcinisation: crustacean evolution keeps converging on a crab-like form because that’s what the environmental stresses optimize for.

    As tiramichu said in their comment, digital platforms are converging to the same form because they’re optimizing for the same metric. But the reason they’re all optimizing that metric is because their monetization is advertising.

    In the golden days of digital platforms, i.e. the 2010s, everything was venture capital funded. A quality product was the first goal, and monetization would come “eventually.” All of the platforms operated this way. Advertising was discussed as one potential monetization, but others were on the table, too, like the “freemium” model that seemed to work well for Google: provide a basic tier for free that was great in its own right, and then have premium features that power users had to pay for. No one had detailed data for what worked and what didn’t, and how well each model works for a given market, because everything was so new. There were a few one-off success stories, many wild failures from the dotcom crash, but no clear paths to reliable, successful revenue streams.

    Lots of products do now operate with the freemium model, but more and more platforms have moved, and are still moving, to advertising, ultimately because the venture capital firms that initially funded them retain strong control over them and care more about long-term money than a good product. The data is now out there: the advertising model makes so, so much more money than a freemium model ever could in basically any market. So VCs want advertising, so everything is TikTok.



  • The open availability of cutting-edge models creates a multiplier effect, enabling startups, researchers, and developers to build upon sophisticated AI technology without massive capital expenditure. This has accelerated China’s AI capabilities at a pace that has shocked Western observers.

    Didn’t a Google engineer put out a white paper about this around the time Facebook’s original LLM weights leaked? They compared the rate of development of corporate AI groups to that of the open-source community and found there was no possible way the corporate model could keep up if there were even a small investment in the open development model. The open-source community was solving, in weeks, open problems the big companies couldn’t crack in years. I guess China was paying attention.