E-skating cyclist, gamer, and enjoyer of anime. Probably an artist. Also I code sometimes, pretty much just to mod Titanfall 2 though.

Introverted, yet I enjoy discussion to a fault.

  • 674 Posts
  • 1.08K Comments
Joined 2 years ago
Cake day: June 13th, 2023

  • No, you’re right. He is in his 20s. But yeah, he’s fully Laconian. He literally had never known any other society. It’s why he drank the Laconian Kool-Aid with such fervor compared to other POV Laconians like Tanaka.

    That’s also why he was “tested”. He was put in a position of power to see whether Laconia could produce competent personnel. Duarte was fully aware Santiago was underqualified. He was sent out to see whether Laconian-born personnel could snap out of believing their own propaganda and become effective, self-aware decision-makers.

    Duarte essentially wanted to put Santiago through a trial by fire in order to produce competent people on par with MCRN vets like Tanaka or Trejo.

    That’s why Tanaka, and later Overstreet, had orders giving them the real authority on Medina. They were to let Santiago play at being the leader unless he fucked up.

    Which he did, showing that, in isolation, Laconian culture had leaned too far into zealotry and sycophancy to produce effective individuals.

  • Superpowered lying is already a thing, and all we needed was demographic data and context control.

    Today, it is possible to get a population to believe almost anything. Show them the right argument, at the right time, in the right context, and they believe it. Facebook and Google have scaled exactly that up into their main sources of revenue.

    Same goes for attention hacking. AI-generated content designed to hook viewers functions in entirely predictable and fairly well-understood ways. And the same goes for the algorithms that “recommend” additional content based on what someone is watching.

    As for why doctors can’t do things AIs are pulling off, I’d suggest that’s because current systems are using indicators we don’t know about, which the systems aren’t sentient enough to explain. If they could, I have no doubt a human doctor, given enough time, could learn about and detect such indicators.

    There is no evidence that what these models are doing is “beyond our scale of thinking”.

    But again, I do think the machine will be faster.

    Current models display “emergent capabilities”, meaning abilities we didn’t anticipate before the model was created and tested. But once a model exists, we can and do figure out what it is doing and how.

  • Logic is logic. There is no “advanced” logic that somehow allows you to decipher aspects of reality you otherwise could not. Humanity has yet to encounter anything that cannot be consistently explained in more and more detail as we investigate it further.

    We can and do answer complex questions. The fact that human society is too disorganized to disseminate the answers we do have, and to act on them at scale, isn’t going to be changed by explaining the same thing slightly better.

    Imagine trying to argue against a perfect proof. Take something as basic as 1 + 1 = 2. Now imagine an argument for something much more complex - like a definitive answer to climate change, or consciousness, or free will - delivered with the same kind of clarity and irrefutability.

    Absolutely nothing about humans makes me think we are incapable of finding such answers on our own. And if we are genuinely incapable of developing a definitive answer on something, I’m more inclined to believe there isn’t one than to assume we are simply too “small-minded” to find an answer that would be obvious to the hypothetical superintelligence.

    > But precision of thought orders of magnitude beyond our own.

    This is just the “God doesn’t need to make sense to us; his thoughts are beyond our comprehension” argument, again.

    > Just like a five-year-old thinks they understand what it means to be an adult - until they grow up and realize they had no idea.

    They don’t know because we don’t tell them. Children in adverse conditions are perfectly capable of understanding the realities of survival.

    You are using the fact that there are things we don’t understand yet as if it were proof that there are things we can’t ever understand, or eventually figure out on our own.

    That non-sentients cannot comprehend sentience (ants and humans) has absolutely no bearing on whether sentients are able to comprehend other sentients (humans and machine intelligences).

    I think machine thinking, in contrast to the human mind, will just be a faster processor of logic.

    There is absolutely nothing stopping the weakest modern CPU from running the exact same code as the fastest modern CPU. The only difference will be the rate at which the work is completed.
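    This point about hardware speed versus program logic can be sketched in a few lines. The snippet below is a hypothetical illustration (not from the original comment): a deterministic function whose result is fixed by its logic alone, while only the measured wall-clock time varies between a slow and a fast machine.

    ```python
    import time

    def nth_prime(n):
        """Return the n-th prime by trial division -- pure logic, no hardware dependence."""
        count, candidate = 0, 1
        while count < n:
            candidate += 1
            if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
                count += 1
        return candidate

    start = time.perf_counter()
    result = nth_prime(1000)
    elapsed = time.perf_counter() - start

    # `result` is identical on the weakest and the fastest CPU;
    # only `elapsed` changes with the machine running the code.
    print(result, elapsed)
    ```

    Run the same script on a ten-year-old laptop and a current workstation: the printed prime never changes, only the timing does.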

  • This is the same logic people apply to God being incomprehensible.

    Are you suggesting that if such a thing can be built, its word should be gospel, even if it is impossible for us to understand the logic behind it?

    I don’t subscribe to this. Logic is logic. You don’t need a new paradigm of mind to explore all conclusions that exist. If something cannot be explained and comprehended, transmitted from one sentient mind to another, then it didn’t make sense in the first place.

    And you might bring up some of the stuff AI has done in materials science as an example of it doing things human thinking cannot. But that’s not some new kind of thinking. Once the molecular or material structure was found, humans were perfectly capable of comprehending it.

    All it’s doing is exploring the conclusions that exist, faster. And when it comes to societal challenges, I don’t think it’s going to find some win-win solution we just haven’t thought of. That’s a level of optimism I would consider insane.