• 0 Posts
  • 15 Comments
Joined 2 months ago
Cake day: July 7th, 2024

  • Classical computers compute using 0s and 1s, which refer to something physical like voltage levels of 0 V or 3.3 V respectively. Quantum computers also compute using 0s and 1s that refer to something physical, like the spin of an electron, which can only be up or down. Qubits differ, though, because with a classical bit there is just one thing to “look at” (called an “observable”) if you want to know its value. If I want to know whether the voltage level is 0 or 1, I can just take out my multimeter and check. There is just one single observable.

    With a qubit, there are actually three observables: σx, σy, and σz. You can think of a qubit like a sphere that you can measure along its x, y, or z axis. These often correspond to real rotations in real life: for example, you can measure electron spin using a Stern-Gerlach apparatus, and you can measure a different axis by physically rotating the whole apparatus.

    How can a single 0 or 1 be associated with three different observables? Well, the qubit can only hold a single 0 or 1 at a time. Let’s say you measure its value on the z-axis, i.e. you measure σz, and you get 0 or 1; the qubit then ceases to have values for σx or σy. They just don’t exist anymore. If you then go measure, say, σx, you will get something entirely random, and then the value for σz will cease to exist. So a qubit can only hold one bit of information at a time, but measuring it on a different axis will “interfere” with that information.

    It’s thus not possible to actually know the values for all the different observables, because only one exists at a time, but you can still use them in logic gates where the action depends on an axis with no definite value. For example, if you measure a qubit on the σz axis, you can then pass it through a logic gate that will flip a second qubit or not based on whether σx is 0 or 1. Of course, if you measured σz, then σx has no value, so you can’t say whether or not it will flip the other qubit, but you can say that the two would be correlated with one another (if σx is 0 it will not flip it, if it is 1 it will, and thus they are related to one another). This is basically what entanglement is.

    Because you cannot know the outcome when you have interactions like this, you can only model the system probabilistically based on the information you do know, and because measuring a qubit on one axis erases its values on all the others, some information you know about the system can interfere with (cancel out) other information you know about it. Waves can also interfere with each other, and oddly enough it turns out you can model how your predictions of the system evolve over the computation using a wave function, which can then be used to derive a probability distribution over the results.

    What is even more interesting is that if you have a system like this, one you have to model using a wave function, it turns out it can in principle execute certain algorithms exponentially faster than classical computers. So they are definitely nowhere near the same as classical computers. The complexity scales up exponentially when you try to simulate a quantum computer on a classical computer: every additional qubit doubles the complexity, and thus it becomes really difficult to simulate even small numbers of qubits. I built my own simulator in C and it uses 45 gigabytes of RAM to simulate just 16. I think the world record is literally only like 56.
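    To make the scaling concrete, here is a minimal state-vector sketch in C (a bare-bones toy, nothing like a full simulator): an n-qubit state takes 2^n complex amplitudes, so memory doubles with every added qubit, and a simulator that also stores full 2^n-by-2^n operator or density matrices squares that cost, which is presumably where tens-of-gigabytes figures for only 16 qubits come from. The sketch prepares the entangled Bell state (|00⟩ + |11⟩)/√2 with a Hadamard and a CNOT, the kind of correlated pair described above.

    ```c
    /* Toy state-vector demo: prepare a Bell state and print how the
     * memory cost of brute-force simulation scales with qubit count. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <complex.h>
    #include <math.h>

    int main(void) {
        const int n = 2;
        const size_t dim = (size_t)1 << n;      /* 2^n amplitudes */
        double complex *state = calloc(dim, sizeof *state);
        if (!state) return 1;
        state[0] = 1.0;                         /* start in |00> */

        /* Hadamard on qubit 0: mixes amplitude pairs differing in bit 0. */
        const double s = 1.0 / sqrt(2.0);
        for (size_t i = 0; i < dim; i += 2) {
            double complex a = state[i], b = state[i + 1];
            state[i]     = s * (a + b);
            state[i + 1] = s * (a - b);
        }

        /* CNOT (control = qubit 0, target = qubit 1): flip bit 1 when bit 0 is set. */
        for (size_t i = 0; i < dim; ++i) {
            if ((i & 1u) && !(i & 2u)) {
                double complex t = state[i];
                state[i] = state[i | 2u];
                state[i | 2u] = t;
            }
        }

        for (size_t i = 0; i < dim; ++i)
            printf("amp[%zu] = %.3f%+.3fi\n", i, creal(state[i]), cimag(state[i]));

        /* Brute-force cost at double precision: doubles (or squares) per added qubit. */
        for (int q = 16; q <= 48; q += 8)
            printf("%2d qubits: %.1e bytes (state vector), %.1e bytes (dense matrix)\n",
                   q, 16.0 * pow(2.0, q), 16.0 * pow(2.0, 2.0 * q));

        free(state);
        return 0;
    }
    ```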



  • Even if you believe there really exists a “hard problem of consciousness,” even Chalmers admits such a thing would have to be fundamentally unobservable and indistinguishable from something that does not have it (see his p-zombie argument), so it could never be something discovered by the sciences, or something discovered at all. Believing there is something immaterial about consciousness inherently requires an a priori assumption and cannot be something derived from a posteriori observational evidence.


  • bunchberry@lemmy.world to 196@lemmy.blahaj.zone · Rule elitism

    We feel conscious and have an internal experience

    It does not make sense to add the qualifier “internal” unless it is being contrasted with “external.” It makes no sense to say “I’m inside this house” unless you’re contrasting it with being outside the house. Speaking of “internal experience” is a bit odd in my view because it implies there is such a thing as an “external experience.” What would that even be?

    What about the p-zombie, the human person who just doesn’t have an internal experience and just had a set of rules, but acts like every other human?

    The p-zombie argument doesn’t make sense because you can only conceive of things that are remixes of what you’ve seen before. I have never seen a pink elephant, but I’ve seen pink things and I’ve seen elephants, so I can remix them in my mind and imagine it. But if you ask me to imagine an elephant in a color I’ve never seen before? I just can’t do it; I wouldn’t even know what that means. Indeed, a person blind since birth cannot “see” at all, not in their imagination, not even in their dreams.

    The p-zombie argument asks us to conceive of two people who are not observably different in any way yet are still different, because one is lacking some property that the other has. But if you’re claiming you can conceive of this, I just don’t believe you. You’re probably playing some mental tricks on yourself to make yourself think you can conceive of it, but you cannot. If there is nothing observably different about them, then there is nothing conceivably different about them either.

    What about a cat, who apparently has a less complex internal experience, but seems to act like we’d expect if it has something like that? What about a tick, or a louse? What about a water bear? A tree? A paramecium? A bacteria? A computer program?

    This is what Thomas Nagel and David Chalmers ask, and they then settle on “mammals only” because they have an unjustified mammalian bias. Like I said, there is no “internal” experience, there is just experience. Nagel and Chalmers both rely on an unjustified premise that “point-of-view” is unique to mammalian brains: supposedly objective reality is point-of-view independent, and since experience clearly has an aspect of point-of-view, experience must be a product purely of mammalian brains; they then demand the “physicalists” prove how non-experiential reality gives rise to the experiential realm.

    But the entire premise is arbitrary and wrong. Objective reality is not point-of-view independent. In general relativity, reality literally changes depending on your point-of-view. Time passes a bit faster for people standing up than for people sitting down, lengths of rulers can change between observers, and velocities of objects can change as well. Relational quantum mechanics goes even further and shows that all variable properties of particles depend upon point-of-view.

    The idea that objective reality is point-of-view independent is just entirely false. It is point-of-view dependent all the way down. Experience is just objective reality as it actually exists, independent of the observer but dependent upon the point-of-view that they occupy. It has nothing to do with mammalian brains, “consciousness,” or subjectivity. If reality is point-of-view dependent all the way down, then point-of-view cannot even be something unique to intelligent beings, because everything occupies its own unique point-of-view, even a rock. It’s not a byproduct of the “conscious mind” but just a property of objective reality: experience is objective reality independent of the observer, but dependent upon the context of that experience.

    There’s a continuum one could construct that includes all those things and ranks them by how similar their behaviors are to ours, and calls the things close to us conscious and the things farther away not, but the line is ever going to be fuzzy. There’s no categorical difference that separates one end of the spectrum from the other, it’s just about picking where to put the line.

    When you go down this continuum, what gradually disappears is cognition, that is to say, the ability to think about, reflect upon, and be self-aware of one’s point-of-view. The point-of-viewness of reality, or more simply the contextual nature of reality, does not disappear at any point. Only the ability to talk about it disappears. A rock cannot tell you anything about what it’s like to be a rock from its context; it has no ability to reflect upon the point-of-view it occupies.

    You’re right that there is no hard-and-fast line for cognition, but that’s true of anything in nature. There’s no hard-and-fast line for anything. Take a cat, for example: where does the cat begin and end, both in space and in time? Try to create a rigorous definition of its borders; you won’t be able to do it. All our conceptions are human creations and therefore a bit fuzzy. Reality is infinitely complex, and we cannot deal with that infinite complexity all at once, so we break it up into chunks that are easier to work with: cats, dogs, trees, red, blue, hydrogen, helium, etc. But you always find, when you look at these things a little more closely, that their nature as discrete “things” becomes rather fuzzy and disappears.


  • There shouldn’t be a distinction between quantum and non-quantum objects. That’s the mystery. Why can’t large objects exhibit quantum properties?

    What makes quantum mechanics distinct from classical mechanics is the fact that not only are there interference effects, but statistically correlated (i.e. “entangled”) systems can seem to interfere with one another in a way that cannot be explained classically, at least not without superluminal communication or introducing something else strange like negative probabilities.

    If it weren’t for these kinds of interference effects, then we could just chalk quantum randomness up to classical randomness, i.e. it would just be the same as any old form of statistical mechanics. The randomness itself isn’t really that much of a defining feature of quantum mechanics.
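    As a concrete illustration of a correlation that no local classical model reproduces, here is a small sketch of the CHSH quantity using the textbook singlet-state prediction E(a, b) = -cos(a - b); the measurement angles below are just the usual textbook choices. Any local hidden variable model satisfies |S| ≤ 2, while the quantum prediction reaches 2√2.

    ```c
    /* CHSH sketch: quantum singlet correlations exceed the classical bound. */
    #include <stdio.h>
    #include <math.h>

    /* Quantum prediction for the correlation between spin measurements
     * at analyzer angles a and b on a singlet pair. */
    static double E(double a, double b) { return -cos(a - b); }

    int main(void) {
        const double PI = 3.14159265358979323846;
        double a  = 0.0,      ap = PI / 2.0;        /* Alice's two settings */
        double b  = PI / 4.0, bp = 3.0 * PI / 4.0;  /* Bob's two settings   */
        double S  = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp);
        printf("|S| = %.4f (local hidden variable bound: 2, quantum max: %.4f)\n",
               fabs(S), 2.0 * sqrt(2.0));
        return 0;
    }
    ```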

    The reason I say all this is because we actually do know why there is a distinction between quantum and non-quantum objects and why large objects do not exhibit quantum properties. It is a mixture of two factors. First, larger systems like big molecules have smaller wavelengths, so their interference with other molecules becomes harder and harder to detect. Second, there is decoherence: even for small particles, if they interact with a ton of other particles and you average over those interactions, you will find that the interference terms (the “coherences” in the density matrix) converge to zero, i.e. when you inject noise into a system, its average behavior converges to a classical probability distribution.

    Hence, we already know why there is a seeming “transition” from quantum to classical. This doesn’t get rid of the fact that it is still statistical in nature: it doesn’t give you a reason why a particle that has a 50% chance of being over there and a 50% chance of being over here, when you measure it and find it over here, wasn’t over there. Decoherence doesn’t tell you why you actually get the results you do from a measurement; it’s still fundamentally random (which bothers people for some reason?).

    But it is well-understood how quantum probabilities converge to classical probabilities. There have even been studies that have reversed the process of decoherence.
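    For a toy picture of that convergence, here is a small dephasing sketch (an illustrative cartoon, not a model of any particular experiment): a qubit in an even superposition picks up a random phase from its environment, and averaging over many such phase kicks drives the off-diagonal coherence of the density matrix to zero while the diagonal probabilities stay at 1/2, i.e. the average behavior becomes a plain classical probability distribution.

    ```c
    /* Dephasing sketch: average a qubit's off-diagonal density-matrix
     * element over random environmental phase kicks. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <complex.h>
    #include <math.h>

    int main(void) {
        const double PI = 3.14159265358979323846;
        const int samples = 100000;
        double complex coherence = 0.0;   /* accumulated rho_01 element */
        srand(1234);
        for (int k = 0; k < samples; ++k) {
            /* The state (|0> + |1>)/sqrt(2) acquires a random phase phi,
             * so its off-diagonal element is (1/2) e^{i phi}. */
            double phi = 2.0 * PI * rand() / (double)RAND_MAX;
            coherence += 0.5 * cexp(I * phi);
        }
        coherence /= samples;
        printf("averaged |rho_01| = %.5f (diagonal populations stay at 0.5)\n",
               cabs(coherence));
        return 0;
    }
    ```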


  • That’s actually not quite accurate, although that is how it is commonly interpreted. The reason it is not accurate is that Bell’s theorem simply doesn’t show there are no hidden variables, and indeed Bell himself states very clearly what the theorem does prove in the conclusion of his paper.

    In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.[1]

    In other words, you can have hidden variables, but those hidden variables would not be Lorentz invariant. What is Lorentz invariance? To be “invariant” basically means to be absolute, that is to say, unchanging based on reference frame. The term “Lorentz” here refers to Lorentz transformations in Minkowski space, i.e. the four-dimensional spacetime described by special relativity.

    This implies you can actually have hidden variables under one of two conditions:

    1. Those hidden variables are invariant under some framework other than special relativity. That basically means the signals would have to travel faster than light, contradicting special relativity, and you would need to replace it with some other framework.
    2. Those hidden variables are variant, meaning they do indeed change based on reference frame. This would allow local hidden variable theories, and thus would even allow current quantum mechanics to be interpreted as a statistical theory in a more classical sense, as it also evades the PBR theorem.[2]

    The first view is unpopular because special relativity is the basis of quantum field theory, and contradicting it would mean contradicting one of our best theories of nature. There has been some fringe research into ways of reformulating special relativity to make it compatible with invariant hidden variables,[3] but given that quantum mechanics has been around for over a century and nobody has figured this out, I wouldn’t get your hopes up.

    The second view is unpopular because it can be shown to violate a more subtle intuition we all tend to have, one taken for granted so much that I’m not sure there is even a name for it. The intuition is that not only should there be no mathematical contradictions within any single reference frame, so that an observer will never see the laws of physics break down, but there should additionally be no contradictions when all possible reference frames are considered simultaneously.

    It is not physically possible to observe all reference frames simultaneously, and thus one can argue that such an assumption should be abandoned because it is metaphysical and not something you can ever observe in practice.[4] Note that inconsistency between all reference frames considered simultaneously does not mean observers will disagree over the facts, because if one observer asks another for information about a measurement result, they are still acquiring information about that result from their own reference frame, just indirectly, and thus they would never run into a disagreement in practice.

    However, people still tend to find this notion of simultaneous consistency too intuitive to abandon, so the second view remains unpopular and most physicists choose to just interpret quantum mechanics as if there are no hidden variables at all. The case against #1 you can argue is enforced by the evidence, but the case against #2 is more of a philosophical position, so ultimately the view that there are no hidden variables is not “proven” outright, only proven if you accept certain philosophical assumptions.

    There is actually a second way to restore local hidden variables which I did not go into detail on here, which is superdeterminism. Superdeterminism basically argues that if you had not just a theory describing how particles behave now but a more holistic theory that includes the entire initial state of the universe going back to the Big Bang, tracing out how all particles evolved to the state they are in now, you could place restrictions on how that system develops such that it would always reproduce the correlations we see, with hidden variables that are indeed Lorentz invariant.

    The obvious problem, though, is that it would never actually be possible to have such a theory. We cannot know the complete initial configuration of all particles in the universe, so it’s not obvious how you would derive the correlations between particles beforehand. You would instead have to just assume they “know” how to be correlated already, which makes them equivalent to nonlocal hidden variable theories, and thus it is not entirely clear how they could be made Lorentz invariant. I’m not sure anyone has ever put forward a complete model in this framework either; nonlocal hidden variable theories have the same issue.


  • So… there are things that are either within the category of thought or not?

    Objects are in the category of thought but not in some spatial “realm” or “world” of thought. It is definitional, linguistic, not a statement about ontology.

    Is thought mutually exclusive to material? Is thought composed of material or the other way around? Or are they both the same?

    From an a priori standpoint there is no material, there is just reality. Our understanding of material reality comes from an a posteriori standpoint of investigating it, learning about it, forming laws, etc., and we also come to understand thought from an a posteriori lens, as something that can be observed and implemented in other systems.

    Usually thought itself is not even considered as part of the so-called “hard problem” as that’s categorized into the “easy problem.”

    That is the standard definition of idealism, is it not? That existence is immaterial?

    They say existence is “mind,” which includes both thought and experience, both of which they argue are products of the mind, and so if we start off with thought and experience as the foundations of philosophy then we’re never able to leave the mind. That’s how idealism works. The “thought” part is basically the “easy” problem and the “experience” part is what entails the “hard” problem, since even idealists would concede that it is not difficult to conceive of constructing an intelligent machine that can reason, potentially even as well as humans can.



  • bunchberry@lemmy.world to 196@lemmy.blahaj.zone · damn…

    You shouldn’t take it that seriously. MWI has a lot of zealots in the popular media who act like it’s a proven fact, kind of like some String Theorists do, but it is actually rather dubious.

    MWI proponents claim it is simpler because it gets rid of the Born rule and so has fewer assumptions, but the reason the Born rule is in QM is because… well, it’s needed to actually predict the right results. You can’t just throw it out. It’s also impossible to derive the Born rule without some sort of additional assumption, and there is no agreed-upon way to do this.[1]

    This makes MWI actually more complicated than traditional quantum mechanics, because its proponents have to add different arbitrary assumptions and then an additional layer of mathematics to derive the Born rule from them, rather than assuming it. These derivations also tend to be incredibly arbitrary, because the assumptions you have to make are chosen specifically for the purpose of deriving the Born rule and don’t seem to make much sense otherwise, and thus are just as arbitrary as assuming the Born rule directly.[2] [3]
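    For reference, this is all the Born rule itself does (standard textbook quantum mechanics, nothing specific to any interpretation; the example amplitudes below are made up): the wave function supplies complex amplitudes, and the Born rule converts them into measurement probabilities via p_i = |amplitude_i|^2. It is this extra step that MWI has to either assume or re-derive.

    ```c
    /* Born rule sketch: probabilities are squared magnitudes of amplitudes. */
    #include <stdio.h>
    #include <complex.h>
    #include <math.h>

    int main(void) {
        const double PI = 3.14159265358979323846;
        /* Example state: sqrt(1/3)|0> + sqrt(2/3) e^{i*pi/4}|1> */
        double complex amp[2] = {
            sqrt(1.0 / 3.0),
            sqrt(2.0 / 3.0) * cexp(I * PI / 4.0)
        };
        for (int i = 0; i < 2; ++i) {
            double p = cabs(amp[i]) * cabs(amp[i]);   /* Born rule */
            printf("P(%d) = %.4f\n", i, p);
        }
        return 0;
    }
    ```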

    If you prefer a video, the one below discusses various “multiverse” ideas including MWI and also discusses how it ultimately ends up being more mathematically complicated than other interpretations of QM.

    https://www.youtube.com/watch?v=QHa1vbwVaNU

    MWI also makes no sense for a separate reason. Consider the electromagnetic field: how do we know it exists? We know it exists because we can see its effect on particles. If you drop some iron filings around a magnet, they conform to the shape of the field, but ultimately what you are seeing is the iron filings and not the field itself, only the effects of the field. Now, imagine if someone claimed the iron filings don’t even exist, only the field. You’d be a bit confused because, well, you only know the field exists because of its effects on the filings. You can’t see the field, only the particles, so if you deny the particles, then you’re just left in confusion.

    This is effectively what MWI does. We live in a world composed of spacetime containing particles, yet wave functions describe waves made of nothing that exist in an abstract space known as Hilbert space. Schrödinger’s derivation of his famous wave equation was based on observing the behavior of particles. MWI denies that particles even exist: everything is just waves in Hilbert space made of nothing, which is very bizarre, because then you are effectively claiming the entire universe is composed of something entirely invisible. So how does that explain everything we see?

    [I]t does not account, per se, for the phenomenological reality that we actually observe. In order to describe the phenomena that we observe, other mathematical elements are needed besides ψ: the individual variables, like X and P, that we use to describe the world. The Many Worlds interpretation does not explain them clearly. It is not enough to know the ψ wave and Schrödinger’s equation in order to define and use quantum theory: we need to specify an algebra of observables, otherwise we cannot calculate anything and there is no relation with the phenomena of our experience. The role of this algebra of observables, which is extremely clear in other interpretations, is not at all clear in the Many Worlds interpretation.

    --- Carlo Rovelli, Helgoland: Making Sense of the Quantum Revolution

    The philosopher Tim Maudlin has a whole lecture on this problem, which you can watch below, pointing out how MWI makes no sense because nothing in the interpretation corresponds to anything we can actually observe. It quite literally describes a whole universe without observables.

    https://www.youtube.com/watch?v=us7gbWWPUsA

    Not to rain on your parade or anything if you are just having fun, but there is a lot of misinformation on websites like YouTube painting MWI as more reasonable than it actually is, so I just want people to be aware.






  • I have never understood the argument that QM is evidence for a simulation because the universe is using fewer resources, or something like that, by not “rendering” things at that low of a level. The problem is that, yes, it’s probabilistic, but it is not merely probabilistic. We already have probability in classical mechanics, like when dealing with gases in statistical mechanics, and we can model that just fine. Modeling wave functions is far more computationally expensive because they do not even exist in traditional spacetime but in an abstract Hilbert space whose complexity grows exponentially faster than that of classical systems. That’s the whole reason for building quantum computers: it’s so much more computationally expensive to simulate this that it is more efficient just to build a machine that does it directly. The laws of physics at a fundamental level get far more complex and far more computationally expensive, not the reverse.


  • Quantum internet is way overhyped and will likely never exist. Not only are there no practical benefits to using QM for the internet, but it has huge inherent problems that make it unlikely to ever scale.

    • While technically, yes, you can make “unbreakable encryption,” this is just a glorified one-time cipher, which requires the key to be the same length as the message (see the one-time pad sketch after this list), and AES-256 is already considered unbreakable even by quantum computers, so good luck cutting your internet bandwidth in half for purely theoretical benefits that exist on paper but will never be noticeable in practice!
    • Since it’s a symmetric cipher, it doesn’t even work for internet communication unless you have a way to distribute keys, and there is something called quantum key distribution (QKD), based around algorithms like BB84. However, this algorithm only guarantees that nobody can snoop on your key exchange without being detected; it does not actually stop them from snooping on your key, which is what Diffie-Hellman achieves. Meaning, a person can literally shut down the entire network’s traffic just by observing the packets in transit, without having to do anything to them. How could the government and private companies possibly build an internet that requires guaranteeing nobody ever looks at packets as they’re transmitted through the network?
    • QKD is also susceptible to man-in-the-middle attacks just like Diffie-Hellman, a problem we solve in classical cryptography with digital signature algorithms. There are quantum digital signature (QDS) algorithms, but they rely on Holevo’s theorem, which says that the “collapse” is effectively a one-way process and only a limited amount of information can be extracted from it, and thus you cannot derive a qubit’s initial state simply by measuring it. The problem, however, is that Holevo’s theorem also says that if you had tons of copies of the same qubit, you could derive even more information from it. Meaning, all public keys would have to be consumable, because making copies of them would undermine their security, and this makes it just not something that can scale.
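    To illustrate the key-length point from the first bullet, here is a classical one-time pad sketch (nothing quantum about it; the message and key bytes below are made up for illustration): XOR-ing a message with a truly random, never-reused key of the same length is information-theoretically unbreakable, but every byte of message consumes a byte of pre-shared key, and QKD is just a way of distributing that kind of key.

    ```c
    /* One-time pad sketch: encryption and decryption are the same XOR,
     * and the key must be as long as the message. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        unsigned char msg[] = "attack at dawn";
        /* Hypothetical pre-shared random key, same length as the message. */
        unsigned char key[] = { 0x3f, 0xa1, 0x77, 0x02, 0xc9, 0x5e, 0x10,
                                0x8b, 0x64, 0xd2, 0x09, 0xee, 0x41, 0x7c };
        size_t len = strlen((char *)msg);
        unsigned char buf[sizeof msg];

        for (size_t i = 0; i < len; ++i)
            buf[i] = msg[i] ^ key[i];     /* encrypt */
        for (size_t i = 0; i < len; ++i)
            buf[i] ^= key[i];             /* decrypt: XOR with the key again */
        buf[len] = '\0';
        printf("recovered: %s\n", buf);
        return 0;
    }
    ```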

    And all this for what? You take on all these drawbacks for what? Imagined security benefits that you won’t actually notice in real life? The only people I could ever see using this are governments that are hyperparanoid. A government intranet can be highly controlled, highly centralized, and not particularly large-scale, since by its very nature you don’t want many people having access to it. So I could see such a government getting something like that to work, but there would be no reason to replace the internet with it.