• Sekoia@lemmy.blahaj.zone · 2 months ago

    Oh yeah, that. My bad, mixed 'em up.

    The original algorithm doesn’t use entanglement, though! Just the fact that measurement can change the state. You can pick which axis to measure a quantum state along. If you pick two axes at 45° to each other, measuring a state along the “wrong” axis gives a random result, whereas the “right” one always returns the original data.

    So the trick is to have the sender encode their bits into a randomly-picked axis per bit (the quantum states), send the states over, and then the receiver decodes them along a random axis as well. On average, half the axes will match up and those bits will correspond. The other bits are junk (random). They then tell each other the random axes they picked, which identifies the right bits!

    They can then compare a certain number of their “correct” bits: if there’s an eavesdropper, the eavesdropper must have measured along the wrong axis half the time (on average). Measurement collapses the state onto the axis it was measured in, so the receiver gets a random bit instead of the right one half the time. Overall, about 25% of the compared bits mismatch, when they should always agree.
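    Not part of the original comment, but here’s a rough classical simulation of that sifting-and-comparison logic (the function name and bit counts are made up for illustration; no real qubits involved):

    ```python
    import random

    def bb84_trial(n_bits=10000, eavesdrop=False):
        # Alice picks random bits and random axes (0 = rectilinear, 1 = diagonal).
        alice_bits  = [random.randint(0, 1) for _ in range(n_bits)]
        alice_bases = [random.randint(0, 1) for _ in range(n_bits)]
        sent = list(zip(alice_bits, alice_bases))

        if eavesdrop:
            # Eve measures each qubit along a random axis. A wrong axis gives her a
            # random bit, and the state she forwards is re-prepared along *her* axis.
            resent = []
            for bit, basis in sent:
                eve_basis = random.randint(0, 1)
                eve_bit = bit if eve_basis == basis else random.randint(0, 1)
                resent.append((eve_bit, eve_basis))
            sent = resent

        # Bob measures along his own random axes; a mismatched axis gives a random bit.
        bob_bases = [random.randint(0, 1) for _ in range(n_bits)]
        bob_bits = [bit if basis == b else random.randint(0, 1)
                    for (bit, basis), b in zip(sent, bob_bases)]

        # Sifting: keep only the positions where Alice's and Bob's axes matched.
        kept = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
        errors = sum(alice_bits[i] != bob_bits[i] for i in kept)
        return errors / len(kept)

    print(bb84_trial())                # ~0.00 error rate without an eavesdropper
    print(bb84_trial(eavesdrop=True))  # ~0.25 error rate with one
    ```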

    • bunchberry@lemmy.world · 2 months ago

      Entanglement plays a key role.

      Any time you talk about “measurement,” that is just observation, and the result of an observation is to reduce the state vector, which is just a list of complex-valued probability amplitudes. The fact that they are complex numbers is what gives rise to interference effects. When the eavesdropper observes a definite outcome, they no longer need to treat the qubit as probabilistic; they can reduce the state vector by updating their probability for the outcome they saw to simply 100%. The number 100% has no negative or imaginary components, and so it cannot exhibit interference effects.

      It is this loss of interference which is ultimately detectable on the other end. If you apply a Hadamard gate to a qubit, you get a state vector that represents equal probabilities for 0 or 1, but in a way that can still interfere with later interactions. For instance, applying a second Hadamard gate returns it to its original state due to interference. If instead you had a qubit prepared with a 50% probability of being 0 or 1 but without interference terms (coherences), then applying a Hadamard gate would not return it to any definite state; it would just give you a random output.
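      A quick way to see that difference numerically (my own numpy sketch, not from the comment; the variable names are just for illustration):

      ```python
      import numpy as np

      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

      # Pure |0>: two Hadamards bring it back exactly, thanks to interference.
      ket0 = np.array([1.0, 0.0])
      print(H @ H @ ket0)                 # ~[1, 0] -> definitely 0 again

      # Pure superposition H|0> as a density matrix: it has off-diagonal coherences.
      plus = H @ ket0
      rho_pure = np.outer(plus, plus.conj())
      print(H @ rho_pure @ H.conj().T)    # ~[[1, 0], [0, 0]] -> back to |0><0|

      # Same 50/50 statistics but with the coherences gone (a decohered qubit).
      rho_mixed = np.diag([0.5, 0.5])
      print(H @ rho_mixed @ H.conj().T)   # still [[0.5, 0], [0, 0.5]] -> still random
      ```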

      Hence, if qubits have undergone decoherence, i.e., if they have lost their ability to interfere with themselves, this is detectable. The obvious example is the double-slit experiment: the pattern on the screen visibly changes depending on whether the photons can interfere with themselves or not. Quantum key distribution detects whether an observer made a measurement in transit by relying on decoherence. A Hadamard gate is randomly applied to half the qubits and not to the other half, and which is which is not revealed until after the transmission is complete. If the recipient receives a qubit that had a Hadamard gate applied to it, they have to apply it again themselves to cancel it out, but they don’t know which qubits to apply it to until all the qubits have been transmitted and the choices are revealed.

      That means that, at random, half of what they receive they just read as-is, and for the other half they rely on interference effects to move the qubits back into their original states. Anyone who intercepts this by measuring it causes decoherence, so when the recipient applies the Hadamard gate a second time to cancel out the first, they get random noise rather than the original value. The recipient receiving random noise where they should be getting definite values is how you detect an eavesdropper.

      What does this have to do with entanglement? If we only ever talked about “measuring a state,” quantum mechanics would be a rather paradoxical and inconsistent theory. If the eavesdropper measured the state and updated the probability distribution to 100%, and thus destroyed its interference effects, the non-eavesdroppers did not measure the state, so for them it should still be probabilistic, and at face value this seems to imply it should still exhibit interference effects from their perspective.

      A popular way to get around this is to claim that the act of measurement is something “special” which always destroys the quantum probabilities and forces the system into a definite state. That means the moment the eavesdropper makes the measurement, it takes on a definite value for all observers, and the non-eavesdroppers would still describe it as probabilistic only because of their ignorance of the outcome. At that point, it would have a definite value; they just wouldn’t know what it is.

      However, if you believe that, then that is not quantum mechanics and in fact makes entirely different statistical predictions to quantum mechanics. In quantum mechanics, if two systems interact, they become entangled with one another. They still exhibit interference effects as a whole as an entangled system. There is no “special” interaction, such as a measurement, which forces a definite outcome. Indeed, if you try to introduce a “special” interaction, you get different statistical predictions than quantum mechanics actually makes.

      This is because in quantum mechanics, every interaction grows the scale of entanglement, so the interference effects never go away, they just spread out. If you introduce a “special” interaction such as a measurement, whereby things are forced into a definite value for all observers, then you are inherently suggesting there is a limit to this scaling of entanglement: some cut-off point past which interference effects can no longer be scaled up. Because we can detect whether a system exhibits interference effects or not (that’s what quantum key distribution is based on), such an alternative theory (called an objective collapse model) would necessarily have to differ from quantum mechanics in its numerical predictions.

      The actual answer to this seeming paradox is provided by quantum mechanics itself: entanglement. When the eavesdropper observes the qubit in transit, then from the perspective of the non-eavesdroppers, the eavesdropper becomes entangled with the qubit. It is then no longer valid in quantum mechanics to assign a state vector to the eavesdropper and the qubit separately, only to the two of them together as an entangled system. However, the recipient does not receive both the qubit and the eavesdropper; they only receive the qubit. If they want to know how the qubit behaves, they have to do a partial trace to trace out (ignore) the eavesdropper, and when they do this, they find that the qubit’s state is still probabilistic, but it is a probability distribution with only terms between 0% and 100%, that is to say, no negatives or imaginary components, and thus it cannot exhibit interference effects.
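      To make that partial-trace step concrete, here is a small numpy sketch of my own (the state and labels are illustrative, not from the comment): the eavesdropper and the qubit end up in an entangled state like (|0⟩|saw 0⟩ − |1⟩|saw 1⟩)/√2, and tracing the eavesdropper out leaves the qubit with a diagonal density matrix, i.e. no coherences.

      ```python
      import numpy as np

      R = 1 / np.sqrt(2)

      # The qubit alone, before Eve: a superposition with off-diagonal coherences.
      psi_qubit = np.array([R, -R])
      print(np.outer(psi_qubit, psi_qubit.conj()))
      # [[ 0.5 -0.5]
      #  [-0.5  0.5]]   <- off-diagonal terms: interference is still possible

      # Qubit and Eve after her measurement: R|0>|Eve saw 0> - R|1>|Eve saw 1>.
      psi_joint = R * np.kron([1, 0], [1, 0]) - R * np.kron([0, 1], [0, 1])
      rho_joint = np.outer(psi_joint, psi_joint.conj()).reshape(2, 2, 2, 2)

      # Partial trace over Eve: sum over her index to get the qubit's reduced state.
      rho_qubit = np.einsum('ikjk->ij', rho_joint)
      print(rho_qubit)
      # [[0.5 0. ]
      #  [0.  0.5]]   <- coherences gone: a plain classical coin flip
      ```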

      Quantum key distribution does indeed rely on entanglement, because you cannot describe the algorithm consistently from all reference frames (within the framework of quantum mechanics, without implicitly abandoning it for an objective collapse theory) without taking entanglement into account. As I started with, the reduction of the wave function, which is a first-person description of an interaction (two systems interacting, one of them an observer describing the other), leads to decoherence. The third-person description of an interaction (three systems, one on the “outside” describing the other two interacting) is entanglement, and this also leads to decoherence.

      You even say that “measurement changes the state”, but how do you derive that without entanglement? It is entanglement between the eavesdropper and the qubit that leads to a change in the reduced density matrix of the qubit on its own.

      • Sekoia@lemmy.blahaj.zone · 2 months ago

        … what you said is correct, but that’s superposition, not entanglement. Entanglement is when you create a joint state of several qubits that cannot be decomposed into a tensor product of single-particle states (one state per proton/photon/whatever).
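        (For two qubits you can check that distinction numerically; this is my own sketch, not part of the comment. A two-qubit state factors into a tensor product exactly when the 2×2 grid of its amplitudes has matrix rank 1.)

        ```python
        import numpy as np

        R = 1 / np.sqrt(2)

        def schmidt_rank(two_qubit_state):
            # |psi> = sum_ij c_ij |i>|j> is a tensor product of single-qubit states
            # exactly when the 2x2 amplitude matrix c_ij has rank 1.
            return np.linalg.matrix_rank(np.reshape(two_qubit_state, (2, 2)))

        # A superposition that still factors: (|0> + |1>)/sqrt(2) tensor |0>
        print(schmidt_rank(np.kron([R, R], [1, 0])))   # 1 -> separable

        # A Bell state: (|00> + |11>)/sqrt(2) -- no product decomposition exists
        print(schmidt_rank(np.array([R, 0, 0, R])))    # 2 -> entangled
        ```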

        • bunchberry@lemmy.world · edited · 2 months ago

          I am factually correct, I am not here to “debate,” I am telling you how the theory works. When two systems interact such that they become statistically correlated with one another and knowing the state of one tells you the state of the other, it is no longer valid to assign a state vector to the individual subsystems that took part in the interaction; you have to assign it to the system as a whole. When you do a partial trace to get a reduced density matrix for either subsystem on its own, if the two are perfectly entangled, you end up with a density matrix without coherence terms and thus without interference effects.

          This is absolutely entanglement, this is what entanglement is. I am not misunderstanding what entanglement is; if you think what I have described here is not entanglement but a superposition of states, then you don’t know what a superposition of states is. Yes, an entangled state would be in a superposition of states, but it would be a superposition that can only be ascribed to the two correlated systems together and not to the individual subsystems.

          Let’s say R = 1/sqrt(2) and Alice sends Bob a qubit. If the qubit has a probability of 1 of being the value 1 and Alice applies the Hadamard gate, it changes to an amplitude of R for being 0 and an amplitude of -R for being 1. In this state, if Bob were to apply a second Hadamard gate, it would undo the first Hadamard gate, and so the qubit would again have a probability of 1 of being the value 1, due to interference effects.

          However, if an eavesdropper, let’s call them Eve, measures the qubit in transit, then because R and -R are equal distances from the origin, it would have an equal chance of being 0 or 1. Let’s say it’s 1. From Eve’s point of view, they would then update their probability distribution to a probability of 1 of the value being 1 and send the qubit off to Bob. When Bob applies the second Hadamard gate, the qubit would then have an amplitude of R for being 0 and an amplitude of -R for being 1, and thus what should’ve been deterministic is now random noise for Bob.

          Yet, this description only works from Eve’s point of view. From Alice and Bob’s point of view, neither of them measured the particle in transit, so when Bob receives it, it is still probabilistic with an equal chance of being 0 and 1. So why does Bob still predict that interference effects will be lost if it is still probabilistic for him?

          Because when Eve interacts with the qubit, from Alice and Bob’s perspective, it is no longer valid to assign a state vector to the qubit on its own. Eve and the qubit become correlated with one another. For Eve to know the particle’s state, there has to be some correlation between something in Eve’s brain (or, more directly, her measuring device) and the state of the particle. They are thus entangled with one another and Alice and Bob would have to assign the state vector to Eve and the qubit taken together and not to the individual parts.

          Eve and the qubit taken together would have an amplitude of R for the qubit being 0 and Eve knowing the qubit is 0, and an amplitude of -R for the qubit being 1 and Eve knowing the qubit is 1. There are still interference effects, but only for the whole system taken together. Yet Bob does not receive Eve and the qubit taken together. He receives only the qubit, so this distribution is no longer applicable to the qubit on its own.

          He instead has to do a partial trace to trace out (ignore) Eve from the equation to know how his qubit alone would behave. When he does this, he finds that the probability distribution has changed to 0.5 for 0 and 0.5 for 1. In the density matrix representation, you will see that the density matrix has all zeroes for the coherences. This is a classical probability distribution, something that cannot exhibit interference effects.
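          Not from the comment, but putting that whole example into numbers with numpy (I model Eve’s measurement as a CNOT-style copy of the qubit’s value onto her own register, which is the entangling step standing in for her observation):

          ```python
          import numpy as np

          R = 1 / np.sqrt(2)
          H = np.array([[1, 1], [1, -1]]) * R
          ket1 = np.array([0.0, 1.0])

          # Without Eve: Alice applies H to |1>, Bob applies H again -> certainly 1.
          print(H @ H @ ket1)                   # ~[0, 1]

          # With Eve: she copies the qubit's value onto her own qubit (a CNOT).
          CNOT = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0],
                           [0, 0, 0, 1],
                           [0, 0, 1, 0]])
          joint = CNOT @ np.kron(H @ ket1, [1, 0])   # qubit (x) Eve, Eve starts in |0>

          # Bob only ever receives the qubit: trace Eve out for its reduced state.
          rho = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)
          rho_qubit = np.einsum('ikjk->ij', rho)
          print(rho_qubit)                      # [[0.5, 0], [0, 0.5]]: coherences gone

          # Bob's second Hadamard no longer restores anything: still a 50/50 coin flip.
          print(H @ rho_qubit @ H.conj().T)     # still [[0.5, 0], [0, 0.5]]
          ```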

          Bob simply cannot explain why his qubit loses its interference effects when Eve measures it without taking entanglement into account, at least within the framework of quantum theory. That is just how the theory works. The explanation from Eve’s perspective simply does not work for Bob in quantum mechanics. A model in which the state vector is reduced simultaneously across different perspectives is known as an objective collapse model and makes different statistical predictions than quantum mechanics. It would not merely be an alternative interpretation but an alternative theory.

          Eve explains the loss of coherence by her reducing the state vector after seeing a definite outcome for the qubit. Bob explains it by Eve becoming entangled with the qubit, which leads to decoherence: doing a partial trace to trace out (ignore) Eve gives a reduced density matrix for the qubit whose coherence terms are zero.