Hopefully by this stage you will have read the previous pages on Quantum Mechanics: An Introduction, The Quantum Casino, and Quantum Entanglement. On those pages several weird and puzzling questions were raised. On this page we start to see some answers.
In the previous page on Quantum Entanglement we saw that it is not possible to separate an object being measured (observed) from the apparatus performing the measurement. This is clearly shown in the quantum world where you cannot divorce the property you are trying to measure from the type of observation you make: the property is dependent on the measurement.
It's actually quite like a rainbow. When a person looks at a rainbow he sees it starting in a certain position and ending in a certain position. However, when a second person - who is standing in another place - looks at the rainbow he will see it starting and ending in a completely different spot. So the two people are effectively seeing different rainbows, with different starting and ending positions. That's why you will never find a pot of gold at the end of a rainbow!
How is this possible? It's possible because you have to consider the rainbow and the observer as a single system. Actually, this is true of all measurements and observations: you really cannot separate the object being measured from the device performing the measurement or observation - you have to consider them as a single system. For example, when you take the temperature of an object using a thermometer, you have to remove a very small sample of heat from the object. The measuring device has altered the object - the two entities are not separate. The object and the device performing the measurement are bound together as a single system.
This idea of the observed object and the observer being bound together as a single system will be shown to be the key to providing the mechanism for the apparent "collapse of the wavefunction". Because, in reality, a quantum particle is rarely completely isolated from its environment. Rather, the particle and the environment are bound together as one system. Not only do we have to consider the effect of the measured object on the measuring device, but we also have to consider the effect of the measuring device (and its environment) on the observed object.
How the environment eliminates interference effects
In the page on The Quantum Casino we saw that when a measurement of an observable is performed, the quantum state appears to "jump" to a particular eigenstate (with the observable taking the associated eigenvalue). This apparent jumping puzzled physicists for many years because it was not understood how or why the smooth, linear time-evolution governed by the Schrödinger equation should suddenly be interrupted by a discontinuous jump.
Also, as a quantum state can be viewed as a superposition of many other states, the question can also be asked as to why we never see these other states in macroscopic objects. For example, why is Schrödinger's cat never seen as being both alive and dead at the same time?
However, in the double-slit experiment we do see the other states of the superposition, as they provide constructive and destructive interference effects (see Quantum Mechanics: An Introduction). Why do these so-called interference states appear in the double-slit experiment but apparently vanish in macroscopic objects?
Let's remind ourselves of how a quantum state can be expressed as a linear combination of component eigenstates (this was considered in the page on The Quantum Casino):

|ψ⟩ = a₁|φ₁⟩ + a₂|φ₂⟩ + … + aₙ|φₙ⟩

where the aᵢ are (in general complex) expansion coefficients.
Now here is the absolutely key point: every component eigenstate has an associated phase (this was considered back in The Quantum Casino). It is this phase which gives the wavefunction its "wavelike" character (in complex space, remember). In order for the components to combine together correctly to produce a superposition state, they must be in the same phase (must be coherent). This is what happens in the double-slit experiment: interference components possessing the same phase combine to produce the interference effects.
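The role of phase coherence can be made concrete with a small sketch. Here is a minimal Python example (the function name and the amplitude values are illustrative, not from the original text): two paths of equal amplitude combine, and the detection probability depends entirely on their relative phase.

```python
import numpy as np

# Two coherent paths (think: the two slits), each with amplitude 0.5,
# differing only by a relative phase phi. The detection probability is
# |a1 + a2|^2 - the cross term between the amplitudes is the interference.
def detection_probability(phi):
    a1 = 0.5                        # amplitude via path 1
    a2 = 0.5 * np.exp(1j * phi)     # amplitude via path 2, phase-shifted
    return abs(a1 + a2) ** 2

print(detection_probability(0.0))   # in phase (coherent): fully constructive
print(detection_probability(np.pi)) # out of phase: fully destructive, ~0
```

When the phases are locked together the amplitudes reinforce or cancel; if the relative phase were random from one particle to the next, the cross term would average away - which is exactly the decoherence story told below.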
As explained in the "rainbow" example at the top of this page, we cannot separate an observed object from the observer: we have to treat the resultant combined system as one system. What happens in the real world is that a particle is not perfectly isolated: a particle inevitably interacts with the environment. These interactions have the effect of the particle "being observed" by the environment - the "environment" might very well be a man-made measuring device, for example. (For a technical discussion about the measured system and the measuring apparatus acting as one system, see the paper by Erich Joos Elements of Environmental Decoherence).
What happens to a quantum particle in the real world is that each of its component states gets entangled (separately) with different aspects of its environment. As seen in the page on Quantum Entanglement, when particles become entangled you have to consider them as one single, entangled state (you use the tensor product to calculate the resultant state). So each component of our quantum particle forms separate entangled states. The phases of these states will be altered. This destroys the coherent phase relationships between the components. The components are said to decohere.
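This "each component gets separately entangled with the environment" step can be sketched numerically. The following Python toy model (my own construction, not from the original article) uses a single "environment" qubit and a CNOT-style interaction to play the role of the environment recording the system's state; tracing the environment out shows the system's coherence terms vanish.

```python
import numpy as np

# A system qubit in an equal superposition, and an "environment" qubit in |0>.
plus = np.array([1, 1]) / np.sqrt(2)
env = np.array([1, 0])
psi = np.kron(plus, env)            # product state: system not yet "observed"

# Reduced (system-only) density matrix: trace out the environment qubit.
def reduced(state):
    rho = np.outer(state, state.conj())
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Before any interaction the system is fully coherent (off-diagonals 0.5).
rho_before = reduced(psi)

# Now let the environment "record" the system's state - a CNOT-style
# interaction entangles each component with a different environment state.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
rho_after = reduced(CNOT @ psi)

print(rho_before.round(2))   # off-diagonal 0.5 terms: interference possible
print(rho_after.round(2))    # off-diagonals gone: phase relationship destroyed
```

One environment qubit is of course a caricature - in the real world billions of such records form, which is why the process is so fast and so irreversible - but the mechanism is the same.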
If a particle interacts with just a single photon, for example, then the two particles will enter an entangled state, and that will be enough to trigger the onset of decoherence (a single photon entering the double-slit experiment, for example, will be enough to destroy the interference pattern). However, for all interference effects to disappear, the particle must have a macroscopic (rather than a microscopic) effect, forming entanglements with billions of particles in, say, a Geiger counter. This is described in the book Quantum Enigma: "Whenever any property of a microscopic object affects a macroscopic object, that property is 'observed' and becomes a physical reality". In that case, if there are no longer any interference terms then to all intents and purposes the particle is now in a single quantum state - one of the component eigenstates.
(In the page on Quantum Entanglement it was shown how the dimensionality of the Hilbert state space increases rapidly with each entanglement, thus further reducing the chance of coherent interference effects - see here)
Note that the interference components do not actually disappear - because they are out of phase we just don't notice them at the macroscopic level. In fact, they just get dissipated out into the wider environment. I always imagine them as little ripples in the ocean - we only ever notice the big (macroscopic) waves in the ocean. The little ripples get entangled with other little ripples until it is impossible to tell from which big wave each little ripple came.
Imagine you throw a rock in the sea off the coast of the United Kingdom. After the initial big splash, the ripples dissipate and apparently disappear. But of course, they haven't really disappeared. The ripples have decreased in size, and they have mixed and interfered with other waves, but they have not disappeared. Two weeks later, on the rocky shore of Tierra del Fuego off the Argentinian coast, one of the small waves washing to shore is maybe an imperceptible fraction of one micron higher because of that rock you threw.
So the ripples (interference terms) do not actually disappear. They dissipate into the environment and become effectively undetectable. And it's certainly not possible to associate the microscopic change in the height of the wave in Tierra del Fuego with the rock you threw - there have been so many interactions with other waves along the way. In this sense, the process of decoherence is irreversible - and that's a key feature of decoherence: we can't reverse the process (to regenerate the initial interference components) - they're gone for good. And even the "little ripple" echoes of the interference effects have become imperceptible due to interactions with the environment. Then, for all intents and purposes, the interference effects (ripples) have completely disappeared.
At last we seem to have found the mechanism behind the disappearance of the interference effects, the truth behind the mysterious "collapse of the wavefunction".
Decoherence, then, is not a sudden "jumping" effect. Rather, the interference terms disappear due to the progressive influence of billions of particles (and associated entanglements) as a particle passes through our measuring apparatus. So there is a progressive filtering (of the interference terms) and amplification (of the eventual measurement - Bohr referred to an "irreversible act of amplification"). As Brian Greene explains in his book The Fabric of the Cosmos: "Decoherence forces much of the weirdness of quantum physics to 'leak' from large objects since, bit by bit, the quantum weirdness is carried away by the innumerable impinging particles from the environment."
However, the decoherence process fooled physicists for many years because it is such an efficient process - decoherence happens so fast (in the region of 10⁻²⁷ seconds!) - giving a false impression of a discontinuous, instantaneous quantum "jump". Recent experiments, though, have managed to delay decoherence by decoupling quantum particles from their environment. If decoherence is delayed then the superposition states become evident. As an example, an electric current has been made to flow in opposite directions at the same time by using a superconducting ring (see this article by Tony Leggett which considers the effect of decoherence on Schrödinger's cat, and this New Scientist article).
The reason we never see Schrödinger's cat both dead and alive at the same time is that decoherence takes place within the box long before we open it: the device housing the radioactive nucleus and the poison forms a macroscopic environment immediately surrounding the nucleus.
So why does the electron in the double-slit experiment still show interference effects? Why does it not decohere? The answer is that it is not a macroscopic object: it is an isolated microscopic object. While decoherence happens extraordinarily fast for macroscopic objects, for an isolated electron the decoherence time (the so-called coefficient fluctuation time) is about 10⁷ seconds - months - plenty of time to perform the double-slit experiment and see interference effects.
For a clear explanation of decoherence, I can recommend Chapter 7 of Brian Greene's book The Fabric of the Cosmos.
Decoherence and Entropy
There is a parallel here with thermodynamic behaviour, considering the statistical movement of particles under heat. The second law of thermodynamics says that the amount of entropy in a closed system (the amount of disorder, basically) will always increase (see the Arrow of Time page for a full discussion of entropy). For example, if we have a sample of gas in a closed container in a corner of a room, when we open the container the gas will spread throughout the room. Eventually we reach a state in which the molecules of the gas are distributed completely randomly throughout the room. This state is called thermal equilibrium: the state of maximum entropy (disorder).
The dissipation of the interference components into the environment during decoherence behaves in a similar way. The environment can be considered a heat bath into which the interference terms spread and become completely disordered. At that point, the process is said to be thermodynamically irreversible: interference is gone for good.

The "collapse of the wavefunction" is like a snooker break-off shot. Imagine each ball represents an interference term of the quantum state. Before the shot (before we make a quantum observation), we see low entropy - everything is nicely ordered. All the interference terms are coherent, and capable of producing interference patterns.

After the shot, the system of balls represents a system with greatly-increased entropy (disorder). This is what happens when we make a quantum observation: interference terms dissipate into the "heat bath" environment, and all coherence is lost in the confused mess. The situation is now one of thermodynamic irreversibility: it is extremely unlikely the original ordered situation could re-form itself. We therefore only see the collapse of the wavefunction operating in the forward time direction (for the same reason we don't see broken eggs mending themselves).
See the Arrow of Time page for a full discussion about entropy and the thermodynamic arrow of time.
Decoherence in an Ensemble of Particles
So decoherence solves the mystery of apparent wavefunction collapse, and also explains why we do not see superposition states in macroscopic objects, but it does not explain which particular eigenstate is selected during the "measurement" process. As explained in the page on The Quantum Casino, the selection of a particular eigenstate is governed by a purely probabilistic process, so in order to analyse this probabilistic behaviour we should consider a number of quantum particles in a similar state (called an ensemble), and then we can use a useful statistical tool called a density matrix.
Let us consider an ensemble of particles in a box. The whole box can then be treated as a single quantum system. When we extract a particle from the box and measure it we find it to be either "blue" or "green", say. Before measurement, this system can be in one of two kinds of state (see this paper by John Boccio):
- A pure state - each of the particles is in the same state, with the same state vector. For example, let's suppose all the particles in our box are in the same superposition state before measuring: an equal superposition of blue and green.
- A mixed state - the particles are all in different states and so the entire system cannot be described by a single state vector (i.e., it is not a pure state). Basically, if a system is not in a pure state then it must be in a mixed state. For example, the particles are either blue or green (in this case, this is just a classical mix of blue and green particles).
When we extract particles from the box one-by-one and measure each particle to determine if it is "blue" or "green" we find that, in both the case of the pure state and the mixed state, 50% of the particles measure as "blue" and 50% of the particles measure as "green". So, after measurement, the two quantum systems appear to be identical. However, the states of the two systems before measurement were clearly different: the pure state could be described by a single state vector (all of the particles were in the same superposition state). The mixed state, on the other hand, could not be described by a single state vector (because the particles were not all in the same state: some were green, some were blue). The statistical properties of both systems before measurement, however, could be described by a density matrix. So for an ensemble system such as this the density matrix is a better representation of the state of the system than the state vector.
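The distinction can be checked numerically. Here is a short Python sketch (the "blue"/"green" basis labels are just the illustrative names from the text): the pure and mixed states have identical diagonals, so identical measurement statistics, but only the pure state retains off-diagonal coherence terms.

```python
import numpy as np

# Pure state: every particle carries the SAME equal superposition of
# "blue" and "green".
psi = np.array([1, 1]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# Mixed state: a classical 50/50 mixture - half the particles are blue,
# half are green, with no superposition anywhere.
blue = np.array([1, 0])
green = np.array([0, 1])
rho_mixed = 0.5 * np.outer(blue, blue) + 0.5 * np.outer(green, green)

print(np.diag(rho_pure))    # [0.5 0.5] - 50/50 measurement outcomes
print(np.diag(rho_mixed))   # [0.5 0.5] - identical measurement outcomes...
print(rho_pure[0, 1], rho_mixed[0, 1])  # ...but only the pure state keeps
                                        # the off-diagonal (coherence) terms
```

This is exactly why the density matrix, rather than a state vector, is the right tool for an ensemble: it captures both cases in one object.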
So how do we calculate the density matrix? The density matrix is defined as the weighted sum of the outer (tensor) products over all the different states (see the page on The Quantum Casino for a description of the tensor product):

ρ = p|ψ₁⟩⟨ψ₁| + q|ψ₂⟩⟨ψ₂| + …

where p and q refer to the relative probability of each state. For the example of particles in a box, p would represent the proportion of particles in state |ψ₁⟩, and q would represent the proportion of particles in state |ψ₂⟩.
Let's imagine we have a number of qubits in a box (these can take the value |0⟩ or |1⟩ - see the page on Quantum Entanglement). Let's say all the qubits are in the same superposition of |0⟩ and |1⟩ (for concreteness, take |ψ⟩ = 0.6|0⟩ + 0.8i|1⟩, consistent with the probabilities quoted below).

In other words, the ensemble system is in a pure state, with all of the particles in an identical quantum superposition of states |0⟩ and |1⟩. As we are dealing with a single, pure state, the construction of the density matrix is particularly simple: we have a single probability p, which is equal to 1.0 (certainty), while q (and all the other probabilities) are equal to zero. The density matrix then simplifies to the single outer product:

ρ = |ψ⟩⟨ψ|
This state can be written as a column ("ket") vector (see back to the page on The Quantum Casino for a discussion of bra-ket notation). Note the imaginary component (the expansion coefficients are in general complex numbers):

|ψ⟩ = ( 0.6 , 0.8i )ᵀ

In order to generate the density matrix we need to use the Hermitian conjugate (or adjoint) of this column vector (the transpose of the complex conjugate of |ψ⟩). So in this case the adjoint is the following row ("bra") vector:

⟨ψ| = ( 0.6 , −0.8i )

So, in this case, we can calculate the density matrix as the single outer product:

ρ = |ψ⟩⟨ψ| = ⎛ 0.36    −0.48i ⎞
             ⎝ 0.48i    0.64  ⎠
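The outer-product construction is easy to verify numerically. In this Python sketch the amplitudes 0.6 and 0.8i are hypothetical values chosen so that the diagonal reproduces the 0.36/0.64 probabilities quoted in the text (the sign of the imaginary part is an assumption):

```python
import numpy as np

# Assumed state: |psi> = 0.6|0> + 0.8i|1> - amplitudes chosen to match the
# 0.36 / 0.64 probabilities discussed in the text.
ket = np.array([0.6, 0.8j])
bra = ket.conj()            # Hermitian conjugate: the "bra" row vector

rho = np.outer(ket, bra)    # the density matrix |psi><psi|
print(rho)
# Diagonal:      0.36 and 0.64 - the measurement probabilities.
# Off-diagonal: -0.48i and +0.48i - the phase-carrying interference terms.
```

Note that the trace (the sum of the diagonal probabilities) comes to exactly 1, as it must for any properly normalised density matrix.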
What does this density matrix tell us about the statistical properties of our pure state ensemble quantum system? For a start, the diagonal elements tell us the probabilities of finding a particle in the |0⟩ or |1⟩ eigenstate. For example, the 0.36 component informs us that there is a 36% probability of the particle being found in the |0⟩ state after measurement. Of course, that leaves a 64% chance that the particle will be found in the |1⟩ state (the 0.64 component).
Because of the way the density matrix is calculated, the diagonal elements can never have imaginary components (this is similar to the way the eigenvalues are always real - see back to the page on The Quantum Casino). However, the off-diagonal terms can have imaginary components (as shown in the above example). These imaginary components have an associated phase (complex numbers can be written in polar form). It is the phase differences of these off-diagonal elements which produce interference (for more details, see this extract from the book Quantum Mechanics Demystified here).
So how do the off-diagonal elements (and related interference effects) vanish during decoherence?
The off-diagonal (complex) terms have a completely unknown relative phase factor which must be averaged over during any calculation, since it is different for each separate measurement (each particle in the ensemble). As the phases of these terms are not correlated (not coherent), the sums cancel out to zero. The matrix becomes diagonalised (all off-diagonal terms become zero). Interference effects vanish. The quantum state of the ensemble system is then apparently "forced" into one of the diagonal eigenstates, with the probability of a particular eigenstate selection predicted by the value of the corresponding diagonal element of the density matrix.
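This phase averaging can be demonstrated directly. The Python sketch below (my own illustration; the 0.6/0.8 amplitudes are assumed values matching the 0.36/0.64 probabilities in the text) gives each member of the ensemble a random, uncorrelated phase and averages the resulting density matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Density matrix of one particle whose second component has picked up an
# uncontrolled phase phi through entanglement with the environment.
def rho_with_phase(phi):
    ket = np.array([0.6, 0.8 * np.exp(1j * phi)])
    return np.outer(ket, ket.conj())

# Average over the ensemble: each particle carries a different random phase.
phases = rng.uniform(0, 2 * np.pi, 100_000)
rho_avg = np.mean([rho_with_phase(p) for p in phases], axis=0)

print(rho_avg.round(2))
# The diagonal (0.36, 0.64) survives, but the off-diagonal terms average
# away to ~0: the matrix is diagonalised and interference has vanished.
```

The diagonal probabilities are untouched by the phase, which is why decoherence destroys interference while leaving the measurement statistics intact.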
Consider the following density matrix for a pure state ensemble in which the off-diagonal terms carry a phase factor of e^(iπ/4) (= 0.7071 + 0.7071i):

⎛ 1                  0.7071 + 0.7071i ⎞
⎝ 0.7071 − 0.7071i   1                ⎠
As decoherence proceeds, these off-diagonal terms progressively decay to zero, leaving only the diagonal probabilities. (If you find one of the numbers fails to decrease to zero then it means there has been a rupture in the very fabric of spacetime and the universe will explode in 30 seconds.)
An Alternative Interpretation: Parallel Universes
The so-called "quantum measurement problem" has baffled physicists ever since quantum mechanics was first discovered: What constitutes a measurement? What random process selects the observed value from the possible values in the superposition? What happens to the other terms in the superposition?
In 1957, Hugh Everett proposed the many-worlds interpretation (MWI) of quantum mechanics in an attempt to provide an answer to the quantum measurement problem. The MWI suggests that when we make a measurement, the universe itself splits into different parallel universes, each universe containing one possible outcome of the observation. For example, in the case of Schrödinger's cat, when we open the box the universe splits into two: the cat is alive in one universe, and dead in the other.
To my mind, the MWI seems very much a product of the fifties. Recent results in quantum decoherence have given us new insights into the quantum measurement problem, and there is no longer any need to propose parallel universes to explain the process. As was explained previously on this page, interference terms get dissipated out into the wider environment and become effectively undetectable. We do not need to propose that they teleport into a parallel universe - the terms remain firmly in this universe.
As was explained in the main text, there is now experimental support for this decoherence viewpoint. If particles can be isolated from the environment we can manage to view multiple interference superposition terms as a physical reality in this universe. This is called macroscopic quantum coherence. For example, the electric current being made to flow in opposite directions (see this article by Tony Leggett, which also explains Schrödinger's cat in terms of decoherence), or the research at NIST which has created an atom in two places at the same time (see this excerpt from Jim Al-Khalili's Quantum here, or see this press release from NIST). If the interference terms had really escaped to a parallel universe then we should never be able to observe them both as physical reality in this universe.
In his paper The Wave Function: It or Bit?, H. Dieter Zeh agrees that superposition terms can exist in this universe - no need for parallel universes: "During the recent decades, more and more superpositions have been confirmed to exist by clever experimentalists. We have learned about SQUIDs, mesoscopic Schrödinger cats, Bose condensates, and even superpositions of a macroscopic current running in opposite directions (very different from two currents cancelling each other). Hence, their components exist simultaneously."
In the literature the MWI is often connected with environmental decoherence, but the two sit together oddly: the MWI makes no mention of why interaction with the wider environment is essential. It is the misconception that particles can be considered as isolated entities, not intimately connected with the rest of the universe, that provides the whole motivation for the Many-Worlds interpretation. It is only through the realisation that particles are never completely isolated - that they should always be considered as one with the rest of the universe - that an explanation for quantum behaviour becomes clear.
And the importance of the role of the whole universe is precisely what we will discover on the next page: Reality is Relative.
Comments are now closed on this page.
Does decoherence theory formally define the necessary conditions for determining the outcome of a quantum measurement?
It seems strange that there should still be an irreducible probabilistic quality to quantum measurements if deterministic waves are just mixing with other deterministic waves. - Simeon, 19th March 2007
You make a very good point, though. If the quantum jump is an illusion or an approximation to the reality of events at a quantum level, then maybe events at the quantum level are actually deterministic and we could, in principle, determine the outcome of a quantum measurement. This all boils down to what is happening in the quantum "foam" at the smallest Planck scale. This is all highly speculative, and there are a few tentative theories. But decoherence says nothing about this. - Andrew Thomas, 19th March 2007
It is disappointing that decoherence does not say anything about determinism.
What are you views on Shariar Afshar's double slit experiment? - Simeon, 19th March 2007
Any views on quantum chaos as a link to a deterministic QM? - Simeon, 20th March 2007
At any rate, much obliged for this exceptionally lucid explanation of decoherence etc. - Patel, 28th March 2007
Decoherence solves this apparent quantum jumping by revealing that there is no sudden jump, but rather we see the progressive influence of billions of particles (in our measuring device) and associated entanglements, each playing a part in reducing the interference terms in the state vector (a process which happens incredibly fast) so we apparently get an immediate jump. Jim Al-Khalili explains this so well in his book "Quantum": "The phenomenon of decoherence shows that there is no sharp dividing line between the micro and macro worlds, but rather that the interference effects of superpositions disappear increasingly quickly with increasing complexity of a quantum system". I've included the relevant two pages of the book: http://www.ipod.org.uk/reality/reality_alkhalili_decoherence.asp which also describes an experiment very similar to the one you suggest. - Andrew Thomas, 28th September 2007
The interference terms present in the pure state would vanish into the environment when we take a measurement. At that point, the density matrix would be diagonalised, i.e., it would no longer have any off-diagonal terms capable of causing interference. - Andrew Thomas, 18th November 2007
In the superposition diagram in your text, the eigenstates seem to be orthogonal. In fact, basis states are orthogonal if I'm not wrong (e.g., a 2-state system where the 2 states span the entire space). This being the case, how can orthogonal components show superposition/interference? Isn't orthogonality of components a condition that arises only when decoherence happens? - Sindhuja, 18th December 2007
Regarding your point on eigenstates, eigenvectors with different eigenvalues are always orthogonal. So we can think of the eigenstates as forming the basis vectors, and a superposition vector is then a combination (mix) of those possible eigenstates. Wikipedia says it well: "The observable has a set of eigenvectors which span the state space. It follows that each observable generates an orthonormal basis of eigenvectors (called an eigenbasis). Physically, this is the statement that any quantum state can always be represented as a superposition of the eigenstates of an observable." http://en.wikipedia.org/wiki/Measurement_in_quantum_mechanics - Andrew Thomas, 18th December 2007
But from the Quantum eraser experiment it seems that by erasing the information regarding the slit through which the photon passed, it is possible to restore the interference pattern.
Am I missing something, or does your article explain this effect too?
http://en.wikipedia.org/wiki/Quantum_eraser_experiment - Vimil, 26th June 2008
The interference pattern on the screen is caused by the accumulation of many photon spots as those photons hit the screen - the pattern of dots slowly builds up. But once we have detected a single dot on the screen, we cannot go back from that point (for that single photon) to produce the wavefunction for that photon (i.e., it cannot go back to its superposition of many states). So you said "There is no way to get back the interference pattern", well, you NEVER go back to the interference pattern - the interference pattern is built up from the dots of many photons. What you mean is "There is no way to go back to the wavefunction superposition state for that single photon", and that is correct.
In the quantum eraser experiment you no longer detect the "which way" information which tells you which slit the photon goes through. As a result, the interference pattern of dots of many photons will slowly build up again. But that's not the same as going back to the wavefunction superposition state for a single photon. I hope that helps. - Andrew Thomas, 26th June 2008
So we can create matter if we have enough energy. - Andrew Thomas, 19th October 2008
"The concept of mass–energy equivalence unites the concepts of conservation of mass and conservation of energy, allowing rest mass to be converted to forms of active energy (such as kinetic energy, heat, or light) while still retaining mass. Conversely, active energy in the form of kinetic energy or radiation can be converted to particles which have rest mass. The total amount of mass/energy in a closed system (as seen by a single observer) remains constant because energy cannot be created or destroyed and, in all of its forms, trapped energy exhibits mass. In relativity, mass and energy are two forms of the same thing, and neither one appears without the other." - Austin Cunningham, California, 19th October 2008
Regarding change of entropy, in the Many Worlds interpretation, entropy increases after each universe-branching operation (the resultant universes being slightly more disordered). So Many Worlds **causes** an increase in entropy. But in the explanation of decoherence I presented in the main article, the increase in entropy due to the Second Law **causes** decoherence. So that is far preferable to the Many Worlds cause/effect sequence. And decoherence is explained by an existing physical principle: the second law of thermodynamics.
And I have another reservation about the Many Worlds interpretation. Considering the Many Worlds interpretation of the double-slit experiment, when the electron passes through one of the two slits, the universe splits in two: in one universe the electron goes through one slit, and in the other universe the electron goes through the other slit. Now I've got to admit that that is a pretty neat solution to the quantum measurement problem. However, when we examine the marks when the electrons hit the screen, we find an interference pattern. That means the electron in one parallel universe would have to be somehow "leaking out" of that parallel universe and interfering with the electron in this universe. So it's not really a parallel universe at all! I haven't read a satisfactory explanation of that from the proponents of Many Worlds. Surely if something is capable of having an effect in this universe then it is **in** this universe, not in some separate plane of existence (a purely parallel universe).
The more I think about the Many Worlds interpretation, the less sense it makes. - Andrew Thomas, 27th October 2008
I just want to discuss one point, by the way: decoherence tells you why in ONE single measurement you always find the macroscopic system in a certain eigenstate and never in a superposition of them. I've read that this is nothing more than what the Schrödinger equation tells us when we consider a very general coupling Hamiltonian of the system with its environment.
Given decoherence, the Schrödinger equation tells you nothing about why the system decides to choose ONE eigenstate in ONE measurement. So, given decoherence, the system has a probability distribution of being found in each of its eigenstates. What kind of physical process selects a value on the abscissa of this probability distribution (i.e. the eigenstate) in ONE measurement? I think that the Everett interpretation should be seen as a theoretical tool for avoiding this difficulty (and these questions). I don't know, and I don't ask myself for the moment, what the consequences of living in a branch of an infinite multiverse are. I just want to say that maybe it would be useful to think in these terms, just for the theory to be more complete.
- Marco, 24th November 2008
However, I would suggest that just because we do not yet know the mechanism behind the quantum measurement problem, we should not jump to wild conclusions involving parallel universes. We should be more patient, and wait for a more reasonable explanation. See Is Big Physics peddling science pornography?
- Andrew Thomas, 24th November 2008
Interesting idea, though. - Andrew Thomas, 14th January 2009
We have NO interest in the polarization or spin states of the particle, only in its existence.
Assume there are two independent entanglement experiments going on and our only interest is when the decoherence occurs for each experiment and, in particular, the time difference between the two. The time difference is controlled by the sender, who causes the collapses of the two "beams" to occur with a predetermined time delay between the decoherence of the two beams. This time delay is carefully noted at the receiving station, and it constitutes information which is inherent in the exact amount of delay itself, with no regard to the polarization, spin, or other characteristics of the particle - only its presence at a certain time. If this works - and I don't see the fallacy, the method by which nature will thwart it - then we have sent information much faster than c!!
Thanks for your patience. - dick lowrie, 14th January 2009
But there is no such restriction on particle position: if you find a particle in one place, then the other particle could still be anywhere in the universe. Nice idea, though. - Andrew Thomas, 14th January 2009
Suppose Joe decoheres his particle deliberately at time t3, and Sam his at time t4.
Now, at time of decoherence, both J and S particles are independently detected at a remote site, and the difference in the time of detection, t4 minus t3, is what is important here, having nothing whatever to do with any spin or polarization of either particle. Am I still way off base and missing something? I have studied your responses and still can't see what I am doing wrong. Thanks for your responses and patience. - Dick Lowrie, 15th January 2009
So we have an entangled particle pair - two particles in total, yes? Or are you suggesting that Joe and Sam produce separate entangled particle pairs, in which case there are four particles in the system? Can you clarify a bit, please. Thanks a lot. - Andrew Thomas, 15th January 2009
Perhaps the confusion is on my part.
At any rate I am thinking of TWO separate entangled particle pairs, thus there are four particles in the system, as you clarified it.
I guess my bare-bones question is: if an entangled pair is established, then deliberately decohered at station A, and assuming a clever experimental setup, can someone at a remote station B waiting and watching, detect that decoherence occurred at station A simply by having the other de-entangled particle (of the pair) then trigger a photocell (for example) at station B?
Either I am sadly way off-base in my understanding of the fundamentals of entanglement, or there's something else escaping me.
Thanks for your help. - Dick Lowrie, 15th January 2009
So the person at the receiving end has a particle detector - that's all he needs.
And it's the same for Joe and Sam. When you say "Joe decoheres his particle deliberately at time t3" then what that means in this situation is that Joe detects his particle. Nothing more than that. Decoherence fixes its position. But this does not mean any "spooky action at a distance" is sent to the receiving station. The man at the receiving station just detects his particle as well. It's just people detecting particles. It's not a particularly remarkable scenario. There's no way to send any faster-than-light information here.
A more interesting case is when there is only one particle in the system, as in the double-slit experiment. Suppose one slit points to a detector on planet Earth, while the other slit points to Alpha Centauri, many light years away. Now let's imagine there's a man with a detector on Earth, and another man with a similar detector on Alpha Centauri. They can't both detect the one particle, so only one of them will detect it. But if they choose to do an interference experiment with the particle (instead of detecting it) the particle will appear to have gone through both slits!
So when the particle is detected on Earth, say, then there has to be "spooky action at a distance" to Alpha Centauri saying "This particle has been detected, so don't be detected on Alpha Centauri as well". Now that's strange! - Andrew Thomas, 15th January 2009
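As an editorial aside, the one-particle, two-detector scenario above can be sketched with a toy Born-rule simulation. Everything here is an illustrative assumption, not part of the original discussion: the amplitudes are made up, and "Earth"/"Alpha Centauri" are just labels. The point it shows is that the squared amplitudes give the detection probabilities, and every run produces exactly one click - never a click at both detectors.

```python
import random

# One particle in superposition over two far-apart detector locations.
# The amplitudes are illustrative; probabilities follow the Born rule |a|^2.
amp_earth = complex(0.6, 0.0)
amp_centauri = complex(0.0, 0.8)

p_earth = abs(amp_earth) ** 2        # 0.36
p_centauri = abs(amp_centauri) ** 2  # 0.64
assert abs(p_earth + p_centauri - 1.0) < 1e-12  # normalised state

def detect(rng):
    """One measurement: exactly one detector clicks, never both."""
    return "Earth" if rng.random() < p_earth else "Alpha Centauri"

rng = random.Random(42)
clicks = [detect(rng) for _ in range(10000)]
print(clicks.count("Earth") / len(clicks))  # typically close to 0.36
```

Each trial yields a single, mutually exclusive outcome, which is the "don't be detected on Alpha Centauri as well" exclusivity described above - the simulation just cannot say anything about how nature enforces it at a distance.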
I am enjoying the "Understanding Quantum Mechanics" book by Roland Omnes you list near the top of the page. He is very much a "Copenhagen Interpretation" (Andrew goes into this on the next page) kind of guy, and apparently feels that decoherence slightly alters but also strengthens that interpretation. He also mentions "consistent histories". Did I miss something, or do you mention that anywhere on your website? - Steven Colyer, 24th March 2009
The key experiment in consistent histories is the delayed choice experiment: http://en.wikipedia.org/wiki/Wheeler's_delayed_choice_experiment The idea is that by making an observation now we somehow "change the past", or, at the very least, we "select" the past from a number of possible alternatives. For example, in the delayed choice experiment, when the photon hits the screen and its position is found, the consistent histories approach says - at that point - the past is fixed to determine precisely which slit the photon went through.
In his book "The Fabric of the Cosmos" Brian Greene tries to talk his way out of this need to change the past (but doesn't convince me): "Quantum mechanics does not deny that the past has happened, and happened fully. Tension arises simply because the concept of 'past' according to the quantum is different from the concept of 'past' according to classical intuition. Our classical upbringing makes us long to say that a given photon 'did' this or 'did' that. But in a quantum world, our world, this reasoning imposes upon the photon a reality that is too restrictive. As we have seen, in quantum mechanics the norm is an indeterminate, fuzzy, hybrid reality consisting of many strands, which only crystallizes into a more familiar, definite reality when a suitable observation is carried out. Observations we make today cause one of the strands of quantum history to gain prominence in our recounting of the past."
The real problem with consistent histories is that it attempts to cling to a particle-oriented view of reality in which the only reality is particles which are localised in space. So in this way it attempts to deny any "quantum weirdness" such as particle non-locality ("spooky action at a distance"). However, the price it has to pay for clinging to this idea of localised particles is that it has to introduce this horrible idea of changing the past. In consistent histories the past is changed to show that the point particle definitely went through just one slit.
However, Bell's Inequality just drives a stake through the heart of the consistent histories interpretation because it shows that there really is "spooky action at a distance" and particles are not localised in space. See http://www.ilja-schmelzer.de/realism/game.php which says "Does that mean that consistent histories does not have an explanation for the violation of Bell's inequality? In some sense, yes: There is nothing consistent histories can present that I would accept as a sufficient explanation." - Andrew Thomas, 25th March 2009
Here's a quote from the Wikipedia entry for "Consistent Histories":
"It thus becomes possible to demonstrate formally why it is that the questions which Einstein, Podolsky and Rosen assumed could be asked together, of a single quantum system, simply cannot be asked together." So consistent histories attempts to dodge Einstein's EPR thought experiment (which would appear to show non-locality) by just saying "You can't ask that question". Very weak - that's no answer.
- Andrew Thomas, 27th March 2009
But these two usages of the word 'observation' are very, very different.
For one thing, we will require some apparatus in addition to our eyesight in order to observe the wavelength of a light beam to be 475nm, and in the course of this observation we shall also be taking a measurement (a needle might swing to '475' on a scale, for instance.)
In the second usage of the word 'observation' the only apparatus we need is our own visual system, plus our conscious mind; and, interestingly, no measurement is made nor even possible (we don't have any units for 'blueness', for one thing.)
But there's another possible difference between our observing that light is blue and that it has a wavelength of 475nm. This is that in the first case decoherence occurs at the moment we make our observation; whereas it need not necessarily be the case that it occurs in the instant we see 'blue.'
Or, rather -- let us be clear about this -- it need not be the case that decoherence* occurs when my mind observes the colour blue; it may well already have occurred when the (anatomical and physiological, hence physical) 'measuring apparatus' which includes my retina, fovea, optic nerve and visual cortex 'observed' -- that is, processed -- the light beam.
* (I'm correct, am I, in equating - even if casually - 'decoherence' with 'the collapse of the quantum waveform'?)
I think I will propose that observation by the mind -- that is, the generation of qualia or sense experiences from neural impulses -- does not produce decoherence (waveform collapse) whereas all other forms of 'observation', which all involve physical apparatus of some kind, do so.
I have no evidence to support this, but it seems at least likely in the sense that we might claim that only physical (that is, material) events exert quantum influences, whereas there's no great reason to suppose that mental (hence immaterial) events need necessarily have this effect.
- Martin Woodhouse, 22nd May 2009
1 - Why does a particle (be it a photon, an electron, a fullerene molecule, ..) not decohere at the double slit itself? I mean, the double slit is a macroscopic structure, isn't it? When the particle is in superposition and goes through both slits at the same time, why does it not rather "just" hit the gigantic wall between the slits, cause an observation, and thus simply decohere?
2 - How does a photon detector work? I mean especially the detectors installed in the two slits in order to trace the way a photon took. Is a photon "still alive" after being detected or is it destroyed and a new one is produced?
I hope I have not overlooked answers already given to questions already asked.
Best Wishes and thank you again! - Mathias, 29th May 2009
I thought so beforehand, too, that some particles "wouldn't make it". But how is it - and that is my question - that the passing particles do interact with the wall somehow without being observed by it?
I mean a particle that passes both slits "virtually scans" the wall without touching it in a manner that would lead to being observed by the wall.
I see it is difficult to express, what's in my head. Sorry, I hope you catch what I mean :-)
Might it be, that decoherence is something that is NOT inevitable when a particle interacts with a macroscopic structure? So is it possible that a particle stays in superposition and no observation takes place although this particle comes in contact with a macroscopic structure?
Leading me to the question: is decoherence itself subject to random behavior? That is, when touching a macroscopic structure a particle "may" decohere or it "may" stay as it is, not being measured. Then what would the decoherence probability be? :-) Or have I completely lost my way?
Best wishes and many greetings (from Japan by the way) - Mathias, 29th May 2009
Let's say the slit size (the open space) is very small. It will then be very likely that a wavefunction hitting the slit structure will collapse, and thus most of the particles shot will not pass. But those wavefunctions that pass will go through both slits and behave like a wave (hence the name); that is, the particle position probabilities become "rearranged", because the wavefunction interferes with itself.
And with the concept of decoherence we have a tool to describe the collapse of the wavefunction. Did I get that right?
Now I will have to think about the meaning of the crests and troughs. The position probability where a wavefunction crest meets a wavefunction trough drops to zero. So if a crest means "probable here" then would a trough mean "anti-probable here"? I think I will read your article again. :-)
Thank you and best wishes - Mathias, 30th May 2009
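As an editorial aside, the crest-meets-trough picture in the comment above can be sketched numerically. This is a hedged toy model - the wavelength, slit separation, and screen distance are all made-up illustrative numbers, not anything from the original discussion. Each slit contributes a complex amplitude whose phase is set by the path length to the screen; the probability is the squared magnitude of the summed amplitudes, which nearly vanishes where the two paths differ by half a wavelength (crest meets trough).

```python
import cmath
import math

# Two-slit toy model: each slit contributes an amplitude exp(i*2*pi*L/wavelength),
# where L is the path length from that slit to a point on the screen.
# All dimensions below are illustrative, in arbitrary units.
wavelength = 1.0
slit_separation = 4.0
screen_distance = 100.0

def intensity(x):
    """Probability density at screen position x, from both slits together."""
    l1 = math.hypot(screen_distance, x - slit_separation / 2)
    l2 = math.hypot(screen_distance, x + slit_separation / 2)
    amp = (cmath.exp(2j * math.pi * l1 / wavelength)
           + cmath.exp(2j * math.pi * l2 / wavelength))
    return abs(amp) ** 2

print(intensity(0.0))   # central maximum: equal paths, amplitudes in phase
print(intensity(12.5))  # near-cancellation: paths differ by ~half a wavelength
```

So a "trough" is not negative probability: the amplitudes (which can carry opposite phases) are what cancel, and the probability is their squared sum, never less than zero.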
Have you any thoughts on Lewis Little's Theory of Elementary Waves? It sounds, at first, like the old "ether," but it isn't. I understand that there are some technical (math) problems with it, but it could explain much, if proved. Is it terribly fanciful to imagine that the measurement device itself, or the observer, is not "quantumly" passive?
By the way, your site is outstanding! Thank you. - Greg, 1st June 2009
However, I think it is a huge fallacy to think that any spatial wave based theory is ever going to explain quantum mechanics. OK, so it can explain the double slit experiment. But what about the superposition of particle spin? That's absolutely nothing to do with waves in space. It's much better to think of the superposition theory of quantum mechanics (a particle's state is the sum of all possible states) rather than any spatial wave based theory (which would only be able to explain the double slit experiment). In the superposition theory we find the particle taking ALL possible routes to the screen (Feynman's "Many Paths") - including going through both slits in the double slit experiment. That's a much more elegant explanation than Mr. Little's idea. And the superposition theory can also explain particle spin: the particle's state is the sum of all possible spins. It's just so much neater.
If I was a gambling man, I wouldn't be putting my money on Mr. Little's theory. - Andrew Thomas, 2nd June 2009
Feynman's Sum-over-Histories is an excellent way to resolve several tough challenges in Quantum Physics, but even Feynman didn't think it represented reality so much as it was an excellent mathematical tool, which it is.
At the heart of all this is "point particles," which even the newest to this stuff hopefully realize is a convenient mathematical assumption but most likely does not describe reality.
Superposition, and its special case Entanglement, are well explained when the fundamental particles are treated as zero-dimensional "points." They cannot really be that, but at the level of "fundies", all is well if they are treated as such.
Fortunately, some of us are more into the "Physics" (Reality) than the math. The math is important and crucial, but it is a tool, not an explanation.
I have no comment on Little's theory at the moment for I never heard of it, and I thank "Greg" (excellent name btw) for alerting us to it, thank you.
Decoherence is almost the opposite of Entanglement and Superposition in Physics, because unlike them, it actually makes sense.
Case in point: WHERE does The Sun end? Where is its edge? In other words, draw me a border, on one side of which you state: here is The Sun, and on the other side: Not Sun.
I'll tell you right now where to draw it: 5 billion light years from here, where the first photons of The Sun are still traveling. Not at the center of the Galaxy however, as those photons have been absorbed into the Milky Way Galaxy's black hole. Our galactic black hole eats, and now we know what it eats. Decoherence stops when the decohered wave is absorbed, and in its last gasp, affects that which absorbs it.
Everywhere else though, the waves skip along their merry way.
Ciao. - Greg Sivco, 2nd June 2009
By the way, Walters is a theoretical particle physicist at the University of Wales in Swansea - I'm curious if you know him. (Also, "The Strangest Man", about Paul Dirac, is out.)
OK then: TEW = Dr. Lewis Little's Theory of Elementary Waves. I've looked into it (thanks, Internet) and find it is considered to be in the "crank" category, although less so than most, as some "aspects" of it are thoughtful and intriguing.
The basic problem however is that, like many "TOE" theories, the theorist assumes Einstein right and Bohr wrong - specifically, that non-locality is wrong - when many "Bell test experiments" (please Wiki that up) have proven otherwise. "Greg" is correct that there is a problem with the math. Little defended his TEW against the DDC (double-delayed-choice) results of the Bell test experiments via incorrect probability mathematics, as explained at the following link:
How would TEW rank on John Baez's famous "Crackpot Index"? I have the feeling it would rate low, which is better than most. But it would still be a positive number.
On John Baez' index, any Theory begins with minus-five points with points added for crackpot-ery. It was intended as a bit of a joke but is actually quite good. Here is the link to The Crackpot Index:
So once again most theories like this reject Quantum Physics and go full bore Relativity. In this case they maintain that Non-Locality is false, that the Universe is 100% local.
I'm sorry, but experiments disagree. In Psychology this is called Double-Blind Technique, and that's Bad Science i.e., Pseudo-Science, or at the very least bad scientific technique (same thing, really).
Arrivederci - Greg Sivco, 4th June 2009
However, in reality, even placing an observer (another particle, measuring instrument or some other object) at a finite displacement from the particle is not a sudden process, and doing so will gradually change the wave-function of the particle (assuming it is stationary in this frame of reference and not unstable).
Also, the wave-function of any real particle is very much time-dependent, and always remains an uncollapsed wave-function. When a particle seems to approach an object (slit, screen, detector..), its wave-function is actually simply continuing to evolve, in most cases also "spreading out", in a way governed by the environment including that object. Its "measurement" is not an instantaneous but a continuous effect on the wave-function. As implied in the above article, and in Wikipedia, the "wave-function collapse" is only part of the time-dependent wave-function, and the other part, even though very small if the "observer" is macroscopic, is always still there "in superposition". - LWQ, 23rd June 2009
The absolutely key thing to remember is that we must avoid falling into the trap of just considering the particle in isolation, and just considering the simple quantum state of that isolated particle (in the way you have suggested - leading to much puzzlement!). No particle is ever completely isolated, so it is a mistake to treat it as such. Instead, it is vital to treat the particle and its environment as **one entangled system**. The picture then becomes far less simple and more complex. The role of the environment then becomes key.
While no one knows precisely what goes on at the very small Planck scale (and until we do, it is impossible to give a definitive answer to your question), we now have clear experimental evidence that the environment plays a key role in eliminating the interference terms, and so undoubtedly it is the environment which chooses the preferred basis.
For example, if you have a measuring device with a pointer (which is the environment) then that pointer will only allow a result of 1 or 0, say. So the pointer decides what basis (and what eigenvalues) are allowable. Perhaps think of a measuring device as a radio receiver with the ability to amplify and filter a very weak signal (as I described in the text, there is a progressive process of "filtering and amplification" during decoherence). But this radio "measuring device" is a very peculiar radio which is only capable of playing the music received on only one of two possible frequencies: the 0 and 1 frequency! So whatever signal you put into it, no matter how weak, you're guaranteed to get a strong signal out of either the 0 or 1 channel. Whatever output signal is strongest will depend on the initial state of the input signal - if it was nearer 0 or 1. You see the similarity to the quantum state?
So it is the precise details and idiosyncrasies of the *environment* - right down to the atomic scale - which select the preferred basis.
Thanks for your question, Chris. - Andrew Thomas, 11th August 2009
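As an editorial aside, the "radio that only plays the 0 or 1 channel" analogy above can be sketched with a toy density matrix. This is a hedged illustration with a made-up decay timescale, not a model given in the text: coupling to the environment exponentially damps the off-diagonal interference terms ("coherences"), leaving only classical probabilities for the pointer outcomes 0 and 1 on the diagonal.

```python
import math

# Toy decoherence sketch: a qubit in an equal superposition has density
# matrix [[0.5, 0.5], [0.5, 0.5]]. Environment coupling damps the
# off-diagonal interference terms by exp(-t / tau) while leaving the
# pointer-state populations (the diagonal: the 0 and 1 outcomes) untouched.
tau = 1.0  # illustrative decoherence timescale, arbitrary units

def density_matrix(t):
    """Density matrix of the qubit after time t of environment coupling."""
    d = math.exp(-t / tau)  # decay factor for the coherences
    return [[0.5, 0.5 * d],
            [0.5 * d, 0.5]]

rho_early = density_matrix(0.0)   # full superposition: coherences = 0.5
rho_late = density_matrix(20.0)   # coherences ~ 1e-9: a classical mixture
print(rho_late[0][1])
```

The diagonal entries (the 0 and 1 probabilities) never change here; what decoherence destroys is only the off-diagonal terms that would produce interference, which is the sense in which the environment "selects" the 0/1 pointer basis.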
However, it doesn't explain why the only possible eigenstates are dead or alive: why couldn't one of the eigenstates be half dead/half alive? This problem about the selection of the only possible eigenstates is called the "preferred basis" problem.
However, I do believe that a theory based on decoherence will one day provide a complete solution to the measurement problem. - Andrew Thomas, 5th October 2009
1.- Have you already heard about "Quantum Darwinism"? This is a recent major approach towards the solution of the long-standing measurement problem (collapse of the wavefunction), so it may also be an interesting topic for both Chris (his comment about the preferred basis problem, August 2009) and Kris (his question, October 2009). QD and the related quantum information theory were developed during the last decade by W. Zurek and his group of collaborators. In a nutshell, QD explains the emergence of our objective, classical reality from the quantum through the monitoring of open quantum systems by their environment, where decoherence and environment-induced superselection ("einselection", for short) of preferred pointer states are the relevant processes. The environment of the quantum system becomes imprinted with redundant copies of information related to the pointer states in a Darwinian manner; the environment is a "witness" to all quantum processes. So real experiments always rely on indirect but redundant measurements due to the system-environment correlation.
Well written introductions are found at
Wikipedia + Note and Ref 1, and at http://www.universaldarwinism.com/quantum%20darwinism.htm
2.- I have at least one question left about decoherence. Since macro objects decohere very fast and the decoherence time for an electron is about one year, why are superposition states of elementary particles still possible? All should have decohered irreversibly during the age of the universe. Or can quantum superpositions manage to evade the constant threat of decoherence and remain isolated somehow? Moreover, the reversibility of decoherence was demonstrated experimentally by S. Haroche in 1996. I tried to answer the question for myself but I do not know if my answer is correct. I think particle-waves are constantly interacting (quantum chaos) and thus may temporarily re-enter a coherent low-entropy state which, however, is unstable.
Greetings from Switzerland
Rene - Rene, 19th December 2009
As for your second question, decoherence will only happen when a particle interacts with the environment in some way, for example, by being illuminated by light (hit by a photon). An electron orbiting a nucleus is not interacting with the environment in this way, so its position is undetermined and appears more like a "cloud". The decoherence time of about a year which you quote is for an isolated electron (isolation is never perfect, hence the slow decoherence).
Bear in mind, though, that decoherence for a particle is not a once-in-a-lifetime event. We can identify a particle's position by making it hit a screen (for example), but if that particle is then hit by another particle its position will once again become uncertain, only expressible by a wavefunction. We would then have to make it hit another screen (decohere again) to find its position.
- Andrew Thomas, 19th December 2009
I suggest you should include a Blue Box about Quantum Darwinism as an update in your Decoherence page. - Rene, 19th December 2009
1. I am trying to understand quantum physics from a philosophical point of view. The particular topic I am currently interested in is the role of the observer [any sensing instrument or person] in this circus.
If you see, the observer is the common factor in the double slit experiment, Bell's inequality, the EPR Paradox, etc. So can you suggest any particular book which deals with this topic/context in detail?
1.1 I went through some of the books. They had a chapter which basically talks about this with no emphasis on the act of observing. It was taken for granted.
So, if you happen to know, please do recommend.
2. In any experiment, when we say we "measure" the particle, any of its observables, how do we do that? If an electron is traveling, I think, we measure it by hitting it with a photon, or something like that. Am I correct?
I am still in your third chapter/section. So if you think you have already explained this somewhere, just please point me to that, that would be enough. - Varun Bhatta, 21st December 2009
In many of the books or online links that I have read, the act of observing and its effect [wavefunction collapse] are explained only in the context of the electron/photon.
I wanted to know to what extent I can generalise this. I read in the book - Quark, Lepton & Big Bang - that this wave-function collapse, this observation effect, would be applicable to a ball as well when it is thrown in the air... it is just that it is perceived as we perceive things in our normal life.
I know this is an analogy. But this brings out clearly what I am looking for. I want to know to what extent we can generalise this act of observing. Why is it not observed for a human being in the case of a ball? What if I put a suitable camera in my place when observing something? etc.
Please recommend a book which expounds on this particular topic. - Varun Bhatta, 21st December 2009
Firstly, we have to define what we mean by "observation". Do we limit this to mean "conscious human observation"? Surely not. For example, a radioactive uranium nucleus buried in rock on a distant planet will decay to emit an alpha particle. It does not matter if a human observer looks at the rock or not. As Carver Mead agrees in this excellent American Spectator article: "That is probably the biggest misconception that has come out of the Copenhagen view. The idea that the (human) observation of some event makes it somehow more 'real' became entrenched in the philosophy of quantum mechanics. Even the slightest reflection will show how silly it is. An observer is an assembly of atoms. What is different about the observer's atoms from those of any other object? What if the data are taken by computer? Do the events not happen until the scientist gets home from vacation and looks at the printout? It is ludicrous!".
"I like to think that the moon is there even when I am not looking at it." - Albert Einstein.
Clearly, "measurements" must somehow be taking place all the time and do not require conscious observers. Instead, let us describe a "measurement" or "observation" as the process which produces a single property value from a state which was previously in quantum superposition, i.e., we now define a measurement to be the process of quantum decoherence which reduces the superposition state. In this case, any connection with the environment could produce a measurement. However, for all interference terms to disappear, i.e., for decoherence to be complete with the object no longer in a superposition state, the particle must make some macroscopic effect. This is described in the book Quantum Enigma: "Whenever any property of a microscopic object affects a macroscopic object, that property is 'observed' and becomes a physical reality". For example, if we use a macroscopic photon detector to detect the photon in the double slit experiment then that will destroy the interference pattern. So as long as there is a macroscopic effect from a quantum entity, that object can be considered to be "observed" or "measured" - no need for a conscious human observer.
To generalise the result, all particles behave according to the laws of quantum mechanics, so you can apply this result to all particles. Considering a human being observing a ball, we would consider the ball to be a "macroscopic" object (i.e., not a "microscopic" object) and so the effects of quantum mechanics would not be visible. Macroscopic objects are discussed on this page. - Andrew Thomas, 21st December 2009
By the way, you didn't answer my pt.2 in post#1. - Varun Bhatta, 21st December 2009
This is considered in the section on the Heisenberg Uncertainty Principle on the first page of this site. - Andrew Thomas, 21st December 2009
1. Consider that there is an electron and one stray photon will hit it and bounce back.
1.1 If this happens to be a double slit experiment, some instrument to measure that photon would be there and we say that this causes the collapse.
1.2 But if this happens to be in the space, there being no observer or instrument to measure the bounced back photon, then wouldn't it be the same case - collapse of the wavefunction?
1.3 The point I am trying to put forth is that the act of an instrument trying to measure the photon bounced off the electron doesn't cause the wavefunction collapse; rather, it is the photon trying to hit/interact with the electron that does. Right or wrong?
2. If whatever I have said is right, then I would be contradicting what I am aware of. Because, in a double slit experiment, I would have to know where the electron is in the first place in order to hit it with a photon.
Again, if directing me to a link would save you from writing again and save time, then please feel free. - Varun Bhatta, 22nd December 2009
In the double-slit experiment, if you attempt to detect the electron as it passes through the slits (using a photon or anything) then you obtain so-called "which way" information (which tells you which slit the electron passed through), and that also has the effect of destroying the interference pattern. - Andrew Thomas, 22nd December 2009
The big mistake of the MWI is that it considers particles as being isolated, point particles, isolated from the rest of the universe. But this can never be the case. As Niels Bohr wisely said "Isolated material particles are abstractions, their properties being definable and observable only through their interactions with other systems". By considering particles as isolated from the rest of the universe, then, yes, you do have to postulate crazy happenings like parallel universes (as particles can appear to behave very strangely in quantum mechanics). It is only when you realise that a particle is never truly isolated from the rest of the universe - in fact, it is the rest of the universe which defines the particle's property values - that sense can be made of quantum weirdness. And this point is well-explained on the next page on Reality Is Relative.
This was the great insight of Dieter Zeh in 1970 - the influence of the environment during the measurement process. But this insight is what was missing when the MWI was proposed in the 1950s. It then appeared that there was no choice but for all possible outcomes to be real (hence, bringing in the notion of the parallel universes). - Andrew Thomas, 10th January 2010
I do have a question about quantum decoherence which is unclear to me. This is how I understand it. I know that the observer is part of the system, which cannot be properly described without it. If a macroscopic object (i.e. the environment) observes a particle, they have interacted and therefore become entangled. Once observed, the wavefunction collapses into one state - the transition from quantum to classical. As simply stated as I can: my question is, why does entanglement eliminate the interference effect? It has really eluded me and I need you to shed some light on the subject. Thanks - Alex V, 10th January 2010