
Does consciousness change the rules of quantum mechanics?

PureX

Veteran Member
"Rules" are a function of cognition. So, yes, the rules can change relative to cognititove perspective.
 

Polymath257

Think & Care
Staff member
Premium Member
I'm assuming "sees" here means "considers"?

Better: 'measures'.


Wrong. First, in the realization linked to (Leifer calls this "Wigner's enemy"), the friend was a bit of information that could be reversed or erased using, e.g., weak measurements or partial measurements and something like spontaneous parametric down-conversion.

This is the scenario:

First, recall Bell's reformulation of Bohm's version of EPR: you have some (not necessarily quantum) two-level or bipartite system that becomes space-like and time-like separated. It could be two correlated particles with spin that decayed from a spin-0 particle, or two envelopes with notes saying "Yes" for one and "No" for the other. It doesn't matter. The idea is to have a common source for the "information" sent to two "labs" such that, when the "information" gets there Alice can open her envelope or measure polarization or whatever. So can Bob.

Now, Bell then assumes that there are parameters λi (e.g., λ1 and λ2) such that, whether or not we can determine what the parameters represent, or whether they can be measured, or just about anything else, we can determine that it is at least possible to explain the correlations between Alice's and Bob's measurements by a local source (the original system that generated the "information" sent in the form of envelopes or what have you).

Then pick a useful relation between the measurement outcomes for your purposes (or one determined empirically). See if it can be explained in terms of these hidden parameters. Even for many quantum systems, there is a way to reproduce the correlations classically. You can show that, in order to have no classical explanation, you must violate an inequality generated, e.g., by a set of assumptions that include object definiteness, which is to say that while we may not know which envelope contains the card with "Yes" vs. "No" or |0> vs. |1>, the system had this property and the correlations are due to the original, local interaction.

Then Bell shows that using bipartite quantum spin systems one can violate such an inequality. In other words, no such λ's can exist.
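For concreteness, here is a minimal numerical sketch of the CHSH form of such an inequality (my illustration, assuming numpy; not anything from the paper under discussion): any model built on hidden parameters λ obeys |S| ≤ 2, while the singlet-state correlations reach 2√2.

```python
# Minimal sketch, assuming numpy: CHSH with singlet-state correlations.
import numpy as np

def E(a, b):
    # Quantum correlation for spin measurements along angles a and b
    # on the singlet state: E(a, b) = -cos(a - b).
    return -np.cos(a - b)

# Standard CHSH angle choices (radians) that maximize the violation.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2) > 2, the bound any lambda-model must satisfy
```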

Yes. Ultimately, the point is that measure theory is, essentially, commutative operator theory. Under the commutative assumption, you get inequalities that you do not get without it.

In the Bell-type Wigner's friend, or at least this one (Brukner's is a bit different, and Renner's is so different it doesn't involve Bell-type statistics at all), the friends Charlie and Debbie are the two systems that would correspond to the decayed atoms or envelopes. The measurement settings Alice and Bob pick (x and y) are the local hidden variables. The outcomes A and B are the same as in the Bell set-up, corresponding to Alice's and Bob's measurements, respectively.

Now, you make the assumption that the conditional probability of the outcomes after reversing/erasing the friends' measurements is at least approximately the same as if they had made no measurements at all: the probability P_reversal(A,B|x,y), with the friends' measurements "undone", is roughly equal to P_no Charlie or Debbie(A,B|x,y).

That is, you assume that you can choose to use Charlie's or Debbie's measurement or not, but if you choose not to allow them to measure (experimentally realized by not having the measured photon interact with "Charlie" or "Debbie" via that path), then you should be able to treat this as if they never interacted with the system. That is, if you erase the path/information of the photon produced via spontaneous parametric down-conversion that takes the C or D route, so that it is as if you are simply making a standard measurement, then it shouldn't matter that C or D existed as a route at all. You should be able to assign truth values (akin to the object definiteness from Bell experiments) to your own measurements. If your measurement uses information about the Charlie and/or Debbie path, then you should still have (and will have) a definite outcome for that case (x and y both 1), while for other settings the choice is made to erase the photon from the SPDC that interacted with the C & D path, measure the ones that didn't, and obtain a definite outcome consistent with this operational procedure.

You can't. It doesn't work.

Correct. But why would I assume that erasing something would be the same as it not happening at all? That, to me, is very counter-intuitive, especially in the context of QM.

You can't get disagreements about the actual results (which is why Rovelli continuing to dig himself into this hole is something I wish he would stop), and this is why Rovelli's relativity analogy breaks down completely. You can't both perform the measurements of the same system. In the classical Wigner scenario, you get one result if you ask the friend, and another if you put the friend into a superposition state.

But the point is that you cannot get a contradiction from the same measurement. You can get what a classical observer would interpret as a contradiction, but why use classical theory? QM is the correct theory and it gives the correct results, up to giving correct correlations.

For many physicists, basically all measurements in QM that attempt to determine something like the state of the system in the sense discussed here are contradictions of QM. That's because in QM, evolution is unitary. The projection postulate, Born's rule, collapse, reduction, or even "update" are all non-unitary and ad hoc. They contradict the predictions for the dynamics of all systems in QM. We don't say that, of course (unless we subscribe to a no-collapse interpretation). We say that measurement involves a different process, and sweep under the rug the fact that the probabilities we use when we claim that a measurement outcome is predicted by QM come from an ensemble of measurement degrees of freedom that can't (unlike classical ensembles) be decomposed even in principle into the statistics of single states/measurements.

In short, we have to use a series of ingenious methods and measurement schemes for each different system in order to be able to talk about the probabilities associated with it, but these are determined not by QM (which, again, describes systems via unitary evolution, or, in the more general operational approach, in terms of CPTP maps, where we likewise replace states with density operators and projection-valued measurements with POVMs). So we have "predictions" QM makes that we determine by using QM right up until we extract information. Because we don't have a theory that accounts for measurements, yet can't use QM without being able to talk about measurements, we get a contradiction if we treat the measuring device quantum-mechanically (that's what Wigner's friend is about, except that it is intended to be more drastic), so we simply tack on another type of state evolution to quantum theory.
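To make the "two kinds of state evolution" concrete, here is a minimal single-qubit sketch (mine, purely illustrative, assuming numpy): unitary evolution is norm-preserving and reversible; the projection postulate is neither.

```python
# Minimal sketch, assuming numpy: unitary evolution vs. the projection postulate.
import numpy as np

psi = np.array([1.0, 0.0], dtype=complex)       # |0>

# Unitary (Schrodinger) evolution: a Hadamard rotation, reversible.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ psi                                    # (|0> + |1>)/sqrt(2)

# "Measurement" via the projection postulate: non-unitary, irreversible.
P0 = np.array([[1, 0], [0, 0]], dtype=complex)   # projector onto |0>
p0 = np.real(psi.conj() @ P0 @ psi)              # Born probability = 0.5
psi_post = (P0 @ psi) / np.sqrt(p0)              # renormalized "collapsed" state
print(p0, psi_post)                              # 0.5, [1, 0]: back to |0>
```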

Put more simply, we can pretend there is no contradiction, and then simply see what the measurement outcome would be if we treated the measurement apparatus quantum mechanically the way we would treat any quantum system in the lab: in terms of the Hilbert ray we'd obtain from the product of the Hilbert spaces and rays corresponding to System (tensor product) Apparatus.
That's Schrödinger's cat and Wigner's friend. QM predicts something never seen. Hence, it contradicts itself.

That doesn't seem to be a contradiction to me.

Or we don't say that and we think about measurement as part of QM that we haven't worked out yet. One way to go about this is to try to think about how the measurement process works, generalize it to an operational framework that can be used without deciding on an interpretation, and then apply it in the development of no-go theorems and the like as well as their experimental realizations.



Depends. Firstly, if one is talking about the Frauchiger-Renner experiment, then it is about self-measurement (an extension of the Deutsch version). If it is the standard extended Wigner's friend scenario (EWFS) of Brukner, then the actual measurements will disagree, as this version is about obtaining information from the friend that we can later compare (in principle). In the classic Wigner's friend, the only way we wouldn't get a contradiction is if two friends walked out of two labs with both outcomes (or more friends, labs, and outcomes). In the Griffiths version, the contradiction is in the ability to assert that if an event is observed, it happened.

Yes, if the two friends walk out of the labs and compare, they will have the same results.

And in the second, observed by what? Happened according to what?
 

LegionOnomaMoi

Veteran Member
Premium Member
Yes. Ultimately, the point is that measure theory is, essentially, commutative operator theory. Under the commutative assumption, you get inequalities that you do not get without it.

If that were the real issue, then one could reasonably expect that the kind of statistics one gets from any sort of EPR-type experiment would produce non-classical correlations. But this isn't true. You can use local resources to produce the quantum correlations for the EPR and EPRB set-ups. Yet both cases involve noncommutative observable algebras.
Also, it's mathematical nonsense. Measure theory doesn't require a function space, or most of the other structures that operator theory does.

Finally, nobody cares about that aspect of it in this context, for a very specific reason. That isn't the point at all. The noncommutativity was known long before Bell. EPR was nearly 30 years earlier. Even Kolmogorov had noted (but not gone much further into) the difficulties with quantum probabilities and the measure spaces required for probability theory.

The issue was that Bohm had shown it is possible to have object definiteness, determinism, and the kind of explanation that von Neumann's no-go theorem had supposedly shown to be impossible. Bohm provided and then developed a counter-example.

One reason it was found to be distasteful was that it was horribly nonlocal, and the view at the time was that there wasn't any need for this kind of approach: Bohr had already dealt with it, QM was all that was needed, and all the questions were basically answered as much as they could be.

Bell recast the EPRB scheme into the statistical set-up he did explicitly to show that quantum theory was nonlocal, regardless of the adoption of Bohmian mechanics or the pilot-wave approach. This was then promptly misunderstood in terms of a choice between hidden variables and locality. It was not.

However, although Bohm's reformulation of EPR was so much clearer, experimentally one ran into a problem: you can do this experiment over and over again and it won't matter in the slightest in terms of whether or not e.g., the spin-values measured had definite values prior to measurement. It was simply more or less decreed (for different reasons, depending upon how much one followed Bohr vs. Heisenberg vs. some growing textbook orthodoxy) that one didn't ask.

What Bell did, among other things, was cast the problem into a form that could be tested empirically AND in terms of those aspects of the theory which were present but went unasked about and couldn't be answered anyway (and in some sense this was correct, as one can model bipartite entanglement correlations using classical statistics).



Correct. But why would I assume that erasing something would be the same as it not happening at all? That, to me, is very counter-intuitive, especially in the context of QM.

If you need something more elementary, think of it as being akin to an ability present in these same sorts of experimental arrangements that have been discussed and re-discussed for about a century. The basic double-slit or which-path set-ups can be extended to show how one can restore coherence, or, even more basically, that one can set up an experiment to determine which path e.g., a photon takes but then, so long as the information is erased, instead get the interference "pattern". It's a basic, fundamental component of entanglement and the logic underlying quantum theory. It's also part of the more general issues related to contextuality.
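For concreteness, a minimal numerical sketch of that which-path/erasure logic (my illustration, assuming numpy; the names are illustrative, not from any particular experiment):

```python
# Minimal sketch: a "marker" qubit recording which path kills interference;
# erasing the marker (projecting it onto a diagonal basis) restores it.
import numpy as np

ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
plus = (ket0 + ket1) / np.sqrt(2)

# Path entangled with a which-path marker: (|0>|m0> + |1>|m1>)/sqrt(2)
state = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Marker kept: the reduced path state is maximally mixed -> fringes washed out.
rho = np.outer(state, state.conj())
rho_path = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out marker
print(np.real(plus @ rho_path @ plus))   # 0.5: no interference

# Marker "erased": project it onto |+> and renormalize.
erased = np.kron(np.eye(2), np.outer(plus, plus)) @ state
erased /= np.linalg.norm(erased)
path_after = erased.reshape(2, 2) @ plus          # contract the marker index
path_after /= np.linalg.norm(path_after)
print(abs(plus @ path_after) ** 2)       # 1.0: interference restored
```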

But no matter. If you wish to know more about the importance of this sort of experimental arrangement, you are no doubt familiar with the delayed-choice and quantum eraser experiments, but perhaps could benefit from reviewing the somewhat more involved combination of these two. Or just skip over it and forget about the experimental details and recall what the point was: to show that measured outcomes can disagree, or more precisely to violate local friendliness and more specifically the Absoluteness of Observed Events.

The experimental arrangement is much more about the ways to allow for AOE than the contradiction (sort of; it's more about being able to do such an experiment, and secondarily, then, to allow for AOE).

The contradiction comes in an empirical realization of a thought experiment. Quantum theory prescribes that Wigner treat his friend in the lab the way we would couple a quantum system to another or to the environment or even to an apparatus if we moved the Heisenberg cut. In the lab, in practice, this isn't problematic because we always know who is measuring and we don't need to worry about the actual state of a system in a superposition. Hence the Peres dictum that quantum states don't reside in Hilbert spaces, they reside in the lab.

But we don't like this. And it was ignored for too long. With the advent of quantum communications, quantum information, the lack of a theory of quantum gravity, quantum computing, etc., a lot of people started going back to these issues and quantum foundations was born.

One of the challenges is testing assumptions and in particular figuring out how to realize ways in which to test different interpretations (or at least, what we must gain or lose if we accept one vs. another).

Wigner's thought experiment is a clear contradiction, as Wigner's friend cannot experience himself in a superposition, and even if this were possible, the superposition that Wigner ascribes has two states (or more, depending on the set-up) but only one outcome. And his friend is in both states. In each of these two, the friend has a different outcome (in principle) that Wigner can't know about unless he asks, collapsing his friend into one state.

Since this isn't possible, the experiment in question was a proof of principle. Namely, we now have the ability to perform weak measurements on ancillary systems and other fancy methods that allow us to know partial information without destroying coherence. So it is theoretically possible to set up the experiment and find that the information from the "friend's" measurement was known to be definite.



But the point is that you cannot get a contradiction from the same measurement.
Firstly, that's trivially true because you can only measure once. Period. If you try to do so continually, you still won't get a contradiction but you will stop state evolution altogether.
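(Incidentally, that "freezing" is the quantum Zeno effect. A minimal numerical sketch, mine and purely illustrative, assuming numpy:)

```python
# Minimal sketch: repeated projection onto |0> during a slow rotation
# suppresses the transition -- the quantum Zeno effect.
import numpy as np

def rotation(theta):
    # Real unitary rotating |0> toward |1> by angle theta.
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

total, N = np.pi / 2, 100           # total rotation, split into N steps
survival = 1.0
for _ in range(N):
    psi = rotation(total / N) @ np.array([1.0, 0.0])
    survival *= psi[0] ** 2         # probability of projecting back onto |0>

print(survival)  # ~0.976 for N=100; -> 1 as N -> infinity: evolution "frozen"
```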

And this is missing the point.

The point of Wigner's friend and EWFSs is that I only avoid a contradiction if I
1) Treat the system as obeying two types of contradictory state evolution, and
2) Don't try to perform measurements on systems that include subsystems capable of recording definite outcomes that I won't have access to through standard projective measurements/collapse.


You can get what a classical observer would interpret as a contradiction, but why use classical theory? QM is the correct theory and it gives the correct results, up to giving correct correlations.
Not classical physics. Classical statistics. As in probability theory.



Yes, if the two friends walk out of the labs and compare, they will have the same results.

Wrong. They can't. Recall that the two friends in this case are Schrödinger's cats with memory. But you could just as easily return to Schrödinger's cat and then realize you are asserting that a dead cat and an alive cat would walk out of that lab agreeing that they are dead and alive.
The point is that Wigner describes a friend, not a cat. Treating this isolated system quantum mechanically requires treating Wigner's friend in terms of a superposition state of mutually exclusive outcomes (all outcomes, actually). So for a binary yes/no, heads/tails, or 0 and 1 type of outcome pair, Wigner treats his friend as a superposed state of, on the one hand, obtaining "heads" and, on the other, of obtaining "tails".
But his friend could only obtain one of these outcomes. So if the friend's event actually was definite (AOE), then Wigner's description is in contradiction with the friend's AOE. ALSO, IMPORTANTLY, because the outcome was DEFINITE (the friend DID MEASURE AND GOT AN OUTCOME), Wigner's quantum mechanical treatment predicts probabilities that cannot occur, and the state assignment cannot be correct, as it allows for non-zero probabilities for impossible events.

So, for example, the experiment starts out with the friend having not performed the measurement yet, and Wigner likewise describing friend (tensor product) lab as some ray in the product space. Then the friend performs his measurement and obtains "HEADS", or "Spin-up", or "YES". Meanwhile, Wigner is describing a state in which HEADS/TAILS (or Spin-up/Spin-down, YES/NO, etc.) both occur. Let's stick with HEADS/TAILS. So Friend measures and gets "HEADS". Wigner describes his friend as in a superposition state of obtaining "HEADS" with some probability and "TAILS" with some probability, but this is not possible.

Wigner then performs his measurement, which will yield either "HEADS" or "TAILS" but not both. But, according to you, two friends walk out. Which means we have a contradiction, as one will say that the outcome was "HEADS" and the other that it was "TAILS."
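For concreteness, a minimal sketch of the two descriptions in play (mine, assuming numpy; the "HEADS"/"TAILS" labels are just stand-ins):

```python
# Minimal sketch: Wigner's unitary description assigns weight 1/2 to each of
# two mutually exclusive records; the friend's own description is definite.
import numpy as np

up, down = np.array([1., 0.]), np.array([0., 1.])

# Wigner: unitary "measurement" correlates the friend's memory with the
# system, yielding the entangled ray (|HEADS,up> + |TAILS,down>)/sqrt(2).
wigner_state = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
print(np.abs(wigner_state) ** 2)   # [0.5, 0, 0, 0.5]: both records weighted 1/2

# Friend: a definite, collapsed outcome, e.g. "HEADS".
friend_state = np.kron(up, up)
print(np.abs(friend_state) ** 2)   # [1, 0, 0, 0]: one record, probability 1
```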
 

Polymath257

Think & Care
Staff member
Premium Member
If that were the real issue, then one could reasonably expect that the kind of statistics one gets from any sort of EPR-type experiment would produce non-classical correlations. But this isn't true. You can use local resources to produce the quantum correlations for the EPR and EPRB set-ups. Yet both cases involve noncommutative observable algebras.
Also, it's mathematical nonsense. Measure theory doesn't require a function space, or most of the other structures that operator theory does.

Measures are, essentially, the Banach space dual of the collection of continuous functions (or continuous functions vanishing at infinity). That collection of continuous functions is a commutative C* algebra and all commutative C* algebras are of that form. Operator theory, in contrast, is essentially the study of C* algebras in general.

Operator theory also does not require a function space, merely a Hilbert space (often supposed to be separable). That Hilbert space *can* and often is described as the square integrable functions with respect to some measure (usually Lebesgue measure). But it does not have to be so described.

The operators are usually unbounded, but closed and densely defined, so have a very good representation theory.

So, yes, as I said, measure theory is essentially the commutative version of operator theory. And it is the non-commutativity in operator theory that leads to the violation of Bell's inequalities (which only hold in the commutative theory).

Finally, nobody cares about that aspect of it in this context, for a very specific reason. That isn't the point at all. The noncommutativity was known long before Bell. EPR was nearly 30 years earlier. Even Kolmogorov had noted (but not gone much further into) the difficulties with quantum probabilities and the measure spaces required for probability theory.

The issue was that Bohm had shown it is possible to have object definiteness, determinism, and the kind of explanation that von Neumann's no-go theorem had supposedly shown to be impossible. Bohm provided and then developed a counter-example.

But Bohmian mechanics doesn't generalize well to the relativistic setting, specifically when anti-matter interactions are involved. It also has real problems with spin, for example.

One reason it was found to be distasteful was that it was horribly nonlocal, and the view at the time was that there wasn't any need for this kind of approach: Bohr had already dealt with it, QM was all that was needed, and all the questions were basically answered as much as they could be.

Actually, of course, QM *is* local, just not realist.

However, although Bohm's reformulation of EPR was so much clearer, experimentally one ran into a problem: you can do this experiment over and over again and it won't matter in the slightest in terms of whether or not e.g., the spin-values measured had definite values prior to measurement. It was simply more or less decreed (for different reasons, depending upon how much one followed Bohr vs. Heisenberg vs. some growing textbook orthodoxy) that one didn't ask.

What Bell did, among other things, was cast the problem into a form that could be tested empirically AND in terms of those aspects of the theory which were present but went unasked about and couldn't be answered anyway (and in some sense this was correct, as one can model bipartite entanglement correlations using classical statistics).

If you need something more elementary, think of it as being akin to an ability present in these same sorts of experimental arrangements that have been discussed and re-discussed for about a century. The basic double-slit or which-path set-ups can be extended to show how one can restore coherence, or, even more basically, that one can set up an experiment to determine which path e.g., a photon takes but then, so long as the information is erased, instead get the interference "pattern". It's a basic, fundamental component of entanglement and the logic underlying quantum theory. It's also part of the more general issues related to contextuality.

But no matter. If you wish to know more about the importance of this sort of experimental arrangement, you are no doubt familiar with the delayed-choice and quantum eraser experiments, but perhaps could benefit from reviewing the somewhat more involved combination of these two. Or just skip over it and forget about the experimental details and recall what the point was: to show that measured outcomes can disagree, or more precisely to violate local friendliness and more specifically the Absoluteness of Observed Events.

But that is only if you interpret from a classical perspective. The measured outcomes do NOT disagree unless you think that some path is definitely taken or that the spin has a definite value. Again, QM is local, but not realist.

The interference patterns or lack thereof strictly obey QM in all cases.

Wigner's thought experiment is a clear contradiction, as Wigner's friend cannot experience himself in a superposition, and even if this were possible, the superposition that Wigner ascribes has two states (or more, depending on the set-up) but only one outcome. And his friend is in both states. In each of these two, the friend has a different outcome (in principle) that Wigner can't know about unless he asks, collapsing his friend into one state.

Why is it a contradiction for Wigner's friend to be in a superposition for Wigner? And, again, if Wigner asks, they both agree about all results. There is no contradiction unless you insist that the collapse for Wigner's friend means that there is a collapse for Wigner.

Firstly, that's trivially true because you can only measure once. Period. If you try to do so continually, you still won't get a contradiction but you will stop state evolution altogether.

And this is missing the point.

The point of Wigner's friend and EWFSs is that I only avoid a contradiction if I
1) Treat the system as obeying two types of contradictory state evolution, and
2) Don't try to perform measurements on systems that include subsystems capable of recording definite outcomes that I won't have access to through standard projective measurements/collapse.

OK, and there is still no contradiction.

Not classical physics. Classical statistics. As in probability theory.

Yes, exactly. You can't use measure theory (the commutative C* algebra theory) in analyzing quantum physics (because quantum physics uses the non-commutative C* algebra theory).

Wrong. They can't. Recall that the two friends in this case are Schrödinger's cats with memory. But you could just as easily return to Schrödinger's cat and then realize you are asserting that a dead cat and an alive cat would walk out of that lab agreeing that they are dead and alive.

An experimenter that does not interact with the cats would still regard them in a superposition. So?

The point is that Wigner describes a friend, not a cat. Treating this isolated system quantum mechanically requires treating Wigner's friend in terms of a superposition state of mutually exclusive outcomes (all outcomes, actually). So for a binary yes/no, heads/tails, or 0 and 1 type of outcome pair, Wigner treats his friend as a superposed state of, on the one hand, obtaining "heads" and, on the other, of obtaining "tails".
But his friend could only obtain one of these outcomes. So if the friend's event actually was definite (AOE), then Wigner's description is in contradiction with the friend's AOE.
No, it is not. Wigner's friend got a definite result. Wigner is ignorant of that result so his description uses a wave function that is still in a superposition.

ALSO, IMPORTANTLY, because the outcome was DEFINITE (the friend DID MEASURE AND GOT AN OUTCOME), Wigner's quantum mechanical treatment predicts probabilities that cannot occur, and the state assignment cannot be correct, as it allows for non-zero probabilities for impossible events.

And, as far as Wigner can tell, both probabilities are possible. Until he interacts with his friend and gets a measurement, he cannot know.

So, for example, the experiment starts out with the friend having not performed the measurement yet, and Wigner likewise describing friend (tensor product) lab as some ray in the product space. Then the friend performs his measurement and obtains "HEADS", or "Spin-up", or "YES". Meanwhile, Wigner is describing a state in which HEADS/TAILS (or Spin-up/Spin-down, YES/NO, etc.) both occur. Let's stick with HEADS/TAILS. So Friend measures and gets "HEADS". Wigner describes his friend as in a superposition state of obtaining "HEADS" with some probability and "TAILS" with some probability, but this is not possible.

Why not? As far as Wigner can know from his previous interactions, there are still two possibilities. And at no time when he compares his results with those of his friend do they disagree. After the measurement, the friend has a definite result, but Wigner does not know which. And both results are equally likely from what Wigner knows. And, in repetitions of the experiment, both results will happen with the probabilities Wigner computes.

Wigner then performs his measurement, which will yield either "HEADS" or "TAILS" but not both. But, according to you, two friends walk out. Which means we have a contradiction, as one will say that the outcome was "HEADS" and the other that it was "TAILS."

No, only one friend walks out. That friend is in a superposition according to Wigner until Wigner interacts with the results of the friend's experiment (either the friend, or the apparatus, etc.). Once Wigner gets that information, his result will agree with that of his friend.

In repetitions of the experiment, both possibilities do happen for Wigner with the correct probabilities. And, for the friend, both possibilities happen with the correct probabilities. And, when they consult with each other about the results, they always agree.

Where is the contradiction? That you want a collapse to be for everyone instantly? Sorry, that can't happen.

In the EPR, for example, each leg detects the spin according to the correct probabilities as calculated by QM. When they get together, they agree that the correlation is precisely what QM predicts. And that happens even if the two measure at very different times *as long as neither interacts with the results of the other prior to measurement*.
 

LegionOnomaMoi

Veteran Member
Premium Member
Measures are, essentially, the Banach space dual of the collection of continuous functions
Completely false. That's like saying integrals are, essentially, the limit of rectangles under a curve, or that vector spaces are, essentially, triples of real numbers. You've listed an example of measures that isn't even particularly relevant for many (if not most) of the relevant uses. Measures are defined on sets, not vector spaces (which is what function spaces are). Hence measure theory is extremely useful in probability theory: we can do away with defining so-called "continuous random variables" vs. "discrete" ones, and with however one wishes to treat the embarrassing case of so-called "mixed" distributions and the like in elementary probability theory. We replace these with the full-fledged classical, axiomatic probability theory of Kolmogorov, which is fundamentally rooted in measure theory.

A measurable space is a pair consisting of a set together with a set of subsets of this set, provided the pair satisfies the requisite properties, in particular that the collection of subsets be a σ-algebra (or what probabilists often refer to as a σ-field). A measure space is a measurable space equipped with a special map called a measure.
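(Stated minimally in symbols, as a sketch of the standard definitions:)

```latex
% A measurable space is a pair (\Omega, \Sigma) with \Sigma \subseteq 2^{\Omega}
% a \sigma-algebra; a measure space adds a countably additive map
\mu : \Sigma \to [0, \infty], \qquad \mu(\varnothing) = 0, \qquad
\mu\!\Bigl(\,\bigsqcup_{n=1}^{\infty} A_n\Bigr) = \sum_{n=1}^{\infty} \mu(A_n)
\quad \text{for disjoint } A_n \in \Sigma .
```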

Now, as with topological spaces, a great many spaces have so much additional structure that, even though they are technically measure spaces, we don't bother describing them as such, any more than we describe them as topological spaces. There are two reasons. 1) The additional structure that makes them "special", or worthy of names like Hilbert space or Banach space or whatever, would make reference to them as measure spaces worthless or less than worthless: calling a normed space a "measure space" is to suggest that there is something special about the measure, rather than that "measures are, essentially, the Banach space dual of blah blah blah [trivial example of a measure space]". 2) Such spaces can be equipped with any number of (even uncountably many) different measures without altering their essential structures. Consequently, drawing attention to these as "measure spaces", or referring to them in terms of measure theory because they are "the Banach space dual of the collection of continuous functions" or the like, is a singularly unhelpful characterization of what must be (or can be made into) some measure space, without providing the actual measure, σ-algebra, or anything else that would warrant invoking "measure theory", until you actually need it, e.g., in the properties and proofs of theorems regarding the generalizations of spectral theory to infinite dimensions and to operators that are unbounded (i.e., may or may not be bounded, so that proofs must hold for both cases).

Measures are set-theoretical. The sets need not be equipped with a vector space structure, still less a norm.

That collection of continuous functions is a commutative C* algebra and all commutative C* algebras are of that form. Operator theory, in contrast, is essentially the study of C* algebras in general.
I know what operator theory is. It has no relevance to your comment on measure theory, and no substantive relevance in the context of this experiment or ones like it. One could come closer to a relevant statement on measure theory by noting that any theory, quantum or otherwise, which would satisfy the Bell inequalities for some set of measurement outcomes that are imagined to be independent and objective must be described via the joint distribution of independent r.v.s on a probability space (which is built up, foundationally and fundamentally, out of a measure space where the total measure of the set is unity).
Operator theory is required for much of quantum theory, and it is an avenue of research developed in particular for attempts at making QFT rigorous as well as (and at the same time as) making QM axiomatic. It is also vital to QM generally because of the spectral theorem and the fact that we cannot, in general, associate to a given observable "operator" in QM an eigenvector. In the physics literature this is bypassed by using the delta function in some expression such as δ(x − λ) and claiming that these distributions are the eigenvectors of the supposed operator in the infinite-dimensional case, where A is some self-adjoint operator over the square-integrable space defined (or whose elements are defined) over a real-valued, bounded interval, e.g., L^2([a,b]).
But this is sloppiness, not even of much heuristic value, and calling them generalized eigenvectors without defining them properly doesn't help. And things get trickier because perhaps the most important operators for QM in this case are unbounded, and therefore cannot even be both self-adjoint and defined on the whole space L^2([a,b]) or any other (but rather only on dense subspaces).
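(For reference, the rigorous statement that replaces the "generalized eigenvector" talk is the spectral theorem for a possibly unbounded self-adjoint operator, sketched here in standard notation:)

```latex
% Spectral theorem: a self-adjoint A, densely defined on a Hilbert space,
% decomposes via a projection-valued measure E on its spectrum \sigma(A):
A = \int_{\sigma(A)} \lambda \, dE(\lambda),
\qquad
\langle \psi, A \psi \rangle
  = \int_{\sigma(A)} \lambda \, d\langle \psi, E(\lambda)\psi \rangle .
```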

Operator theory also does not require a function space, merely a Hilbert space
A Hilbert space is a function space. What on earth are you talking about?
 

LegionOnomaMoi

Veteran Member
Premium Member
So, yes, as I said, measure theory is essentially the commutative version of operator theory.
You also said that a Hilbert space, which is actually essentially a function space (it's perhaps the prime example of one), need not be what it is (namely, a function space).

Meanwhile, your description of operator theory made no mention of measures, measurable spaces, sigma-algebras, or anything relevant. You've taken spaces that are sometimes assumed to have canonical measures (e.g., Lebesgue-Stieltjes, or just Lebesgue) and referred to these in terms of measure theory, which makes them less related to measure theory than the real number line.

Finally, and most importantly, non-commutativity is actually relevant in a particular way or at least a particular approach here. But as we need only deal with spin and therefore with matrices defined over the field of complex numbers, we don't really need much operator theory (unless you are also in the habit of calling linear algebra operator theory because of the trivial relationship).

And it is the non-commutativity in operator theory that leads to the violation of Bell's inequalities (which only hold in the commutative theory).
Wrong. Firstly, in the case of the statistical versions, it's the fact that one can't define independent random variables that satisfy the necessary conditions imposed by experiments (Bell's theorem does not need quantum theory, although it would be trivial so far as we know were it not for the violation of the inequality made possible by exploiting a quantum system as a shared resource). And this is related, in turn, to the fact that quantum events can't be embedded into a probability space: any such space is a measurable space generated by the sigma-field of subsets of the set Ω, and all such spaces can therefore be decomposed into Boolean ones (where outcomes can, loosely speaking, be interpreted as events having values of 0 or 1, or, alternatively, truth values).

Since you can violate Bell's inequality using algebras of observables that are not only finite-dimensional but downright simplistic, it is only in the sense that matrices are non-commutative in general that one finds a straightforward connection here.

But Bohmian mechanics doesn't generalize well to the relativistic setting, specifically when anti-matter interactions are involved. It also has real problems with spin, for example.
It doesn't have a real problem with spin. And anti-matter interactions are interpretations that were forced upon us by the failures of QM in a relativistic setting. It's not really fair to take the as-yet non-rigorous interpretation of QFT, which grew out of empirical necessity combined with some imaginative reinterpretations of what the outcomes of experiments were (not to mention of what physical systems vs. their properties were), and complain about the Bohmian version without specifically pointing to how it fares relative to the ad hockery in the standard cases (e.g., path-integral or canonical), not to mention the various hand-waving (e.g., "we'll just pretend we can multiply these distributions that we hope exist..." or "let's move along from the rigorous Gaussian and Wiener measures and pretend that we can apply such generalizations to the path-integral would-be measures...").


Actually, of course, QM *is* local, just not realist.

1) The Bell theorems do not depend upon QM. They depend upon sets of measurements. If quantum theory were replaced by something more fundamental, this wouldn't matter one bit when it comes to Bell's inequalities. Any theory that would explain these measurements (i.e., those of experiments in which Bell's inequality is violated) has to deal with the fact that violations of Bell's inequality imply nonlocality. The whole "realist vs. local" choice is a common misperception even among physicists. It's a myth, and it's poorly defined at that.
2) There is no way to regain locality in QM by assuming an anti-realist view. One has to do more. If one asserts simply that a physical system does not have defined properties until they are measured, one cannot suddenly factorize the space of events necessary to fit the data with a multivariate distribution that assumes nothing about the system to begin with (other than what is required to ensure that the preparation procedure and the subsequent measurements took place in such a way, and at such times, that there is no way for information about the preparation to be transmitted or encoded so as to allow local hidden variables to specify the properties measured).
Again, you can turn Bell's theorem into a game. You can violate the inequality easily with a telephone (signaling) or cheating or any number of ways that correspond to loopholes in the foundations literature. But simply asserting that the measured properties weren't there until they were measured doesn't explain the correlations, and the correlations aren't actually correlations in any strict sense, because they cannot be defined in terms of joint distributions of r.v.s from a probability space, which is what correlations are.
 

LegionOnomaMoi

Veteran Member
Premium Member
Why is it a contradiction for Wigner's friend to be in a superposition for Wigner?
It is not, in general, at all easy to determine whether or not an experimental result is in contradiction with quantum theory. This is for several (not necessarily distinct) reasons:

1) Quantum mechanics makes no predictions. By this I do not mean that either QM or quantum theory more generally should be understood to consist only of the “physical” part (e.g., the unitary evolution of the SE, state vector, observables in the Heisenberg picture, etc.), as the no-collapse proponents would have it. Rather, I mean that it is more akin to probability theory than even statistical mechanics. You must first have some theory you input in order to get predictions as output. This is both theoretically and practically challenging, because it amounts to the fact that we too often rely on classical theories and classical descriptions that we then attempt to “quantize” (at least in practice), when we actually regard the quantum description we derived as the more fundamental one. In terms of experiments, this doesn’t matter so much for many of those that use Bell states and Bell-type experimental paradigms. Most of the time, the physical systems in question need only have spin, and therefore the difficulty is in building accurate enough devices, channels, etc., to obtain accurate enough statistics from a particular physical system (e.g., photons vs. electrons) rather than a theoretically justified representation of the system in the appropriate quantum framework such as QED.

However, even here this difficulty remains. How do we know that a system has spin? Through experiments, as spin (at least quantum mechanical spin) is a very simple quantum property. But if we were to find a contradiction in some experiment that we were able to trace to a prediction about measurements of the spin of some system, this wouldn't contradict QM. It would mean the "quantization" scheme was wrong, and therefore the system was inappropriately described, rather than that the theory itself was contradicted.

2) There is no way to determine whether or not measurements are in contradiction with QM because there is no quantum theory of measurement. I do not mean by this that there is no universal agreement regarding interpretations of QM or solutions to the measurement problem. I mean that one can have any interpretation one pleases and still one cannot go to the lab and tell an experimentalist what counts as a “measurement” in any manner other than one that is ad hoc. This is a central point of Wigner’s original thought-experiment. Schrödinger used his infamous thought experiment about a cat to highlight the contradiction inherent in quantum theory, but it was too early and too easily swept under the rug (principally by Bohr). Wigner’s extension would perhaps have been much better had he not tried to interpret it in terms of consciousness or minds, but it remains an improvement.

The reason Wigner’s thought experiment is an improvement over Schrödinger’s is due to the way we actually use quantum theory. We have nothing other than trial and error and many years over a few generations of intuition to help us determine when we can and can’t use unitary evolution. But knowing that it will work for e.g., fiber optic cables or some similar experimental device, equipment, etc., doesn’t help us to explain why it does in these cases but not for PMTs or some other type of experimental equipment. More basically, it doesn’t allow us to use QM to say what is or isn’t a measurement. Of course, there is a very basic manner in which we have always known what counts as a measurement. When someone goes and looks at a dial, through a telescope, at a readout screen, etc., and sees some measured result. In other words, when we act as observers and “see” an observed result.

Wigner’s friend shows us how QM contradicts itself here, at least in principle. Because Wigner has us imagine that we can sufficiently isolate a lab with a friend in it so as to be able to use the unitary evolution that is the most basic, physical ingredient for quantum dynamics regardless of the formalism used (or even if we are talking QFT, where we’ve had to add all sorts of layers to the overall conceptual complications as well as a slew of mathematical difficulties at least in part due to the need to hold on to unitary evolution in a relativistic setting). In other words, we imagine we have a friend doing some measurement on some quantum system in a lab sufficiently isolated enough to treat quantum mechanically. Ok, no problem (in principle), we do this all the time in practice with composite systems and it is essential to measurements anyway. The lab becomes our schematic measurement “box” with two entagled systems, only instead of the typical class of probes used in practice we have a friend doing a measurement and reading an output.

Well, we know how to treat such a system. It’s how we make measurements and obtain predictions using QM in the first place. So we treat the entire friend+system as entangled in a space that accounts for the necessary additional degrees of freedom. For simple systems yielding binary outcomes upon measurement, the friend’s quantum mechanical state is, in fact, nothing more than a typical probe used in experiments for about a century now. It can be thought of as the “superposition” of the outcome of the friend measuring e.g., spin-up along some direction vs. spin-down. Which means we treat the friend as being in the state of having measured spin-up with some probability and also as being in the state of measuring spin-down with some probability.

So far, this is just Schrödinger’s cat. The crucial difference comes after we perform “our” measurement and let the friend out of the lab. She hands us a slip of paper with either spin-up or spin-down, but not both. This “collapses” our system’s state. Then we ask the friend what it was like to be in a superposition state only to “collapse” into having a definite outcome when we opened the door. But, unlike a dead cat, our friend can tell us that this never happened. The friend obtained a definite outcome in the lab. From her perspective, our description of the physics was laughably, ludicrously, and grossly wrong. It is a stark, blatant contradiction with what actually happened.

But we used QM, so where did we go wrong? That’s where disagreements start. For some, QM doesn’t describe physical states or properties, so there was no contradiction, we updated our state of knowledge appropriately. But for these individuals, we have no account of how physical systems appear to have definite values, how classical physics can ever emerge from quantum, etc. And quantum systems are inherently, absolutely subjective.

For others, there was no collapse. Ever. All outcomes were realized in different branches. Of course, apart from the fact that this “solution” involves positing that most of physical reality is in principle unobservable, it just moves the problem elsewhere. We still need the Born rule to make predictions, but have ruled out the possibility that it has any objective validity as well as the basis for any subjective (statistical) use (because, with the possible exception of equiprobable experiments for which we can appeal to derivations relying on a central limit theorem, we need the frequencies of empirical outcomes that we obtain by assuming each outcome is the only one that occurred).

And so on. There are many, many more classes of approaches and nuances within and among these.

3) Measurement outcomes are necessarily statistical. To see why this is a problem when it comes to whether or not experiments contradict quantum mechanical predictions, consider EPR. By this time, Einstein had given up on trying to devise a way to get beyond Heisenberg's indeterminacy principle (as it should be called, and as he later called it). Instead of trying clever thought experiments that he believed would allow measurement, in principle, of both the position and momentum of a quantum system to arbitrary precision, he had found a way to use QM against itself (he thought). By using what would be called (by Schrödinger) entangled systems, he showed even more: the measurement performed to determine the outcome for one observable could actually yield the necessary value for a canonically conjugate observable of the system without even requiring an experiment! Bye-bye Heisenberg's indeterminacy principle, hello incompleteness of QM!

Bohm’s reformulation in 1951 made many of the problems with EPR go away, as he used much simpler systems. You have anti-correlated spins of systems prepared in a singlet state such that the measurement of one spin gives you the spin of the other. Done. Contradiction.


Or is it? Well, no, not really, for a number of reasons. Firstly, you can't actually show anything from such an experiment. There is no way to determine whether the outcome you get differs substantially from the following: if you send two people to two distant planets, each with a sealed envelope containing a note reading "heads" or "tails", the person opening either note instantly "knows" what the outcome of a performed or unperformed experiment is or would be on a distant planet. Bohr's reply to EPR seemed to suggest something like this.

What Bell did was embed all such experiments in the statistics of these measurements. If we assume that there exists some value like "heads" that goes on one note and "tails" that goes on the other, then we can use a joint probability distribution and satisfy Bell's inequality. If, however, we have a situation like that described first by Boole (before QM) and later derived independently by Bell (based on Bohm's reformulation of EPR), in which the two persons on the two planets use a quantum system as a shared resource, then we can violate the inequality required by any theory in which the results obtained by the two experimenters on distant planets would be decided the moment the notes were put into the envelopes.
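(In the CHSH form, the inequality any such joint-distribution model must satisfy is, in standard notation:)

```latex
% CHSH: if outcomes admit a joint distribution over parameters \lambda
% (the notes-in-envelopes picture), the correlations E obey
\bigl| E(a,b) + E(a,b') + E(a',b) - E(a',b') \bigr| \le 2,
% while singlet-state correlations E(a,b) = -\cos(a-b) reach 2\sqrt{2}.
```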

In QM, this corresponds to the idea that atoms or subatomic particles have the measured properties they do independently of whether or not either of the two persons on the distant planets actually performs an experiment to measure them (objective facts), AND to the ability of the correlations to be actual correlations (that is, defined in terms of a joint distribution). One can't simply say "abandon realism but retain locality", because, by itself, this doesn't do anything. There is no reason that systems outside of any possible causal (local) influence can yield joint measurement values that exceed the maximum allowed by correlations just because the properties measured were supposed to be indefinite.

The point, however, is that what Bell did was formulate, in a theory-independent way, a method of testing theoretical aspects of QM that lacked conceptual clarity. Single-shot experiments are meaningless in QM, but of course the manner in which physical systems are supposed to have states and properties that can be measured via a single experiment is crucial. Bell tied the two together. He provided a way to take thought experiments relying on single-shot experiments to be tested theory-independently and empirically.
 

LegionOnomaMoi

Veteran Member
Premium Member
OK, and there is still no contradiction.
What would a possible contradiction be when you allow for anti-realism to hold and don't bother with how quantum theory predicts anything? Indeed, how are you allowing for a contradiction to be possible regardless of realism when you haven't discussed how quantum theory can yield outcomes using measurements? Because the contradiction is rooted in the fact that QM is inherently self-contradictory. The collapse postulate and generalizations of it contradict the unitary evolution of quantum systems. Which might be perfectly acceptable, if we had some way of determining when a quantum system might obey the continuous, deterministic evolution described by quantum theory without observers and the rule that we employ ad hoc for observers.
Yes, exactly. You can't use measure theory (the commutative C* algebra theory) in analyzing quantum physics (because quantum physics uses the non-commutative C* algebra theory).
None of this makes sense. Are you seriously not aware of how absolutely essential measure theory is in noncommutative mathematics? Do you really think something that doesn't even require operations like commutativity, because it is built up out of set theory, is somehow impossible because you are dealing with a generalization of matrix algebra? Even quantum probability is measure-theoretic. It has to be. You can't even integrate without measure theory. What on earth are you talking about?
No, it is not. Wigner's friend got a definite result. Wigner is ignorant of that result so his description uses a wave function that is still in a superposition.
If you believe that wave functions are ways to make bets or are inherently subjective, then you are correct, such paradoxes pose no problems for your interpretation. Reality does, naturally. Or the quantum-to-classical transition. Or defining a way to make sense out of particle physics, or physics more generally.
But sure, there's no contradiction if (like the QBists) you believe that QM is a probability calculus (albeit not one that is as yet rigorously defined, as e.g., there is no standard way to decompose the space of quantum events to make quantum conditional probabilities consistent, esp. as the linear functionals that normally count as conditional expectations here are now somehow the events themselves).
But there's nothing physical, either. And you have the problem that Wigner, his friends, the measuring devices, etc., are all made up of quantum systems that are supposed to be methods of calculating probabilities in your interpretation.
That friend is in a superposition according to Wigner
If you view Wigner's use of QM as a way of calculating probabilities, then his friend isn't in a superposition. The state space of outcomes is described using a superposition of states. But as this doesn't have anything to do with the physical Wigner or anything else physical, it doesn't describe what the "friend is in".
In the EPR, for example, each leg detects the spin according to the correct probabilities as calculated by QM. When they get together, they agree that the correlation is precisely what QM predicts. And that happens even if the two measure at very different times *as long as neither interacts with the results of the other prior to measurement*.
There was no spin in EPR. That was Bohm's version.
 

Polymath257

Think & Care
Staff member
Premium Member
Completely false. That's like saying integrals are, essentially, the limit of rectangles under a curve, or that vector spaces are, essentially, triples of real numbers. You've listed an example of measures that isn't even particularly relevant for many (if not most) of the relevant uses. Measures are defined on sets, not vector spaces (which is what function spaces are). Hence measure theory is extremely useful in probability theory: we can do away with defining so-called "continuous random variables" vs. "discrete" ones, and with however one wishes to treat the embarrassing case of so-called "mixed" distributions and the like in elementary probability theory. We replace these with the full-fledged classical, axiomatic probability theory of Kolmogorov, which is fundamentally rooted in measure theory.

Sigh. OK, now that I have a determination of what level of math you are familiar with, I can start to be a bit more explicit.

Yes, measures are defined on a collection of subsets of some set (usually a sigma-algebra). You are probably only familiar with positive measures, but it is possible to extend the definition to include signed or even complex valued measures. It turns out that every complex measure is the linear combination of 4 positive measures (real/imaginary parts, then positive/negative parts).

Measures allow for the definition of integration in a more general setting. In most cases, this is the central fact about them that is necessary: they give a linear functional on the collection of measurable functions. Usually, the underlying set also has a topology and all continuous functions are measurable.

This means that a measure automatically gives a linear functional on the collection of continuous functions. In other words, an element of the dual of the Banach space of continuous functions.

Moreover, under mild assumptions, *every* bounded linear functional on the Banach space of continuous functions is given by integration against some complex measure.

This is why we can identify the collection of measures in that context with the dual of the space of continuous functions.
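(The theorem behind this identification is the Riesz-Markov representation theorem; a minimal statement in standard notation:)

```latex
% Riesz--Markov: for X compact Hausdorff, every bounded linear functional
% \Lambda on C(X) is integration against a unique regular complex Borel
% measure \mu, with the norms matching:
\Lambda(f) = \int_X f \, d\mu \quad \text{for all } f \in C(X),
\qquad \|\Lambda\| = |\mu|(X).
```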

A measurable space is a pair consisting of a set together with a set of subsets of this set, provided the pair satisfies the requisite properties, in particular that the collection of subsets be a σ-algebra (or what probabilists often refer to as a σ-field). A measure space is a measurable space equipped with a special map called a measure.

Now, as with topological spaces, a great many spaces have so much additional structure that, even though they are technically measure spaces, we don't bother describing them as such, any more than we describe them as topological spaces. There are two reasons. 1) The additional structure that makes them "special", or worthy of names like Hilbert space or Banach space or whatever, would make reference to them as measure spaces worthless or less than worthless: calling a normed space a "measure space" is to suggest that there is something special about the measure, rather than that "measures are, essentially, the Banach space dual of blah blah blah [trivial example of a measure space]". 2) Such spaces can be equipped with any number of (even uncountably many) different measures without altering their essential structures. Consequently, drawing attention to these as "measure spaces", or referring to them in terms of measure theory because they are "the Banach space dual of the collection of continuous functions" or the like, is a singularly unhelpful characterization of what must be (or can be made into) some measure space, without providing the actual measure, σ-algebra, or anything else that would warrant invoking "measure theory", until you actually need it, e.g., in the properties and proofs of theorems regarding the generalizations of spectral theory to infinite dimensions and to operators that are unbounded (i.e., may or may not be bounded, so that proofs must hold for both cases).

You are completely missing my point. Look at some compact Hausdorff space. Look at the collection of complex valued continuous functions on that space. Any Borel measure on the compact Hausdorff space will give an element of the dual of the space of continuous functions via integration and ALL elements of the dual are of this form. So the space of all complex Borel measures is isomorphic to the dual of the space of continuous functions.

Measures are set-theoretical. The sets need not be equipped with a vector space structure, still less a norm.

Not what I said.

I know what operator theory is.
It's clear you know the basics. But it is also clear you don't know many of the more advanced topics.

There is a notion of a C* algebra. The collection of bounded operators on a Hilbert space give an example of a non-commutative C* algebra. Every commutative C* algebra is isomorphic to the collection of continuous functions on some compact Hausdorff space. So, in this sense, measure theory is the commutative version of operator theory.

Furthermore, the maps A ↦ ⟨Ax, x⟩, which are elements of the dual of the space of operators, correspond to the positive measures in the commutative case. More generally, the maps of the form A ↦ ⟨Ax, y⟩ correspond to complex-valued measures.
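A finite-dimensional toy sketch of this correspondence (the particular function values and state vector are arbitrary): diagonal matrices are just functions on a finite set, and a unit vector x turns ⟨Ax, x⟩ into integration against the probability measure μ({i}) = |x_i|².

```python
import numpy as np

# Commutative case: a diagonal matrix is a "function" on {0, 1, 2}.
f = np.array([2.0, -1.0, 0.5])
A = np.diag(f)

x = np.array([0.6, 0.8, 0.0])        # a unit vector (a state)
mu = np.abs(x) ** 2                   # induced measure: mu({i}) = |x_i|^2

print(x @ A @ x)                      # <Ax, x> = 0.08
print(np.sum(f * mu))                 # integration of f against mu: 0.08

# Non-commutative contrast: two Pauli matrices do not commute.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
print(np.allclose(X @ Z, Z @ X))      # False
```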

It has no relevance to your comment on measure theory, and no substantive relevance in the context of this experiment or ones like it. One could come closer to a relevant statement about measure theory by noting that any theory, quantum or otherwise, that would satisfy the Bell inequalities for some set of measurement outcomes imagined to be independent and objective must be described via the joint distribution of independent r.v.s on a probability space (which is built up, foundationally and fundamentally, out of a measure space in which the total measure of the set is unity).

Exactly. In the commutative theory (integration with respect to a measure), certain inequalities between correlations can be proved. The corresponding inequalities are false in the non-commutative case of operators on a Hilbert space.

Since QM is based on operators, it tends to fail the inequalities that are true for measure theory.

Operator theory is required for much of quantum theory; it was an avenue of research developed in particular for attempts at making QFT rigorous as well as (and at the same time as) making QM axiomatic. It is also vital to QM generally because of the spectral theorem and the fact that we cannot, in general, associate an eigenvector to a given observable "operator" in QM. In the physics literature this is bypassed by using the delta function in expressions such as δ(x − λ) and claiming that these distributions are the eigenvectors of the supposed operator in the infinite-dimensional case, where A is some self-adjoint operator over a space of square-integrable functions defined over a real-valued, bounded interval, e.g., L^2([a,b]).
But this is sloppiness, of little heuristic value, and calling them generalized eigenvectors without defining them properly doesn't help. And things get trickier still because perhaps the most important operators for QM in this case are unbounded, and therefore cannot be both self-adjoint and defined on the whole space L^2([a,b]) or any other; they are defined only on dense subspaces.
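A sketch of what goes wrong (discretizing the position operator on a grid; the grid size here is arbitrary): the discrete "eigenvectors" are single spikes, whose continuum limits are delta distributions, which are not elements of L².

```python
import numpy as np

# Discretized "position operator" on [0, 1]: multiplication by x.
n = 100
grid = np.linspace(0.0, 1.0, n)
Xop = np.diag(grid)

vals, vecs = np.linalg.eigh(Xop)
print(np.allclose(vals, grid))                       # eigenvalues = the grid points
print(np.count_nonzero(np.abs(vecs[:, 0]) > 1e-12))  # 1: each eigenvector is a single spike

# As n grows, these spikes tend toward delta distributions, which are not
# square-integrable; the continuum operator has no genuine eigenvectors in L^2.
```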

Mostly correct. Most of the operators involved are 'closed operators' when defined on those dense subspaces and that means that there is a strong notion of self-adjointness (and normality) that leads to a good version of the spectral theorem.

One way of regarding the spectral theorem is via projection valued measures. And yes, the notion of a generalized eigenvector is tricky and not as useful as the notion of invariant subspaces of the Hilbert space under the operators.

A Hilbert space is a function space. What on earth are you talking about?

No, L2 spaces give common examples of Hilbert spaces, but a Hilbert space is a more abstract concept than a function space. Operator theory does NOT require the Hilbert space be an L2 space.
 

Polymath257

Think & Care
Staff member
Premium Member
You also said that a Hilbert space, which is actually essentially a function space (it's perhaps the prime example of one), need not be what it is (namely, a function space).

The L2 spaces are good examples of Hilbert spaces, but a Hilbert space is a more abstract object. Saying they are all function spaces is similar to saying they are all sequence spaces because of the existence of an orthonormal basis.

Meanwhile, your description of operator theory made no mention of measures, measurable spaces, sigma-algebras, or anything relevant. You've taken spaces that are sometimes assumed to have canonical measures (e.g., Lebesgue-Stieltjes, or just Lebesgue) and referred to these in terms of measure theory, which makes them less related to measure theory than the real number line.

There are measures in far more general contexts than just on the real line, just like there are Hilbert spaces that are not function spaces.

The collection of bounded operators on a Hilbert space is a non-commutative C* algebra. The positive linear functionals correspond to elements of the underlying Hilbert space via A ↦ ⟨Ax, x⟩. So the positive linear functionals of norm 1 correspond to normalized states.

Similarly, the collection of continuous functions on a compact Hausdorff space is a commutative C* algebra. The positive linear functionals on this C* algebra correspond to positive Borel measures on the compact space via integration. Those of norm 1 thereby correspond to the probability measures.

Bell's inequalities are done in the commutative theory (using random variables and measures) and are violated in the non-commutative theory (using operators and states).
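To see this numerically, here is a minimal sketch (assuming the singlet state and the standard optimal CHSH angles; the specific settings are the usual textbook choices): the operator/state side yields a CHSH value of 2√2 ≈ 2.83, beyond the bound of 2 that the commutative, measure-theoretic side must satisfy.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

def spin(theta):
    # +/-1-valued spin observable along angle theta in the X-Z plane
    return np.cos(theta) * Z + np.sin(theta) * X

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    # quantum correlation <psi| A(a) (x) B(b) |psi>
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))   # 2*sqrt(2) ~ 2.828 > 2
```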

Finally, and most importantly, non-commutativity is actually relevant in a particular way or at least a particular approach here. But as we need only deal with spin and therefore with matrices defined over the field of complex numbers, we don't really need much operator theory (unless you are also in the habit of calling linear algebra operator theory because of the trivial relationship).

Linear algebra is operator theory in the case of finite dimensional Hilbert spaces.

And yes, non-commutativity is essential here. It is crucial that the C* algebra be non-commutative in QM. That is why many ideas from (commutative) measure theory fail in QM.

Wrong. Firstly, in the case of the statistical versions, it's the fact that one can't define independent random variables that satisfy the necessary conditions imposed by experiments (Bell's theorem does not need quantum theory, although it would be trivial, so far as we know, were it not for the violation of the inequality made possible by exploiting a quantum system as a shared resource). And this is related, in turn, to the fact that quantum events can't be embedded into a probability space, because any such space is a measurable space generated by a σ-field of subsets of a set Ω, and all such spaces can be decomposed into Boolean ones (where outcomes can, loosely speaking, be interpreted as events having values of 0 or 1, or, alternatively, truth values).

That is one way of looking at it, yes. But the main difference is between the commutative theory of continuous functions and measures versus the non-commutative theory of operators and states. The embeddings (or lack thereof) follow from this.

Since you can violate Bell's inequality using algebras of observables that are not only finite-dimensional but downright simplistic, it is only in the sense that matrices are, in general, non-commutative that one finds a straightforward connection here.

Agreed. And matrix algebras are non-commutative C* algebras.

1) The Bell theorems do not depend upon QM. They depend upon sets of measurements. If quantum theory were replaced by something more fundamental, this wouldn't matter one bit when it comes to Bell's inequalities. Any theory that would explain these measurements (i.e., those of experiments in which Bell's inequality is violated) has to deal with the fact that violations of Bell's imply nonlocality. The whole "realist vs. local" framing is a common misperception even among physicists. It's a myth, and it's poorly defined at that.

Bell's inequalities depend on measure theory. That means they are essentially dealing with commutative C* algebras (integration with respect to measures gives the dual of such). The crucial aspect of QM is the non-commutative nature of the underlying operator algebra.

2) There is no way to regain locality in QM by assuming an anti-realist view. One has to do more. If one asserts simply that a physical system does not have definite properties until they are measured, one cannot suddenly factorize the space of events needed to fit the data with a multivariate distribution that assumes nothing about the system to begin with (other than what is required to ensure that the preparation procedure and the subsequent measurements were arranged, in space and in time, so that no information about the preparation could be transmitted or encoded to allow local hidden variables to specify the properties measured).

Yes, you have to remain in the non-commutative domain.

Again, you can turn Bell's theorem into a game. You can violate the inequality easily with a telephone (signaling), or by cheating, or in any number of ways that correspond to loopholes in the foundations literature. But simply asserting that the measured properties weren't there until they were measured doesn't explain the correlations, and the correlations aren't actually correlations in any strict sense, because they cannot be defined in terms of joint distributions of r.v.s from a probability space, which is what correlations are.

Correct. They are NOT joint distributions because such are, in essence, defined from the commutative theory. QM is inherently non-commutative. Probability theory based on measures cannot model this behavior. But states in a Hilbert space and operators on such can and do model things very well.

You cannot do QM with commutative C* algebra theory (i.e., measures and random variables).
 

Polymath257

Think & Care
Staff member
Premium Member
It is not, in general, at all easy to determine whether or not an experimental result is in contradiction with quantum theory. This is for several (not necessarily distinct) reasons:

1) Quantum mechanics makes no predictions. By this I do not mean that either QM or quantum theory more generally should be understood to consist only of the “physical” part (e.g., the unitary evolution of the SE, state vector, observables in the Heisenberg picture, etc.), as the no-collapse proponents would have it. Rather, I mean that it is more akin to probability theory than even statistical mechanics. You must first have some theory you input in order to get predictions as output. This is both theoretically and practically challenging, because it amounts to the fact that we too often rely on classical theories and classical descriptions that we then attempt to “quantize” (at least in practice), when we actually regard the quantum description we derived as the more fundamental one. In terms of experiments, this doesn’t matter so much for many of those that use Bell states and Bell-type experimental paradigms. Most of the time, the physical systems in question need only have spin, and therefore the difficulty is in building accurate enough devices, channels, etc., to obtain accurate enough statistics from a particular physical system (e.g., photons vs. electrons) rather than a theoretically justified representation of the system in the appropriate quantum framework such as QED.

This is certainly the case, although we can use symmetries to limit the possible Lagrangians. If the underlying symmetry group has the right properties, we can select a Lagrangian without needing classical theory.

However, even here this difficulty remains. How do we know that a system has spin? Through experiments, as spin (at least quantum mechanical spin) is a very simple quantum property. But if we were to find a contradiction in some experiment that we were able to trace to a prediction about measurements of the spin of some system, this wouldn't contradict QM. It would mean the "quantization" scheme was wrong, and therefore the system was inappropriately described, rather than that the theory itself was contradicted.

And if that keeps happening no matter what quantization scheme is used, then we would need to look for something other than QM to describe things.

2) There is no way to determine whether or not measurements are in contradiction with QM because there is no quantum theory of measurement. I do not mean by this that there is no universal agreement regarding interpretations of QM or solutions to the measurement problem. I mean that one can have any interpretation one pleases and still one cannot go to the lab and tell an experimentalist what counts as a “measurement” in any manner other than one that is ad hoc. This is a central point of Wigner’s original thought-experiment. Schrödinger used his infamous thought experiment about a cat to highlight the contradiction inherent in quantum theory, but it was too early and too easily swept under the rug (principally by Bohr). Wigner’s extension would perhaps have been much better had he not tried to interpret it in terms of consciousness or minds, but it remains an improvement.

The reason Wigner’s thought experiment is an improvement over Schrödinger’s is due to the way we actually use quantum theory. We have nothing other than trial and error, and many years over a few generations of intuition, to help us determine when we can and can’t use unitary evolution. But knowing that it will work for, e.g., fiber optic cables or some similar experimental device, equipment, etc., doesn’t help us to explain why it does in these cases but not for PMTs or some other type of experimental equipment. More basically, it doesn’t allow us to use QM to say what is or isn’t a measurement. Of course, there is a very basic sense in which we have always known what counts as a measurement: when someone goes and looks at a dial, through a telescope, at a readout screen, etc., and sees some measured result. In other words, when we act as observers and “see” an observed result.

When the result of the interaction is 'irreversibly' recorded, we have an observation. This usually happens because of strong interaction with the environment (including people).

So far, this is just Schrödinger’s cat. The crucial difference comes after we perform “our” measurement and let the friend out of the lab. She hands us a slip of paper with either spin-up or spin-down, but not both. This “collapses” our system’s state.
No. OUR state isn't collapsed until we look at the result.

Then we ask the friend what it was like to be in a superposition state only to “collapse” into having a definite outcome when we opened the door. But, unlike a dead cat, our friend can tell us that this never happened. The friend obtained a definite outcome in the lab. From her perspective, our description of the physics was laughably, ludicrously, and grossly wrong. It is a stark, blatant contradiction with what actually happened.

Yes, the friend obtained a definite outcome in her lab. But you don't know it yet and it isn't yet determined for you until you interact with your friend or the result. The wave function you use describes your uncertainties.

But we used QM, so where did we go wrong? That’s where disagreements start. For some, QM doesn’t describe physical states or properties, so there was no contradiction, we updated our state of knowledge appropriately. But for these individuals, we have no account of how physical systems appear to have definite values, how classical physics can ever emerge from quantum, etc. And quantum systems are inherently, absolutely subjective.

Each gets results. When the results are compared, the results agree with the QM predictions.

3) Measurement outcomes are necessarily statistical. To see why this is a problem when it comes to whether or not experiments contradict quantum mechanical predictions, consider EPR. By this time, Einstein had given up on trying to devise a way to get beyond Heisenberg’s indeterminacy principle (as it should be called, and as he later called it). Instead of trying clever thought experiments that he believed would allow measurement, in principle, of both the position and momentum of a quantum system to arbitrary precision, he had found a way to use QM against itself (he thought). By using what would be called (by Schrödinger) entangled systems, he showed even more: The measurement performed to determine the outcome for one observable could actually yield the necessary value for a canonically conjugate observable of the system without even requiring an experiment! Bye-bye Heisenberg’s indeterminacy principle, hello incompleteness of QM!

Yes, when the results of the experiments are brought together, they correlate in a way predicted by QM. In the case of perfect correlation, each person can 'know' what the result obtained by the other will be when brought together.

And that is precisely what QM predicts.

What Bell did was recast all such experiments in terms of the statistics of these measurements. If we assume that there exists some value like “heads” that goes on one note and “tails” that goes on the other, then we can use a joint probability distribution and satisfy Bell’s inequality.
Yes, if you assume a commutative theory, you can derive such a joint distribution. But that is violating QM and the essentially non-commutative aspect it has.

If, however, we have a situation like that described first by Boole (before QM) and later derived independently by Bell (based on Bohm’s reformulation of EPR), in which the two persons on each planet use a quantum system as a shared resource, then we can violate the inequality that must hold for any theory in which the results obtained by the two experimenters on distant planets were decided the moment the notes were put into the envelopes.

Yes, and the result predicted by QM will be correct. The commutative (measure theory and random variables) theory fails.
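By way of contrast with the quantum sketch above, here is a hedged Monte Carlo sketch of the commutative side (the deterministic response function chosen here is just one arbitrary local model): outcomes that depend only on the local setting and a shared hidden variable λ never push |S| past the classical bound of 2.

```python
import numpy as np

rng = np.random.default_rng(0)
lams = rng.uniform(0.0, 2 * np.pi, 200_000)   # hidden variable dealt at the source

def outcome(setting, lam):
    # +/-1 outcome depending only on the local setting and lambda
    return np.where(np.cos(setting - lam) >= 0, 1, -1)

def E(a, b):
    return np.mean(outcome(a, lams) * outcome(b, lams))

a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))   # ~2.0 up to sampling noise: saturates, but never exceeds, the bound
```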

In QM, this corresponds to the idea that atoms or subatomic particles have the measured properties they do independently of whether either of the two persons on distant planets actually performs an experiment to measure them (objective facts), AND to the ability of the correlations to be actual correlations (that is, defined in terms of a joint distribution). One can’t simply say “abandon realism but retain locality” because, by itself, this does nothing. There is no reason that systems outside of any possible causal (local) influence can yield joint measurement values that exceed the maximum allowed by correlations just because the properties measured were supposed to be indefinite.

Correct. You cannot agree with QM while using measures and random variables. Joint distributions, in the measure-theoretic sense, are wrong.

The point, however, is that what Bell did was formulate, in a theory-independent way, a method of testing theoretical aspects of QM that lacked conceptual clarity. Single-shot experiments are meaningless in QM, but of course the manner in which physical systems are supposed to have states and properties that can be measured via a single experiment is crucial. Bell tied the two together. He provided a way for thought experiments relying on single-shot measurements to be tested theory-independently and empirically.

And QM gives the correct results, right?

Once again, measure theory, random variables, and joint distributions are essentially assuming things are commutative. But QM works in operator theory which is inherently non-commutative. There *are* notions that correspond to 'measures, random variables, and joint distributions', but they are fundamentally different because of the non-commutativity of the operators.

And, if you use operator theory and those notions, QM gives the correct results.
 

George-ananda

Advaita Vedanta, Theosophy, Spiritualism
Premium Member
Does consciousness change the rules of quantum mechanics?

I'll go with 'Yes' for my answer as ultimately, I believe the entire material world is a derivative of consciousness from my Advaita Vedanta philosophy.

I can pretty well predict people's answer to this question based on their underlying philosophy:

Nondualism: Consciousness is primary and the material is a derivative of consciousness

Materialism: Matter is primary and consciousness is a derivative of matter
 

wellwisher

Well-Known Member
Consciousness is based on the second law; the entropy of the universe has to increase. Neurons will expend 90% of their metabolic energy lowering atomic entropy, by pumping and exchanging ions, concentrating and segregating different ions on opposite sides of the neuron membrane. This lowering of entropy sets an entropic potential; neurons will need to fire to satisfy the second law and restore the higher entropy. The second law is also at work as neurons interact and fire each other and ionic currents flow through the brain and into the spine, increasing complexity.

Entropy, in engineering, is a state variable, meaning any given state of matter will have a fixed amount of entropy. Water at 25C and 1 atmosphere will always have an entropy value of 188.8 Joules/(mole K).

Entropy is often explained and modeled as being connected to complexity and probability. However, since the entropy of any given state of matter is a constant, all that atomic and quantum randomness, such as is assumed in quantum models, adds up to a constant entropy. There is something about the use of entropy, by consciousness, that allows it to create order within quantum randomness; a state of mind. This is a good definition of consciousness.

One question to ask is why do we have a quantum universe? What does a quantum universe buy us compared to a continuous function universe? If you look at quantum states, such as the energy levels of atoms, these are not continuous functions, but have only distinct states with gaps between. The net result is a quantum universe saves time. If A and then B need to occur before we can get C, a quantum universe by limiting states, with gaps, allows C to occur faster, thereby saving time.

Consciousness is connected to entropy and time, both of which have a connection to regulating randomness in the quantum universe. The deliberate nature of consciousness is able to overcome randomness through the hands of time. It is through time that even randomness expresses itself into a distinct state. We know exactly who won the lottery last night by looking at the result in time. Time is where even the random becomes deterministic; the random settles into a unique entropic state.

With consciousness we are aware of what was, what is, and what may be in time by cycling through entropic states. The action of neurons brings us back to a lower entropy or earlier state, with the second law bringing our mind into the present and extrapolating the future. It is like a water fountain that is always the same but always different.

Our memories are atomic and molecular states of constant entropy with the second law making these more complex over time; wisdom.
 

LegionOnomaMoi

Veteran Member
Premium Member
Sigh. OK, now that I have determined what level of math you are familiar with, I can start to be a bit more explicit.
You're the one equating function spaces with L2 spaces, not me. A vector space is a function space (trivially), in exactly the same way that any of your claims about non-commutative operator algebras being relevant would have to include the trivial cases where the operators are matrices acting on finite-dimensional complex vector spaces, as that's all we need for Bell's theorem, Bell's inequality, and even the empirical realization of Bell tests and Bell states.

More importantly, you stated:

Yes. Ultimately, the point is that measure theory is, essentially, commutative operator theory. Under the commutative assumption, you get inequalities that you do not get without it.

Now, this is flat out wrong. Maybe I've assumed you know a lot more mathematics than you do. Honestly, this statement was so off I thought you must have made a typo the first time you made it, but you keep digging in. The problem is that measure theory is not (essentially or otherwise) "commutative operator theory".
Measure theory is more general. Operators are defined on vector spaces. I am no longer going to assume that you are aware of what a vector space is, so I'll spell it out.

A vector space is a triple <V, +, *>: a set V together with two operations + and *, defined over a field, satisfying these axioms:
1) Associativity of + w.r.t. elements from the set V: v+(w+y) = (v+w)+y for all v, w, y ∈ V.
2) Commutativity of + w.r.t. elements from the set V: v+w = w+v for all v, w ∈ V.
3) Existence of an identity element in V, denoted 0, with the property that for all v ∈ V, v + 0 = v.
4) Existence of an (additive) inverse: For all v ∈ V, there exists an element -v ∈ V with the property that v + (-v) = 0.
And bringing in the field (denoted by F) as well as the operation "*" we also must have:
5) Distributivity of the * operation (multiplication by an element of the field, which we'll call a scalar) over both vector and field addition, i.e., for all α, β ∈ F and all u, v ∈ V, we have that α*(u+v) = α*u + α*v (distribution over vector addition) and that (α + β)*v = α*v + β*v (distribution over field addition).
6) Associativity of the * operation on elements from V and F: for all α, β ∈ F and all v ∈ V, (α*β)*v = α*(β*v).
7) Existence of an identity w.r.t. * (i.e., an identity for scalar multiplication): There exists an element of F, denoted 1, such that for all v ∈ V, 1*v = v.

Now, since function spaces are generalizations of vector spaces, every vector space is necessarily a function space (trivially). And this is the bare bones necessary for any space to qualify as a vector space.

It is more than we need for a measure space, or to define a measure, or to define measurable spaces. Hence, it cannot be the case that
Yes. Ultimately, the point is that measure theory is, essentially, commutative operator theory.

But I'll spell this out for you, using a very, very simple measure space that nonetheless does not even possess enough structure to be considered a vector space without adding additional axioms, operations, etc.

We start with a set: {“heads”, “tails”}. Note that i) this set has two elements, ii) the elements could be replaced by “off” and “elephant” or by “colorless green ideas” and “2” or any other two elements, iii) I cannot yet add, multiply, or perform any other similar operations on these elements, as the set has no such operations defined on it (yet), nor are there even any “natural” operations one might assume must hold, as one might were the elements (say) numbers.

In order to turn this into a measurable space, I need a σ-algebra of subsets. Because this will end up being a probability space, I’ll call it a σ-field (the two terms mean the same thing), and will simply use the power set, which is always a σ-field (the other extreme, and even more trivial, σ-algebra for any set being nothing but the empty set and the set itself). For conciseness, I’ll use the following notation: X = {“heads”, “tails”}, where X is my set. Then my σ-field is the power set 𝒫(X) = {∅, {“heads”}, {“tails”}, X}.

I now have a measurable space: (X, 𝒫(X)). I still have no operations defined to, e.g., multiply elements from the space. I can, of course (and already have, implicitly, by constructing a σ-field), take unions, intersections, etc. But to do more here I need my measure µ.

Now, measures in general need not be finite, let alone less than or equal to 1, but a probability space requires that the measure of the whole set be 1. So µ(X)=1. I also have, by definition, that µ(∅)= 0. And since the X={“heads”, “tails”} is the whole set, µ(X)= µ({“heads”, “tails”})=1.

I do not know what µ({“heads”}) is (I could define it to be .5, but why bother?) nor do I know what µ({“tails”}) is. But I know that µ({“heads”}) + µ({“tails”}) = µ({“heads”} ∪ {“tails”}) = 1. Note that, while I can add measures because the measures of elements or sets of a measurable space necessarily map into spaces with the requisite structures (such as ℝ), it makes no sense to talk about adding “heads” and “tails”.

As a probability space, I can interpret this in the obvious way: A coin (not necessarily fair) is tossed such that there is some probability greater than or equal to 0 of obtaining heads, and similarly for tails, such that the probability of “heads” or its complement occurring is 1 (likewise for “tails”). It could be that the measure of “heads” is .5 and likewise for tails, but it could be that the measures for heads and tails are 1 and 0 (respectively), or any other combination of a, b ∈ [0,1] s.t. a+b = 1.

So we have seen a real example of a measurable space, a measure, and a measure space. Note, though, that there is no vector space structure anywhere other than the codomain of the probability measure µ (measures are maps, and thus we can avail ourselves of the structures of the reals for real-valued measures of sets/elements of measure spaces even when it makes no sense to do so for the elements/sets themselves).

There is no multiplication of scalars, no norm, nor anything that somehow makes this even non-trivially related to operator algebras of any sort, let alone essentially.
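For what it's worth, the whole construction above fits in a few lines of code (the choice p = 0.5 is arbitrary, as noted):

```python
from itertools import combinations

# The set X = {"heads", "tails"} and its power set, which is a sigma-field.
X = ("heads", "tails")
sigma_field = [frozenset(c) for r in range(len(X) + 1)
               for c in combinations(X, r)]   # {}, {heads}, {tails}, X

p = 0.5   # any a in [0, 1] works; mu({tails}) = 1 - a

def mu(event):
    # the probability measure: additive over the outcomes in the event
    return sum(p if outcome == "heads" else 1.0 - p for outcome in event)

print(mu(frozenset()))              # 0
print(mu(frozenset(X)))             # 1
print(mu(frozenset({"heads"})))     # 0.5
```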



Yes, measures are defined on a collection of subsets of some set (usually a sigma-algebra). You are probably only familiar with positive measures, but it is possible to extend
It is utterly irrelevant what one "can" do to extend measures or even how measures come into play in either noncommutative or commutative spaces (or operator algebras).

If you do not know enough mathematics to realize that you cannot claim that something like "measure theory" is "essentially" somehow something that requires more structure than measure theory actually does, then you don't know what you are talking about.
 

LegionOnomaMoi

Veteran Member
Premium Member
Measures allow for the definition of integration in a more general setting.
No kidding. You can stop back-pedaling. The problem is not a claim that measure theory isn't relevant; it is your claim that:
Yes. Ultimately, the point is that measure theory is, essentially, commutative operator theory.
This is patently false, absurd nonsense. You have confused an application of measure theory, or a use of measure theory, with what measure theory "essentially" is.

You are wrong. Until you can show me how one cannot have a notion of measure without operator theory, or how every measure that can be constructed must necessarily be (at least trivially) an example of commutative operator theory (and then explain, while you are back-pedaling anyway, how I get to use measure theory anyway for non-commutative spaces and algebras, and how you have defined measure theory in a way that is incredibly restrictive and discounts its central uses in many fundamental formulations of QM), then you can go on about "measure theory" all you want. It doesn't appear you know enough about operator algebras, measure theory, or quantum theory for it to be relevant here.

When you are done back-pedaling there, and have finished claiming on the one hand that Hilbert spaces aren't necessarily function spaces because they need not be L2 (I never said they did; a function space is a generalization of a vector space, and a vector space is trivially a function space, so all Hilbert and Banach and even normed (linear) spaces are function spaces, even if only trivially so), while on the other hand claiming that your statements regarding operator algebras are relevant here even when all we need for Bell tests and the like is complex matrices (I'll admit now, to spare you the trouble of stating this obvious fact later, that as operators are generalizations of entities such as matrices, complex matrices are trivially "operators", and the matrix algebra of complex matrices is necessarily, albeit trivially, a noncommutative operator algebra), then perhaps we can make progress.


This means that a measure automatically gives a linear functional on the collection of continuous functions. In other words, an element of the dual of the Banach space of continuous functions.

Stop back-pedaling. I never claimed that linear functionals or duals or anything related to Banach, Hilbert, pre-Hilbert, or normed spaces of any sort weren't "measures" or that they couldn't be considered measures.
It's the ridiculous claim
Yes. Ultimately, the point is that measure theory is, essentially, commutative operator theory.

that I have a problem with. It is not essentially this at all. It is much more general, hence its power. Operators require much more structure than measures.


This is why we can identify the collection of measures in that context with the dual of the space of continuous functions.

Again, not the problem. I never disagreed that measure theory was relevant here. And you are back-pedaling again. We've gone from

Ultimately, the point is that measure theory is, essentially, commutative operator theory.

to

we can identify the collection of measures in that context with the dual of the space of continuous functions



You are completely missing my point
On the contrary, you are now trying to talk about how measure theory is used in a particular context. That's great. It's not what you said. When you want to address the glaringly, obviously incorrect statements you initially made and stop back-pedaling, great. I'm more than happy to agree with statements about measures in the context of normed spaces and linear functionals. I am not about to dismiss measure theory as somehow "essentially" boiling down to what is actually one application of the subject matter to functional analysis.

Since QM is based on operators, it tends to fail the inequalities that are true for measure theory.
Von Neumann developed Hilbert space theory and his operator algebras to do this. QM isn't based on them. Also, the inequality in question is not based on quantum mechanics.

A central point behind Bell's theorem is that the inequality doesn't need QM at all. It can work for any theory as it need deal only with the relative frequencies of experimental outcomes and corresponding experimental (device) settings.

In other words, the inequality is about probability spaces and statistical independence. Actually, even this isn't entirely true, as (not being happy with probabilistic versions) it has been reformulated in different ways that don't rely on correlations, and even in ways that don't rely on probability theory at all, but typically the point is still one of theory independence. Thus, QM is only relevant when one seeks non-trivial ways to violate the inequality, and a central point for Bell and others afterwards (and a central misunderstanding) is that even were QM to be replaced, since the inequalities don't require the theory, whatever replaced QM would still have to deal with the inequality violations. Those are empirical, and the inequalities violated are based on realizations of appropriate models for empirical results (first and foremost, statistical correlations, but also more basic set-theoretical constraints with minimal probabilistic assumptions).



Most of the operators involved are 'closed operators' when defined on those dense subspaces

No, they aren't. How do I know? Because I am talking about unbounded operators specifically, and how they can't be treated in the ways they often are, not only in the textbooks but also too often in the physics literature. I am not talking about how operators can or can't be defined generally, but about (in this case) problems related to the treatment of operators, and the spaces they act on, in terms of a disregard for rigor.
Since your approach to rigor lacks even this level of care, I am not really very concerned with what you want to regurgitate about operators when you can't seem to decide whether you know basic measure theory.

Luckily, most of this is irrelevant. Blather about l2 or L2 or other spaces is irrelevant to your mistaken claim about measure theory, and more importantly this entire discussion of non-commutative operators seems like grandstanding on your part that has horribly misled you. If your knowledge of measure theory and operator algebras is this abysmal, then you are in luck: we don't need any of this for the relevant systems in QM that violate Bell's inequality. And better still, since you don't understand the Bell's theorem either, you don't even need QM to understand the inequality!

The entire point was how EPR (as Bohm formulated it) could be put to an empirical test in a manner that would look, in a theory-independent manner, at what the space of hidden variable theories satisfying locality would have to look like. And the result and central thrust of Bell's argument was that
1) EPR implicitly shows that locality implies (and requires) a theory to have some sort of structure or nature X
2) Bell shows explicitly, by building on EPR (hence the name of his 1964 paper), that whatever X is, it can be tested empirically.
3) QM's predictions violate X

2) & 3) have both turned out to be true (we knew 3) was true before Bell in one sense, as Bell didn't do much in the way of theoretical work on QM to get a violation of his inequality).

Hence, QM is nonlocal: the inequality holds independently of any theory concerning measurement outcomes, is required for locality, and is violated by experiment (and also by QM, but, as importantly, it would be violated by any would-be replacement of QM), so there is no local way to explain the violations.
 

LegionOnomaMoi

Veteran Member
Premium Member
Bell's inequalities depend on measure theory.
In the sense that, technically, anything involving integration or even summation trivially must. However, QM depends on measure theory. Anything with integrals does. Quantum measure theory is still measure theory. Non-commutative measure theory (basically the same thing) is measure theory. POVMs are measures, projection-valued measures are likewise measures, and QM depends on measure theory. So what?

A bigger deal is, in fact, noncommutativity, but that isn't relevant in the manner in which you've presented it. The algebras of observables are not actually measured in experiments, but are ways of formulating aspects of a physical theory related to experimental designs relevant to the manner of preparation of the system in terms of how it encodes degrees of freedom. It is possible, and indeed typical, for non-commutative models and theories to allow for joint distributions of measurement outcomes. And, in fact, this is true even of QM in very special cases where there exists a way to explain the measurement outcomes "locally" via (classical) random variables on a (classical) probability space.

Put simply, we can devise any number of theories or models which describe aspects of anything from spin to cognitive psychology or linguistic units in terms of noncommutative operator algebras. This does not mean that we will be able to use aspects of cognitive psychology or linguistic units or anything else as a shared resource to violate an inequality via nonlocal "correlations". Rather, most of these have failed and will (unless constructed for fun with the explicit purpose of violating said inequalities) fail to yield the necessary shared resource one can have via quantum entanglement.

It is not enough to point to one aspect of quantum theory as if that explains why it is impossible to explain measurement outcomes in terms of joint distributions. Firstly, because even in QM it often is possible, and secondly because QM has no theory of measurement that would enable the encoding of "observables" into a particular mathematical structure to be as fundamental as the results from Bell's theorem.

That means they are essentially dealing with commutative C* algebras (integration with respect to measures gives the dual of such). The crucial aspect of QM is the non-commutative nature of the underlying operator algebra.
It doesn't. At all. Unless you are now going to claim that you need to talk about C*-algebras in order to describe a correlation between two independent random variables or to deal with joint distributions in classical probability theory. Because, again, the inequality doesn't rely on QM except insofar as it was used to inspire the way that classical random variables could yield a value of maximal correlation. Nobody would generally care about this, of course, without the fact that QM violates such an inequality (and generalizations and variations of it).

But you don't need C*-algebras to play this game with quantum systems; they are patently unnecessary for experiments that require only those mathematical spaces, and the linear maps on them, of which function spaces and operators in general (and C*-algebras in particular) are generalizations.

Correct. They are NOT joint distributions because such are, in essence, defined from the commutative theory. QM is inherently non-commutative. Probability theory based on measures cannot model this behavior. But states in a Hilbert space and operators on such can and do model things very well.

1) That's not why
2) QM isn't inherently non-commutative. Non-commutativity plays an essential role in the mathematical representations of the observables that correspond to experimentally measured values of quantum systems. Physical systems in QM, as you well know (I assume), are not represented by non-commutative operators. Since we do not have a theory of measurement within QM, we are still testing (or rather, finally testing) what it is possible to "lose" in terms of mathematical structures and still have a quantum theory. I think it quite likely that non-commutativity plays a vital role in the difference between classical and quantum theories, but so do complex numbers, and so does the fact that we cannot seem to formulate quantum theory in a manner that allows for the logical valuation of the lattice or lattice-like propositional structures required for expressing facts of the world using QM.
3) Probability theory is based on measure theory. We use it to model all behavior in QM, as QM gives us predictions, and we can't do anything with a theory that says if we measure X system in Y manner we will get result Z with probability -2.4.
4) States in Hilbert space utterly fail to model this behavior. So far as we can tell, states in Hilbert space encode experimental degrees of freedom. We use these together with a host of assumptions (some likely unknown), trial and error, loads of repeated preparations and measurements under equivalent conditions, and intuition, in order to associate to systems the necessary observables that are used to model how the system will yield outcomes (satisfied by probability measures, fyi) under particular conditions.


You cannot do QM with commutative C* algebra theory (i.e., measures and random variables).
Since when are random variables (which are necessarily real) commutative C*-algebras, again? Oh, and since when did we not require measures for QM? Are we still dealing with your arbitrary, imaginary distinction of a special measure theory you invented, the one where this:
Ultimately, the point is that measure theory is, essentially, commutative operator theory.
is somehow true? Or are you trying again to make sense out of the mathematics here?
 

LegionOnomaMoi

Veteran Member
Premium Member
The wave function you use describes your uncertainties.
Are you aware that this is considered one of the most radical interpretations of QM? It is far beyond the sort of indeterminacy or instrumentalism even attributed to Bohr and Heisenberg (mostly incorrectly), let alone relational QM, Healey's pragmatism, operationalist QM, the statistical interpretation (of Ballentine and others), the generalized probabilistic theory interpretations, the entire class of epistemic interpretations, etc. There are basically two approaches to QM that hold this view to be true:
1) QM is incomplete or
2) QM doesn't describe anything physical or real

The possible exception would be QBism, which (so say its founders) is supposed to be somehow both realist and subjective and to be about an agent's predictions.

Because if you really believe that the wavefunction or state vector or "system" in QM is only the set of probabilities an agent or observer would use to make predictions, then you can't have a contradiction, sure. You explain nothing, and you have no basis for the predictions (which use assumptions that violate this position; a central goal in programs that subscribe to less radical views, as well as for the QBists, is the attempt to derive QM without doing so, and they haven't succeeded). But as you've essentially ruled out any way to contradict QM, by reducing it to subjective probabilities that can be blatantly wrong and still be "correct" because of your subjective epistemic state, then sure. No contradiction with such solipsism.
Of course, one wonders why you'd take issue with even the most radical claims about QM, or consciousness as the "cause" of collapse, when in your description collapse depends on your state of knowledge and has nothing to do with reality apart from what you are aware of.
 

LegionOnomaMoi

Veteran Member
Premium Member
Exactly. In the commutative theory (integration with respect to a measure), certain inequalities between correlations can be proved. The corresponding inequalities are false in the non-commutative case of operators on a Hilbert space.

Since QM is based on operators, it tends to fail the inequalities that are true for measure theory.


The collection of bounded operators on a Hilbert space is a non-commutative C* algebra. The positive linear functionals correspond to elements of the underlying Hilbert space via A ↦ ⟨Ax, x⟩. So the positive linear functionals of norm 1 correspond to normalized states....

Similarly, the collection of continuous functions on a compact Hausdorff space is a commutative C* algebra. The positive linear functionals on this C* algebra correspond to positive Borel measures on the compact space via integration. Those of norm 1 thereby correspond to the probability measures.
...

And yes, non-commutativity is essential here. It is crucial that the C* algebra be non-commutative in QM. That is why many ideas from (commutative) measure theory fail in QM.
...

Correct. They are NOT joint distributions because such are, in essence, defined from the commutative theory. QM is inherently non-commutative. Probability theory based on measures cannot model this behavior. But states in a Hilbert space and operators on such can and do model things very well.

You cannot do QM with commutative C* algebra theory (i.e., measures and random variables).

Once again, measure theory, random variables, and joint distributions are essentially assuming things are commutative. But QM works in operator theory which is inherently non-commutative. There *are* notions that correspond to 'measures, random variables, and joint distributions', but they are fundamentally different because of the non-commutativity of the operators.

And, if you use operator theory and those notions, QM gives the correct results.
FYI- In addition to misrepresenting the use of C*-algebras in QM and operator algebras more generally, you've confused and conflated two different approaches with different definitions of states, and that actually matters. So your "norm 1" is true, for example, in the standard case but doesn't make sense in the algebraic case formulated using C*-algebras. The discussion of measures is almost entirely wrong. So are the descriptions of Bell's theorem and QM. But much of that I've already covered. What I want to do now is show you a little bit of what the actual algebraic approach involves, in contrast to the Hilbert space state approach, as opposed to the convoluted mixing of fundamental notions (with a bunch of unnecessary, mostly trivial and irrelevant aspects of functional analysis thrown in).


So, here are some notes, in case you are interested in how *-algebras actually work for those of us interested in the algebraic approach to QM, vs. how you seem to describe QM (i.e., in terms of some standard version of Hilbert spaces of states and observables), without the garbled nonsense about probability measures not being possible due to non-commutativity or whatever it is you are so fundamentally confused about:

Standard QM, the kind that continues to be used in labs today, described in the literature, and discussed in conferences, symposia, etc., all over the world, tends to be based around states in Hilbert spaces in something like the following manner:
1) A (pure) state is a unit ray [ᴪ] := {λᴪ : λ ∈ ℂ, |λ| = 1} of a (unit) element ᴪ ∈ H.
2) Each observable is a self-adjoint operator A on the Hilbert space H

That much you can probably get from Hollywood movies. What it means, how it works operationally, how it connects to probabilities, and how it differs from the C*-algebra operator approach one has with linear functionals and other things you've mixed up starts with the second "axiom"

The second "axiom" is important to the spectral theorem in the following way (albeit expressed over-simplistically and too concisely):

First, we can use the spectral theorem (seems trivial, but most of the time this is done in practice without adequate justification; rather the theorem is appealed to when it doesn't apply, but because somebody else already did the work it could be made rigorous, hence the fact that we can use the spectral theorem is vitally important).

Second, we can use it because it implies that there exists a (unique) PVM (projection-valued measure, as in "measure theory") we call the "measure" of A and (anticipating expectation values) denote by P_A(⋅), defined on the Borel σ-algebra of ℝ, which yields the spectral measure of A.
This in turn gives us the predictions of QM. The probabilities are encoded in the spectral measures:
μ[ᴪ](·) := <P_A(⋅)ᴪ, ᴪ>, where μ is (again) a measure in the measure-theoretic sense.
Moreover, it is a probability measure that depends upon (and only upon) the unit ray [ᴪ].
(That's the norm you've mentioned before)

But how? This measure μ[ᴪ](M), where M is a (Borel) set of ℝ, is the probability that the measurement outcome of the observable modeled by A in the state ᴪ (or better, [ᴪ]) lies in M.
It then provides the real tool in the use of QM: <Aᴪ, ᴪ> becomes the expectation value, and further one can examine more observables (commuting and non-commuting, on different states) to get things like transition probabilities and the like. You've used the symbols already, but you've placed them in the wrong context, made inaccurate statements repeatedly about them, and most importantly confused them with the algebraic approach.
So, note first that no C*-algebra is invoked here (that comes in a bit) and that we use probability measures despite non-commuting operators, and note that the state of the system isn't non-commuting or involved in anything non-commuting. Rather, the mathematical models of the state of the system's supposed probabilities of observing particular outcomes for observables are non-commuting.
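Here is a minimal finite-dimensional sketch of those two "axioms" at work (the particular Hermitian matrix and state vector are arbitrary): the spectral decomposition supplies the PVM, the PVM and the unit ray supply the probability measure, and the expectation value falls out.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, -1.0]])          # a self-adjoint "observable"
evals, evecs = np.linalg.eigh(A)

# The PVM: each eigenvalue gets the projection onto its eigenspace.
P = [np.outer(evecs[:, i], evecs[:, i]) for i in range(len(evals))]

psi = np.array([1.0, 0.0])           # a unit state vector

# Spectral probability measure: mu_psi({lambda_i}) = <P_i psi, psi>
mu = [float(psi @ Pi @ psi) for Pi in P]
print(mu, sum(mu))                   # nonnegative weights summing to 1

# Expectation value: <A psi, psi> = sum_i lambda_i * mu_i
print(float(psi @ A @ psi), float(np.dot(evals, mu)))   # both equal 1.0
```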

Now, what about C*-algebras?

Different structure, different approach, and while I like it more, it is not complete nor fully understood, and for these and other reasons it has proved difficult to use in actual experiments involving quantum systems or devices requiring quantum theory.

Here, the corresponding "axioms" are
1) Observables are Hermitian elements a = a^+ of the *-algebra A.
2) States are those linear functionals you love to refer to. They are functionals f on A s.t. f(a^+ a) is greater than or equal to 0 for all a ∈ A
2b) f(1) = 1 (the *-algebra A must be unital, and in addition equipped with the appropriate involution to allow that a → a^+ )

Now the expectation is given by f(a), and the state is a function (actually, a linear functional) on a complex unital algebra (and additional constraints such as hermiticity, the necessary involution, nonnegativity, etc., apply).
What we don't see in this C*-approach are states in QM given by elements (rays) of a Hilbert space. Nor do we use self-adjoint operators.
But in standard QM, we do use Hilbert space states and model observables as operators acting on these. And we do this using probability measures. And yet it is still non-commutative, despite the probability measures.
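A quick sketch of such a state on the simplest non-commutative C*-algebra, the 2×2 complex matrices (the vector ψ chosen here is arbitrary): f(a) = ⟨aψ, ψ⟩ is linear, satisfies f(a⁺a) ≥ 0, and f(1) = 1.

```python
import numpy as np

psi = np.array([0.6, 0.8], dtype=complex)   # a fixed unit vector

def f(a):
    # a vector state on the *-algebra of 2x2 complex matrices
    return psi.conj() @ a @ psi

print(np.real(f(np.eye(2))))                 # f(1) = 1 (unital)
a = np.array([[1, 2j], [0, -1]], dtype=complex)
print(np.real(f(a.conj().T @ a)) >= 0)       # positivity: f(a^+ a) >= 0 -> True
```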
I'll leave as an exercise how to work out the generalized state (density matrix formalism) and the generalized measurements (POVMs) for the standard QM version and let you decide what the appropriate algebraic QM equivalents should be.
But if you want to talk about C*-algebras so much, you should have the very basic knowledge of how states and observables work in standard QM, with states in a Hilbert space, versus the C*-algebras in algebraic quantum theory, where states are functionals of the operators, not elements or rays in Hilbert spaces, and your norm=1 doesn't apply.
 

wellwisher

Well-Known Member
The neural matrix of the brain is a medium that allows both quantum effects as well as consciousness. At this internal neural level, it is very likely they are connected to each other.

Most of the arguments against the connection between mind and the quantum universe are based on what is outside the brain, what the eyes see; reference, while ignoring the obvious internal connections. This is because science is more extroverted and expects the answers to reality to be outside itself, whereas an introverted approach; the inside world of imagination and self awareness, would make this easier to see, being more self contained.

The hydrogen proton, which is connected to hydrogen bonding in water, for example, quantum tunnels in entangled pairs. This happens in all aspects of life, including the brain.

Entropy is the key. Entropy is considered a state variable, meaning for any given state of matter, there will be a constant entropy value. What this means is the random and the quantum aspects, that are used to model the microscopic aspects of any given state, need to add to a constant entropy. What appears to be random is actually controlled by the determinism of the constant entropic state.

The entropy of the state forms a deterministic rule, which then causes the needed entanglements between consciousness, chemistry and quantum physics, since these all need to add to a constant. Consciousness works based on the second law. The neurons via ion pumping lower ionic entropy setting a potential with the second law; neurons will need to fire and ionic currents need to disperse. Consciousness and life both need to evolve to higher complexity; 2nd law driven. It does so in quantum jumps, into states of constant entropy; with new entanglements appearing, upon each steady state, so the state remains constant.

The life sciences still overuse a statistical approach since they leave out the deterministic nature of the water in life. The water helps to mediate the constant entropic state entanglements. Water controls the shapes of things inside the cell, so any given volume of water can form a constant entropic state. Getting the quantum ducks in a row is the nature of life.
 

Subduction Zone

Veteran Member
The neural matrix of the brain is a medium that allows both quantum effects as well as consciousness. At this internal neural level, it is very likely they are connected to each other.

Most of the arguments against the connection between mind and the quantum universe are based on what is outside the brain, what the eyes see; reference, while ignoring the obvious internal connections. This is because science is more extroverted and expects the answers to reality to be outside itself, whereas an introverted approach; the inside world of imagination and self awareness, would make this easier to see, being more self contained.

The hydrogen proton, which is connected to hydrogen bonding in water, for example, quantum tunnels in entangled pairs. This happens in all aspects of life, including the brain.

Entropy is the key. Entropy is considered a state variable, meaning for any given state of matter, there will be a constant entropy value. What this means is the random and the quantum aspects, that are used to model the microscopic aspects of any given state, need to add to a constant entropy. What appears to be random is actually controlled by the determinism of the constant entropic state.

The entropy of the state forms a deterministic rule, which then causes the needed entanglements between consciousness, chemistry and quantum physics, since these all need to add to a constant. Consciousness works based on the second law. The neurons via ion pumping lower ionic entropy setting a potential with the second law; neurons will need to fire and ionic currents need to disperse. Consciousness and life both need to evolve to higher complexity; 2nd law driven. It does so in quantum jumps, into states of constant entropy; with new entanglements appearing, upon each steady state, so the state remains constant.

The life sciences still overuse a statistical approach since they leave out the deterministic nature of the water in life. The water helps to mediate the constant entropic state entanglements. Water controls the shapes of things inside the cell, so any given volume of water can form a constant entropic state. Getting the quantum ducks in a row is the nature of life.

Oooh!! Sciency word salad.

Crispy!
 