
Does consciousness change the rules of quantum mechanics?

Polymath257

Think & Care
Staff member
Premium Member
You're the one equating function spaces with L2 spaces, not me. A vector space is a function space only trivially, in exactly the same way any of your claims about non-commutative operator algebras being relevant would have to mean you are including trivial cases where the operators are matrices acting on finite-dimensional complex vector spaces, as that's all we need for Bell's theorem, Bell's inequality, and even the empirical realization of Bell tests and Bell states.

More importantly, you stated:
Yes. Ultimately, the point is that measure theory is, essentially, commutative operator theory. Under the commutative assumption, you get inequalities that you do not get without it.

Now, this is flat out wrong. Maybe I've assumed you know a lot more mathematics than you do. Honestly, this statement was so off I thought you must have made a typo the first time you made it, but you keep digging in. The problem is that measure theory is not (essentially or otherwise) "commutative operator theory".
Measure theory is more general. Operators are defined on vector spaces. I am no longer going to assume that you are aware of what a vector space is, so I'll spell it out.

Please don't start to be insulting. I am very aware of what a vector space is. I am very aware of what a Hilbert space is. I am very aware of what operator theory is. I can even guarantee I have studied it deeper than you have.

Now, since function spaces are generalizations of vector spaces, every vector space is necessarily a function space (trivially). And this is the bare bones necessary for any space to qualify as a vector space.

Nope. This is wrong. Vector spaces do NOT need to be function spaces. Function spaces do not need to be vector spaces. For example, R^n (or C^n) are vector spaces over R (or C) but are not usually regarded as function spaces (they are isomorphic to function spaces with a finite domain, but they are not themselves function spaces). The collection of continuous functions from the unit interval to the unit interval is a function space that is not a vector space.


That said, in the operator context, it is typical for function spaces to be limited to vector spaces of functions using pointwise operations.

It is more than we need for a measure space or to define measure or to define measurable spaces.

Snipped for space reasons.

There is no multiplication of scalars, no norm, nor anything that somehow makes this even non-trivially related to operator algebras of any sort, let alone essentially.

OK, a few things.

1. You assumed a positive measure. It is possible to talk about real or complex valued measures. I assume all measures here are finite valued, though.

2. When that is done, the collection of measures on a set with a sigma algebra (a measurable space) can be given a vector space structure via the obvious operations ( (u+v)(A)=u(A)+v(A), (cu)(A)=c u(A) ).

3. For each real (or complex) valued measure, there is a positive measure, called the total variation measure (written as |u|) such that |u(A)|<=|u|(A).

4. Defining a norm via ||u||=|u|(X) gives a Banach space of measures. This is usually called M(X) (or M(X,A) if you want to be explicit about the sigma algebra A).

5. If you start with a set X with a compact Hausdorff topology, the collection of continuous real (or complex) valued functions is a Banach space as well. In fact, it is a Banach algebra, and specifically a *commutative* C* algebra.

6. For a compact Hausdorff space X, we can let the sigma algebra above be the collection of Borel sets. This leads to a space of measures M(X).

7. If f is in C(X) and u is in M(X), we can define <u,f> to be the integral of f with respect to the measure u.

8. Doing this gives a bounded linear functional on C(X), so each element of M(X) corresponds to an element of the dual of C(X).

9. The converse is true as well. Every element of the dual of C(X) is, in fact, given as integration against a unique element of M(X). This identification is even norm preserving and so we can identify M(X) as the Banach space dual of C(X).

This is the commutative theory because C(X) is a *commutative* C* algebra.
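To make items 2, 4, and 7 above concrete, here is a small numerical sketch (a toy example of my own; the finite set X and the particular numbers are purely illustrative). On a finite set a signed measure is determined by its values on singletons, so everything reduces to arrays:

```python
import numpy as np

# Toy model: X = {0, 1, 2}, sigma-algebra = all subsets. A signed measure u
# is determined by u({0}), u({1}), u({2}), so M(X) is just R^3 here.
u = np.array([0.5, -1.0, 0.25])   # a signed measure
v = np.array([1.0, 2.0, -0.5])    # another signed measure
c = 3.0

# Item 2: vector-space operations (u+v)(A) = u(A) + v(A), (cu)(A) = c*u(A)
w = u + v
cu = c * u

# Items 3-4: total variation |u| and the norm ||u|| = |u|(X)
tv_norm = np.abs(u).sum()

# Items 7-8: pairing a function f on X with u, <u, f> = integral of f du
f = np.array([1.0, 0.0, 2.0])
pairing = np.dot(f, u)

print(tv_norm)   # 1.75
print(pairing)   # 1.0
```

In the infinite setting one needs countable additivity and the Hahn-Jordan decomposition to define |u|, but the finite case already shows the Banach-space structure.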

It is utterly irrelevant what one "can" do to extend measures or even how measures come into play in either noncommutative or commutative spaces (or operator algebras).

Sorry, I thought you knew a lot more than you do. I will attempt to give the outlines.

If you do not know enough mathematics to realize that you cannot claim that something like "measure theory" is "essentially" something that requires more structure than measure theory actually has, then you don't know what you are talking about.


I have a fairly good bead on what you know about this subject and you are missing some crucial aspects relevant to what I am saying. First, look over what I said about complex valued measures and C(X). I will link this to operator theory in my next post.
 

Polymath257

No kidding. You can stop back-pedaling. The problem is not any claim that measure theory is relevant; it is your claim that:

This is patently false, absurd nonsense. You have confused an application of measure theory, or a use of measure theory, with what you claim measure theory "essentially" is.

You are wrong. Until you can show me how one cannot have a notion of measure without operator theory, or how every measure that can be constructed must necessarily be (at least trivially) an example of commutative operator theory (and then explain, while you are back-pedaling anyway, how I get to use measure theory anyway for non-commutative spaces and algebras, and in fact how you have defined measure theory in a way that is incredibly restrictive and discounts its central uses in many fundamental formulations of QM), then you can go on about "measure theory" all you want. It doesn't appear you know enough about operator algebras, measure theory, or quantum theory to be relevant here.

I clearly know more of the mathematics involved than you. I thought you had a bit more background, but that is clearly not the case.

I might suggest reading a *good* operator theory book, like Kadison+Ringrose volume I. Rudin's book on Functional Analysis also does a good job with the basics (like a proof of the spectral theorem for normal operators and a treatment of closed unbounded operators).

When you are done back-pedaling there, and have finished claiming on the one hand that Hilbert spaces aren't necessarily function spaces because they need not be L2 (I never said they did; a function space is a generalization of a vector space, and a vector space is trivially a function space, so all Hilbert and Banach and even normed (linear) spaces are function spaces, even if only trivially so), while on the other hand claiming that your statements regarding operator algebras are relevant here even when all we need for Bell tests and the like is complex matrices (which I'll admit now, to spare you the trouble of stating this obvious fact later: as operators are generalizations of entities such as matrices, complex matrices are trivially "operators", and the matrix algebra of complex matrices is necessarily, albeit trivially, a noncommutative operator algebra), then perhaps we can make progress.

You are wrong about vector spaces all being function spaces. Not even close. But that is irrelevant to the main discussion.

Good. Algebras of matrices are (finite dimensional) operator algebras. And that is NOT trivial since, as you even point out, the crucial aspects like violation of Bell's inequalities occur at this level.

Stop back-pedaling. I never claimed that linear functionals or duals or anything related to Banach, Hilbert, pre-Hilbert, or normed spaces of any sort weren't "measures" or that they couldn't be considered measures.
It's the ridiculous claim

You were clearly not understanding my point, though. See my last post.


that I have a problem with. It is not essentially this at all. It is much more general, hence its power. Operators require much more structure than measures.

Yes, I see where your confusion arises.

Let's go over some basics.

A Banach space is a vector space with a norm making it into a complete metric space. If you need definitions, I can provide them.

A Hilbert space is a vector space with a positive definite inner product that is a Banach space under the induced norm. Again, if you need definitions, I can give them.

A Banach algebra is a Banach space with an associative multiplication (and usually an identity) that distributes over addition and commutes with scalar multiplication. In addition, we assume the norm is submultiplicative.

A C* algebra is a Banach algebra with an involution x->x* such that the norm of xx* is the same as the square of the norm of x. For convenience, I assume all C* algebras are complex vector spaces.

Are we good here? Or do you need me to give more detail?
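If it helps, the C* identity ||x*x|| = ||x||^2 can be checked numerically in the simplest non-commutative example, B(H) with H two-dimensional, using the operator (largest singular value) norm; the particular random matrix is of course just illustrative.

```python
import numpy as np

# Check ||x* x|| = ||x||^2 for a 2x2 complex matrix, with the operator norm
# (the largest singular value, np.linalg.norm(..., 2)).
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

def op_norm(a):
    return np.linalg.norm(a, 2)

lhs = op_norm(x.conj().T @ x)   # ||x* x||
rhs = op_norm(x) ** 2           # ||x||^2
assert np.isclose(lhs, rhs)
```

The identity holds exactly because x*x is positive, so its operator norm is its largest eigenvalue, which is the square of the largest singular value of x.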

Examples of C* algebras:

1. The collection of bounded operators on a Hilbert space (do I need to define this?). If H is the Hilbert space, this C* algebra is denoted by B(H) and the * operation is the adjoint map. This algebra is non-commutative if H has dimension more than 1.

2. If X is a compact Hausdorff space, the set of continuous complex valued functions C(X) is a C* algebra with usual multiplication and complex conjugation as the * operation. This is a commutative C* algebra. One crucial fact is that ALL commutative C* algebras are isomorphic to one like this.

3. If H is a Hilbert space, the collection of compact operators (do I need to define these for you?) is, as a subspace of B(H), a C* subalgebra. For finite dimensional H, this is identical to B(H), but they differ a lot in the infinite dimensional setting.

4. The collection of all bounded sequences of complex numbers is a commutative C* algebra with appropriate operations. This is another commutative C* algebra.

Now, in any C* algebra, we can define an element to be self-adjoint if x*=x and normal if x*x=xx*. Obviously every element of a commutative C* algebra is normal.

More subtly, we can define an element to be positive if it is of the form x*x. It turns out that elements of this form are closed under addition and multiplication by positive real numbers (they form a cone). Every element can then be written as a linear combination of positive elements with 4 terms.


Again, not the problem. I never disagreed that measure theory was relevant here. And you are back-pedaling again. We've gone from
Ultimately, the point is that measure theory is, essentially, commutative operator theory.

to

we can identify the collection of measures in that context with the dual of the space of continuous functions

Which is the essence of the identification.


On the contrary, you are now trying to talk about how measure theory is used in a particular context. That's great. It's not what you said. When you want to address the glaringly, obviously incorrect statements you initially made and stop back-pedaling, great.

As relevant to discussions of QM, measure theory is essentially commutative operator theory. Yes, you can define measures in more generality, but the most relevant cases are those where the collection of measures is the dual of the space of continuous functions. Operator theorists tend to see measure theory as the commutative version of their topic. Get off your high horse and maybe learn a different perspective.

Von Neumann developed Hilbert space and his algebras to do this. QM isn't based on it. Also, the inequality in question is not based on quantum mechanics.

I didn't say it was. It is based on the fact that operators on a Hilbert space form a non-commutative C* algebra. Bell's inequality is what is found in the commutative setting with integration against a measure.

A central point behind Bell's theorem is that the inequality doesn't need QM at all. It can work for any theory as it need deal only with the relative frequencies of experimental outcomes and corresponding experimental (device) settings.

It works for any probability space.

Thus, QM is only relevant when one seeks non-trivial ways to violate the inequality, and a central point for Bell and others afterwards (and a central misunderstanding) is that even were QM to be replaced, since the inequalities don't require the theory, whatever replaced QM would still have to deal with the inequality violations.

Agreed. And *why* does QM violate these inequalities? Because of the non-commutativity of the operators and the way they operate on the states.

So, for the commutative setting (where the elements of the C* algebra are continuous functions on some compact Hausdorff space), a 'state' is integration with respect to a probability measure. So, for example, if 0<=|f|,|g|,|h|,|k|<=1, then (using <f,u> for the integral of f with respect to u).

<fh+fk+gh-gk, u> <= 2

In the non-commutative setting, the elements of the C* algebra are operators and the 'states' are the functionals of the form A --> <Ax,x> with x a ray. The corresponding inequality for *operators* 0 <= A,B,C,D <= I, namely

<(AC+AD+BC-BD)x,x> <= 2,

fails.
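For anyone who wants to see the failure numerically, here is a sketch using the more common ±1-valued form of the CHSH quantity (a variant of the 0 <= A,B,C,D <= I version above); the Bell state and the Pauli-matrix observables are the standard textbook choices, used here purely for illustration.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Bell state |phi+> = (|00> + |11>)/sqrt(2)
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

A1, A2 = Z, X                     # observables on the first qubit
B1 = (Z + X) / np.sqrt(2)         # observables on the second qubit
B2 = (Z - X) / np.sqrt(2)

def corr(O1, O2):
    """Correlation <(O1 tensor O2) phi, phi> -- a 'state' applied to an operator."""
    return np.real(phi.conj() @ np.kron(O1, O2) @ phi)

# CHSH combination: bounded by 2 in any commutative (classical probability)
# model, but here it reaches 2*sqrt(2), Tsirelson's bound.
S = corr(A1, B1) + corr(A1, B2) + corr(A2, B1) - corr(A2, B2)
print(S)   # 2.828... = 2*sqrt(2) > 2
```

The non-commutativity is essential: A1 and A2 don't commute, nor do B1 and B2, and that is exactly what lets the combination exceed the commutative bound of 2.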


Most of the operators involved are 'closed operators' when defined on those dense subspaces
No, they aren't. How do I know? Because I am talking about unbounded operators specifically and how they can't be treated in the ways they often are, not only in the textbooks but also too often in the physics literature. I am not talking about how operators can or can't be defined generally, but about (in this case) problems related to the treatment of operators and the spaces they act on in terms of a disregard for rigor.

Yes, I am also talking about unbounded operators. They are defined on dense subsets of the Hilbert spaces, but have the crucial property that their graph is closed (which is why they are called closed operators). In particular, differential operators have this property.

And, for closed operators, we can get a good version of the spectral theorem assuming the 'right' notion of self-adjointness (some caution is required, I will admit).

Again, a decent source for this development is Rudin's book on Functional Analysis.

Since your approach to rigor lacks even this level of care, I am not really very concerned with what you want to regurgitate about operators when you can't seem to decide whether you know basic measure theory.

Again, no need to insult. I guarantee I know more measure theory than you. This type of thing was my research specialty.

If your knowledge of measure theory and operator algebras is this abysmal, then you are in luck: we don't need any of this for the relevant systems in QM that violate Bell's inequality. And better still, since you don't understand Bell's theorem either, you don't even need QM to understand the inequality!

Yes, I understand the definitions you are using. I am saying that they only give a very small part of the picture. By the way, it isn't non-commutative operators--it is non-commutative operator *algebras*. And, again, no need for insults when your own understanding is not at the level I was expecting.

1) EPR implicitly show that locality implies (and requires) a theory to have some sort of structure or nature X
2) Bell shows explicitly, but building on EPR (hence the name of his 1964 paper) that whatever X is, it can be tested empirically.
3) QM's predictions violate X

This is based on the assumption that you use measure theory for the relevant correlations. But that is wrong in the context of QM, where the correlations are computed not with integration of a function against some measure, but via an inner product of an operator applied to a state and that state.

And yes, Bell is correct. No measure on a measure space can reproduce the results of states on operators.

2) & 3) have both turned out to be true (we knew 3) was true before Bell in one sense, as Bell didn't do much in the way of theoretical work on QM to get a violation of his inequality).

Hence, QM is nonlocal, because there is no way to explain the violations of the inequality, which holds independently of any theory concerning measurement outcomes, is required for locality, and is violated by experiment (and by QM, but as importantly by any would-be replacement of QM).

That is only true if you assume the underlying states are described via a probability space. And that is precisely what fails in QM.
 

Polymath257

So, here are some notes, in case you are interested in how *-algebras actually work for those of us interested in the algebraic approach to QM, versus how you seem to describe QM (i.e., in terms of some standard version of Hilbert spaces of states and observables), without the garbled nonsense about probability measures not being possible due to non-commutativity or whatever it is you are so fundamentally confused about:

Standard QM, the kind that continues to be used in labs today, described in the literature, and discussed in conferences, symposia, etc., all over the world, tends to be based around states in Hilbert spaces in something like the following manner:
1) A (pure) state is a unit ray [ᴪ] := {λᴪ : λ a complex number with |λ| = 1} of a (unit) element ᴪ ∈ H.
2) Each observable is a self-adjoint operator A on the Hilbert space H

That much you can probably get from Hollywood movies. What it means, how it works operationally, how it connects to probabilities, and how it differs from the C*-algebra operator approach one has with linear functionals and other things you've mixed up starts with the second "axiom".

The second "axiom" is important to the spectral theorem in the following way (albeit expressed over-simplistically and too concisely):

First, we can use the spectral theorem (this seems trivial, but most of the time it is done in practice without adequate justification; rather, the theorem is appealed to when it doesn't apply, though because somebody else already did the work it could be made rigorous; hence the fact that we *can* use the spectral theorem is vitally important).

Yes, if the relevant operators are unbounded, some extra care is required. The crucial requirement is that the operator be closed: in other words, the graph is a closed set. Fortunately, this is usually true.

Second, we can use it because it implies that there exists a (unique) PVM (projection-valued measure, as in "measure theory"), defined on the Borel σ-algebra of ℝ, which we call the spectral measure of A and (anticipating expectation values) denote by E(⋅).
This in turn gives us the predictions of QM. The probabilities are encoded in the spectral measures:
μ[ᴪ](⋅) := <E(⋅)ᴪ, ᴪ>, where μ is (again) a measure in the measure-theoretic sense.
Moreover, it is a probability measure that depends upon (and only upon) the unit ray [ᴪ].
(That's the norm you've mentioned before)

I'd have a quibble about your notation for defining the measure. You want \mu[\psi](M)=< E_M \psi, \psi> where E_M is the projection associated with the Borel set M.

Also though, you need the identity

<f(A) \psi, \psi> = int f d\mu[\psi]

where f is any continuous function on the spectrum of A.

This links the operator to the measures

(sorry, I need to learn how to get symbols going in this forum).
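That identity is easy to check in finite dimensions, where the spectral measure is just a sum of point masses at the eigenvalues (a toy 2x2 example of my own, with an illustrative choice of A, ψ, and f):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # self-adjoint, eigenvalues 1 and 3
psi = np.array([1.0, 0.0])        # a unit vector (a "state")

lams, V = np.linalg.eigh(A)

# Spectral measure of psi: mu({lam_k}) = |<v_k, psi>|^2 = <E_{lam_k} psi, psi>
mu = np.abs(V.T @ psi) ** 2
assert np.isclose(mu.sum(), 1.0)  # a probability measure on the spectrum

f = lambda t: t**2 + 1.0          # a continuous function on the spectrum

# <f(A) psi, psi> computed directly, with f(A) = A^2 + I
lhs = psi @ (A @ A + np.eye(2)) @ psi
# The integral of f against mu[psi]
rhs = np.sum(f(lams) * mu)
assert np.isclose(lhs, rhs)
```

Here the measure μ[ψ] assigns probability 1/2 to each eigenvalue, and both sides come out to 6.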

But how? This measure μ[ᴪ](M) where M is a (Borel) set of ℝ, is the probability that the measurement outcome of observable modeled by A in the state ᴪ (or better, [ᴪ]) lies in M.
It then provides the real tool in the use of QM: <Aᴪ, ᴪ> becomes the expectation value, and further one can examine more observables (commuting and non-commuting, on different states) to get things like transition probabilities and the like. You've used the symbols already, but you've placed them in the wrong context, made inaccurate statements repeatedly about them, and most importantly confused them with the algebraic approach.

The difficulty is that if you have two operators that don't commute there is not a common PVM that 'diagonalizes' both of them. So the measures used are dependent on the observable to be measured. If a common PVM were possible, Bell's inequality would be satisfied for those operators. But that only happens if the operators commute.
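In finite dimensions, "a common PVM" just means a common eigenbasis, so the point above can be checked directly (illustrative 2x2 matrices of my own choosing):

```python
import numpy as np

def off_diag_in_eigenbasis(A, B):
    """Diagonalize A and measure how far B is from diagonal in A's eigenbasis.
    Zero means a single PVM (one eigenbasis) serves both observables."""
    _, V = np.linalg.eigh(A)
    M = V.conj().T @ B @ V
    return np.linalg.norm(M - np.diag(np.diag(M)))

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Commuting pair: one PVM diagonalizes both
assert np.isclose(off_diag_in_eigenbasis(Z, Z + 2 * np.eye(2)), 0.0)

# Non-commuting pair: no common eigenbasis, hence no common PVM
assert off_diag_in_eigenbasis(Z, X) > 1.0
```

(The non-degenerate case is the clean one; with degenerate spectra one compares commutators instead, since any basis of a degenerate eigenspace works.)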

So, note that first, no C*-algebra is invoked here (that comes in a bit) and that we use probability measures despite non-commuting operators, and note that the state of the system isn't non-commuting or involved in anything non-commuting. Rather, the mathematical models of the state of the system's supposed probabilities of observing particular outcomes for observables are non-commuting.

Actually, you are using C* algebras here. That is how the PVM are proven to exist in the first place (the fact that the *-algebra generated by A can be thought of as the collection of continuous functions on the spectrum of A--the dual then gives the required measures). Also, you don't need self-adjointness for the PVM to exist, you need normality.

Now, what about C*-algebras?

Different structure, different approach, and while I like it more it is not completed nor fully understood and for these and other reasons has proved difficult to use in actual experiments involving quantum systems or devices requiring quantum theory.

Here, the corresponding "axioms" are
1) Observables A are Hermitian elements a = a^+ of the *-algebra A.
2) States are those linear functionals you love to refer to. They are functionals f on A s.t. f(a^+ a) is greater than or equal to 0 for all a ∈ A
2b) f(1) = 1 (the *-algebra A must be unital, and in addition equipped with the appropriate involution to allow that a → a^+ )

Now the expectation is given by f(a), and the state is a function (actually, a linear functional) on a complex unital algebra (and additional constraints such as hermiticity, the necessary involution, nonnegativity, etc., apply).
What we don't see in this C*-approach are states in QM given by elements (rays) of a Hilbert space. Nor do we use self-adjoint operators.

On the contrary, if the C* algebra is B(H), then the states as defined in this way are precisely the linear functionals of the form A --> <Ax,x> for a unique state in the other description. If you are not using all of B(H), then things do get trickier.

But in standard QM, we do use Hilbert space states and model observables as operators acting on these. And we do this using probability measures. And yet it is still non-commutative, despite the probability measures.

Only because the *-subalgebra generated by the single self-adjoint operator is commutative. This is why it is enough to have a normal operator (which also generates a commutative subalgebra). If you try this with two non-commuting operators, the required PVM no longer exists. BUT, if you have any commutative *-subalgebra, the PVM will exist for representing anything in that subalgebra.

Again, this is why measure theory, especially in this context, is essentially commutative operator theory.

I'll leave as an exercise how to work out the generalized state (density matrix formalism) and the generalized measurements (POVMs) for the standard QM version and let you decide what the appropriate algebraic QM equivalents should be.
But if you want to talk about C*-algebras so much, you should have the very basic knowledge about how states and observables work in standard QM, with states in a Hilbert space, and in the C*-algebras of algebraic quantum theory, where states are functionals of the operators, not elements or rays in Hilbert spaces, and your norm=1 doesn't apply.

Each ray gives rise to a state of the C* algebra in the required way and, with some assumptions on the C* algebra, every state is obtained by some ray.
 

Polymath257

Are you aware that this is considered one of the most radical interpretations of QM? It is far beyond the sort of indeterminacy or instrumentalism even attributed to Bohr and Heisenberg (mostly incorrectly), let alone relational QM, Healey's pragmatism, operationalist QM, the statistical interpretation (of Ballentine and others), the generalized probabilistic theory interpretations, the entire class of epistemic interpretations, etc. There exist basically two approaches to QM that hold this view to be true:
1) QM is incomplete or
2) QM doesn't describe anything physical or real

Well, the terms 'physical' and 'real' are defined as being what QM describes. The classical notions of both are, however, wrong.

The possible exception would be QBism, which (so say its founders) is supposed to be somehow both realist and subjective and to be about an agent's predictions.

Because if you really believe that the wavefunction or state vector or "system" in QM is only the set of probabilities an agent or observer would use to make predictions, then you can't have a contradiction, sure. You explain nothing, and you have no basis for the predictions (which use assumptions that violate this position; a central goal in programs that subscribe to less radical views, as well as for the QBists, is the attempt to derive QM without doing so, and they haven't succeeded), but as you've essentially ruled out any way to contradict QM by reducing it to subjective probabilities that can be blatantly wrong and still be "correct" because of your subjective epistemic state, then sure. No contradiction with such solipsism.
Of course, one wonders why you'd take issue with any of the most radical claims about QM or about consciousness as the "cause" of collapse, when in your description collapse depends on your state of knowledge and has nothing to do with reality apart from what you are aware of.


Which subjective probabilities are blatantly wrong? I use QM to make predictions of probabilities and correlations. You use it to do the same. When we bring our results together, they are always consistent.

For example, in Aspect's experiment, I make measurements with my devices. You make measurements with yours. Each of us can compute the expected correlations when the results are brought together. And, when those results are brought together, the calculations are correct. That is an objective fact.

If you do an experiment and make predictions about it using QM, your observations will agree with the predictions. The same is true if I do the experiment and make the observations. That is an objective fact.

The objective facts of QM are the probabilities and correlations as computed using operators on a Hilbert space as opposed to measures on a probability space of properties.

The problems come when you want to say that things have definite properties at all times and that those correlations should be computed with some probability measure on some space describing those properties. That is the classical viewpoint and is simply wrong.
 

LegionOnomaMoi

Veteran Member
Premium Member
Please don't start to be insulting. I am very aware of what a vector space is. I am very aware of what a Hilbert space is. I am very aware of what operator theory is. I can even guarantee I have studied it deeper than you have.
I don't believe you. This is really basic stuff, and you keep getting it wrong. Over and over again.


Nope. This is wrong. Vector spaces do NOT need to be function spaces. Function spaces do not need to be vector spaces. For example, R^n (or C^n) are vector spaces over R (or C) but are not usually regarded as function spaces (they are isomorphic to function spaces with a finite domain, but they are not themselves function spaces).
The bolded part is all that matters here. Yes, they are not usually regarded as function spaces, but likewise 2x2 complex matrices (all one needs for violations of Bell's theorems and a slew of other foundations work) in terms of C*-algebras. You did that to save face. And, technically, you are correct, because such matrices can be shown to satisfy the necessary axioms. Likewise, until you can show that a familiar, Euclidean 3D vector space from primary school cannot satisfy the necessary conditions of a function space, you are just applying a double standard. But ok. If you want, you can consider any set of functions a "function space". Your blatant errors, distortions, and downright misrepresentations of your would-be knowledge involve far more serious errors. I actually feel more than a little betrayed for trusting that you had any idea of what you were talking about in post after post over the years.
1. You assumed a positive measure. It is possible to talk about real or complex valued measures. I assume all measures here are finite valued, though.
No duh. It's a probability space. Of course, I no longer trust that you have a clue what measures or measure theory are. And you've missed the point.

You can't talk about measure theory "failing" due to noncommutativity because this requires more structure than a measure space has by design. That's why we have different measures and integrals for different kinds of mathematical objects in different spaces (e.g., the Haar measure). This is basic stuff, and you want to talk about how "measure theory" is somehow commutative? How did you put it? Oh, yeah:

Ultimately, the point is that measure theory is, essentially, commutative operator theory. Under the commutative assumption, you get inequalities that you do not get without it.

This is complete bunk. It's utter nonsense, in every single sense. I'll get to how wrong you are in every possible way with respect to the inequalities, but you are so absurdly, horribly wrong about measure theory it is literally painful to read such statements.

Let me hold your hand a bit more than I have, since you insist on backing yourself into a corner here.
First, let's do some trivial mathematics with a probability space (which, fyi, is necessarily a measure space).

Recall (jeez, I can't even say that to you now, as I honestly don't know how much more mathematical familiarity you've been faking). Ok, don't recall, but "realize", that since measures are defined on the level of sets, noncommutativity isn't a problem for measure theory. There are no operations that need to be defined to turn a set into a measurable space, so there is no possibility of either losing or imposing noncommutative structures by requiring that an operator be commutative or noncommutative. Nor is it an issue of noncommutative geometries (no metric) or noncommutative spaces.
Rather, the issue tends to be more related to the fact that measure theory is too general. One can’t even readily address issues like commuting vs. non-commuting operators without imposing a vast amount of additional structure.
But it can certainly be done. Moreover, it can be done in much the same way one builds upon classical probability spaces with the additional structures and spaces one obtains by defining random variables over such a space (products of such spaces), yet in a manner that 1) yields noncommutativity via a classical probability measure built out of a classical probability space and 2) can be easily generalized into a C*-algebra.
To give you a sense of how one can do this, consider a generic "classical" probability triple <Ω, ℱ, P>. Now, instead of defining random variables on this space in the typical way (measurable functions from the space that satisfy, in addition to measurability, the conditions specific to random variables), one uses the same space and measurable functions to create "random variables" as follows: Define random variables X(ω) to be bounded functions in ℒ∞ (read "el-infinity") over the triple <Ω, ℱ, P>. Define the expectation operator E to be (as in the classical probability theory case) a linear functional of the random variables, calculated similarly (i.e., using the Lebesgue-Stieltjes integral or similar integral and the probability measure from the triple). That is, we take the tuple { ℒ∞( <Ω, ℱ, ℙ>), E }, where the expectation E is defined for elements X (bounded functions) of ℒ∞( <Ω, ℱ, ℙ>) as the integral of X(ω) over Ω and with respect to the measure dℙ(ω).
In case you hadn’t noticed, this tuple { ℒ∞(<Ω, ℱ, ℙ>), E } can be generalized, and perhaps more importantly can be regarded through an “algebraic” lens in terms of a tuple (A, l), where A is a unital (complex) algebra and l is a linear map from the algebra to the complex numbers, with the constraint that it takes the unit element 1_A of the algebra to 1, i.e., l(1_A)=1.
And there you have a noncommutative probability space by imposing additional structure on a classical probability space and with measure theory “amazingly” left intact.
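To make the construction above concrete, here is a toy numerical sketch (my own illustration, not something from this thread): take a two-point sample space Ω = {0, 1} with the uniform probability measure, let the "random variables" be bounded 2×2-matrix-valued functions on Ω, and define the expectation as integration against ℙ composed with the normalized trace. The particular matrices (Pauli-like) are just a convenient choice to exhibit noncommutativity.

```python
import numpy as np

# Toy noncommutative probability space built on top of a classical one.
# Omega = {0, 1} with P(0) = P(1) = 1/2; "random variables" are bounded
# 2x2-matrix-valued functions on Omega; the expectation integrates the
# normalized trace against the classical probability measure P.

P = {0: 0.5, 1: 0.5}

def expectation(X):
    """phi(X) = sum over omega of (1/2) tr(X(omega)) * P(omega)."""
    return sum(0.5 * np.trace(X[w]) * P[w] for w in P)

# Two bounded matrix-valued random variables (Pauli-like).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

X = {0: sx, 1: I2}
Y = {0: sz, 1: I2}

XY = {w: X[w] @ Y[w] for w in P}
YX = {w: Y[w] @ X[w] for w in P}

unit = {0: I2, 1: I2}
print(expectation(unit).real)        # 1.0: the functional sends the unit to 1
print(np.allclose(XY[0], YX[0]))     # False: the algebra is noncommutative
```

The measure-theoretic data (the sigma-algebra and ℙ) are untouched; all the noncommutativity lives in the extra algebraic structure layered on top, exactly as claimed.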
 
Last edited:

LegionOnomaMoi

Veteran Member
Premium Member
I clearly know more of the mathematics involved than you.

No, you are clearly capable of using words you don't understand. You are capable of such fundamental misunderstanding of basic, beginning-graduate-level mathematics that I seriously question whether you can do more than produce the sort of generic statements I've seen students quote-mine from Wikipedia or a different textbook than the one assigned.

Like this:
Examples of C* algebras:
I could use google too. The difference is I don't need to.

As relevant to discussions of QM, measure theory is essentially commutative operator theory.
Utterly wrong. Completely wrong. It is so fantastically wrong that you should have used google a bit more carefully by now. Just because you get a hit that says "noncommutative probability" or "quantum measure" or something similar doesn't mean 1) that these don't rely on measure theory, 2) that standard QM isn't completely based on measure theory (um, you know, that whole "square integrable" space wave mechanics is defined on? Integrable with respect to what? Oh, right, the freaking Lebesgue measure, practically the most elementary and first development in measure theory), or 3) that these are actually talking about quantum theory itself rather than an approach to understanding it (that, yes, still involves measure theory).

Also, the whole, tired commutative nonsense should be over now. I built you a counterexample that shows you how to construct a measure-theoretic probability space that is 1) non-commutative and 2) built around the same sorts of functions we (not you, apparently) find in operator algebras.

Noncommutativity is not some “deep” mathematical result that shed light on quantum theory the way that group theory did (after initial resistance, as indicated by the term “Gruppenpest”, which Wigner attributed to Pauli and which Pauli clearly used even if he didn’t coin it, but which was in wider usage among despairing and disparaging physicists). Rather, it originated in the radical philosophical approach Heisenberg adopted toward physics’ methodologies, which he saw as akin to that of the (early) Einstein: namely, one deals only with observable quantities. (It is ironic that Einstein later expressed incredulity at the idea that this is possible. When Heisenberg tried to defend his approach by asking whether it wasn’t the very one Einstein himself had adopted with special relativity, he received the reply that it may indeed have been Einstein’s approach once, but it was wrong then and wrong generally. This was eventually to be Heisenberg’s view too, at least in part: the theory dictates what you can observe.)

So Heisenberg wrote down certain known results and observations in an array, throwing away not only basically all his mathematical knowledge but also most of existing theory, in order to explore certain properties and their relations. These “arrays” were rows and columns that all ended in “…” because the terms could go off to infinity, but he could nonetheless play around with the quantities in these arrays using arithmetic, being completely uninterested in mathematical rigor as in any case he was unaware of any mathematical object that consisted of something like an array of quantities.
This was how he derived the first version of the commutation relations that became the CCR and CAR at the heart of much of QM. A then-unfamiliar sort of multiplication operation on these arrays yielded the famous iħ results. Heisenberg didn’t think of his arrays as matrices mostly because he couldn’t: he didn’t know what matrices were. Max Born had to tell him. And it was Bohr, in line with his overall (then-developing) philosophy of complementarity, who really derived the indeterminacy relation, or rather its interpretation in terms of incompatible observables.

It is undeniable that von Neumann’s work was innovative, brilliant, nuanced, elegant, ingenious, and more. He was able to recast both Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics into a single framework by developing the necessary mathematics (Hilbert space) and mathematically representing observables in terms of an algebra of operators on this state space. But the brilliance of his work wasn’t some “deep” mathematical discovery of noncommutativity. That was implicit in Heisenberg’s discovery, the very first formulation of modern QM (previous quantum theory was too ad hoc and goes under the moniker “old quantum theory”). It wasn’t initially even present in Wave Mechanics. Nor was it generally accepted.
Moreover, the mathematics as generally presented in quantum mechanics textbooks, as well as in the literature, often glosses right over certain blatant, obvious contradictions.

First, the commutation relation (setting ħ=1) is usually given as something like QP − PQ = i·1, where i is the imaginary unit and 1 is the appropriate unit element. In textbooks and elsewhere one too often finds these relations in the context of operators on a Hilbert space, possibly alongside other statements that hold only for bounded operators. The obvious problem is that there are no non-trivial bounded operators that can satisfy this relation. You can check this using a proof by contradiction. Simply assume that there are such operators. Then use that assumption to do a proof by induction, with the n=1 case being the commutation relation: you want to show that, for all n in ℕ, Q^n·P − P·Q^n = inQ^(n−1), and that this is not equal to 0. If you need your hand held here, let me know and I'll walk you through the basics. Hint: take the norm of both sides of the identity yielded by induction and use the triangle inequality (together with submultiplicativity of the operator norm) to show that n‖Q^(n−1)‖ ≤ 2‖Q^(n−1)‖·‖Q‖·‖P‖; then use the fact that ‖Q^(n−1)‖ isn’t 0 to show that this in turn implies n ≤ 2‖Q‖·‖P‖ for every n, which is absurd.

I wrote it all out, but it’s a nightmare to type and it’s trivial. I’m sure you can google it.
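The norm argument above is about bounded operators on an infinite-dimensional space; in finite dimensions there is an even quicker obstruction via the trace, which a few lines of numpy can illustrate (my own sketch, complementary to the induction proof): the trace of any commutator vanishes, while tr(i·1) = i·n never does, so no n×n matrices can satisfy the CCR.

```python
import numpy as np

# In finite dimension n, tr(QP - PQ) = 0 for ALL matrices Q, P (trace is
# cyclic), while tr(i * I) = i * n != 0.  Hence no n x n matrices satisfy
# QP - PQ = i * I, in line with the unboundedness argument above.

rng = np.random.default_rng(0)
n = 4
Q = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
P = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

commutator_trace = np.trace(Q @ P - P @ Q)
target_trace = np.trace(1j * np.eye(n))

print(abs(commutator_trace) < 1e-10)  # True: trace of a commutator vanishes
print(target_trace)                   # i * n, which is never zero
```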

Texts that attempt at least a modicum of rigor will not gloss over this but will instead state that no bounded operators satisfy the CCR. They typically then proceed to “fix” this in the simplest way possible that, while it works in special cases, won’t be of much use if one wants to actually work with quantum theory (excepting perhaps in the finite-dimensional case, which these days means whole worlds of applications and a lot of foundational work too). The first step in “fixing” this problem depends on whether or not one follows Dirac’s approach (made more rigorous in two approaches based on Dirac’s, that of Gelfand’s so-called rigged Hilbert space or the route of Schwartz who developed the theory of distributions which grounded Dirac’s delta function in rigorous mathematics). If one opts to stay in Hilbert space, following von Neumann, then one has to somehow figure out how to determine the appropriately dense domains for operators to make sense out of claims that hold only for bounded operators unless e.g., convergence over the appropriately restricted domain can be established (or in some other fashion, often in actual treatment the approach is by fiat). Following Dirac, the general route taken is to abandon rigor altogether. For those who want more, then one has the Gelfand triple and the rigged Hilbert space approach (which is a misnomer on several grounds, the most pertinent being that one wishing to make true statements has to leave the Hilbert space altogether).

Eventually you may even get into the actual physically relevant situations, and then find that the whole thing was largely a waste, because in experiments you can’t typically relate domains or bounded functions of operators in the ways you would mathematically. So you end up using positive-operator valued measures (POVMs) (and possibly the idea of pure states represented by elements of a Hilbert space, which isn’t general enough even in the finite-dimensional case).
 

LegionOnomaMoi

Veteran Member
Premium Member
Bell's inequality is what is found in the commutative setting with integration against a measure.
Which brings us to your fundamental misunderstanding of Bell's inequality, Bell's theorem, and most of the rest of the nonsense you've stated about QM and probability theory.

I'm going to really dumb this down (not because I think you aren't capable of understanding, but because I honestly don't trust you to represent your knowledge here and I want to be clear on the important points, which definitely include a central, core piece of quantum foundations).

That you don’t understand Bell’s theorem or the relevant concepts from quantum theory is fairly obvious from the assertion that the results can be attributed to noncommutativity of operators (or anything similar). This holds independently of your blatantly incorrect assertions about measure theory being “ultimately” and “essentially” a matter of commutative c*-algebras or similar nonsense.
If one asserts that Bell’s theorem or violations of Bell’s inequality can be interpreted as, attributed to, seen as resulting from, or more generally viewed as rooted somehow in the noncommutativity of observable algebras (still less the somehow “commutative” nature of your imaginary version of measure theory, and equally irrelevant, inaccurate statements about probability theory), then one has necessarily missed what Bell did, as well as those whose work built on his.

It's trivial to see how. Bell’s theorem concerns hidden variables and measured outcomes (where outcomes are detector “clicks” or on-screen display or similar instrument read-outs). It is quite specifically designed to show not what quantum theory predicts, but almost exactly the opposite. Bell’s theorem is about supplements to or replacements of quantum theory, described in terms of a statistical parameter that can be related to measurement outcomes and theory-independent assumptions. Quantum theory enters into the derivations of inequalities and the various formulations of Bell’s theorem mainly in two ways. First, one uses known predictions of certain experimental scenarios to generate “rules” for the measurement outcomes (more on this later). Second, and relatedly, one can then show that the theory-independent inequality is violated by quantum theory.
Bell’s theorem is often cast in terms of game theory, or something akin to it (e.g., a sort of trivialized version that can be useful for thinking about quantum encryption or quantum communication). We can imagine that Alice and Bob are in well-separated labs.

I’m not sure where the example I’m going to use originated. I heard it from Rob Spekkens. But there are a ton of these types of scenarios out there.

Alice and Bob will be entering two separated, isolated labs. Both labs are connected via secure channels to a system which can generate signals according to certain preparation schemes. Depending upon the scheme chosen, the game Alice and Bob play changes. In a very simple case, we can imagine that the system generates two bits, sent along the channels to Alice’s and Bob’s respective labs (once they are inside). For each lab, the bit encodes either a 0 or a 1, but which one is entirely uncertain and the generation completely random. Inside each lab is a display screen, which will show either a 0 or a 1 depending upon which input is received. So, once in their labs, Alice and Bob will both receive randomly generated input consisting of either a 0 or a 1. They know that these are the possibilities, but do not and cannot know which possibility they will actually find upon being sealed in the lab.

After receiving their input, they must output one of two possible signals: up or down. There are various sets of rules for different games, but they all involve having Alice and Bob’s output “match” in some sense that depends upon their input. To make this concrete:
Suppose this is the game (i.e., the “rules”): If Alice and Bob both receive 1’s or both receive 0’s, their output must agree (i.e., they must both output either UP or DOWN). If, however, their input doesn’t agree (i.e., one receives 1 while the other receives 0) then their output must disagree (i.e., one must output UP while the other must output DOWN).

The “game” part is that they can fix a strategy ahead of time. So, for example, Alice might suggest that if either one of them receives 1 as input, they should output UP, while if they receive 0, they should output DOWN. This is a winning strategy. No matter what input either receives, following this strategy will guarantee a win. If Alice receives a 1 she will output UP, which means that Bob’s output must be UP if his input is 1 and DOWN if his input is 0. But that’s exactly what the strategy they’ve agreed on dictates, so they can win every time.
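A few lines of simulation (my own illustration of the game as described) confirm that this local deterministic strategy never loses:

```python
import random

def strategy(bit):
    """The agreed strategy: input 1 -> UP, input 0 -> DOWN."""
    return "UP" if bit == 1 else "DOWN"

def won(a_in, b_in, a_out, b_out):
    # The rules: matching inputs require matching outputs;
    # differing inputs require differing outputs.
    return (a_out == b_out) if (a_in == b_in) else (a_out != b_out)

rng = random.Random(42)
trials = 10_000
wins = 0
for _ in range(trials):
    a_in, b_in = rng.randint(0, 1), rng.randint(0, 1)  # random inputs
    if won(a_in, b_in, strategy(a_in), strategy(b_in)):
        wins += 1
print(wins / trials)  # 1.0: the local strategy wins on every run
```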

This game can be won locally. It corresponds to a Bell test or experiment in which a local hidden variable model can reproduce the experimental statistics of quantum theory.

In other cases (i.e., other games for which the rules and winning conditions differ), there is at best an optimal strategy that maximizes winning frequencies. The strategy or strategies that maximize the chances of winning in the same way as above are local. They correspond to Bell inequalities, violations of which entail nonlocality. So far as anybody knows, the only possible physical implementations that allow for these violations (the only nonlocal shared resources) are those that can reproduce the experimental outcomes of quantum theory. Bohmian mechanics, for example, does this, and Bell’s original motivation for his theorem, as well as his continued motivation in the work that built on it, was a defense of Bohmian mechanics and, more generally, of the nonlocal nature of QM.
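For a concrete instance of a game with no perfect local strategy, one can take the CHSH game (my choice of example; the post above doesn't fix a specific game): Alice and Bob win when their outputs satisfy a XOR b = x AND y. Brute force over all sixteen deterministic local strategies gives a 75% ceiling, while measuring a Bell state at the standard CHSH angles reaches cos²(π/8) ≈ 85.4%, the kind of inequality violation the post describes.

```python
import itertools
import numpy as np

# CHSH game: inputs x, y uniform in {0, 1}; win iff a XOR b == x AND y.

def classical_best():
    """Best winning probability over all deterministic local strategies."""
    best = 0.0
    for fa in itertools.product([0, 1], repeat=2):      # Alice: x -> a
        for fb in itertools.product([0, 1], repeat=2):  # Bob:   y -> b
            wins = sum((fa[x] ^ fb[y]) == (x & y)
                       for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

def projector(theta, outcome):
    """Rank-1 projector for measuring along angle theta, given outcome."""
    if outcome == 0:
        v = np.array([np.cos(theta), np.sin(theta)])
    else:
        v = np.array([-np.sin(theta), np.cos(theta)])
    return np.outer(v, v)

def quantum_value():
    """Winning probability using a shared Bell state at standard angles."""
    phi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>) / sqrt(2)
    A = {0: 0.0, 1: np.pi / 4}                  # Alice's measurement angles
    B = {0: np.pi / 8, 1: -np.pi / 8}           # Bob's measurement angles
    total = 0.0
    for x in (0, 1):
        for y in (0, 1):
            for a in (0, 1):
                for b in (0, 1):
                    if (a ^ b) == (x & y):      # winning outcome pairs
                        M = np.kron(projector(A[x], a), projector(B[y], b))
                        total += 0.25 * (phi @ M @ phi)
    return total

print(classical_best())            # 0.75: the local (Bell-inequality) bound
print(round(quantum_value(), 4))   # 0.8536 = cos^2(pi/8): the quantum value
```

Note that the resource doing the work here is the entangled state shared between the two wings, just as the post argues.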

So, where does non-commutativity come in? Nowhere, really. At least not in an explanatory sense. It would be hard for it to matter, because the entire point here is to assume local realism, generate rules that allow one to empirically test the frequencies of experimental outcomes via an inequality, and then see if one can violate said inequality using the appropriate quantum preparation procedure as a shared (nonlocal) resource. One certainly uses quantum theory to build the experimental devices and the procedure, to make predictions, etc. But for nonlocal “strategies” to succeed, the necessary condition is entanglement, not non-commuting observables per se. And one certainly need not leave Hilbert space, as these experiments can be and have been performed using finite-dimensional systems (or the quantization of some physical system’s degree(s) of freedom that yields a finite-dimensional state space), nor make any special use of C*-algebras beyond the trivial.
 

LegionOnomaMoi

Veteran Member
Premium Member
Agreed. And *why* does QM violate these inequalities? Because of the non-commutativity of the operators and the way they operate on the states.
As I already explained, the non-commutativity of operators was von Neumann's attempt to formulate an empirical result that wove together existing quantum theory into one framework that even he became dissatisfied with.
Also, for somebody who has asserted the failure of measure theory in spaces that are quite literally defined by the equivalence classes of square-integrable functions with respect to the first "real" measure one generally learns in an intro to measure theory (you'd find it in Rudin, naturally, which you may want to take your own advice on), you are banking quite a lot on a distinction that is especially irrelevant here.
States are entangled, and although one can take as primitive elements (or functions on/of them) from an operator algebra, this is not necessary and not typical (we use C*-algebras for quantum statistical mechanics and QFT, where the important theorems that supposedly guarantee uniqueness are lost by generalizing to operator algebras anyway).
I'm not going to assume you know anything about measures of entanglement in either NRQM or QFT. However, surely you are familiar with logic. And I know you are familiar with basic set theory and, I assume, some combinatorics. You clearly aren't familiar with rigorous probability theory (and not just because of your ignorance of measure theory), but we don't even need that. Bell's inequality is set up to explore classes of theories that would replace or supplement QM. It is not about QM per se, and neither is Bohmian mechanics (which is deterministic, and in which it is clearly not the observables that matter, because the particles are defined at all times along trajectories, without any collapse). It is about the kinds of set-theoretical, statistical, and logical formulations that began with the statistical case of looking at the space of theories that could satisfy what was explained above in terms of local strategies (and more generally with oft-misunderstood terms like "local realism" or, slightly better, "local hidden variables").
So, for the commutative setting (where the elements of the C* algebra are continuous functions on some compact Hausdorff space), a 'state' is integration with respect to a probability measure. So, for example, if 0<=|f|,|g|,|h|,|k|<=1, then (using <f,u> for the integral of f with respect to u).
Wrong. You can stop the google word grab.
One doesn't need states for a Bell inequality. In fact, one can't use them. But in order to get predictions or theories that violate them, one can use, of course, ordinary NRQM. And in this case you are doing an awful lot of pointless work trying to save face with C*-algebra nonsense in order to get this:
<fh+fk+gh-gk, u> <= 2
a failed attempt to understand the basics and skirt over how we actually get an inequality like this. A state is not integration with respect to a probability measure firstly because this is just nonsense, and secondly because if you integrate with respect to a probability measure you get a number. Google "probability measure". It might help you. Then maybe read Rudin. You are conflating and confusing too many different mistakes and errors to track.
Bell's theorem is relatively simple, and doesn't really involve much quantum theory. And the necessary quantum theory typically does nothing more than trivially involve C*-algebras. As a general rule, nobody really cares much about the algebras of observables unless they are working in infinite-dimensional state spaces in QFT (where nothing is rigorous enough anyway) or are playing around with the foundational elements, seeing what can be dropped and what can't in order to better understand the necessary structures vs. those that can, at least in principle, be considered secondary.
Yes, I am also talking about unbounded operators. They are defined on dense subsets of the Hilbert spaces, but have the crucial property that their graph is closed (which is why they are called closed operators). In particular, differential operators have this property.
The problem, of course (beyond the fact that these operators are measurable, that measure theory isn't somehow magically commutative because you stole some words from various sources you can't understand, and so forth), is that those of us who actually have to use this theory can't rely on definitions of "unbounded" obtained from google searches. We have experimentalists looking for the correct Hamiltonian for some measurement scheme given some preparation procedure in the real world, and all this domain stuff tends to go right out the window.
Theoretically, it remains problematic, because even in theory one has to at least show that the observables make sense on dense domains, not assume that they do. That's an outstanding problem to this day. Not pragmatically, of course. But then, the whole of QFT is built on mathematical objects and their manipulations in ways that are clearly unjustified, so not that many people (outside of foundations, anyway, and even there it isn't a central field of study) are concerned with the domains of actual observables in a way that would justify the textbook presentations you've neatly trashed by apparently attempting to sound more knowledgeable with references to operator algebras. Can't believe I actually bought that at some point.
And yes, Bell is correct. No measure on a measure space can reproduce the results of states on operators.
Bell never said this. It also isn't true, but he never said it. Actually, his probability theory was more than a little lax from a mathematical perspective. But others have improved it and it's been cast into other forms, so it doesn't matter. The point is that the operators are largely irrelevant, that this non-commutativity of measure theory you harp on about has repeatedly been shown to be balderdash (one wonders what you'd think about random matrix theory, and how far you'd go making outlandish claims about commutativity there), and that operators act on states, not the reverse.
Maybe add "Bloch sphere" to the list of google terms you'll need to continue sounding as if you have the faintest idea what you are talking about here. It won't help much, but it's extremely useful.
That is only true if you assume the underlying states are described via a probability space. And that is precisely what fails in QM.
Wrong. Firstly, in ordinary QM probability theory is used to calculate outcomes, and it works; what doesn't work well is how we get at these probabilities from the states (not the observables), nor is the issue non-commutativity (I already showed you how to use classical probability theory to get a noncommutative algebra of random variables with a measure-theoretic expectation operator, and you can do this in at least three ways that I know of for QM at a fundamental level, not just as probability theory is applied). Secondly, states are given by probabilities in most cases, including in the original paper mentioned in this thread: in real life we don't tend to use rays in Hilbert space, particularly not in foundations; we use density operators with CPTP maps and what we call POVMs (even though the measures are too trivial to warrant the name in most cases). And probability theory works fine via the trace. Thirdly, the operators are mathematical representations (usually obviously non-rigorous and flawed, and often hypothesized but not known to be accurate) of the observables associated with a given preparation procedure.
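To illustrate the trace rule just mentioned (a minimal sketch with a made-up qubit state and a standard three-outcome "trine" POVM, not any particular experiment from the thread): probabilities come out as p_i = tr(ρE_i), where the effects E_i are positive and sum to the identity, and the resulting numbers behave like an honest classical probability distribution.

```python
import numpy as np

# Born rule in density-operator form: p_i = tr(rho @ E_i) for a POVM {E_i}.
# rho is Hermitian, positive, trace 1; the E_i are positive and sum to I.

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # a mixed qubit state

def trine_effect(theta):
    """E = (2/3)|v><v| for a unit vector at angle theta."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return (2 / 3) * np.outer(v, v)

angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]   # three directions, 120 deg apart
povm = [trine_effect(t) for t in angles]

print(np.allclose(sum(povm), np.eye(2)))        # True: effects sum to identity

probs = [np.trace(rho @ E).real for E in povm]  # the trace rule
print(all(p >= 0 for p in probs))               # True: nonnegative
print(round(sum(probs), 10))                    # 1.0: a genuine distribution
```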
 

LegionOnomaMoi

Veteran Member
Premium Member
I use QM to make predictions of probabilities and correlations. You use it to do the same. When we bring our results together, they are always consistent.
They aren't. Because if, instead of you doing an experiment in a box, one that can and may later conflict with my result that has you entangled with the system you are experimenting with, you are in a lab with a vial of poison triggered by atomic decay, then I describe you as in a state of being dead and alive, and you describe something or nothing (because you died). Either way, my prediction is statistical, and yours isn't a prediction at all.

But the problem was shown all the way back in 1935, and involved non-commuting observables. If you assume that these are indeed indeterminate, and you assume that QM is complete (that the information, or probabilities, encoded in the state is the relevant information to characterize the system), you get a contradiction. At least you do if you assume locality. You don't save this with some mythical choice between "realism" and "locality", because people familiar with what Bell and others did know that "realism" here at best means "local realism", i.e., "local hidden variables".



For example, in Aspect's experiment, I make measurements with my devices. You make measurements with yours. Each of us can compute the expected correlations when the results are brought together. And, when those results are brought together, the calculations are correct. That is an objective fact.

In an operational sense, what we can do is associate a density operator to an equivalence class of preparation procedures, and a generalization of unitary evolution to CPTP maps with generalized measurements, in order to calculate expected outcomes for the degrees of freedom encoded in the state. Sure.

What we can't do is explain how it is possible for quantum evolution to take place at all (why does the state evolve when we don't look?), why we can never observe the postulates concerning the state update/evolution that most of QM revolves around, or even what it is that we are measuring.
Nor will they agree.
Your talk of "the wave function collapses for wigner's friend" only makes sense if we ignore the fact that every experiment can be considered in terms of one of two contradictory schemes and the only way we know of to differentiate the two is practical limitations: We have to look, and we never see the states predicted by QM, so we invented an ad hoc add on we can't even begin to justify because it brings in primitives to the theory that remain undefined to this day.
Hence experiments like the Nature paper. We're now trying to see what we can realize experimentally concerning different takes and classes of interpretations.

If you do an experiment and make predictions about it using QM, your observations will agree with the predictions.
Wrong. That was the first way around EPR. Also, my predictions can be changed with a change of basis (this isn't as trivial as it sounds, although it should be trivial). Finally, my predictions can be wrong if I haven't adequately accounted for the post-selection paradox and related conundrums.
Then there's the fact that the predictions tell us nothing about how the quantum systems that make up our measurement devices are classical. Because we use classical physics to do quantum experiments. Because quantum theory doesn't make predictions without an inconsistent add-on that we've finally realized we can no longer justify with "shut up and calculate" for a variety of practical and theoretical reasons.


The objective facts of QM are the probabilities and correlations as computed using operators on a Hilbert space as opposed to measures on a probability space of properties.

This isn't how we use QM, or how anybody else uses QM for that matter. It can't be, as the probabilities are computed using operators on a Hilbert space, and those probabilities are computed using measure theory (we need to integrate, or sum, after all; see Rudin, or any real analysis text). And the operators are associated with states in a Hilbert space that (in wave mechanics, for the wavefunction) consists of... wait for it... elements literally defined via measure theory. Square-integrable functions. Again.
Also, the computations aren't done using the operators alone. That's a textbook simplification of a theorist's perspective on trivial cases, which you've slightly bungled. The state encodes the relevant degrees of freedom that are used to compute the probabilities. The observables are used to characterize the corresponding properties of the system. And we use measure theory for all of this, and classical probability theory for the actual measurement predictions (it would be nice if you were closer to correct and we could make sense out of complex, possibly negative probabilities, so that we needn't use distributions and densities obtained indirectly from, in the simplistic textbook case, the mod square of elements of measure-theoretic entities).
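The "mod square" point is easy to make concrete (my own sketch, with a standard normalized Gaussian wave packet and a simple Riemann sum standing in for the Lebesgue integral): position probabilities in wave mechanics come from integrating |ψ|² against dx, which is measure theory through and through.

```python
import numpy as np

# Wave-mechanics probabilities via measure theory: psi is an element of
# L^2(R, dx), and position statistics come from integrating the density
# |psi|^2 (here approximated by a Riemann sum on a fine grid).

x = np.linspace(-10, 10, 200_001)
dx = x[1] - x[0]
sigma = 1.0
psi = (1 / (np.pi * sigma**2) ** 0.25) * np.exp(-x**2 / (2 * sigma**2))

density = np.abs(psi) ** 2                    # Born-rule density |psi|^2
total = np.sum(density) * dx                  # normalization: should be ~1
p_left = np.sum(density[x <= 0]) * dx         # P(position <= 0)

print(round(total, 6))   # 1.0: the state is normalized in L^2
print(round(p_left, 3))  # 0.5: by symmetry of the Gaussian
```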

But sure, it's because measure theory fails that wave mechanics (where the states are literally defined as equivalence classes of measure-theoretic entities with respect to the Lebesgue measure) involves a non-separable state space that is due to the operators irrelevant to this...

The problems come when you want to say that things have definite properties at all times and that those correlations should be computed with some probability measure on some space describing those properties. That is the classical viewpoint and is simply wrong.
 

Polymath257

Think & Care
Staff member
Premium Member
I don't believe you. This is really basic stuff, and you keep getting it wrong. Over and over again.

No, I don't. But I see why you think so. I'm guessing you have had the first course in measure theory with an emphasis on probability theory. Fair enough.

But that is the first level.

Are you aware of the Riesz Representation Theorem? No, not the (rather trivial) Hilbert space version, but the measure theory version? The one that identifies positive linear functionals on C(X) as integration against positive Borel measures?

Or the version with complex measures, where bounded linear functionals on C(X) are identified with integration against a complex measure?

THOSE are the fundamental results I am relying upon.

The bolded part is all that matters here. Yes, they are not usually regarded as function spaces, but likewise 2x2 complex matrices (all one needs for violations of Bell inequalities and a slew of other foundations work) are not usually regarded in terms of C*-algebras. You did that to save face. And, technically, you are correct, because such matrices can be shown to satisfy the necessary axioms. Likewise, until you can show that a familiar Euclidean 3D vector space from primary school cannot satisfy the necessary conditions for a function space, you are just applying a double standard. But ok. If you want, you can consider any set of functions a "function space". Your blatant errors, distortions, and downright misrepresentations of your would-be knowledge involve far more serious problems. I actually feel more than a little betrayed for trusting that you had any idea what you were talking about in post after post over the years.

So you admit that vector spaces are NOT all function spaces? Nor vice versa (although it is very common to require function spaces to be vector spaces).

No duh. It's a probability space. Of course, I no longer trust that you have a clue what measures are, or measure theory. And you've missed the point.

Nope, you have. The point is that integration against a probability measure is a state in the C* algebra of continuous functions.

Read that again. If you don't understand what a state for a C* algebra is, just ask. But this is why probability measures are considered to be states.

You can't talk about measure theory "failing" due to noncommutativity because this requires more structure than a measure space has by design. That's why we have different measures and integrals for different kinds of mathematical objects in different spaces (e.g., the Haar measure). This is basic stuff, and you want to talk about how "measure theory" is somehow commutative? How did you put it? Oh, yeah:
Ultimately, the point is that measure theory is, essentially, commutative operator theory. Under the commutative assumption, you get inequalities that you do not get without it.
Now, you are, again, missing the point.

This is complete bunk. It's utter nonsense, in every single sense. I'll get to how wrong you are in every possible way with respect to the inequalities, but you are so absurdly, horribly wrong about measure theory it is literally painful to read such statements.

Yes, I realize that you understand measure theory as being about positive measures defined on sigma-algebras and the corresponding integration theory. I understand that you limit yourself to probability measures.

Let me hold your hand a bit more than I have, since you insist on backing yourself into a corner here.
First, let's do some trivial mathematics with a probability space (which, fyi, is necessarily a measure space).

Yes, a probability space is a measure space with a probability measure.

Recall (jeez, I can't even say that to you now, as I honestly don't know how much mathematical familiarity you've been faking). OK, don't recall, but "realize" that, since measures are defined on the level of sets,

More specifically, they are countably additive functions from some sigma algebra to either [0,1] (for probability measures), R^+ (for positive measures), R (for signed measures), or C (for complex measures). It is possible to define measures with values in more general spaces, but these are the main ones.

noncommutativity isn’t a problem for measure theory. There are no operations that need to be defined to turn a set into a measurable space, so there is no possibility of either losing or imposing noncommutative structures by requiring that an operator be commutative or noncommutative. Nor is it an issue of noncommutative geometries (no metric) or noncommutative spaces.
Rather, the issue tends to be more related to the fact that measure theory is too general. One can’t even readily address issues like commuting vs. non-commuting operators without imposing a vast amount of additional structure.

You are missing the point. Integration against a measure (of any of the types above) gives a linear functional on the space of continuous complex valued functions (I am assuming Borel measures). And C(X) is a commutative C* algebra.

This is what links measure theory to operator theory.

<snip for space>

To give you a sense of how one can do this, consider a generic “classical” probability triple <Ω, ℱ, P>. Now, instead of defining random variables on this space in the typical way (measurable functions on the space that, in addition to measurability, satisfy the conditions specific to random variables), one uses the same space and measurable functions to create “random variables” as follows: Define random variables X(ω) to be bounded functions in ℒ∞ (read “el-infinity”) over the triple <Ω, ℱ, P>. Define the expectation operator E to be (as in the classical probability theory case) a linear functional of the random variables calculated similarly (i.e., using the Lebesgue-Stieltjes integral or similar integral and the probability measure from the triple). That is, we take the tuple { ℒ∞( <Ω, ℱ, ℙ>), E }, where the expectation E is defined for elements X (bounded functions) of ℒ∞( <Ω, ℱ, ℙ>) as the integral of X(ω) over Ω with respect to the measure dℙ(ω).
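To make the tuple { ℒ∞, E } concrete, here is a toy numpy sketch of my own (a finite sample space, so every function is bounded, ℱ is implicitly the power set, and the integral collapses to a finite sum; the variable names are mine, not standard):

```python
import numpy as np

# A toy "classical" triple <Omega, F, P>: finite sample space,
# F implicitly the power set, P given by point masses.
omega = np.array([0, 1, 2, 3])          # sample points
p = np.array([0.1, 0.2, 0.3, 0.4])      # P({omega_i}); sums to 1

# A "random variable": any bounded function on Omega
# (every function on a finite space is in L-infinity).
X = np.array([1.0, -1.0, 2.0, 0.5])

def E(f):
    """Expectation functional: integrate f against dP (here a dot product)."""
    return float(np.dot(f, p))

# E is linear and sends the constant function 1 to 1,
# exactly the normalization l(1_A) = 1 discussed below.
assert abs(E(np.ones(4)) - 1.0) < 1e-12
print(E(X))  # 0.7
```

Linearity of E is immediate from linearity of the dot product, which is the finite stand-in for linearity of the integral.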

Given your notation, I'm guessing you use Loeve's book?

It is more common among operator theorists to use the greek mu for measures (or lambda for Lebesgue measure). The use of P is antiquated in mathematical treatments these days.

In case you hadn’t noticed, this tuple { ℒ∞( <Ω, ℱ, ℙ>), E } can be generalized, and perhaps more importantly can be regarded through an “algebraic” lens in terms of a tuple (A, l), where A is a unital (complex) algebra and l is a linear map from the algebra to the complex numbers with the constraint that it takes the unital element 1_A of the algebra to 1, i.e., l(1_A)=1.
And there you have a noncommutative probability space by imposing additional structure on a classical probability space and with measure theory “amazingly” left intact.
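A minimal sketch of such a tuple (A, l), my own toy illustration only: take A to be the 2x2 complex matrices and l a vector functional l(M) = <Mx, x>. The Pauli matrices stand in for genuinely noncommuting "random variables":

```python
import numpy as np

# Noncommutative probability space (A, l):
# A = 2x2 complex matrices, l(M) = <M x, x> for a unit vector x.
x = np.array([1.0, 0.0], dtype=complex)   # a unit "state vector"

def l(M):
    return complex(x.conj() @ (M @ x))

I = np.eye(2, dtype=complex)
assert abs(l(I) - 1) < 1e-12              # l(1_A) = 1, as required

# Two noncommuting "random variables" (Pauli matrices):
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
assert not np.allclose(sx @ sz, sz @ sx)  # genuinely noncommutative

print(l(sz))   # (1+0j): x is an eigenvector of sz
print(l(sx))   # 0j
```

The point of the sketch is only that l is linear, unital, and positive, yet the underlying algebra does not commute.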

Good start. Now, consider a compact Hausdorff space X and consider the C* algebra of continuous complex valued functions on X. By all accounts, this is a function space (it is even a vector space).

Any positive Borel measure on X (in other words, the sigma algebra involved is that generated by the open sets) gives rise to a positive linear functional on C(X). In fact, C(X) is even a subspace of L-infty for any positive Borel measure on X.

Now, and this is the crucial part, *every* positive linear functional on C(X) arises in this way. Also, *every* bounded linear functional on C(X) arises by integration against some complex measure. This is the content of the Riesz Representation Theorem (RRT, for short).

In this way, measure theory (of Borel measures on a compact Hausdorff space) becomes the study of bounded linear functionals on the commutative C* algebra C(X).
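A finite toy version of this correspondence (my own illustration, not the RRT itself: for a finite discrete X, "C(X)" is just ℂ^n with pointwise operations, and every positive linear functional is visibly integration against nonnegative weights):

```python
import numpy as np

# Finite model: X = {x1,...,x5}, C(X) = C^5 with pointwise operations,
# a commutative algebra. A functional phi(f) = sum_i w_i f(x_i) is
# integration against the measure mu({x_i}) = w_i.
rng = np.random.default_rng(0)
w = rng.random(5)                 # nonnegative weights = a positive measure

def phi(f):
    return float(np.dot(w, f))

# Positivity of the measure <-> positivity of the functional:
f = rng.random(5)                 # a nonnegative "continuous function"
assert phi(f) >= 0

# Normalizing w to total mass 1 turns phi into a *state*:
p = w / w.sum()
state = lambda g: float(np.dot(p, g))
assert abs(state(np.ones(5)) - 1.0) < 1e-12
```

In this finite setting the "every functional arises this way" direction is trivial (the weights are just phi applied to the indicator functions), which is exactly what the RRT generalizes to compact Hausdorff X.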

THIS IS WHERE COMMUTATIVITY ENTERS IN.

Now, for *non-commutative* C* algebras, like B(H) (the space of bounded operators on a Hilbert space H), we can similarly talk about positive linear functionals. Among them are the vector functionals A --> <Ax,x> for some x in the Hilbert space, with x unique up to a complex unit multiple (in infinite dimensions there are also other positive functionals, e.g. those given by density operators, but the vector ones are the relevant analogue here). If the functional has norm 1, so does x. And so, these are called the vector *states* of the C* algebra B(H).

Here is the whole point:

The states (unit vectors) of the C* algebra B(H) are analogous to the probability measures acting on the commutative C* algebra C(X).

THIS is how measure theory is the commutative version of operator theory.

Furthermore, if you have a normal operator (AA*=A*A), and let X be the spectrum of A, then C(X) has a natural representation as operators and the corresponding states give *measures* on the spectrum. This is the basis of the spectral theorem for normal operators.

But you can get a bit more generality. ANY commutative C* subalgebra of B(H) has a structure space that is compact and the space of continuous functions on this structure space gives a spectral decomposition of H. This is usually thought of as 'simultaneous diagonalization' of the operators in that subalgebra.
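A numerical sketch of the spectral-theorem point above (my own toy example, finite-dimensional only): a normal matrix has an orthonormal eigenbasis, and a unit vector then induces a genuine probability measure on the spectrum.

```python
import numpy as np

# Spectral theorem sketch: N normal (here real symmetric) has an
# orthonormal eigenbasis; a unit vector x induces the probability
# measure on sp(N) with weights |<x, v_k>|^2.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
N = M + M.T                       # real symmetric => normal
assert np.allclose(N @ N.T, N.T @ N)

evals, V = np.linalg.eigh(N)      # columns of V: orthonormal eigenvectors

x = rng.standard_normal(4)
x = x / np.linalg.norm(x)         # a state (unit vector)

weights = np.abs(V.T @ x) ** 2    # the induced measure on the spectrum
assert abs(weights.sum() - 1.0) < 1e-12   # it is a probability measure

# The state's expectation equals integration against that measure:
assert abs(x @ (N @ x) - np.dot(weights, evals)) < 1e-10
```

The last assertion is the whole analogy in one line: <Nx, x> on the operator side equals the integral of the identity function over sp(N) on the measure side.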
 

Polymath257

Think & Care
Staff member
Premium Member
No, you are clearly capable of using words you don't understand. You are capable of such fundamental misunderstanding of basic, beginning-graduate-level mathematics that I seriously question whether you can do more than produce generic statements I've seen students quote-mine from Wikipedia and from a different textbook than the one assigned.

Like this:

I could use google too. The difference is I don't need to.

And I did not for that list. I'm keeping things a bit generic because of the limitations of this forum. I'm used to using LaTeX for writing symbols, and that isn't really an option here.

Utterly wrong. Completely wrong. It is so fantastically wrong that you should have used google a bit more carefully by now. Just because you get a hit that says "noncommutative probability" or "quantum measure" or something doesn't indicate that 1) these don't rely on measure theory, 2) that standard QM isn't completely based on measure theory (um, you know, that whole "square integrable" space wave mechanics is defined on? Integrable with respect to what? Oh, right, the freaking Lebesgue measure, practically the most elementary and first development in measure theory), or 3) that these are actually talking about quantum theory itself rather than an approach to understanding it (that, yes, still involves measure theory).
SIGH.

OK, let lambda be a measure on some Euclidean space. The space L2 of square-integrable functions is then a Hilbert space. It is *this* Hilbert space in which most of QM is done. What are the states? The elements of L2 of norm 1, identified up to a unit complex multiple. These also give norm-1 positive linear functionals on B(H).

Observables are operators (usually unbounded, but also usually closed) on this Hilbert space.

So, yes, measure theory is used to define the appropriate Hilbert space, BUT THAT ISN'T MY POINT.

The point is that, with H=L2(lambda), the collection of bounded operators, B(H), is a non-commutative C* algebra.

Every *commutative* C* algebra is isomorphic to one of the form C(X) where X is a compact Hausdorff space. The states on the commutative C* algebras then are integration against probability measures on X.

In this way, measure theory becomes the commutative version of operator theory.

Also, the whole, tired commutative nonsense should be over now. I built you a counterexample that shows you how to construct a measure-theoretic probability space that is 1) non-commutative and 2) built around the same sorts of functions we (not you, apparently) find in operator algebras.

Well, you look at a specific Hilbert space, L2. I tend to look at a more abstract level, with a generic Hilbert space H. The non-commutative algebra is then the space of all bounded operators on that Hilbert space, B(H).

First, the commutation relation (setting ħ=1) is usually given as [Q, P] = QP − PQ = i·1, where i is the imaginary unit and 1 is the appropriate unit element. In textbooks and elsewhere one too often finds these relations stated in the context of operators on a Hilbert space, possibly alongside other statements that hold only for bounded operators. The obvious problem is that no non-trivial bounded operators can satisfy this relation. You can check this using a proof by contradiction. Simply assume that there are such bounded Q and P. Then use that assumption to do a proof by induction, with the n=1 case being the commutation relation, showing that for all n in ℕ, Q^n*P − P*Q^n = i*n*Q^(n−1), and that this is not equal to 0. If you need your hand held here, let me know and I'll walk you through the basics. Hint: take the norm of both sides of the result yielded by induction and use the triangle inequality repeatedly to show that n*||Q^(n−1)|| ≤ 2*||Q^(n−1)||*||Q||*||P||; then use the fact that ||Q^(n−1)|| isn't 0 to conclude that n ≤ 2*||Q||*||P|| for every n, which is impossible.

I wrote it all out, but it's a nightmare to type and it's trivial. I'm sure you can google it.

It is easier to realize that the spectrum of PQ is the same as that of QP, up to the point 0 (this happens in any Banach algebra). But both are nonempty compact subsets of ℂ, and the CCR gives sp(QP) = sp(PQ) + i. This is a contradiction (compare the maximal imaginary parts).

Why would you think I would need to google such a basic result? Note that the result is true for any Banach algebra: in no Banach algebra are there elements x and y with xy-yx=1.
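For the finite-dimensional case there is an even quicker obstruction, which a few lines of numpy illustrate (my own sketch, separate from the Banach-algebra argument above, which covers the general bounded case): the trace of any commutator vanishes, while tr(i·1) = i·n ≠ 0.

```python
import numpy as np

# Finite-dimensional obstruction to the CCR QP - PQ = i*1:
# tr(QP) = tr(PQ), so every commutator has trace 0, but tr(i*1) = i*n.
n = 5
rng = np.random.default_rng(2)
Q = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
P = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

comm = Q @ P - P @ Q
assert abs(np.trace(comm)) < 1e-9            # always ~0 (exact, up to rounding)
assert abs(np.trace(1j * np.eye(n))) == n    # |i*n| = n, never 0
# Hence no n x n matrices Q, P whatsoever can satisfy QP - PQ = i*1.
```

This is why, as noted below, the CCR forces you out of the finite-dimensional (and, by the norm argument, out of the bounded) setting entirely.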

Texts that attempt at least a modicum of rigor will not gloss over this but will instead state that no bounded operators satisfy the CCR. They typically then proceed to “fix” this in the simplest way possible, which, while it works in special cases, won’t be of much use if one wants to actually work with quantum theory (excepting perhaps the finite-dimensional case, which these days means whole worlds of applications and a lot of foundational work too).

The first step in “fixing” this problem depends on whether or not one follows Dirac’s approach (made more rigorous in two approaches based on Dirac’s: that of Gelfand’s so-called rigged Hilbert space, or the route of Schwartz, who developed the theory of distributions and thereby grounded Dirac’s delta function in rigorous mathematics). If one opts to stay in Hilbert space, following von Neumann, then one has to somehow figure out how to determine appropriately dense domains for operators, in order to make sense of claims that hold only for bounded operators unless, e.g., convergence over the appropriately restricted domain can be established (in actual treatments the approach is often by fiat). Following Dirac, the general route taken is to abandon rigor altogether. For those who want more, there is the Gelfand triple and the rigged Hilbert space approach (a misnomer on several grounds, the most pertinent being that one wishing to make true statements has to leave the Hilbert space altogether).

Nice summary. Another aspect of the rigged Hilbert space technique is that the required spaces aren't even Banach spaces, but are locally convex TVSs. The advantage is that (often) the 'eigenvectors' are actually elements of the spaces considered.

I generally prefer the use of unbounded operators on the original Hilbert space, but that does have issues with eigenvectors, etc. The wave solutions tend not to be in the Hilbert space.

Eventually you may even get into the actual physically relevant situations, and then find that the whole thing was largely a waste, because in experiments you can’t typically relate domains or bounded functions of operators in the ways you would mathematically. So you end up using positive-operator-valued measures (and possibly the idea of pure states represented by elements in a Hilbert space, which isn’t general enough even in the finite-dimensional case).

Well, you also won't get QP-PQ=i in a finite dimensional setting since ALL operators on a finite dimensional space are bounded.

But, yes, those operator valued measures (usually projection valued) are the essence of the spectral theorem (a version of which is still true for closed operators).
 
Last edited:

Polymath257

Think & Care
Staff member
Premium Member
This isn't how we use QM, or anybody else uses QM for that matter. It can't be, as the probabilities are computed using operators on a Hilbert space.

Yes, via expressions like <Ax,y>.

And the probabilities are computed using measure theory (we need to integrate, or sum, after all; see Rudin, or any real analysis text). And the operators are associated to states in a Hilbert space that (in wave mechanics, or for the wavefunction) consists of... wait for it... elements literally defined via measure theory. Square-integrable functions. Again.

Ahhh....you see measure theory as only entering in when defining the Hilbert space.

Sure, you use integration to evaluate <Ax,y> because you are using L2 as the underlying Hilbert space.

But, for example, if you are working with a basis of eigenvectors of A (say Ax_n = c_n x_n ), then you can equally well compute <Ax,y> using the expansion of x and y in that basis. You will NOT have to go back to the integrals to compute your probabilities and correlations.
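A quick numpy illustration of this point (my own toy example with a Hermitian matrix): <Ax, y> computed directly agrees with the sum over eigenbasis coefficients, and no integral appears in the second computation.

```python
import numpy as np

# Two ways to compute <Ax, y>: directly, and via an eigenbasis of A.
rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M + M.T.conj()                      # Hermitian, so eigh applies

c, U = np.linalg.eigh(A)                # A u_n = c_n u_n, columns u_n
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

direct = y.conj() @ (A @ x)             # <Ax, y>, with <u, v> = sum u_i conj(v_i)

# Expansion: x = sum a_n u_n, y = sum b_n u_n,
# so <Ax, y> = sum_n c_n a_n conj(b_n).
a = U.T.conj() @ x
b = U.T.conj() @ y
via_basis = np.sum(c * a * b.conj())

assert abs(direct - via_basis) < 1e-9
```

In L2 the "basis coefficients" a_n would themselves be integrals, but once you have them, the probabilities and correlations are pure sums, which is the point being made.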

Just to let you know: Rudin's Real and Complex Analysis is a really good reference book, but not so good to teach from. His Functional Analysis book is quite nice for the next level up. Yes, I have taught those courses.

Also, the computations aren't done using the operators. That's a textbook simplification of a theorist's perspective on trivial cases that you've slightly bungled. The state encodes the relevant degrees of freedom that are used to compute the probabilities. The observables are used to characterize the properties accordingly for the system. And we use measure theory for all of this, and classical probability theory for the actual measurement predictions (it would be nice if you were closer to correct and we could make sense out of complex, possibly negative probabilities, so that we needn't use distributions and densities obtained indirectly from, in the simplistic textbook case, the mod-square of elements of measure-theoretic entities).

Well, if A is a positive operator and x is a state, <Ax,x> will be positive.
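For instance (a minimal numpy check of my own, taking the positive operator in its generic form A = B*B): <Ax, x> = <Bx, Bx> = ||Bx||² ≥ 0 for every x.

```python
import numpy as np

# Every operator of the form A = B*B is positive, and
# <Ax, x> = ||Bx||^2 is a nonnegative real number.
rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B.T.conj() @ B                      # a positive operator

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
val = x.conj() @ (A @ x)
assert abs(val.imag) < 1e-10            # real up to rounding
assert val.real >= 0                    # and nonnegative
```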

But sure, it's because measure theory fails that wave mechanics (where the states are literally defined as equivalence classes of measure-theoretic entities with respect to the Lebesgue measure) involves a non-separable state space that is due to the operators irrelevant to this...

Well, if you restrict to continuous functions, you don't need to take equivalence classes, but then you lose the Hilbert space (although you can regard it as the metric space completion of the space of continuous square integrable functions).

The problems come when you want to say that things have definite properties at all times and that those correlations should be computed with some probability measure on some space describing those properties. That is the classical viewpoint and is simply wrong.

YES! And that is my point! You need to go beyond probability spaces and do operator theory.

The essential difference is that operators are not commutative while L-infinity is.
 

Polymath257

Think & Care
Staff member
Premium Member
Also, for somebody who has asserted the failure of measure theory in spaces that are quite literally defined by equivalence classes of square-integrable functions with respect to the first "real" measure one generally learns in an intro to measure theory (you'd find it in Rudin, naturally, whose advice you may want to take yourself), you are banking quite a lot on a distinction that is especially irrelevant here.

I would suggest looking at Theorem 6.19 of Rudin's Real and Complex Analysis. That is the crucial result that relates measure theory and the dual of continuous function spaces.

I would point out that Rudin's treatment has some flaws, especially when dealing with measurability (he only looks at Borel sets and does not deal with complete measures). A more thorough treatment can be found in Cohn's book on Measure Theory or in Hewitt & Stromberg's Real and Abstract Analysis.

I have a fondness for Hewitt&Stromberg since Stromberg was my PhD advisor.

Also, not all Hilbert spaces are L2 spaces of measures. The sequence spaces, for example, are very useful, since any choice of basis identifies a Hilbert space with a sequence space. You might also want to look into Sobolev spaces, for example, which are useful for analysis of linear PDEs.

For Functional Analysis and Operator Theory, I generally prefer Kadison&Ringrose vol 1 (mainly because I learned out of it). I might suggest Chapter 4 (Elementary C* algebra theory) as a beginning treatment of the basic ideas. In particular, section 4.4 on the abelian C* algebras is relevant to what I have been saying.

That said, Rudin's treatment of the spectral theorem in his Functional Analysis book is quite nice. However, he doesn't really do the GNS construction, which is an important result. You might want to think about Thm 11.32 in the Functional book and its relation to my comments.

In another post, you mentioned Haar measures. You do realize that they are not, in general, probability measures (if the underlying group is not compact), right? I also wonder if you have looked at them in any case where the group is not a Lie group. If not, look at Hewitt & Ross's two-volume treatment of Abstract Harmonic Analysis. The second volume has a pretty complete treatment of non-abelian compact groups.

So, yes, I understand where you are coming from. The *definition* of a measure space only requires a sigma algebra on a set and a countably additive positive measure on that sigma algebra. You can then look at various spaces (not just L2 and L-infinity) of spaces of measurable functions. And, in particular, L-infinity is a *commutative* C* algebra.

But this is only the first level of understanding of what measure theory gives us. Its impact on duality theory is much more important and is the basis of such results as the spectral theorem.

The way you focus on the set-function definition of a measure seems like claiming that group theory has nothing to do with symmetries because the definition of a group doesn't mention symmetries. While true, it misses some crucial aspects of the objects of study. In the context of operator theory, complex measures 'are' the bounded linear functionals on the commutative C* algebras. Positive measures 'are' the positive linear functionals. Probability measures 'are' the states of said algebras, and the point masses 'are' the pure states.
 
Last edited:

Polymath257

Think & Care
Staff member
Premium Member
They aren't. Because if, instead of you doing an experiment in a box that can and may later conflict with my result (which has you entangled with the system you are experimenting with), you are in a lab with a vial of poison that is triggered by atomic decay, then I describe you as in a state of being dead and alive, and you describe something or nothing (because you died). Either way, my prediction is statistical, and yours isn't a prediction at all.

Let's keep everyone alive and just say I either detected a decay or not. So, in my description, I have detected a decay. Since I made an observation, I have a definite value for the result. And I regard myself as being in a definite state about having detected the result.

In your description, the atom is still in a superposition of 'decayed' and 'not decayed'. I am also in a superposition of 'detected a decay' and 'not detected a decay'. Your descriptions will continue to be in superposition until you make an observation. You get a statistical prediction of what the results of that observation will be.

Now, when you actually make an observation, the result you get (decayed or not) will be the same as what I got. The results are consistent. There will never be an observation where I say the atom decayed and you have an observation and say it did not.

So, yes, our descriptions will factor in our not yet having made observations. So, I may know that the atom decayed and you might not. Our descriptions will reflect that. For me, the state will be either |decayed> or |not decayed> while for you it will be some combination of the two.
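A toy two-qubit model of this consistency (my own sketch, with made-up amplitudes): from the outside, "atom + my memory" is an entangled superposition, but the joint distribution puts zero weight on mismatched records, so the later observation always agrees with mine.

```python
import numpy as np

# Outside description after my measurement:
#   a|decayed>|saw decay> + b|not decayed>|saw nothing>
# Basis ordering |atom, memory>: |00>, |01>, |10>, |11>,
# where 0 = decayed / saw decay and 1 = not decayed / saw nothing.
a, b = np.sqrt(0.3), np.sqrt(0.7)
psi = np.array([a, 0, 0, b])
probs = np.abs(psi) ** 2

# Zero probability of mismatched records (atom decayed but memory
# says "no decay", or vice versa):
assert probs[1] == 0 and probs[2] == 0

# The only outcomes are the two consistent ones:
assert abs(probs[0] - 0.3) < 1e-12
assert abs(probs[3] - 0.7) < 1e-12
```

The amplitudes 0.3/0.7 are arbitrary placeholders; the structural point is the two zero entries.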
 

George-ananda

Advaita Vedanta, Theosophy, Spiritualism
Premium Member
Perhaps to a Solipsist who doesn't know the Cosmos had a very long time of zero conscious observation going on in it.
Or Consciousness created the cosmos. And quantum mechanics just begins to be a suspicious observation that everything could just be an unfolding of consciousness.

Remember there are two distinct schools of thought:

Materialism: Matter is primary, and consciousness is a derivative of matter.

Vedic (Hindu), Theosophical: Consciousness is primary, and matter is a derivative of Consciousness.
 

Shadow Wolf

Certified People sTabber
Or Consciousness created the cosmos.
That, of course, requires extraordinary evidence and it is not at all what quantum mechanics suggests.
Materialism: Matter is primary, and consciousness is a derivative of matter.

Vedic (Hindu), Theosophical: Consciousness is primary, and matter is a derivative of Consciousness.
Only if you want to insist Materialism and Hinduism are the only options.
 

rational experiences

Veteran Member
If biological human men.

Born by humans as other two baby adult human lives told any truth....

His life is owned in biology only by two humans who had sex. Humans survive until they die.

Consciousness. Biology. Your bio life presence of utmost importance first. Not even considered relevant.

Instead you philosophise thinking into unnatural incorrect human beliefs.

Your own. Organised for human cult behaviours the organisation.

You theoried in a human minds thought of man's use design. Lots of designs via human thoughts to put together a built machine.

God man of your machine you then impose intricate machine details to any bio thesis you choose.

Knowing you took earths mass historic rocks dusts yourselves back to melt to obtain the machine.

Is how your mind became entrapped in a historic bio choice to take earths rock mass dusts into metals yourself.

Which you like to ignore as being relevant to human man's ego incorrectness of a living breathing life mind bio body in water and oxygenation.

Lying every moment in theory.
 

LegionOnomaMoi

Veteran Member
Premium Member
I guarantee I know more measure theory than you. This type of thing was my research specialty.

If you are an expert in measure theory, then things become rather simple. You claim:

Yes. Ultimately, the point is that measure theory is, essentially, commutative operator theory. Under the commutative assumption, you get inequalities that you do not get without it.
Prove it.
And let me be crystal clear, because you can clearly obfuscate and backtrack quite easily, as you already have. I don't want to see a bait-and-switch proof where you copy and paste steps from some source showing how to construct a commutative C*-algebra or any other commutative structures at all, as this proves absolutely nothing.

I don’t mean construct a commutative algebra in some trivial way and claim that this somehow says anything about the intrinsic commutative nature of measure theory.

I don’t mean copy and paste some version of the Bell inequality and call it “commutative”, then copy and paste some formula for a quantum violation of Bell’s inequality and call it “non-commutative” without referring to measure theory at all. I don’t mean obfuscate by copying and pasting irrelevant definitions.

No more weaseling your way out of this. You’re an expert in measure theory (you claim) and measure theory is, by definition and design, extremely general. For the benefit of anyone and everyone, I’ve included a short description of the conditions one must have to use measure theory.

For measurable spaces, you need only a set and your choice of sigma-algebra. Measure theory is the application of measures—which are, and are of necessity, set functions, because measure theory assumes no additional operations or structures—to (sub)sets of measurable spaces. This generality is, again, by design, so that measure spaces can be applied to highly abstract spaces having little structure as well as to incredibly rich spaces with level after level of interrelating structures. Finally, the measurable space tuple together with the measure constitutes a measure space.
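For anyone who wants it spelled out, here is a bare-bones finite example of exactly this much structure and no more (my own sketch): a set, a sigma-algebra (here the full power set), and a countably additive set function. Note that nothing here has an operation that could commute or fail to commute.

```python
from itertools import chain, combinations

# A finite measure space: set, sigma-algebra, countably additive measure.
omega = frozenset({'a', 'b', 'c'})

# The power set is the largest sigma-algebra on omega:
sigma_algebra = [frozenset(s) for s in
                 chain.from_iterable(combinations(omega, r)
                                     for r in range(len(omega) + 1))]
assert len(sigma_algebra) == 2 ** len(omega)

weights = {'a': 1.0, 'b': 2.0, 'c': 0.5}     # mu on the atoms

def mu(S):
    """The measure: a set function, defined by summing atom weights."""
    return sum(weights[x] for x in S)

# Countable additivity (here: finite additivity on disjoint sets):
A, B = frozenset({'a'}), frozenset({'b', 'c'})
assert mu(A | B) == mu(A) + mu(B)
assert mu(frozenset()) == 0
```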

Of course, as an expert you already know this. And as an expert you would be familiar with the ways Hilbert spaces can be defined by taking equivalence classes of square-integrable functions with respect to a particular measure, how spectral measures are used for operators on such spaces, how operators have their own integration measures and how e.g., positive operator measures can be used here as well as in quite different contexts.

More importantly, as an expert you are aware that measure spaces do not come equipped with any operations that are either commutative or non-commutative. You must equip such spaces with additional structure in order to have any commutative properties. It is thus absolutely ludicrous to claim that measure theory is somehow “essentially” commutative (or non-commutative), because to even refer to commutativity requires specifying the additional structure you are equipping your measure space with.

And, once more, just to be crystal clear, no more attempting to weasel your way out of this by switching from the claim about measure theory being commutative and this being how measure theory fails to your newer attempt to make it about noncommutativity (along with some pointless way of defining a commutative space, which is what? your "proof by it's possible to do something different"?).

So, as an expert, you can start to show that your ridiculous claim is somehow sensible by showing why measure theory is suddenly commutative, although this is just a matter of interest. You've already "described" (largely copied) how one gets a commutative space, but this is wholly irrelevant. By this method of "proof", C*-algebras fail here because there exist commutative C*-algebras. What you claimed is that measure theory itself was somehow essentially commutative, and thus that's where the difference is, that's where Bell's inequality comes from (and then later, that's where the violations come from), and so forth.

What you have yet to do is show that measure theory is necessarily commutative (because this is, of course, patent nonsense).

Even if you actually start at the level measure theory does, you’d have to equip a measure space with more structure in order to refer to commutativity at all, which means you aren’t proving anything about measure theory, but only about the particular measure space you’ve decided to use for a space over which you define a commutative algebra. You would not be proving that measure theory is necessarily commutative, or that one cannot use measure theory for the non-commutative algebras of QM.

You can’t prove that measure theory is “essentially” commutative. It’s difficult even to make sense out of your claim in a manner that makes it clear how you could prove it. But you are the expert, so there must be some way you can prove that

1) QM contains algebraic structures which are essential to it and are used to do things like calculate probabilities via operators acting on quantum states in e.g., a Hilbert space. To be clear, what I’m really getting at by this condition is that you don’t attempt to sidestep this matter by constructing some non-measurable function or other and hoping I won’t notice this would make QM unusable.

2) Any choice of measure/measures together with an associated measurable space (i.e., measure spaces) that one attempts to use to work with the noncommutative mathematics required in QM will fail, and this is due to the commutative nature of measure theory itself.

Of course, you’d be left with the problem of how, in such a pathological space, you could get anything like a physical theory. Then you’d have to explain how every mathematician and scientist in the world whose field concerns, or who has ever dealt with, quantum mechanics (all of them working with operators defined on a measure-theoretic state space whose norm is given via the Lebesgue integral and measure, operators whose role in the theory crucially depends on spectral measures) has somehow failed to see what you have:
That the mathematics underlying quantum theory necessarily fails because measure spaces have a hidden commutative structure you’ve discovered that nobody else knows about.

Until I can see an actual proof that actually deals with measure theory itself, you simply aren't worth talking to. You are a disappointment (and really I am disappointed in myself for not realizing it earlier), your blatant misrepresentations reflect poorly on these forums, and I am not entirely sure I would like to be a member of any discussion board that has such a charlatan as admin.
 

rational experiences

Veteran Member
If you state human laws. Legal precedence against human behaviour that once decided to do whatever it wanted.

It would involve measurable assertions in life.

Two human adult parents. Everyone's.
Baby girl baby boy.

Babies grew into human adults.

Position mind changes to cause a human to not think in unity.

Unit. One means one purpose.

The measure..

Conscious change to assert agreement with non unit a unit any unit anywhere of any type.

Assertion..family unified who still owned original behaviour.

The changed mind. As behaviour asserts to have what only a changed unit wanted. It was by taught defined force.

For a man baby it was men agreed.

Therefore it would qualify one original father was no longer present.

To assert that murdering and threat of murder and cruelty inflected would cause the larger population to yield. By behavioural differences only.

As humans survival Instinct was not to murder self nor cause harm to self human.

Hence behaviour human consciousness men today assert was a common factor of first scientific belief.

To create was an assertion to cause beyond self body a procreation of one...human species by two to form a baby bio life beyond self.

Men built a machine. How was the machine created? By destructive technique considered removing time by value a burning gas.

So first a mass isn't burning.

Therefore measure in science was falsified.. as practice science involves looking at dead and destroyed things to produce an assessment of what it isn't.

Not what it is.
 