
Any Defenses of Materialism?

LegionOnomaMoi

Veteran Member
Premium Member
The experiments support the ontic-wavefunction models. The specific statement you highlight is not about restricting their analysis to merely ontological models; rather, they are saying that their experimental verification does not make any special assumptions that would restrict the applicability of their results.

The last part is merely their speculation about which of the various ontic wavefunction models might be the correct descriptor of reality.
Wrong. The "last part" is the admission concerning the possibilities if the assumptions built into their experiments (i.e., the assumption of ontic states) don't hold true or that they may not hold true as assumed. You just provided an "experiment" which, as I quoted, explicitly stated that the entirety of their results depended upon the assumption of ontic wave-functions. You provided a paper I already had, and all it showed was that if we assume a form of your view is correct, then we can find it is.
The experiments support the ontic-wavefunction models
"Crucially, our theoretical derivation and conclusions do not require any assumptions beyond the ontological model framework"
By assuming an ontological model, we find the possibility of an ontological wavefunction. Wow. Amazing. Given A, then A.
The ontic-wavefunction is assumed.
 

sayak83

Veteran Member
Staff member
Premium Member
How can you? You possess no self to agree or disagree with anything, according to you. The actions responsible for what you typed were not conscious: either they were the inevitable result of your realization in some particular universe (in others, according to you, you agreed with me), or they were determined in some other sense such that you can't agree or disagree at all. Your agreement is simply the product of objective laws acting on external systems which, taken together with the systems that compose your brain, ensure that your disagreement is due to some set of initial conditions, and that there really isn't any "you" capable of doing anything other than being configuration states of elementary particles, the laws governing which ensured that you would make the post you did.
What evidence exists for such a view? Only the assumption that, as observers, we are free to design experiments and make the measurements we do, combined with the further assumption that we neither design nor measure them.
A chess playing computer can accept or reject various possible alternative playing strategies it computes in its silicon substrate. That entire process is reducible to physical states: electrical currents and voltage potentials. I see no essential difference between me accepting and rejecting various possibilities using neural firing and the computer doing the same using voltage fluctuations.
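The deterministic evaluate-and-select loop invoked here can be sketched in a few lines. Everything below (the board, the moves, the scoring function, and the names `score_move`/`choose_move`) is invented for illustration and is not taken from any real chess engine:

```python
# Toy sketch of a program "accepting" and "rejecting" alternatives:
# it scores every candidate move and keeps the highest-scoring one.
# The board, moves, and scores are made up for illustration.

def score_move(board, move):
    # Stand-in evaluation function: material gained by the move.
    return board.get(move, 0)

def choose_move(board, candidates):
    best_move, best_score = None, float("-inf")
    for move in candidates:          # "consider" each alternative
        s = score_move(board, move)
        if s > best_score:           # "accept" it only if it scores higher
            best_move, best_score = move, s
    return best_move

board = {"e4": 0, "Qxd5": 9, "Nf3": 1}
moves = ["e4", "Qxd5", "Nf3"]

# Identical inputs always yield the identical choice: the "decision"
# is fully determined by the input and the algorithm.
print(choose_move(board, moves))     # Qxd5
```

Whether such an evaluate-and-select loop counts as genuine acceptance and rejection is, of course, exactly the point under dispute in this exchange.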
 

sayak83

Veteran Member
Staff member
Premium Member
Wrong. The "last part" is the admission concerning the possibilities if the assumptions built into their experiments (i.e., the assumption of ontic states) don't hold true or that they may not hold true as assumed. You just provided an "experiment" which, as I quoted, explicitly stated that the entirety of their results depended upon the assumption of ontic wave-functions. You provided a paper I already had, and all it showed was that if we assume a form of your view is correct, then we can find it is.

"Crucially, our theoretical derivation and conclusions do not require any assumptions beyond the ontological model framework"
By assuming an ontological model, we find the possibility of an ontological wavefunction. Wow. Amazing. Given A, then A.
The ontic-wavefunction is assumed.
No, you are completely misreading what they are saying. It's absolutely obvious that you are doing this; in fact, you are quote mining. The full quote explicitly refers to specific restrictive assumptions that previous experiments made and that their experiment overcomes, making it capable of ruling out more general groups of epistemic models. Here:
[attached image: upload_2016-9-28_2-55-52.png]
 

LegionOnomaMoi

Veteran Member
Premium Member
I would also state that I don't buy into an epistemic model of QM, and I do subscribe to an ontic model of aspects of QM, in the sense that I believe QM is not irreducibly statistical. I simply think it idiotic to attempt to enforce the obviously flawed and inaccurate view of systems as ideally isolated and of observations as negligible, such that experiments which always exclude observers are taken to describe observers. After all, in order to develop any physical framework, observers must exert freedom of will in order to logically infer anything from the results of experiments assumed to be freely designed, with the relevant systems freely measured.
 

sayak83

Veteran Member
Staff member
Premium Member
I would also state that I don't buy into an epistemic model of QM, and I do subscribe to an ontic model of aspects of QM, in the sense that I believe QM is not irreducibly statistical. I simply think it idiotic to attempt to enforce the obviously flawed and inaccurate view of systems as ideally isolated and of observations as negligible, such that experiments which always exclude observers are taken to describe observers. After all, in order to develop any physical framework, observers must exert freedom of will in order to logically infer anything from the results of experiments assumed to be freely designed, with the relevant systems freely measured.
None of this makes any sense to me, as I consider that an external reality exists regardless of whether an observer is observing it or not, and hence the task of science is to describe this reality through inference based on observations that certainly do perturb that reality (like any other interaction). Finally, consciousness has nothing whatsoever to do with the act of observation. The ONLY connections with what we observe outside are physical: light, sound, heat, touch, etc. These physical signals are interpreted internally by the mind/brain completely separate from the reality from which these signals come (the basic independence is established when they are out of sync, as in dreams and hallucinations). So the idea that consciousness is somehow involved in this is just confused philosophy. Either go for solipsism, or go for objective reality with a peripheral role for consciousness in shaping that reality. The middle position is incoherent.
 

LegionOnomaMoi

Veteran Member
Premium Member
A chess playing computer can accept or reject various possible alternative playing strategies it computes in its silicon substrate.
No, it can't. It can react to a system (a living system) capable of self-determining states via an algorithm that is deterministic. I have a chess app that I use to discover various ways in which I can always win because the app can't accept or reject anything, it can only react to input that I provide.
That entire process is reducible to physical states: electrical currents and voltage potentials. I see no essential difference between me accepting and rejecting various possibilities using neural firing and the computer doing the same using voltage fluctuations.
It is reducible only so long as one ignores what determines the states. Actually, it provides a fundamental argument for non-reductive immaterialism in physics (see attached).
 

Attachments

  • Recognizing Top-Down Causation.pdf
    146.5 KB · Views: 57

LegionOnomaMoi

Veteran Member
Premium Member
No, you are completely misreading what they are saying. It's absolutely obvious that you are doing this; in fact, you are quote mining. The full quote explicitly refers to specific restrictive assumptions that previous experiments made and that their experiment overcomes, making it capable of ruling out more general groups of epistemic models. Here:
My descriptions are based largely on my conversations with them (well, not all; mostly with Ringbauer & Duffus). And I'm not quote mining, I'm demonstrating the ways in which fundamental assumptions are required and used in the interpretation of empirical investigations. After all, the paper asserts to find evidence for at least two fundamentally different conceptions of reality (Bohmian and the MWI). You, however, are cherry-picking studies you hope will support your view. They don't, not because your view is wrong or unsupportable in principle, but because it is in practice. Yet you seem to assert that the interpretations of practice motivated by ideologies are to be taken to be ontological so long as they accord with your preconceptions.
 

LegionOnomaMoi

Veteran Member
Premium Member
No, you are completely misreading what they are saying. It's absolutely obvious that you are doing this; in fact, you are quote mining. The full quote explicitly refers to specific restrictive assumptions that previous experiments made and that their experiment overcomes, making it capable of ruling out more general groups of epistemic models. Here:
Your quotation is exactly what I stated: they depend upon the ontological model. Your quotation says this. How can you pretend otherwise? How do you interpret "any assumptions beyond the ontological model"?
 

LegionOnomaMoi

Veteran Member
Premium Member
None of this makes any sense to me, as I consider that an external reality exists regardless of whether an observer is observing it or not
Of course it does. The question is whether it exists as experienced or indeed describable by the observer, and moreover whether or not the descriptions are reducible to fundamental physics in some reductive manner.
Finally consciousness has nothing whatsoever to do with the act of observation.
How many empirically supported theories of physics have been developed via unconscious processes?
The ONLY connections with what we observe outside are physical: light, sound, heat, touch, etc.
Heat isn't really physical, we don't observe light (what was the wavelength of the last photon you saw?), and proprioception excludes the actual experience of touch.
“The only reality is mind and observations”
Henry, R. C. (2005). The mental universe. Nature, 436(7047), 29.
I don't buy this, but as even Kant realized we experience reality through sensory modalities that structure "reality" independently of "reality". We are conscious agents, and our descriptions of reality are limited by the ways in which we can transcend our limitation as observers.


These physical signals are interpreted internally by the mind/brain completely separate from the reality
How dualistic.
 

sayak83

Veteran Member
Staff member
Premium Member
Your quotation is exactly what I stated: they depend upon the ontological model. Your quotation says this. How can you pretend otherwise? How do you interpret "any assumptions beyond the ontological model"?

That their results validate, and hold for, the entire generic class of ontological models and are not restricted to specific subsets of ontological models that further assume other things, like preparation independence, continuity, or symmetry, are true as well. An epistemic model that does not believe that preparation independence is true cannot be ruled out if the results simply validated a model in which both the objectivity of the wavefunction and preparation independence were considered true. Since the results validate a generic objective wavefunction model that has no additional assumptions about the nature of states, preparation, or measurement, it rules out a bigger class of epistemic models.

The theory is explained in detail here.
http://quanta.ws/ojs/index.php/quanta/article/view/22/43
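For readers following along, the ψ-ontic/ψ-epistemic distinction being argued over can be caricatured numerically. In the ontological-model framework, each preparation corresponds to a probability distribution over hypothetical ontic states λ; "ψ-ontic" models give distinct quantum states disjoint distributions, while "ψ-epistemic" models let them overlap. The four-element state space and the specific distributions below are invented for illustration and are not the paper's actual construction:

```python
# Toy numerical caricature of psi-ontic vs psi-epistemic models,
# using made-up distributions over four hypothetical ontic states.
import numpy as np

def overlap(mu, nu):
    # Classical overlap of two distributions: sum of pointwise minima.
    return float(np.minimum(mu, nu).sum())

# psi-ontic: each quantum state occupies its own region of ontic states,
# so the ontic state lambda uniquely determines the quantum state.
mu_psi = np.array([1.0, 0.0, 0.0, 0.0])
mu_phi = np.array([0.0, 1.0, 0.0, 0.0])
print(overlap(mu_psi, mu_phi))   # 0.0: disjoint distributions

# psi-epistemic: distinct quantum states can share ontic states, so
# lambda alone cannot always tell the two preparations apart.
nu_psi = np.array([0.5, 0.5, 0.0, 0.0])
nu_phi = np.array([0.0, 0.5, 0.5, 0.0])
print(overlap(nu_psi, nu_phi))   # 0.5: overlapping distributions
```

Arguments of the PBR type aim to show that models of the second kind cannot reproduce quantum statistics without extra assumptions, which is the class of results the linked paper discusses.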
 

LegionOnomaMoi

Veteran Member
Premium Member
That their results validate, and hold for, the entire generic class of ontological models and are not restricted to specific subsets of ontological models that further assume other things, like preparation independence, continuity, or symmetry, are true as well.
All ontological models are ontological and therefore cannot support epistemic or other interpretations of QM. Yes, there are many classes of ontological models, but as the orthodox interpretation, the ensemble interpretation, and most interpretations of QM (and, according to some, QM itself) are not ontological, this doesn't matter.

An epistemic model that does not believe that preparation independence is true cannot be ruled out if the results simply validated a model in which both the objectivity of the wavefunction and preparation independence were considered true.
1) The orthodox interpretation and most variants aren't really epistemic, and thus this statement is meaningless and/or pointless
2) The orthodox interpretation and other non-ontological models DO assume preparation independence.
3) The interpretations in which preparation independence cannot be true are mostly ensemble interpretations, which Bohr and I both reject.

Since the results validate a generic objective wavefunction model that has no additional assumptions about the nature of states, preparation, or measurement, it rules out a bigger class of epistemic models.
The results do no such thing. Wavefunctions generally encode measurement outcomes which cannot be realized. There exists no possible method to validate a model in which measurement outcomes which never occur support a theory. This is one reason why your latest "support" of your ontological interpretation must assume an ontological interpretation.
 

sayak83

Veteran Member
Staff member
Premium Member
Of course it does. The question is whether it exists as experienced or indeed describable by the observer, and moreover whether or not the descriptions are reducible to fundamental physics in some reductive manner.
Of course it does not exist as experienced. Otherwise we wouldn't spend millions of dollars on physics and physicists to infer the objective, mind-independent reality that does exist through clever use of experiments and maths.
I am agnostic about reduction.

How many empirically supported theories of physics have been developed via unconscious processes?

More coming soon, as computers are beginning to get there. They have already produced several mathematical proofs and are doing a lot of coding themselves nowadays.

Heat isn't really physical, we don't observe light (what was the wavelength of the last photon you saw?), and proprioception excludes the actual experience of touch.
I said connections to it. Mind is a conglomeration of neural activity only indirectly connected to photons and sound waves through the sense organs.

“The only reality is mind and observations”
Henry, R. C. (2005). The mental universe. Nature, 436(7047), 29.
I don't buy this, but as even Kant realized we experience reality through sensory modalities that structure "reality" independently of "reality". We are conscious agents, and our descriptions of reality are limited by the ways in which we can transcend our limitation as observers.



How dualistic.
No more than wood-fire.
 

LegionOnomaMoi

Veteran Member
Premium Member
An epistemic model that does not believe
I have to apologize, for this is simply unfair, but..."epistemic...not believe"? Seriously? Epistemology concerns beliefs; ontology concerns that which exists. An epistemic model that doesn't describe what its proponents believe is a contradiction in terms. I guess, though, as you hold that incompatible states can be physical because if one assumes that ontological interpretations are accurate then we can conclude ontological interpretations, you may as well describe an epistemic model that "does not believe" (i.e., is not epistemological).
 

LegionOnomaMoi

Veteran Member
Premium Member
Of course it does not exist as experienced. Otherwise we wouldn't spend millions of dollars on physics and physicists to infer the objective, mind-independent reality that does exist through clever use of experiments and maths.
You mean we haven't fundamentally reformulated foundational physics over and over again because, as it turned out, we couldn't really consider isolated systems as real? We spend millions of dollars even though all of our theories have shown that our attempts to formulate any fundamental theory independent of observers fail (as you yourself described) in any manner that is even mathematically understood. That's what we did. Objective reality in our most fundamental physics consists of the renormalization you described in terms that suggest inadequacy, and in terms of non-physical processes that can't be distinguished from physical ones. The only formulation even of quantum mechanics that isn't dependent upon observers requires that the probabilities for measurements of a single system be realized in infinitely many universes, in which it is impossible to determine what the probabilities of outcomes actually are.

More coming soon, as computers are beginning to get there. They have already produced several mathematical proofs and are doing a lot of coding themselves nowadays.
1) Computers were beginning to get there before they existed. AI has been right around the corner since Turing, and yet we can do nothing with modern computers that we couldn't with computers simpler than the ENIAC.
2) There exist several proofs that living systems must have non-computable models
3) There is no evidence that we are "beginning to get there", as there cannot even in principle be any syntactic model of conceptual systems, and there is no evidence that conceptual processing is reducible to syntactic.

I said connections to. Mind is a conglomeration of neural activity only indirectly connected to photons and sound waves through the sense organs.
My doctoral thesis was on the irrelevancy of QM to consciousness. This was an obvious mistake, of course, as neither is sufficiently understood, but the "mind" is not a conglomeration of anything. It is at least in some senses fundamentally unitary.
 

sayak83

Veteran Member
Staff member
Premium Member
All ontological models are ontological and therefore cannot support epistemic or other interpretations of QM. Yes, there are many classes of ontological models, but as the orthodox interpretation, the ensemble interpretation, and most interpretations of QM (and, according to some, QM itself) are not ontological, this doesn't matter.
It was proved that a generic ontological model (without further restrictions) predicts X. The generic epistemic models predict Y. The experiment showed that X was observed. Thus the epistemic models were ruled out. It is that simple. Read the paper.

They are saying that the assumptions in the PBR no-go theorem that restricted its generic applicability in ruling in ontological models and ruling out epistemic models have been surmounted.



1) The orthodox interpretation and most variants aren't really epistemic, and thus this statement is meaningless and/or pointless
2) The orthodox interpretation and other non-ontological models DO assume preparation independence.
3) The interpretations in which preparation independence cannot be true are mostly ensemble interpretations, which Bohr and I both reject.
The orthodox interpretation is not an interpretation. It's "shut up and calculate."


The results do no such thing. Wavefunctions generally encode measurement outcomes which cannot be realized. There exists no possible method to validate a model in which measurement outcomes which never occur support a theory. This is one reason why your latest "support" of your ontological interpretation must assume an ontological interpretation.
It does not do so in any way, shape, or form. It merely assumes that an objective reality exists.
 

sayak83

Veteran Member
Staff member
Premium Member
You mean we haven't fundamentally reformulated foundational physics over and over again because, as it turned out, we couldn't really consider isolated systems as real? We spend millions of dollars even though all of our theories have shown that our attempts to formulate any fundamental theory independent of observers fail (as you yourself described) in any manner that is even mathematically understood. That's what we did. Objective reality in our most fundamental physics consists of the renormalization you described in terms that suggest inadequacy, and in terms of non-physical processes that can't be distinguished from physical ones. The only formulation even of quantum mechanics that isn't dependent upon observers requires that the probabilities for measurements of a single system be realized in infinitely many universes, in which it is impossible to determine what the probabilities of outcomes actually are.
I consider physics to have been extremely successful in what it's doing. All its reformulations have been extensions and generalizations as it expanded its domain of observation. The successful old models have not been discarded at all, but have found their place within the more general theory as excellent approximations within their domain of applicability. Renormalization, too, will come to be seen as an approximation within a more general theory that is applicable over larger scales (Planck scale, high energy density, etc.). I find nothing wrong with physics at all. Doing very well actually. Just like neuroimaging.


1) Computers were beginning to get there before they existed. AI has been right around the corner since Turing, and yet we can do nothing with modern computers that we couldn't with computers simpler than the ENIAC.
2) There exist several proofs that living systems must have non-computable models
3) There is no evidence that we are "beginning to get there", as there cannot even in principle be any syntactic model of conceptual systems, and there is no evidence that conceptual processing is reducible to syntactic.
You can't find the evidence. Meanwhile computers win chess matches, solve math problems, drive cars etc.
 

LegionOnomaMoi

Veteran Member
Premium Member
If the math says it's possible then it's possible. Nothing that can be coherently written in mathematics can be illogical (almost by definition).

The more I think about this claim, the more ludicrously, obviously absurd it appears to be. No doubt this is because of a lack of constraints and implicit ambiguities, but I can’t quite fathom what the actual claim entails.

Even assuming that you mean the mathematical framework of quantum mechanics, then the claim immediately runs into serious issues, foremost perhaps being the equivalence myth (foremost at least historically). It is usually claimed that Schrödinger’s wave mechanics was proven to be equivalent to Heisenberg’s matrix mechanics. This should be obviously wrong (and in actual analyses of the so-called “proofs”, it is either argued or grudgingly admitted to be wrong in at least most senses), as the two frameworks utilize mathematical structures and objects that differ so fundamentally that the distinction serves to differentiate (and indeed define) vast swathes of fields in mathematics. One is fundamentally discrete (and a discretized calculus is appropriate for a quantized mechanics), while the other is continuous. Wavefunctions are nothing new. The abstract, partial differential equations that govern such periodic, oscillatory, and continuous behavior were formalized long ago and the tools (such as the Fourier transform) to solve the requisite equations likewise developed long ago (and required that mathematicians fundamentally examine the nature of continuity and the topology of the real number line). The wavefunction describes systems that are continuous, and continuous behavior requires that changes in behavior be less than infinitesimal (i.e., it is insufficient to consider continuous changes in a system’s dynamics to be represented by property values that are infinitely close; rather, it requires that between infinitely close values, there exist a larger infinite set of closer values).
Linear/matrix algebra, however, involves mathematical structures and objects that underlie most of discrete mathematics. They do not describe continuous, periodic, oscillatory anything. Neither did matrix mechanics.
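The sense in which the two frameworks nonetheless agree can be probed numerically. The sketch below (truncation and grid sizes are arbitrary choices for illustration) treats the harmonic oscillator once Heisenberg-style, as a truncated ladder-operator matrix, and once Schrödinger-style, as a finite-difference discretization of the wave equation; both recover the spectrum E_n ≈ n + 1/2 in units where ħ = m = ω = 1:

```python
# Harmonic oscillator two ways: discrete matrices vs a discretized
# continuous wave equation. Both yield E_n ~ n + 1/2 (hbar=m=omega=1).
import numpy as np

# Matrix mechanics: truncated annihilation operator a, H = a^dag a + 1/2.
N = 50
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
H_matrix = a.T @ a + 0.5 * np.eye(N)
E_matrix = np.sort(np.linalg.eigvalsh(H_matrix))

# Wave mechanics: -(1/2) psi'' + (1/2) x^2 psi = E psi on a grid,
# with the second derivative replaced by central differences.
x = np.linspace(-8.0, 8.0, 1500)
dx = x[1] - x[0]
off = np.full(len(x) - 1, 1.0)
lap = (np.diag(off, -1) - 2.0 * np.eye(len(x)) + np.diag(off, 1)) / dx**2
H_wave = -0.5 * lap + np.diag(0.5 * x**2)
E_wave = np.sort(np.linalg.eigvalsh(H_wave))

# The low-lying spectra agree closely despite the very different
# mathematical objects involved.
print(E_matrix[:4])   # ~ [0.5, 1.5, 2.5, 3.5]
print(E_wave[:4])     # ~ [0.5, 1.5, 2.5, 3.5]
```

The agreement here is an agreement of predicted measurement outcomes, which is the operational (rather than structural) sense of "equivalence" the paragraph above is disputing.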
But let us assume that the proofs of matrix and wave mechanical equivalence are valid. How? Well, clearly because, at best, they yield the same results from experiments. But how? Because the wave mechanical description requires that the wavefunction never describe any system apart from the probability of measured outcomes. After all, "quanta" are by definition particulate/discrete phenomena, and practically the first formulation of a truly quantum mechanical description was Einstein's hypothesis that the one thing which we knew clearly to be a "wave" (light) was actually particulate. So why should we expect a mathematical framework based on ontological assumptions which run contrary to the very foundations (and even the name) of quantum mechanics to describe actual, physical systems?
We shouldn’t, and don’t. But more importantly, if one states that whatever is in the math is possible, then one has to contend with the fact that there exist clearly different mathematical formulations of quantum mechanics which are not mathematically equivalent and are often not at all equivalent.
Moreover, most interpretations of QM do not involve any different mathematical structures or objects, but take the most widely used formulations of QM and claim the math means fundamentally different things. The Copenhagen interpretation was supposed to be an interpretation that simply took the math literally. But because wave mechanics only works if we never detect the objects it describes, and because the mathematics involves the application of operators to wavefunctions that ensure we will never find any system actually described by a wavefunction, this particular attempt to simply depend upon the math was unsatisfactory to many.
Then Everett decided to do something similar, and just take the math at face value, but this time supplemented by the assumption of the reality of Born’s collapse postulate. That is, Everett assumed a probabilistic description and the collapse of the wavefunction, but described a non-collapse interpretation via the realization of all possible collapses realized in different branches of reality/the universe.
Of course, if there is no collapse, then what is our basis for a probabilistic description for predicting the values obtained via such a collapse? Well…nothing, really, which is the problem with the relative state interpretations.
Bohm, however, created an alternative mechanics via hidden variables. Supposedly, these had been proven impossible by von Neumann, and according to the incredibly important text Quantum Theory there are no valid hidden-variable theories; yet the author of that very text formulated one anyway. In it, waves are taken as guiding mechanisms for the only truly ontological element of physical reality: particles. The theory is non-local and almost dualistic (it is dualistic, actually, at least in some sense), but it is obtained by simply taking the math to represent reality.
In short, if we go with the idea that whatever is described in the math is possible, we obtain several incompatible realities that currently exist by those who have already attempted to do this. We can add more indefinitely, and indeed reformulate the mathematics in ways that reproduce experimental outcomes but allow for experimental outcomes that can’t be obtained in other formulations of quantum mechanics.
Finally, the whole thing is rendered moot because it turned out the relativistic wave equation was inadequate, and modern physics is based upon developments following Dirac's equation, not the Klein-Gordon equation.

So, quite apart from the fact that you use "coherency" in a sense that makes no sense as it stands, a simple reflection on the various formulations of what the "math says" (either those actually used all the time by physicists, or alternatives), as well as on the plethora of interpretations of what the "math says", shows that the claim can't seriously be regarded as anything other than nonsense.
1) The "math" changes fundamentally according to the various formulations "equivalent" in their ability to be used by physicists (and, for the sake of simplicity, consider those that have been and are so used). The one connecting factor is that none of the mathematical frameworks/formulations describes states of physical systems. They encode the probabilities of measurement outcomes.
2) What the "math says" is...nothing. What does it mean for it to be "possible" for a complex-valued vector in Hilbert space to possess the structure it does, such that it behaves the way it does when acted upon by an operator which, geometrically, projects it onto a complex 1D subspace?
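For concreteness, the minimal numerical content of that question looks like this: a normalized complex state vector, a rank-one projector onto a one-dimensional subspace, and the Born-rule probability it yields. The particular state and basis are arbitrary choices for illustration:

```python
# A state vector in a 2-dimensional complex Hilbert space, a projector
# onto a 1-D subspace, and the Born-rule probability of the outcome.
import numpy as np

# A normalized state vector in C^2.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# Rank-one projector onto the subspace spanned by |0> = (1, 0).
e0 = np.array([1.0, 0.0])
P = np.outer(e0, e0.conj())

# A projector must be idempotent (P^2 = P) and Hermitian (P = P^dag).
assert np.allclose(P @ P, P)
assert np.allclose(P, P.conj().T)

# Born rule: the probability of the outcome is <psi|P|psi>, here 1/2.
prob = np.vdot(psi, P @ psi).real
print(prob)   # 0.5
```

Note that what the formalism delivers is exactly and only this probability of a measurement outcome, which is the point being pressed in the post above.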
 

LegionOnomaMoi

Veteran Member
Premium Member
It was proved that a generic ontological model (without further restrictions) predicts X. The generic epistemic models predict Y. The experiment showed that X was observed. Thus the epistemic models were ruled out. It is that simple. Read the paper.
I read it. The problem is that I've been reading papers like this for many years, and they continue to disagree with one another. In fact, your paper is based upon a "theorem" that, according to many, was proven false, and that began as a proof that no theory like Bohmian mechanics could be formulated. One was formulated anyway. Von Neumann's "no-go" theorem started out wrong because it turned out he made assumptions that could be, and were, surmounted, and surmounting them produced the only serious contender to an ontological theory of quantum physics.
You, however, found a particular paper and insist that some experiment is supposed to determine what various proofs have either shown to be necessarily true or necessarily false almost since the inception of QM. Congratulations. You've re-discovered the state of physics of the late 20s.

They are saying that the assumptions in the PBR no-go theorem that restricted its generic applicability in ruling in ontological models and ruling out epistemic models have been surmounted.
Do you even know what models are epistemic vs. ontological? After all, you have expressed your adherence to consistent histories, yet your interpretation of singlet states is fundamentally at odds not only with the entirety of consistent and decoherence histories interpretations, but that of the founder as clearly expounded in the text you said you obtained from Stanford's library. Also, you've now claimed:
The orthodox interpretation is not an interpretation. It's "shut up and calculate."
This is patent nonsense. "Shut up and calculate" was a phrase used by Mermin, which he subsequently regarded as patently childish nonsense, and which was falsely attributed to Feynman. It never described either Bohr's or Heisenberg's interpretations (the so-called Copenhagen interpretation), which were actually based upon a mixture of positivism and novel philosophical developments made principally by Bohr. Bohr and Heisenberg, the founders of the Copenhagen interpretation, not only formulated the very mathematics you claim encapsulates and describes the possible (in terms of QM), but did so in a manner so as to make whatever the "math says" possible. The math, which they developed (in matrix mechanics, later incorporated into a more general framework by von Neumann and Dirac, as neither Heisenberg's nor Schrödinger's mechanics involved operators or Hilbert space), said that states predicted possible measurement outcomes and rendered questions such as "where was the system before it was detected?" meaningless.

It does not do so in any way shape or form. It merely assumes that an objective reality exists.
So do many epistemic interpretations as well as many philosophical worldviews in which there exists an objective reality that we cannot access (and are therefore limited to descriptions of observed and interpreted components of some objective reality).
 

LegionOnomaMoi

Veteran Member
Premium Member
You can't find the evidence.
no formal system is able to generate anything even remotely mind-like. The asymmetry between the brain and the computer is complete, all comparisons are flawed, and the idea of a computer-generated consciousness is nonsense.” (emphasis added)
Torey, Z. (2009). The crucible of consciousness: An integrated theory of mind and brain. Cambridge: MIT press.

Living systems are all non-computable
Louie, A. H. (2005). Any material realization of the (M, R)-systems must have noncomputable models. Journal of Integrative Neuroscience, 4(4), 423-436.

"the computer metaphor is incomplete, since we are seldom told how such “computations” are carried out – for instance, what are the algorithms for feeling surprise or fear, for falling in love or down the stairs, for asking questions or criticizing answers. Such imprecision is characteristic of an immature science. It is similar to the molecular biologist who assured us that DNA molecules “specify” proteins (or “instruct” about their synthesis), instead of exhibiting the corresponding chemical reactions." p. 232
Bunge, M. (2010). Matter and Mind: A Philosophical Inquiry (Boston Studies in the Philosophy of Science Vol. 287). Springer.

“The brain is not a computer, nor is the world an unambiguous piece of tape defining an effective procedure and constituting “symbolic information.” Such a selectional brain system is endlessly more responsive and plastic than a coded system.”
Edelman, G. M. (1999). Building a Picture of the Brain. Annals of the New York Academy of Sciences, 882(1), 68-89.

“We have demonstrated, for the first time to our knowledge, that computations performed and shaped by the dynamics of charges are radically different than computations in digital computers.”
Aur, D., & Jog, M. S. (2010). Neuroelectrodynamics: Understanding the Brain Language (Biomedical and Health Research Vol. 74). IOS Press.

“To understand why neurons and computers are fundamentally different, we must bear in mind that modern computers are algorithmic, whereas the brain and neurons are not.”
Tse, P. (2013). The Neural Basis of Free Will: Criterial Causation. MIT Press.

“The free will theorem supports a powerful challenge to the scientific credentials of determinism, by showing, on certain well-supported assumptions, that two cornerstones of contemporary science, namely (1) acceptance of the scientific method as a reliable way of finding out about the world, and (2) relativity theory’s exclusion of faster-than-light transmission of information, taken together, conflict with determinism in both its versions. Belief in determinism may thus come to be seen as notably unscientific.”
Hodgson, D. (2012). Rationality + Consciousness = Free Will (Philosophy of Mind). Oxford University Press.

"In order to establish whether minds are or operate on the basis of the same principles that govern computing machines, however, it is necessary to accomplish three tasks. First, discover the principles that govern computing machines. Second, discover the principles that govern human minds. And, third, compare them to ascertain whether they are similar or the same. That much should be obvious. But while leading computationalists have shown considerable ingenuity in elaborating and defending the conception of minds as computers, they have not always been attentive to the study of thought processes themselves. Their underlying attitude has been that no theoretical alternative is possible...The essays collected here are intended to demonstrate that this attitude is no longer justified."
Fetzer, J. H. (2001). Computers and cognition: Why minds are not machines (Studies in Cognitive Systems Vol. 25). Springer.

“The view that the brain does not compute Turing-computable-functions is still a form of wide mechanism in Copeland’s sense, but it is more encompassing than Copeland’s, because it includes both Copeland’s hypercomputationalism and the view that mental capacities are not explained by neural computations but by neural processes that are not computational. Perhaps brains are simply not computing mechanisms but some other kinds of mechanisms. This view fits well with contemporary theoretical neuroscience, where much of the most rigorous and sophisticated work assigns no explanatory role to computation”
Piccinini, G. (2007). Computationalism, the Church–Turing thesis, and the Church–Turing fallacy. Synthese, 154(1), 97-120.

“Referring to the ‘widespread belief ... in many scientific circles ... that the brain is a computer,’ neurobiologist Gerald Edelman (2006) insists that ‘this belief is mistaken,’ for a number of reasons, principal among which are that ‘the brain does not operate by logical rules’ (p. 21). Jerome Bruner (1996), a founder of cognitive science itself, yet, coincidentally, a key figure in the emergence of narrative psychology, challenges the ability of ‘information processing’ to account for ‘the messy, ambiguous, and context-sensitive processes of meaning-making’ (p. 5). Psychologist Daniel Goleman (1995), author of the popular book Emotional Intelligence, asserts that cognitive scientists have been so ‘seduced by the computer as the operative model of mind’ (pp. 40f.) that they have forgotten that, ‘in reality, the brain’s wetware is awash in a messy, pulsating puddle of neurochemicals’ (p. 40f.) which is ‘nothing like the sanitized, orderly silicon that has spawned the guiding metaphor for mind’ (pp. 40–41).”
Randall, W. L. (2007). From Computer to Compost: Rethinking Our Metaphors for Memory. Theory & psychology, 17(5), 611-633.

“Semantic ambiguity exists in real-world processes of life and mind...Thus, it is feasible to rationally investigate a real-world semantic process, such as the interaction between synaptic communication and NDN, by placing the process into a modeling relation with an impredicative model, such as a hyperset process, and learn novel (albeit qualitative rather than quantitative) things about the real-world process by asking questions about the model.
What is not feasible is serious investigation of such processes by algorithmic computation. Algorithms disallow internal semantics, and specifically prohibit ambiguity. In other words, in a fundamental manner, the entailment structures of algorithms differ from the entailment structures of processes of life and mind. Thus, algorithmic descriptions of such processes are superficial, capturing the incidental syntax but not the essential semantics...
No computer program, no matter how cleverly designed, has an entailment structure like a mind, or even a prion.” (emphasis added)
Kercel, S. W. (2003, June). Softer than soft computing. In Soft Computing in Industrial Applications, 2003. SMCia/03. Proceedings of the 2003 IEEE International Workshop on (pp. 27-32). IEEE.

“Today’s programs—at best—solve specific problems. Where humans have broad and flexible capabilities, computers do not.
Perhaps we’ve been going about it in the wrong way. For 50 years, computer scientists have been trying to make computers intelligent while mostly ignoring the one thing that is intelligent: the human brain. Even so-called neural network programming techniques take as their starting point a highly simplistic view of how the brain operates.”
Hawkins, J. (2007). Why Can't a Computer Be More Like a Brain? IEEE Spectrum, 44(4), 21-26.

“there is no evidence for a computer program consisting of effective procedures that would control a brain’s input, output, and behavior. Artificial intelligence doesn’t work in real brains. There is no logic and no precise clock governing the outputs of our brains no matter how regular they may appear.”
Edelman, G. M. (2006). Second nature: Brain science and human knowledge. Yale University Press.

"the brain is not a computer, yet it manipulates information...while von Neumann and others invented computers with mimicking the brain in mind (von Neumann 1958), the brain does not appear to behave as a Turing Machine "
Danchin, A. (2009). Information of the chassis and information of the program in synthetic cells. Systems and synthetic biology, 3(1-4), 125-134.

“Why has the traditional separation of grammar/syntax and semantics proven to be so troublesome? The problem lies in another general fallacy in thinking. Maybe the fast advances in science and technology have taught us too much of a mechanistic approach. It is like the viewpoint that ‘a machine is defined by the sum total of its parts’…A meal or the definition of a soup, for instance, cannot be the sum total of its ingredients. We could not drink a cup of water, consume raw vegetables and some meat, then ingest salt, peppercorns, pimento, parseley, etc. and claim to have eaten a soup. The processing that creates a new definition of each ingredient in their relationship to others results in a completely different meaning from the sum total. The recipe explains some of the interrelationships of processing and ingredients, but it can neither be identified with the soup nor with the experience of cooking it, nor with its consumption. The interrelationship of human mind, concepts and language can be similarly defined, except that it is incomparably more complicated, and we don't really have a compendium of recipes yet. We don't really know how our mind works when it thinks and/or creates language, especially since our mind has to read itself, while it is working.”
Schmidt, K. M. (2014). Concepts and Grammar: Thoughts about an Integrated System. In N. Dershowitz & E. Nissan (Eds.). Language, Culture, Computation: Computational Linguistics and Linguistics: Essays Dedicated to Yaacov Choueka on the Occasion of His 75th Birthday, Part III (Lecture Notes in Computer Science Vol. 8003). Springer.

"...we do not use brains as we use computers. Indeed it makes no more sense to talk of storing information in the brain than it does to talk of having dictionaries or filing cards in the brain as opposed to having them in a bookcase or filing cabinet. (Hacker, 1987, p. 493)
Hacker, P M S (1987). Languages, minds and brains. In Mindwaves: Thoughts on Intelligence, Identity and Consciousness (ed. C Blakemore and S Greenfield), pp. 485–505. Blackwell.
See also the paper on the manner in which software supports the idea of immaterial non-reductive reality I attached to a previous post.
 

sayak83

Veteran Member
Staff member
Premium Member
The more I think about this claim, the more ludicrously and obviously absurd it appears to be. No doubt this is because of its lack of constraints and its implicit ambiguities, but I can't quite fathom what the actual claim entails.

Even assuming that you mean the mathematical framework of quantum mechanics, the claim immediately runs into serious issues, foremost perhaps being the equivalence myth (foremost at least historically). It is usually claimed that Schrödinger's wave mechanics was proven to be equivalent to Heisenberg's matrix mechanics. This should be obviously wrong (and in actual analyses of the so-called "proofs," it is either argued or grudgingly admitted to be wrong in at least most senses), as the two frameworks utilize mathematical structures and objects that differ so fundamentally that the distinction serves to differentiate (and indeed define) vast swathes of fields in mathematics. One is fundamentally discrete (and a discretized calculus is appropriate for a quantized mechanics), while the other is continuous. Wavefunctions are nothing new. The abstract partial differential equations that govern such periodic, oscillatory, and continuous behavior were formalized long ago, and the tools (such as the Fourier transform) to solve the requisite equations were likewise developed long ago (and required that mathematicians fundamentally examine the nature of continuity and the topology of the real number line). The wavefunction describes systems that are continuous, and continuous behavior requires changes smaller than infinitesimal (i.e., it is insufficient to represent continuous changes in a system's dynamics by property values that are infinitely close; rather, between any two infinitely close values there must exist a still larger infinite set of intermediate values).
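The Fourier machinery referred to above can be stated compactly. As a sketch in standard textbook notation (not drawn from any of the papers under discussion), the position-space wavefunction is a continuous superposition of momentum eigenmodes, and vice versa:

```latex
\psi(x) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \varphi(p)\, e^{\,i p x/\hbar}\, dp,
\qquad
\varphi(p) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \psi(x)\, e^{-i p x/\hbar}\, dx.
```

The integral over a continuum of modes is exactly the kind of object with no direct analogue in the discrete, matrix-algebraic setting.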
Linear/matrix algebra, however, involves mathematical structures and objects that underlie most of discrete mathematics. They do not describe anything continuous, periodic, or oscillatory. Neither did matrix mechanics.
But let us assume that the proofs of matrix and wave mechanical equivalence are valid. How? Well, clearly because at best they yield the same results from experiments. But how? Because the wave mechanical description requires that the wavefunction never describe any system apart from the probability of measured outcomes. After all, "quanta" are by definition particulate/discrete phenomena, and practically the first formulation of a truly quantum mechanical description was Einstein's hypothesis that the one thing which we knew clearly to be a "wave" (light) was actually particulate. So why should we expect a mathematical framework based on ontological assumptions which run contrary to the very foundations (and even the name) of quantum mechanics to describe actual, physical systems?
We shouldn’t, and don’t. But more importantly, if one states that whatever is in the math is possible, then one has to contend with the fact that there exist clearly different mathematical formulations of quantum mechanics which are not mathematically equivalent and are often not at all equivalent.
Moreover, most interpretations of QM do not involve any different mathematical structures or objects, but take the most widely used formulations of QM and claim the math means fundamentally different things. The Copenhagen interpretation was supposed to be an interpretation that simply took the math literally. But because wave mechanics only works if we never detect the objects it describes, and because the mathematics involves the application of operators to wavefunctions that ensure we will never find any system actually described by a wavefunction, this particular attempt to simply depend upon the math was unsatisfactory to many.
Then Everett decided to do something similar and just take the math at face value, this time supplemented by the assumption of the reality of Born's collapse postulate. That is, Everett assumed a probabilistic description and the collapse of the wavefunction, yet described a non-collapse interpretation in which all possible collapses are realized in different branches of reality/the universe.
Of course, if there is no collapse, then what is our basis for a probabilistic description for predicting the values obtained via such a collapse? Well…nothing, really, which is the problem with the relative state interpretations.
Bohm, however, created an alternative mechanics via hidden variables. Supposedly, these had been proven impossible by von Neumann, and according to the incredibly important text Quantum Theory there can be no valid hidden variable theories; yet Bohm, the author of that very text, formulated one anyway. In it, waves are taken as guiding mechanisms for the only truly ontological element of physical reality: particles. It is non-local and almost dualistic (it is dualistic, actually, at least in some sense), but it is obtained by simply taking the math to represent reality.
In short, if we go with the idea that whatever is described in the math is possible, we obtain the several incompatible realities already constructed by those who have attempted to do exactly this. We can add more indefinitely, and indeed reformulate the mathematics in ways that reproduce experimental outcomes but allow for outcomes that can't be obtained in other formulations of quantum mechanics.
Finally, the whole thing is rendered moot because it turned out the relativistic wave equation was inadequate, and modern physics is based upon developments following Dirac's equation, not Klein-Gordon.

So quite apart from the fact that you use "coherency" in a sense that makes no sense as it stands, the idea that the math simply tells us what is possible, given the various formulations (either those actually used all the time by physicists, or alternatives) as well as the plethora of interpretations over what the "math says," can't seriously be regarded as anything other than nonsense.
1) The "math" changes fundamentally according to the various formulations "equivalent" in their ability to be used by physicists (and, for the sake of simplicity, those that have and are used so). The one connecting factor is that none of mathematical frameworks/formulations describe states of physical systems. They encode the probabilities of measurement outcomes.
2) What the "math says" is...nothing. What does it mean for it to be possible that a complex-valued vector in Hilbert space to possess the structure it does such that it behaves in the way it does when acted upon by an operator which, geometrically, projects it onto a complex 1D subspace?

If the same physical observations are equally well described by incompatible mathematical frameworks, then it is merely the case that the observations are not sufficient to distinguish between them. Until new observations (or a better theory) come along, both should be viewed as possible candidates for being real.
 