
Let's try this another way: if you have faith that the brain creates the mind, and that the mind depends on the brain, can we please see your logic and evidence?

RestlessSoul

Well-Known Member
I listened for anything in that talk which would suggest that consciousness could somehow be irreducible or "fundamental", but the speaker really did admit that very few of his colleagues agreed with him and quipped at the end "only my graduate students". It came off to me as something of a disjointed ramble grounded in the "correlation is not causation" meme. Of course, causation is always a case of correlation, just not vice versa.

So there was nothing in what he said, if you listen carefully, that actually pointed to behavior that could not be explained in terms of physical brain activity. He admitted that perception was matched against what he termed "predictive models", and that kind of behavior is essentially what we program into robots in order to get them to respond to unexpected and unpredictable events. The trick in AI these days is to get robots to integrate all of that information into a good predictive model for navigating real-world environments. Humans do that naturally, but we have roughly a hundred billion neurons in our brains forming vastly complex connections that mediate world-body interactions. Neural nets in computers are nowhere near as sophisticated as the ones that have evolved over hundreds of millions of years inside our heads.
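
To make the notion of a "predictive model" concrete, here is a toy sketch of the loop I have in mind (my own illustration in Python, not any particular robotics framework): the agent predicts its next sensor reading, compares the prediction with what actually arrives, and corrects itself when the error is large.

    import random

    class PredictiveAgent:
        # Toy predict-compare-correct loop; purely illustrative.
        def __init__(self, learning_rate=0.2):
            self.expected = 0.0              # current prediction of the sensor value
            self.learning_rate = learning_rate

        def step(self, actual_reading):
            error = actual_reading - self.expected        # the "surprise" signal
            self.expected += self.learning_rate * error   # nudge the model toward reality
            return error

    agent = PredictiveAgent()
    for t in range(20):
        reading = 5.0 + random.gauss(0, 0.5)   # a noisy, partly unpredictable world
        if abs(agent.step(reading)) > 1.0:
            print(f"t={t}: unexpected event, model updated")

Nothing in a loop like that amounts to consciousness, of course; the point is only that "matching perception against a predictive model" is the kind of behavior we already know how to build.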


I put that video there for interest, people are free to follow up and learn more about Hoffman's ideas, or not as they see fit. He's an unequivocal idealist, a position which has a fair pedigree in philosophy, though less so in the natural sciences. He goes much further than the likes of Tononi or Chalmers, in positing that consciousness is not only fundamental, but that reality exists only and entirely in the mind of the observer. What Hoffman isn't, given his three decades as a professor of cognitive science, is someone whose ideas, radical though they may be, deserve to be dismissed out of hand.

His radical idealism talks to two related issues; firstly, the hard problem of consciousness, which demands an answer as to how physical processes in the brain give rise to qualitative experiences; and secondly, relating to the measurement problem in QM, how to disentangle the object, the observer and the act of observation. Particularly when quantum systems don't appear to be localised in space at all, until they are observed, measured, or otherwise interacted with.

"Useful as it is under ordinary circumstances to say that the world exists 'out there' independent of us, that view can no longer be upheld. No phenomenon is a phenomenon, until it is an observed phenomenon."
- John Wheeler

A related issue was also recognised in cosmology by Stephen Hawking (see Thomas Hertog, On the Origin of Time) who, having come to view the anthropic principle as an inadequate solution to the "fine-tuned" universe problem, challenged some of his graduate students to develop a theory of the universe which accounts for the unique perspective of the observer. According to Hertog, Hawking came round to the opinion that cosmology was necessarily distorted by efforts to comprehend the universe as if we, the observers, were on the outside looking in, and that only by accounting somehow for the perspective of the observer could a complete picture of the universe ever be achieved; though perhaps it was beginning to dawn on Hawking that his theory of everything might never be forthcoming, due to limitations dictated to man by his perspective.
 

Yerda

Veteran Member
To the extent that we use words to discuss consciousness and experiences, we must clarify what we mean by those words. Otherwise, we literally don't know what we are talking about.
But you do know what we're talking about. You're a person, you have experiences.

You see shapes and colours, and hear sounds, and taste different flavours and feel textures etc. Do you not?

How do you get experiences without a nervous system to create them?
For the panpsychist the answer is that consciousness is fundamental and not created in the brain.

If we're taking a panpsychist view, it is more relevant to ask how brains "combine" the experiential properties at the microphysical level into a coherent subject of experience beholding a world. This is one of the major problems that a (certain type of) panpsychist account of the world has to overcome.

But I just gave you examples of how we do know about what might give rise to what you call "subjective qualitative states", and you simply reply by ignoring those comments.
Would that be when you said the following?

But there is a way to investigate subjective qualitative states, as I've pointed out in the past. We not only experience our environments, but we interact with them. Physical bodies come equipped with sensors and actuators, not unlike the ones we build into robots. One of the reasons that Artificial Intelligence is a component of cognitive science is that we learn a lot about how humans and other animals both experience the world and interact with it when we attempt to build machines that can do the same things. It turns out that building a walking machine means that one has to figure out how a machine can "watch its step" while walking. So there needs to be a sense of self-awareness built into it. It must be able to remember objects and events and be able to make plans about future actions. All of that has to do with making a purely mechanical object exhibit the same behaviors that a biological flesh and blood physical machine has evolved over millions of years to do. We are a long way from creating anything like animal intelligence in machines, but we are learning a lot about what brains do and how they do those things as we carry on scientific research that enables us to build intelligent machines.

If this is what you are referring to then I don't agree that you've given me examples of how we do know about what might give rise to what you call "subjective qualitative states".

Machines and their sensors might give us valuable insights into how we solve all sorts of problems in movement and pathfinding and whatnot. If they tell us anything about how subjective qualitative states arise then I'm not seeing it. "Watching their step" and having "self-awareness" appear to me to be useful metaphors rather than literal descriptions of an internal world presenting itself to a robot.

Having said that, I'm not an expert. I tend to rely on the expertise of other people so if you have any insights that would help make your position more apparent to me then I'm genuinely interested.

For the panpsychist there are conscious properties present in all matter (at least for the ones who take a Russellian monist approach like Philip Goff) but nothing like a mind in non-animal matter. Machines would be made of the same inherently conscious stuff but "a machine" would not be the subject of conscious experience, in the way a person is, any more than a rock.

As for machines, I would claim that they have more awareness than rocks or raindrops, because they interact with their environments nondeterministically. That is, they can be constructed to carry out tasks that mimic the way in which intelligent animals interact with their environments. That's why so many of our machines come equipped with sensors and actuators, just like biological creatures.
They interact nondeterministically? You'll have to explain because I'm not sure how that is possible.

So far, I haven't read anything about panpsychism that would make me think I've been getting the wrong end of the stick on that subject. If you're saying that I need to keep reading until I am convinced to your way of thinking, my reply would be that I would prefer you to keep reading literature on emergent materialism until you are convinced to my way of thinking. ;)
I'm saying if you want to get a better understanding of the subject then you would have to look into it because I am not an expert. The best place to start would be the trusty old Stanford Encyclopedia of Philosophy: Panpsychism (Stanford Encyclopedia of Philosophy)

Philip Goff's talks on the subject and his pop-philosophy book Galileo's Error are good, imo.

I'm fine with emergent materialism. There are problems with it but I happen to think it is still the most reasonable position because there are greater problems with all the other ideas I've come across.

Also, if you have anything worth reading on emergence as an explanation I'm open to suggestions.
 

Bthoth

*banned*
Claim: the brain creates the mind. The mind depends on the brain. When the brain dies mind dies. Etc.

Evidence: ?????
Creates? Your mind created the post.

The brain enables you to create. The mind is why you can represent yourself and know that it is occurring.
 

vulcanlogician

Well-Known Member
Claim: the brain creates the mind. The mind depends on the brain. When the brain dies mind dies. Etc.

Evidence: ?????

I'm not sure any physicalist insists that the brain "creates" the mind. They claim that mental states are causally reducible to physical states (in this case, brain states). If that is true, you can't have a mental state (i.e., a sensation) without a corresponding brain state. It's a pretty sound assumption. And plenty of neuroscience backs this assumption up.

But there is no direct evidence that the brain "creates" the mind. I would want a more precise definition of either "create" or "mind" before proceeding.

Copious metaphysical discussions have transpired on the subject. I read page 1 and skipped to here. Did I miss anything important?
 

Copernicus

Industrial Strength Linguist
I put that video there for interest, people are free to follow up and learn more about Hoffman's ideas, or not as they see fit. He's an unequivocal idealist, a position which has a fair pedigree in philosophy, though less so in the natural sciences. He goes much further than the likes of Tononi or Chalmers, in positing that consciousness is not only fundamental, but that reality exists only and entirely in the mind of the observer. What Hoffman isn't, given his three decades as a professor of cognitive science, is someone whose ideas, radical though they may be, deserve to be dismissed out of hand.

The question to me is whether his ideas deserve to be accepted out of hand. There are a lot of other professors of cognitive science out there, and the majority seem not to accept them, as he admits. The problem for me is that he just seems to declare that consciousness is fundamental without trying to distinguish it from aspects of cognition that seem to be components of consciousness--memory, sensation, self-awareness, situational awareness, etc. Unless he were to address the nature of consciousness more clearly, I would be inclined to dismiss his claim that consciousness is somehow fundamental or irreducible. AFAICT, it is quite reducible.

His radical idealism talks to two related issues; firstly, the hard problem of consciousness, which demands an answer as to how physical processes in the brain give rise to qualitative experiences; and secondly, relating to the measurement problem in QM, how to disentangle the object, the observer and the act of observation. Particularly when quantum systems don't appear to be localised in space at all, until they are observed, measured, or otherwise interacted with.

Let me do entanglement first, because quantum entanglement is fundamental, unlike consciousness. Physicist Sean M Carroll has pointed out in Something Deeply Hidden that people tend to forget that all objects in the universe are quantum objects and that human beings don't actually observe the wave collapses in experiments that detect them. Inanimate devices do, and humans look at records of the results. So consciousness isn't really involved in anything but the experimental setup to make the measurement. IOW, the observation devices can be considered fully entangled at the time of observation, so the observation itself does not actually cause the wave collapse. It just correlates with a wave collapse. I interpret that to mean that we can think of entanglement as coming in successive waves that spread at the speed of light. The universe we observe is a sequence of entanglement "snapshots" progressing from frame to frame, not unlike a film advancing in a movie projector. Any act of measurement is a temporal act in which the observer (animate being or inanimate object) is already entangled with what is being observed at every stage of the observation.
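
To make that concrete, the generic schematic of a measurement (the standard textbook picture, not Carroll's own notation) is simply the measured system becoming entangled with the measuring device:

\[
\bigl(\alpha\lvert\uparrow\rangle + \beta\lvert\downarrow\rangle\bigr)\otimes\lvert\mathrm{ready}\rangle \;\longrightarrow\; \alpha\lvert\uparrow\rangle\lvert\mathrm{pointer\ up}\rangle + \beta\lvert\downarrow\rangle\lvert\mathrm{pointer\ down}\rangle
\]

The inanimate device's pointer state ends up correlated with the system's state, and nothing in that expression refers to a conscious observer; a human reading the pointer afterwards just becomes one more entangled system.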

Now, what is "hard" about the "hard problem"? In my opinion, it always comes down to an equivocation on first and third person descriptions of observations--essentially a category mistake. And, in physical objects that are human bodies, there is a process that we think of loosely as "consciousness" which is integral to the survival of the human body. It contains a physical device that constructs causal predictive models allowing it to interact with its environment safely. Those physical human bodies also interact with other physical bodies in ways dictated by the predictive models. Although the brain is not really the same thing as a computer, it does have properties that are similar to a computer. It is an analog machine that programs itself (i.e. "learns") to survive.


"Useful as it is under ordinary circumstances to say that the world exists 'out there' independent of us, that view can no longer be upheld. No phenomenon is a phenomenon, until it is an observed phenomenon."
- John Wheeler

I think that the moon exists and does things when we aren't looking at it. Things happen when we aren't paying attention. Just sayin'.

A related issue was also recognised in cosmology by Stephen Hawking (see Thomas Hertog, On the Origin of Time) who, having come to view the anthropic principle as an inadequate solution to the "fine-tuned" universe problem, challenged some of his graduate students to develop a theory of the universe which accounts for the unique perspective of the observer. According to Hertog, Hawking came round to the opinion that cosmology was necessarily distorted by efforts to comprehend the universe as if we, the observers, were on the outside looking in, and that only by accounting somehow for the perspective of the observer could a complete picture of the universe ever be achieved; though perhaps it was beginning to dawn on Hawking that his theory of everything might never be forthcoming, due to limitations dictated to man by his perspective.

Or maybe Hawking was mistaken. Or maybe Hertog misunderstood what he was really thinking. The fine tuning argument is only a problem if you make certain assumptions about how physics works. Science only advances when we discover where our assumptions have been flawed.
 

Copernicus

Industrial Strength Linguist
But there is no direct evidence that the brain "creates" the mind. I would want a more precise definition of either "create" or "mind" before proceeding.

Copious metaphysical discussions have transpired on the subject. I read page 1 and skipped to here. Did I miss anything important?

Not really. Your point that the verb "create" can have different senses is spot on, in my opinion. It is a terminological issue, but I think 1137 gets a bit tangled up in it, because the OP that he created uses that word, among others.
 

RestlessSoul

Well-Known Member
I'm not sure any physicalist insists that the brain "creates" the mind. They claim that mental states are causally reducible to physical states (in this case, brain states). If that is true, you can't have a mental state (i.e., a sensation) without a corresponding brain state. It's a pretty sound assumption. And plenty of neuroscience backs this assumption up.

But there is no direct evidence that the brain "creates" the mind. I would want a more precise definition of either "create" or "mind" before proceeding.

Copious metaphysical discussions have transpired on the subject. I read page 1 and skipped to here. Did I miss anything important?


All roads lead back to the observer; you just took a short cut.
 

vulcanlogician

Well-Known Member
Now, what is "hard" about the "hard problem"? In my opinion, it always comes down to an equivocation on first and third person descriptions of observations--essentially a category mistake. And, in physical objects that are human bodies, there is a process that we think of loosely as "consciousness" which is integral to the survival of the human body. It contains a physical device that constructs causal predictive models allowing it to interact with its environment safely. Those physical human bodies also interact with other physical bodies in ways dictated by the predictive models. Although the brain is not really the same thing as a computer, it does have properties that are similar to a computer. It is an analog machine that programs itself (i.e. "learns") to survive.

I like the criticism of the issue boiling down to a "category mistake" in our assumptions. Thomas Nagel did some thinking about that. To Nagel, something is conscious because it is like something to be that thing. For example, imagine a bird flying through the sky. Our intuitions tell us that it must be "like something" to be that bird. There are sensations of wind upon its wings and tail feathers.

To Nagel, the essence of consciousness lies in that "like-something-ness" and nowhere else. Consciousness is pure experience.

When we describe physical objects scientifically, we are describing their properties. Sure, we use our senses (or our "consciousness") to gain information about physical objects, but once we determine things about them (the atomic weight of this isotope is x), we essentially disregard entirely the experience of the object-- what it's like to touch the isotope-- and move into a realm where we describe the object wholly apart from what it's like to experience it.

To Nagel, you simply cannot reconcile the two. He is not a dualist by any means. Apparently he's done some academic work criticizing dualism (even property dualism). But Nagel also rejects physicalism (apart from what physicalism states about causation of mental states). He's quite an interesting thinker. He wrote a paper called "What Is It Like to Be a Bat?" that I found rather interesting.

The metaphysical theories I find most attractive are:

1) Spinoza's "multi-aspect" monism-- cuz I'm old school.
2) Searle's Biological Naturalism
3) Functionalism
 

Copernicus

Industrial Strength Linguist
But you do know what we're talking about. You're a person, you have experiences.

You see shapes and colours, and hear sounds, and taste different flavours and feel textures etc. Do you not?

I'm also a linguist who knows that words like "consciousness" and "experience" can be used in vague and ambiguous ways. I'm very aware of the use of first, second, and third person perspectives, because they play a huge role in language. I tend not to be bothered by those distinctions. They are a feature of language (and therefore thought), not a bug.


For the panpsychist the answer is that consciousness is fundamental and not created in the brain.

If we're taking a panpsychist view, it is more relevant to ask how brains "combine" the experiential properties at the microphysical level into a coherent subject of experience beholding a world. This is one of the major problems that a (certain type of) panpsychist account of the world has to overcome.

IMO, they won't make any progress by failing to deconstruct consciousness into its component elements. Treating it as "fundamental" is kind of a showstopper.

Would that be when you said the following?

But there is a way to investigate subjective qualitative states, as I've pointed out in the past. We not only experience our environments, but we interact with them. Physical bodies come equipped with sensors and actuators, not unlike the ones we build into robots. One of the reasons that Artificial Intelligence is a component of cognitive science is that we learn a lot about how humans and other animals both experience the world and interact with it when we attempt to build machines that can do the same things. It turns out that building a walking machine means that one has to figure out how a machine can "watch its step" while walking. So there needs to be a sense of self-awareness built into it. It must be able to remember objects and events and be able to make plans about future actions. All of that has to do with making a purely mechanical object exhibit the same behaviors that a biological flesh and blood physical machine has evolved over millions of years to do. We are a long way from creating anything like animal intelligence in machines, but we are learning a lot about what brains do and how they do those things as we carry on scientific research that enables us to build intelligent machines.

If this is what you are referring to then I don't agree that you've given me examples of how we do know about what might give rise to what you call "subjective qualitative states".

Machines and their sensors might give us valuable insights into how we solve all sorts of problems in movement and pathfinding and whatnot. If they tell us anything about how subjective qualitative states arise then I'm not seeing it. "Watching their step" and having "self-awareness" appear to me to be useful metaphors rather than literal descriptions of an internal world presenting itself to a robot.

If we can replicate every aspect of human or animal cognition in a manufactured object, then I think we will have understood how inanimate matter can come to model subjective qualitative states through that process. What better way to understand how something works than to deconstruct it and reconstruct it? There is nothing that I can see which will prevent us from doing that, but, should we find some fundamental barrier along the path of discovery, we will let you know. Sitting in an armchair and speculating that it can't be done isn't going to be helpful, even if we end up discovering that it can't be done. I do think it implausible that there is something magical about neurons that makes it impossible for us to replicate their behavior in other, artificially constructed, media.


Having said that, I'm not an expert. I tend to rely on the expertise of other people so if you have any insights that would help make your position more apparent to me then I'm genuinely interested.

For the panpsychist there are conscious properties present in all matter (at least for the ones who take a Russellian monist approach like Philip Goff) but nothing like a mind in non-animal matter. Machines would be made of the same inherently conscious stuff but "a machine" would not be the subject of conscious experience, in the way a person is, any more than a rock.

I suspect that, if Russell were alive today, he might be taking a non-Russellian approach on these matters. A lot of things happened after he died, and his "ideal language" philosophical framework has been somewhat superseded by speech act theory.


They interact nondeterministically? You'll have to explain because I'm not sure how that is possible.

Since the future cannot be predicted, robots, like animals, use strategies to achieve goals that cannot be determined in advance. It isn't that their environment is nondeterministic. It is that their behavior cannot model it in a deterministic way. They operate under conditions of uncertainty--a prime concern for AI researchers.
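
As a toy illustration of what I mean by operating under uncertainty (purely my own sketch in Python, not a description of any real robot): the agent cannot know outcomes in advance, so it keeps running estimates built from noisy experience and balances trying unproven actions against exploiting what has worked so far.

    import random

    actions = ["left", "right", "forward"]
    estimates = {a: 0.0 for a in actions}   # learned value of each action
    counts = {a: 0 for a in actions}
    epsilon = 0.1                           # fraction of the time spent exploring

    def noisy_outcome(action):
        # The world may be lawful, but the agent only ever sees noisy samples of it.
        base = {"left": 0.2, "right": 0.5, "forward": 0.8}[action]
        return base + random.gauss(0, 0.3)

    for step in range(500):
        if random.random() < epsilon:
            choice = random.choice(actions)                     # explore
        else:
            choice = max(actions, key=lambda a: estimates[a])   # exploit
        reward = noisy_outcome(choice)
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]

    print(estimates)   # "forward" ends up valued highest, learned from experience

The strategy is fixed and fully specified in the code, but the behavior it produces cannot be written down in advance, because it depends on what the world happens to throw at it.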


I'm saying if you want to get a better understanding of the subject then you would have to look into it because I am not an expert. The best place to start would be the trusty old Stanford Encyclopedia of Philosophy: Panpsychism (Stanford Encyclopedia of Philosophy)

Yes, I'm familiar with that source, which is one of my favorites. So far, I'm not seeing anything that would make me change my mind about what I've been saying here.
https://plato.stanford.edu/entries/panpsychism/
Philip Goff's talks on the subject and his pop-philosophy book Galileo's Error are good, imo.

I'm fine with emergent materialism. There are problems with it but I happen to think it is still the most reasonable position because there are greater problems with all the other ideas I've come across.

Also, if you have anything worth reading on emergence as an explanation I'm open to suggestions.

I like Howard Bloom's tour de force The God Problem, but it's not for everyone. He does a good job (towards the end of a long book) of explaining chaos theory and emergence in chaotic deterministic systems, including the use of cellular automata to model them.
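
For anyone who hasn't played with cellular automata, here is a minimal example of the kind of thing meant (my own one-dimensional Rule 30 toy in Python, not one of Bloom's examples): the update rule is trivially simple and fully deterministic, yet the pattern it produces looks irregular and complex.

    rule = 30                                   # Wolfram's elementary rule 30
    width, steps = 64, 32
    cells = [0] * width
    cells[width // 2] = 1                       # start from a single live cell

    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = [
            (rule >> ((cells[(i - 1) % width] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % width])) & 1
            for i in range(width)
        ]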
 

Copernicus

Industrial Strength Linguist
To Nagel, you simply cannot reconcile the two. He is not a dualist by any means. Apparently he's done some academic work criticizing dualism (even property dualism). But Nagel also rejects physicalism (apart from what physicalism states about causation of mental states). He's quite an interesting thinker. He wrote a paper called "What is it Like to be a Bat?" that i found rather interesting.

Yes, I'm very familiar with it. A classic. You might appreciate some of the work by Cognitive Linguists like George Lakoff, Charles Fillmore, and others. Lakoff has produced a number of works on the subjects of metaphor and embodied cognition. Searle, of course, is part of the Berkeley crowd, but I take issue with some of his positions, e.g. the Chinese Room argument. I think he falls victim to the conduit metaphor.
 

Yerda

Veteran Member
I'm also a linguist who knows that words like "consciousness" and "experience" can be used in vague and ambiguous ways. I'm very aware of the use of first, second, and third person perspectives, because they play a huge role in language. I tend not to be bothered by those distinctions. They are a feature of language (and therefore thought), not a bug.
Cool job.

You know what I'm referring to when I use the word experience though, yes?

IMO, they won't make any progress by failing to deconstruct consciousness into its component elements. Treating it as "fundamental" is kind of a showstopper.
You could well be right about that.

The promise, I suppose, is that you begin with the fact of consciousness and try to show how a physical world appears, thereby sidestepping the conceptual and philosophical difficulties that come with beginning with a physical world and trying to show how consciousness appears. This appears to be the project that a few philosophers, physicists and cognitive scientists have taken up.

I'm not qualified to judge whether this is a dead-end or just plain silly, though I think time will tell us. I will say that people considerably smarter than myself believe it is worthwhile. I'm just happy to read about the ideas that I find interesting.

If we can replicate every aspect of human or animal cognition in a manufactured object, then I think we will have understood how inanimate matter can come to model subjective qualitative states through that process. What better way to understand how something works than to deconstruct it and reconstruct it? There is nothing that I can see which will prevent us from doing that, but, should we find some fundamental barrier along the path of discovery, we will let you know. Sitting in an armchair and speculating that it can't be done isn't going to be helpful, even if we end up discovering that it can't be done. I do think it implausible that there is something magical about neurons that makes it impossible for us to replicate their behavior in other, artificially constructed, media.
First, I'm not sure I understand that first sentence fully, so if you could elaborate a bit it would help me here.

That aside, I guess I'm not convinced that replicating behaviour (a publicly observable phenomenon) will reveal anything about consciousness (a privately observable phenomenon).

I don't want the people investigating machine intelligence and building conceptual frameworks for human cognition to stop what they're doing. That stuff is fascinating.

But say someone comes up with a model for human consciousness - will it contain an explanation of why the parts, interacting as they do, produce experiences? Or will it merely say when we arrange things in this way experiences happen?

As in,

any system organised in such a manner will be the subject of experience because experiences are logically entailed by the assumptions, organisation and operation of the system,

or,

any system organised in such a manner will be the subject of experience and this appears to be a brute fact of reality.


I suppose this is just another way of stating the hard problem, but this is the way I find it most natural to think of it. Do you see what I'm getting at though?

Since the future cannot be predicted, robots, like animals, use strategies to achieve goals that cannot be determined in advance. It isn't that their environment is nondeterministic. It is that their behavior cannot model it in a deterministic way. They operate under conditions of uncertainty--a prime concern for AI researchers.
I see. That's cool.

Yes, I'm familiar with that source, which is one of my favorites. So far, I'm not seeing anything that would make me change my mind about what I've been saying here.
Panpsychism (Stanford Encyclopedia of Philosophy)
That's fair enough.

I like Howard Bloom's tour de force The God Problem, but it's not for everyone. He does a good job (towards the end of a long book) of explaining chaos theory and emergence in chaotic deterministic systems, including the use of cellular automata to model them.
Thanks.
 