
Artificial Intelligence

LegionOnomaMoi

Veteran Member
Premium Member
We always think that and we still manage to surprise ourselves.
I'm the first to agree there (human ingenuity has thus far proved to be the bane of all Malthusian prophecy). But I did not mean a limit to computing technology, simply to the current approach to cram more of the same into less space. I have no doubt that our computing power will increase, although perhaps not at the same rate it has until new technologies take off.
 

idav

Being
Premium Member
Assuming that you mean exponentially in the non-technical sense, that's true. But most who work within computer science believe we're quickly approaching a limit.

[Image: Transistor Count and Moore's Law, 2011]
 

idav

Being
Premium Member
I'm the first to agree there (human ingenuity has thus far proved to be the bane of all Malthusian prophecy). But I did not mean a limit to computing technology, simply to the current approach to cram more of the same into less space. I have no doubt that our computing power will increase, although perhaps not at the same rate it has until new technologies take off.

How small do we need to go? We already have the first steps toward nanorobots.
 

LegionOnomaMoi

Veteran Member
Premium Member
1) Transistor counts aren't equivalent to computing power.
2) Notice the clustering of points on this regression model.
3) Notice that the inclusion of some models and not others is not exactly arbitrary, but not exactly accurate either.
4) The underlying physics behind our current approach does have absolute limits. MOS logic gates can be complemented (they have been), and we can continue to cram more into less for a time, but people are pouring money into alternative approaches not simply because they think these will yield more computing power faster, but because the current approach to hardware development, aimed at increasing speed and storage (more speed than storage, actually), is (relatively) quickly approaching a hard limit.
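The hard limit in point 4 can be roughed out with simple arithmetic. The sketch below is a back-of-the-envelope illustration, not an authoritative projection: the 32 nm starting node, the 1 nm "atomic scale" cutoff, and the 2-year doubling period are all assumptions chosen for the example. Doubling transistor count on a fixed die shrinks linear feature size by roughly √2 per generation, so there are only so many generations before features reach atomic scale.

```python
import math

# Illustrative figures (assumptions, not authoritative): a 2011-era
# 32 nm process node, an "atomic scale" cutoff of ~1 nm (a couple of
# silicon atoms wide), and a Moore's-law doubling every ~2 years.
node_nm = 32.0
atom_limit_nm = 1.0
years_per_doubling = 2.0

# Doubling transistor count on a fixed die shrinks linear feature
# size by a factor of sqrt(2) each generation.
doublings = math.log(node_nm / atom_limit_nm) / math.log(math.sqrt(2))
years = doublings * years_per_doubling
print(f"~{doublings:.0f} doublings (~{years:.0f} years) before features reach atomic scale")
```

With these figures the sketch gives about ten doublings, i.e. roughly two decades of scaling, before the current approach runs out of atoms. The point is not the exact numbers but that the exponent has a floor.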
 

idav

Being
Premium Member
And yet we are no closer to computational devices capable of semantic, rather than syntactic, processing than we were decades ago. We've just gotten better at faking it.

We are striving for those three things you named: better architecture, programming, etc. New technologies are always coming out that can give significant boosts in any given area.
 

LegionOnomaMoi

Veteran Member
Premium Member
How small do we need to go? We already have the first steps toward nanorobots.
Most of nanotechnology research is devoted to alternative hardware (such as quantum computers, genetic computers, biocomputers, etc.), rather than continually "shrinking" the MOS based processors currently used.
 

LegionOnomaMoi

Veteran Member
Premium Member
We are striving for those three things you named: better architecture, programming, etc. New technologies are always coming out that can give significant boosts in any given area.
But that sort of misses my point. My phone is vastly more powerful than ENIAC, and even old PCs. Processing speed, storage, and the hardware of computers in general has increased enormously. The same is true with computational intelligence paradigms. Evolutionary algorithms, neural networks which incorporate fuzzy set theory, gene expression programs, etc., are all pretty new and huge strides have been made. Yet, despite all this, we're just doing more of the same better.

Perhaps a good analogy would be flying machines (ignore the actual history behind them for a moment). Imagine that we started out with parachutes, but over the years developed hang-gliders, better and better paper airplanes, parasails, and numerous other devices which allowed people to appear as if they were using flying machines, but in reality they were always only kept in the air by machines on the ground, or were in constant descent. Nothing like an airplane existed.

This is akin to the qualitative difference between all the developments within computer science and computer technology, and the goal of some machine capable of conceptual processing. In this analogy, the latter is an actual "flying machine" or airplane, while the former is all the ways we fake it by making it seem as if our devices are allowing us to fly, when in reality we are either falling slowly or gliding. The fact that we now have machines which are better at appearing to understand speech doesn't mean they are any closer to actually understanding; they aren't.

All that progress in machine technology and programming innovations, and Watson is no closer to understanding language than SHRDLU was.
 

idav

Being
Premium Member
But that sort of misses my point. My phone is vastly more powerful than ENIAC, and even old PCs. Processing speed, storage, and the hardware of computers in general has increased enormously. The same is true with computational intelligence paradigms. Evolutionary algorithms, neural networks which incorporate fuzzy set theory, gene expression programs, etc., are all pretty new and huge strides have been made. Yet, despite all this, we're just doing more of the same better.

Perhaps a good analogy would be flying machines (ignore the actual history behind them for a moment). Imagine that we started out with parachutes, but over the years developed hang-gliders, better and better paper airplanes, parasails, and numerous other devices which allowed people to appear as if they were using flying machines, but in reality they were always only kept in the air by machines on the ground, or were in constant descent. Nothing like an airplane existed.

This is akin to the qualitative difference between all the developments within computer science and computer technology, and the goal of some machine capable of conceptual processing. In this analogy, the latter is an actual "flying machine" or airplane, while the former is all the ways we fake it by making it seem as if our devices are allowing us to fly, when in reality we are either falling slowly or gliding. The fact that we now have machines which are better at appearing to understand speech doesn't mean they are any closer to actually understanding; they aren't.

All that progress in machine technology and programming innovations, and Watson is no closer to understanding language than SHRDLU was.

Well we are faking it because we don't necessarily have to have a brain to get there. Could be any machine that gives the same result. We are actually achieving interesting results. Watson reminds me of that show where they got an AI-type personality by giving it complete access to personal files, videos, etc., and having it mimic someone's persona. But let me ask something. If an AI has access to all that for one person, why wouldn't it count as experience?
 

LegionOnomaMoi

Veteran Member
Premium Member
Well we are faking it because we don't necessarily have to have a brain to get there. Could be any machine that gives the same result.

I'm not sure what you intend here by relating these two statements.

We are actually achieving interesting results.

Of course. Much work done in "AI" or computational intelligence has contributed to any number of practical applications. I'm not saying it has all been pointless (far from it). Just that the "exponential" increase in computing power and computational approaches hasn't gotten us any closer to what the founders of computer science thought was just around the bend: strong AI, or at least machines which could "understand".

Watson reminds me of that show where they got an AI-type personality by giving it complete access to personal files, videos, etc., and having it mimic someone's persona. But let me ask something. If an AI has access to all that for one person, why wouldn't it count as experience?

It does count as "experience" and as "learning" under most relevant definitions. But the qualitative bridge that all our increases in technology and complexity have yet to get us across isn't machine learning or imitation, but the actual conceptual processing even non-human mammals can do (to varying degrees) so easily. Adaptation/reaction to stimuli is qualitatively different from semantic processing. We can observe the difference, we can model the former but not the latter, and we can theorize about what might make it possible for us or for some constructed device to attach meaning to stimuli, but what we have no idea how to do is go from what we have become increasingly skilled at (making machines adapt/react via syntactic manipulation) to what is required for a machine to do even the semantic processing most mammals can do.
 

MD

qualiaphile
How are humans something other than matter and energy? Why would biological lifeforms have some other substance that isn't found anywhere else in the universe except in life? The answer is that lifeforms are made of the same things as the rest of the universe and are a result of cause and effect since the beginning. Volition is just cause and effect with variables that aren't easily calculable, but that doesn't mean it is impossible to calculate. With enough knowledge anything can be calculated.

Well you said the universe creates the entities we call perception, how else would you describe them? Energy?

I'm not saying biological lifeforms have some other substance. I'm saying it's everywhere. Mental properties are fundamental. Human consciousness only arises through the interaction of neurons in certain specific neuronal configurations. Plants have their own qualia, as do dogs. An AI would have its own form of qualia as well, depending on the arrangement of its artificial neurons.
 

apophenia

Well-Known Member
An AI would have its own form of qualia as well, depending on the arrangement of its artificial neurons.

I would say "may have its own form of qualia".

Or may not. Does the internet have qualia? It is one system. Millions of eyes and ears. Vastly more than encyclopedic knowledge.

If I throw arms and legs in a bucket, will something crawl out?
 

MD

qualiaphile
I would say "may have its own form of qualia".

Or may not. Does the internet have qualia? It is one system. Millions of eyes and ears. Vastly more than encyclopedic knowledge.

If I throw arms and legs in a bucket, will something crawl out?

Well according to my views it would most likely have some form of qualia. The type of qualia would depend on many other things such as the arrangement of the artificial nodes/neurons.

Here's a beautiful article for all those who ridiculed me continuously. Please do read it properly.

Christof Koch: Consciousness Is Everywhere
 

LegionOnomaMoi

Veteran Member
Premium Member
Here's a beautiful article for all those who ridiculed me continuously. Please do read it properly.

Christof Koch: Consciousness Is Everywhere

A few important (IMO) points:

1) Koch (who, until very recently, I've only read as a co-author of some papers), apparently now supports the IIT model of consciousness, although this runs contrary to the views espoused in papers he previously co-wrote, particularly those he authored with Francis Crick. In fact, in their paper "A framework for consciousness", Crick & Koch explicitly contrast their model/theory of consciousness with that of Tononi's (see p. 124 under the heading "Related Ideas"). But everyone is entitled to change their mind, and the differences between his older work and IIT are not that large.

2) The title here "Consciousness is everywhere" doesn't seem to match either the content of the article or IIT theory itself. It's true that IIT uses "phi" values (defined by the theory) as a measure of consciousness, and that these values are based on the ability of any system to integrate information. The higher the value, the more "conscious" the system. However, this still means that there is a distinction between systems which have a phi value greater than zero and things which do not. Also, it is a consequence of this approach that everything which processes & integrates information, from a calculator to a camera to an ant, has some measure of "consciousness." The important point here is that this is not an empirical finding of the theory. It is defined ahead of time (along with information) so that consciousness can be investigated with greater ease (it's hard to study a phenomenon which is so difficult to define).

3) Most of the work on consciousness by Tononi or Tononi & Koch from the IIT framework unfortunately uses the IIT definition of consciousness to make conclusions about it which are at least in part a product of the definitions set beforehand. Even in somewhat more formal articles, such as "Can Machines Be Conscious?", this becomes apparent. When the authors discuss what isn't necessary for consciousness they are depending on their definition assumed to be true a priori. Nor is this the only time when particular definitions lead to results because of the definitions, rather than hypotheses and empirical study. The juxtaposition of their technical use of the term "awareness" and the more commonplace one, for example, leads to somewhat awkward phrasing. They state that (italics in original): "People can attend to events or objects—that is, their brains can preferentially process them—without consciously perceiving them. This fact suggests that being conscious does not require attention." In other words, people "attend" to things which they aren't paying attention to. The authors have simply taken what most people mean when they use the term attention (in the "paying attention" sense) and replaced it with "consciously perceiving". There's nothing inherently wrong with this, but it is important to recognize what components of the theory hold true under investigation simply because of the way terms are defined in advance.

4) Finally, the more formal the venue, the less extravagant (probably not the right word) the claims made by Tononi or Tononi & Koch become. Compare, for example, the articles already linked to with the more technical paper by Tononi & Koch here. Here the fact that their approach depends upon a theoretical framework in which particular definitions are used is made more explicit. For example:

"If a system has a positive value of φ (and it is not included within a larger subset having higher φ) it is called a complex. For a complex, and only for a complex, it is appropriate to say that when it enters a particular state, it generates an amount of integrated information corresponding to φ. Since integrated information can only be generated within a complex and not outside its boundaries, consciousness is necessarily subjective, private, and related to a single point of view or perspective (Tononi & Edelman 1998). Some properties of complexes are worth pointing out. A given physical system, such as a brain, is likely to contain more than one complex, many small ones with low φ values, and perhaps a few large ones. We suspect that in the brain there is at any given time a complex of comparatively much higher φ, which we call the main complex. Also, a complex can be causally connected to elements that are not part of it through ports-in and ports-out. In that case, elements that are part of the complex contribute to its conscious experience, while elements that are not part of it do not, even though they may be connected to it and exchange information with it through ports-in and ports-out. One should also note that the φ value of a complex is dependent on both spatial and temporal scales that determine what counts as a state of the underlying system."
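For readers who want a feel for the "integration" idea behind φ, here is a toy sketch. This is emphatically not Tononi's φ, which is defined over cause-effect repertoires and a search over all partitions of the system; it just computes the mutual information between two binary units (the entropy the parts would carry if independent, minus the entropy of the whole), which captures the "whole beyond its parts" flavor. The joint distribution is invented for illustration.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a {outcome: probability} dict."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Invented joint distribution over two binary units A and B,
# correlated so the whole carries less entropy than its parts.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distributions of each part.
pa, pb = {}, {}
for (a, b), p in joint.items():
    pa[a] = pa.get(a, 0.0) + p
    pb[b] = pb.get(b, 0.0) + p

# Toy "integration": information the whole holds beyond its
# independent parts (mutual information, not real phi).
integration = entropy(pa) + entropy(pb) - entropy(joint)
print(f"toy integration: {integration:.3f} bits")  # → 0.278 bits
```

A fully independent joint distribution (all four outcomes at 0.25) would give zero here, matching the intuition that a system whose parts don't constrain one another integrates no information.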
 

MD

qualiaphile
A few important (IMO) points:

1) Koch (who, until very recently, I've only read as a co-author of some papers), apparently now supports the IIT model of consciousness, although this runs contrary to the views espoused in papers he previously co-wrote, particularly those he authored with Francis Crick. In fact, in their paper "A framework for consciousness", Crick & Koch explicitly contrast their model/theory of consciousness with that of Tononi's (see p. 124 under the heading "Related Ideas"). But everyone is entitled to change their mind, and the differences between his older work and IIT are not that large.

I had no idea that Koch changed his mind. I personally feel that Koch is guided by a spiritual drive towards understanding consciousness. He has specifically stated that he believes the universe has meaning, that consciousness is fundamental, that he thinks there is something like Spinoza's pantheist God in the universe. A deeper fundamental principle at play in everything. And he has spent more years pondering such things than most of us have even lived.

2) The title here "Consciousness is everywhere" doesn't seem to match either the content of the article or IIT theory itself. It's true that IIT uses "phi" values (defined by the theory) as a measure of consciousness, and that these values are based on the ability of any system to integrate information. The higher the value, the more "conscious" the system. However, this still means that there is a distinction between systems which have a phi value greater than zero and things which do not. Also, it is a consequence of this approach that everything which processes & integrates information, from a calculator to a camera to an ant, has some measure of "consciousness." The important point here is that this is not an empirical finding of the theory. It is defined ahead of time (along with information) so that consciousness can be investigated with greater ease (it's hard to study a phenomenon which is so difficult to define).

I think Koch and Tononi have stated that consciousness is fundamental because information is only meaningful when something assigns meaning to it. Otherwise it's just chaos. As you have told me before with the pebble example in Greek voting, the pebbles themselves only conveyed information through the meaningful observation of the Greeks themselves. Without the meaning the Greeks assigned them, they are just pebbles. Koch has stated that qualia are a different property of the universe, and as such I think when people like him and Chalmers state that consciousness is fundamental to the universe, they mean that there's another property which must be introduced which can account for subjective experiences. If according to the theory the causal effects of different neurons create consciousness, and that is information, there must be something which gives meaning to that information. And Koch is a reductionist: if the 'mind' is a separate emergent property of neural impulses, it must also have properties which can be reduced.

3) Most of the work on consciousness by Tononi or Tononi & Koch from the IIT framework unfortunately uses the IIT definition of consciousness to make conclusions about it which are at least in part a product of the definitions set beforehand. Even in somewhat more formal articles, such as "Can Machines Be Conscious?", this becomes apparent. When the authors discuss what isn't necessary for consciousness they are depending on their definition assumed to be true a priori. Nor is this the only time when particular definitions lead to results because of the definitions, rather than hypotheses and empirical study. The juxtaposition of their technical use of the term "awareness" and the more commonplace one, for example, leads to somewhat awkward phrasing. They state that (italics in original): "People can attend to events or objects—that is, their brains can preferentially process them—without consciously perceiving them. This fact suggests that being conscious does not require attention." In other words, people "attend" to things which they aren't paying attention to. The authors have simply taken what most people mean when they use the term attention (in the "paying attention" sense) and replaced it with "consciously perceiving". There's nothing inherently wrong with this, but it is important to recognize what components of the theory hold true under investigation simply because of the way terms are defined in advance.

I think a good way to test this out would be through optogenetics on animals that have absence seizures. If they can somehow show that the animal is still conscious while not actually feeling a sense of awareness, that would be strong empirical evidence that awareness is not consciousness.

A final addition I wanted to make: Koch isn't very keen on the quantum theories of mind. Perhaps that's why he remains a staunch reductionist. The only other way I feel one could explain qualia would be through some new quantum mechanism, so if one cannot accept the quantum theories, then mental properties are fundamental.
 

LegionOnomaMoi

Veteran Member
Premium Member
He has specifically stated that he believes the universe has meaning, that consciousness is fundamental, that he thinks there is something like Spinoza's pantheist God in the universe.
He may well have (you would know better than I), but the question then becomes whether this is part of the theory he follows or not. It need not be, of course, and if it is not that doesn't make his belief false. But it is still, I think, important to distinguish what views are part of a theory and those which are unique (or at least individual) to the researcher.

I think Koch and Tononi have stated that conciousness is fundamental because information is only meaningful when something assigns meaning to it.
It's one thing to say that consciousness is fundamental, and another to say it is everywhere. IIT specifically prohibits the latter, as information is "physical" and is not conscious. Only systems which are capable of processing and integrating information to some extent can be considered conscious. Personally, I think that as soon as one states that some intracellular organelle assigns "meaning" simply because according to information theory it is to some extent responding to and integrating information, the term loses any meaning (and if there is any term which should retain its meaning, it's the term "meaning").

More importantly, perhaps, while these definitions do help us formalize notions like consciousness, they don't get us closer to explaining what humans do (or similar mammals). Having defined consciousness as the ability of a system to integrate information, they are then able to use information theory to quantify, to some extent, how "conscious" a system is, but we could do that anyway without the theory. And unless IIT can be shown to bridge the gap between conceptual processing and the unthinking adaption/reaction of most systems, then I think treating consciousness in this gradient way actually creates more problems than it solves. In a way, IIT treats consciousness the way that classical computer science did, only instead of symbolic processing and algorithms we have system complexity and information. We're still left with nothing to explain the distinction between the type of learning a machine or plant is capable of, and that which a human is, along with the conception of this difference as quantitative rather than qualitative.

Koch has stated that qualia are a different property of the universe
In his book (The Quest for Consciousness)? Unfortunately, I haven't read it.

If according to the theory the causal effects of different neurons create consciousness, and that is information
According to the theory, consciousness is the ability of a system, like some neural regions/populations/networks in the brain, to receive and integrate information distinct in some sense from the system. Consciousness and information are, according to IIT, necessarily distinct, as the former is a product of the capacity of a system to react in particular ways to the latter.

I think a good way to test this out would be through optogenetics on animals that have absence seizures. If they can somehow show that the animal is still conscious while not actually feeling a sense of awareness, that would be strong empirical evidence that awareness is not consciousness.
The problem, though, is again how the terms are defined. According to IIT, humans are not conscious, but rather possess numerous parts which are independently conscious. Tononi & Koch make this pretty explicit when they speak about the brain having multiple systems with a phi value, and thus multiple "conscious systems" to the extent that anything with a phi value is conscious. Of course, this makes the whole theory problematic because the point is to be able to quantify consciousness in a way which makes empirical investigation possible while maintaining some useful way of describing the human "mind". In order to avoid the consequence of proposing that humans aren't really composed of multiple conscious systems at the same time, they propose a "main complex", but give no reasons to support this. In effect, then, the theory sort of folds back on itself, at least to the extent that it is useful when it comes to explaining qualia. At any given time, the brain has multiple "complexes" with phi values, so how do these result in the subjective, cohesive, and conscious experience of some sensation (like color)? Apart from the suggestion of a "main complex", we're sort of back at square one: the brain does stuff.
 

MD

qualiaphile
He may well have (you would know better than I), but the question then becomes whether this is part of the theory he follows or not. It need not be, of course, and if it is not that doesn't make his belief false. But it is still, I think, important to distinguish what views are part of a theory and those which are unique (or at least individual) to the researcher.

From what I've read so far, Koch is all over the place. Sometimes he says he's a pantheist, but he still goes to church sometimes; he says there is no afterlife, but at the same time that consciousness is fundamental. He's as confused as the rest of us are, which is fine and quite normal.

To be honest you're much better read with regards to IIT than I am. But if I had to guess why he would hold such a belief, following IIT's 'consciousness is information' statement, it would be that there is information at the quantum level. As such consciousness is as fundamental as spin or charge, because it can arise from quantum information.

It's one thing to say that consciousness is fundamental, and another to say it is everywhere. IIT specifically prohibits the latter, as information is "physical" and is not conscious. Only systems which are capable of processing and integrating information to some extent can be considered conscious. Personally, I think that as soon as one states that some intracellular organelle assigns "meaning" simply because according to information theory it is to some extent responding to and integrating information, the term loses any meaning (and if there is any term which should retain its meaning, it's the term "meaning").

Yes that's true. But if consciousness emerges through information states, then wouldn't it be an example of emergence? Why call it fundamental? Calling it fundamental implies that it's just there in the universe waiting to be channeled like charge in a circuit.

More importantly, perhaps, while these definitions do help us formalize notions like consciousness, they don't get us closer to explaining what humans do (or similar mammals). Having defined consciousness as the ability of a system to integrate information, they are then able to use information theory to quantify, to some extent, how "conscious" a system is, but we could do that anyway without the theory. And unless IIT can be shown to bridge the gap between conceptual processing and the unthinking adaption/reaction of most systems, then I think treating consciousness in this gradient way actually creates more problems than it solves. In a way, IIT treats consciousness the way that classical computer science did, only instead of symbolic processing and algorithms we have system complexity and information. We're still left with nothing to explain the distinction between the type of learning a machine or plant is capable of, and that which a human is, along with the conception of this difference as quantitative rather than qualitative.

Yes I agree IIT in many ways is more about the measurement of consciousness rather than what consciousness actually is. But I think it's the best non quantum theory of mind. To be honest I can't really think of any other recent non quantum theories of mind. Do you know any?

In his book (The Quest for Consciousness)? Unfortunately, I haven't read it.

I actually found it on a website. Lol. From Qualia, Consciousness, and Zombies (RJS)

"I believe that qualia are properties of the natural world. They do not have a divine or supernatural origin. Rather they are the consequences of unknown laws that I would like to uncover.

Many questions follow from that belief: Are qualia an elementary feature of matter itself, or do they come about only in exceedingly organized systems? Put differently, do elementary particles have qualia, or do only brains have them? … Does my Mac enjoy its intrinsic elegance, whereas my accountant’s slab of non-Mac machinery suffers because of its squat gray exterior and clunky software? Is the Internet, with its billion nodes, sentient? (p. 28) "


According to the theory, consciousness is the ability of a system, like some neural regions/populations/networks in the brain, to receive and integrate information distinct in some sense from the system. Consciousness and information are, according to IIT, necessarily distinct, as the former is a product of the capacity of a system to react in particular ways to the latter.

Ahhh I see. So consciousness is basically how a system reacts to information. Well in that case anything could be conscious, couldn't it? I mean almost everything is a system to some extent which has its own information. Even an atom is brimming with information and has a very weak system to it.

The problem, though, is again how the terms are defined. According to IIT, humans are not conscious, but rather possess numerous parts which are independently conscious. Tononi & Koch make this pretty explicit when they speak about the brain having multiple systems with a phi value, and thus multiple "conscious systems" to the extent that anything with a phi value is conscious. Of course, this makes the whole theory problematic because the point is to be able to quantify consciousness in a way which makes empirical investigation possible while maintaining some useful way of describing the human "mind". In order to avoid the consequence of proposing that humans aren't really composed of multiple conscious systems at the same time, they propose a "main complex", but give no reasons to support this. In effect, then, the theory sort of folds back on itself, at least to the extent that it is useful when it comes to explaining qualia. At any given time, the brain has multiple "complexes" with phi values, so how do these result in the subjective, cohesive, and conscious experience of some sensation (like color)? Apart from the suggestion of a "main complex", we're sort of back at square one: the brain does stuff.

I agree that IIT does not really explain qualia. And although qualia are an important part of consciousness, IIT is trying. And in science it's all about trial and error when it comes to new things.
 

LegionOnomaMoi

Veteran Member
Premium Member
Yes I agree IIT in many ways is more about the measurement of consciousness rather than what consciousness actually is. But I think it's the best non quantum theory of mind. To be honest I can't really think of any other recent non quantum theories of mind. Do you know any?

Most theories of consciousness don't have names, because most of them concern how the human brain (and possibly similar brains) produces consciousness (or, in the case of more theological/spiritual views, how something else must). Chalmers is the head editor of a monograph series published by Oxford University Press entitled Philosophy of Mind. His contribution, along with those of Levitt, Thau, Rosenberg, and others, is explicitly about consciousness, and each promotes a view at least somewhat distinct from the others. The same is true for Stapp, Eccles, Penrose, and others who believe that quantum mechanics is the key. Simplistically, all approaches which are not religious/spiritual in principle (that is, which do not claim that something beyond natural law is at work) can be divided into three general categories: those which hold that the brain is fundamentally reducible and algorithmic, those which see it as a physical system from which emerges a nonphysical and irreducible property we call consciousness, and those which see consciousness as the product of quantum mechanics. Generally speaking, those in the second category rely on notions of emergence, but as with QM theories, how this is defined and how it functions to produce consciousness changes from person to person.


I found it actually on a website. Lol. From Qualia, Consciousness, and Zombies (RJS)
Ironically, I ordered the book quoted there by accident. I haven't read it yet.



Ahhh I see. So consciousness is basically how a system reacts to information. Well in that case anything could be conscious, couldn't it? I mean almost everything is a system to some extent which has its own information. Even an atom is brimming with information and has a very weak system to it.

If information is the result of probability states of physical entities, and information is distinct from consciousness (as IIT holds) then it must be that not all things are conscious. Otherwise there would be no information. Photons, for example, are not conscious according to IIT.
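To make "information as the result of probability states" concrete: this is essentially Shannon's notion, where the information carried by a physical system is a function of the probabilities of its possible states. A minimal sketch (standard Shannon entropy, not anything specific to IIT):

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution,
    given as a list of probabilities summing to 1."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A fair coin flip carries 1 bit of information per outcome;
# a system with only one possible state carries none.
print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([1.0]))       # 0.0
```

The second case is the relevant one here: a system with no uncertainty over its states carries no information, which is why "everything is information" doesn't follow just because everything is physical.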



I agree that IIT does not really explain qualia. And although qualia are an important part of consciousness, IIT is trying. And in science it's all about trial and error when it comes to new things.
That's true. I just find other solutions to be closer to the "truth", or perhaps just closer to a more complete explanation. The biggest issue I see with IIT is the same one I find in the classical approach: consciousness is simply a quantitative gradient, with photodiodes at the lower end and human brains at the higher end. After decades and decades of classical cognitive science and computer science treating the brain as just quantitatively more complex than current computers (rather than qualitatively different), and given that we have learned more about how wrong we were than about how to simulate the brain, it seems to me that it's time to really focus on the qualitative differences between complex systems.
 

apophenia

Well-Known Member
Random interjection :

I have been anaesthetised for surgical procedures, and I find waking from anaesthesia disturbing. Under the influence of some anaesthetics, all awareness disappears. One moment I am being asked to count down from ten to zero, and the next moment (so it seems) I am being told that the operation was completed and it's time to wake up.

Even after a dreamless sleep I do not have that sensation of having ceased to exist. After anaesthetic, it is like the timeline has been edited. It seems continuous from the countdown to the wake-up, which is very disorienting and for some reason unpleasant.

I expected that after years of meditation I would remain in objectless samadhi, or perhaps a subtle realm. In fact, all consciousness was utterly annihilated.

Not sure where that fits in this discussion, but there it is.
 

MD

qualiaphile
Random interjection :

I have been anaesthetised for surgical procedures, and I find waking from anaesthesia disturbing. Under the influence of some anaesthetics, all awareness disappears. One moment I am being asked to count down from ten to zero, and the next moment (so it seems) I am being told that the operation was completed and it's time to wake up.

Even after a dreamless sleep I do not have that sensation of having ceased to exist. After anaesthetic, it is like the timeline has been edited. It seems continuous from the countdown to the wake-up, which is very disorienting and for some reason unpleasant.

I expected that after years of meditation I would remain in objectless samadhi, or perhaps a subtle realm. In fact, all consciousness was utterly annihilated.

Not sure where that fits in this discussion, but there it is.

Well Orch OR (by Penrose and Hameroff) tries to address this. It says that microtubules are involved in creating consciousness through quantum entanglement and wave function collapse. Thus when you are asleep the microtubules are still somewhat active, so when you wake up you don't feel like time has just flown by (although that has happened to me many times after a deep sleep). When you're anaesthetised the microtubules stop completely, so your consciousness literally stops.

It's a speculative theory without a lot of evidence to back it up, but it tries to address this dilemma. It has also received a lot of flak from the scientific community, because anything quantum is seen as woo (since quantum mechanics itself is poorly understood). Hameroff also talks about the possibility of a soul through entanglement and hangs out with Deepak Chopra sometimes, which doesn't help his reputation (although I like them both).
 

MD

qualiaphile
Simplistically, all approaches which are not religious/spiritual in principle (that is, which do not claim that something beyond natural law is at work) can be divided into three general categories: those which hold that the brain is fundamentally reducible and algorithmic, those which see it as a physical system from which emerges a nonphysical and irreducible property we call consciousness, and those which see consciousness as the product of quantum mechanics. Generally speaking, those in the second category rely on notions of emergence, but as with QM theories, how this is defined and how it functions to produce consciousness changes from person to person.

Ahh I see. Although one can argue that the second and third categories could easily imply mysticism. In both of those areas, consciousness is 'one' with the universe.

If information is the result of probability states of physical entities, and information is distinct from consciousness (as IIT holds) then it must be that not all things are conscious. Otherwise there would be no information. Photons, for example, are not conscious according to IIT.

Well correct me if I'm wrong, but doesn't information exist at the quantum level? Wouldn't time and space also have information embedded within them to create their dimensions? Unless my definition of information is completely wrong. =/


That's true. I just find other solutions to be closer to the "truth", or perhaps just closer to a more complete explanation. The biggest issue I see with IIT is the same one I find in the classical approach: consciousness is simply a quantitative gradient, with photodiodes at the lower end and human brains at the higher end. After decades and decades of classical cognitive science and computer science treating the brain as just quantitatively more complex than current computers (rather than qualitatively different), and given that we have learned more about how wrong we were than about how to simulate the brain, it seems to me that it's time to really focus on the qualitative differences between complex systems.

If you had to pick one theory from a pure intuitive feel which one would it be? I think you like Stapp's theory :cool:
 