
Is Kaku correct?

Polymath257

Think & Care
Staff member
Premium Member
Some neurons light up and there is sexual thrill? Who feels the thrill? Neurons or the chemicals?

The 'thrill' is in the larger scale response, not at the level of individual neurons. It consists of an increased sensitivity (meaning more processing goes to sensory data), a jolt from the emotion centers (increasing the likelihood of continuing the activity), and an activation of pleasure centers (again, to increase the likelihood of continuing the activity).

The point is that the 'who' consists of the activity of all of these neurons, not any single one. An emergent phenomenon.
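What "emergent" means here can be sketched with a toy model (everything below is invented for illustration): no single unit carries the graded response; only the aggregate activity does.

```python
import random

random.seed(0)

# Toy "neurons": each either fires (1) or doesn't (0), with probability
# proportional to the stimulus. Any single neuron only ever answers 0 or 1.
def neuron_response(stimulus_strength):
    return 1 if random.random() < stimulus_strength else 0

def population_response(n_neurons, stimulus_strength):
    # The population-level signal is the fraction of neurons firing together.
    firing = [neuron_response(stimulus_strength) for _ in range(n_neurons)]
    return sum(firing) / n_neurons

# A single neuron carries almost no information about stimulus intensity;
# the graded "response" exists only at the level of the whole population.
weak = population_response(10_000, 0.2)
strong = population_response(10_000, 0.8)
assert weak < strong
```

The point of the sketch is only that the quantity of interest lives at the population scale, which is the sense of "emergent" used above.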
 

charlie sc

Well-Known Member
If I want to know about computers and programming and specifically AI, I'll rely on experts in those fields. If I need a heart transplant I choose a cardiologist, not a neurologist. I think you would do the same.
Your analogy is broken. Programmers are indeed trying to create AI, but, surprise, self-aware AI does not actually exist. If I were trying to mimic the only intelligence I know about, I would get experts in the only intelligence that currently exists.

Accepting historical progress has nothing to do with scepticism or optimism. It has to do with accepting historical progress.
Accepting the progress of technological advancement does not mean you should throw your scepticism down the toilet. I love Star Trek and have seen every episode, but that doesn't mean I expect all of its inventions to exist. Some have already been invented, but I'm not expecting things like the teleporter anytime soon ;) Similarly, from what I know of physics, going back in time is impossible. Am I now to use technology as an analogy to think this can be the case? Nah, only the gullible think like this. Equally, from what I know of our intelligence, is it likely we can create self-aware AI? It's unlikely, but it may be possible in the distant future. It's only the gullible who push this forward as if they know it'll happen.

On the other hand, I'm also aware that people who have said "it's just too impossible" in the past have almost always been proved wrong. I'll just repeat what I posted earlier...
You were either implying this about me or it's a complete waste of space. Pick one :p

"Seemed to suggest"? Care to show what I wrote that led you to that assumption?
Sure. Look below.
What is self-aware? Have you asked a bot if it is self-aware?
Yes, I have. You also seemed to think that bot traits in games seem to equate to traits in reality. Bizarre.

If you think I need your help with financial matters, you are wrong.

However, since this is the second time you've mentioned kickstarter cons, I'm getting the impression that you may have been burned.
I have never supported anyone on these platforms, but I've seen Thunderf00t's videos so I know people do. With your fervent attitude on this, you seem like the type to do this. Eh, do what you want ;) Seems like you bite the hand that feeds/helps you.

Btw, I suggest you look at this: Applications of artificial intelligence - Wikipedia.
Even the wiki differentiates between weak and strong AI. You should study this topic and get back to me.
 

ecco

Veteran Member
Your analogy is broken.

Show how my analogy is broken.

Accepting the progress of technological advancement does not mean you should throw your scepticism down the toilet. I love Star Trek and have seen every episode, but that doesn't mean I expect all of its inventions to exist. Some have already been invented, but I'm not expecting things like the teleporter anytime soon ;) Similarly, from what I know of physics, going back in time is impossible. Am I now to use technology as an analogy to think this can be the case? Nah, only the gullible think like this.

You say my analogy of using the right specialist for the job is broken. Yet here you delve into woo, show it is woo, and then tell me I am gullible for believing actual progress in computer-related fields will not continue.


Equally, from what I know of our intelligence, is it likely we can create self-aware AI? It's unlikely, but it may be possible in the distant future. It's only the gullible who push this forward as if they know it'll happen.

See above. Actually, look at all the things that have come to pass since 1949 and then you can continue your denial of progress if you want to.

ecco previously:
On the other hand, I'm also aware that people who have said "it's just too impossible" in the past have almost always been proved wrong.

You were either implying this about me or it's a complete waste of space. Pick one
At the time, it was a general comment. The more I read your posts, the more I'm inclined to accept it applies to you.


You also seemed to think that bot traits in games seem to equate to traits in reality. Bizarre.

Some programmed AI traits do have equivalencies in human traits. Do you deny that some people are more aggressive than other people? Do you deny that some AI avatars are more aggressive than others? Do you deny that some people are more skilful than others? Do you deny that some AI pilots are more skilful than others?



I have never supported anyone on these platforms, but I've seen Thunderf00t's videos so I know people do. With your fervent attitude on this, you seem like the type to do this. Eh, do what you want ;) Seems like you bite the hand that feeds/helps you.

What a nonsensical, baseless, ridiculous comment.

Btw, I suggest you look at this: Applications of artificial intelligence - Wikipedia.
Even the wiki differentiates between weak and strong AI. You should study this topic and get back to me.

This started out as a fairly intellectual discussion about the future of AI. You have dragged it down into just another forum mudfight.
So, no, I'll not bother to look at another of your links. I'll not bother to get back to you.
 

charlie sc

Well-Known Member
This started out as a fairly intellectual discussion about the future of AI. You have dragged it down into just another forum mudfight.
So, no, I'll not bother to look at another of your links. I'll not bother to get back to you.
You're very funny, ecco, and your stubbornness is very amusing. The best we can do is disagree. You seem to think AI is coming soon/is already here, or whatever. By this point, I'm not sure what you think. I think strong AI is more likely in the distant future (i.e. it won't happen anytime soon). We'll leave it at that. ;)

Good talk.
 

Jumi

Well-Known Member
You made the comment that Gandhi's change in behavior is a programming bug. I was merely pointing out that changes in behavior are not necessarily caused by bugs.
You could argue that evolution is a series of bugs that became features, if you wanted to compare the two. And I don't believe we are created like programs by a creator god, which you might have implied in your reply to me, so it felt off.

You made the comment that the Civ AI was just a few sets of IFs. Isn't that how the human brain makes evaluations?
Not really, and real AIs don't either.

You're driving down the street and ahead the traffic light turns yellow.
If you stay at the same speed, you may not get through before it turns red. If you don't get through before it turns red, you may get a ticket, or you may crash. If you speed up, you may make it. And if you've been driving for a while, it might be: this situation has come up before at this speed and distance, so what was done then?

If, if, if all done very quickly.
If statements don't really work that way in programming.
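For what it's worth, the difference being pointed at can be sketched in Python (all thresholds and weights below are invented for illustration): a hand-coded IF chain fires the first matching rule, while something closer to a real decision system weighs continuous evidence and picks the highest-scoring action.

```python
# Hand-coded rule chain: brittle, order-dependent, all-or-nothing.
def rule_based(speed, distance_to_light, ticket_risk):
    if distance_to_light > 100:
        return "maintain"
    if speed > 60:
        return "brake"
    if ticket_risk > 0.5:
        return "brake"
    return "accelerate"

# Closer to how learned systems decide: score each action by weighing
# continuous evidence, then pick the best score. The weights are made up;
# in a trained system they would come from data, not a programmer.
def weighted(speed, distance_to_light, ticket_risk):
    scores = {
        "brake":      0.5 * ticket_risk + 0.01 * speed,
        "accelerate": 0.02 * distance_to_light - 0.3 * ticket_risk,
        "maintain":   0.25,
    }
    return max(scores, key=scores.get)

# Close to the light with a high ticket risk: both agree on braking,
# but the weighted version got there by comparing scores, not by
# falling through a fixed ladder of IFs.
assert rule_based(50, 40, 0.9) == "brake"
assert weighted(50, 40, 0.9) == "brake"
```

The contrast is schematic, but it is the usual distinction: rule chains enumerate cases explicitly, whereas weighted evidence combination degrades gracefully as inputs vary.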
 

atanu

Member
Premium Member
Michio Kaku has proposed a consciousness model comprising 4 levels:

[image: Kaku's four-level consciousness scale]


I can understand Kaku when he says that AI will be fully conscious soon, with the ability to imagine the future, since Kaku’s model is mechanistic.

The following article also seems to support Kaku. The article claims that current AI machines exhibit level II and III consciousness.

Gödel, Consciousness and the Weak vs. Strong AI Debate

A machine exhibiting consciousness at levels I to III of Kaku’s scale may be very dangerous, depending upon the user’s intention.
...

However, consciousness is not amenable to mathematical formalism and Kaku’s scheme is incomplete. The hard problem of consciousness exists. So, a truly conscious machine that feels beauty or anger may not be possible.
 

Polymath257

Think & Care
Staff member
Premium Member
Michio Kaku has proposed a consciousness model comprising 4 levels:

[image: Kaku's four-level consciousness scale]


I can understand Kaku when he says that AI will be fully conscious soon, with the ability to imagine the future, since Kaku’s model is mechanistic.

The following article also seems to support Kaku. The article claims that current AI machines exhibit level II and III consciousness.

Gödel, Consciousness and the Weak vs. Strong AI Debate

A machine exhibiting consciousness at levels I to III of Kaku’s scale may be very dangerous, depending upon the user’s intention.
...

However, consciousness is not amenable to mathematical formalism and Kaku’s scheme is incomplete. The hard problem of consciousness exists. So, a truly conscious machine that feels beauty or anger may not be possible.

I've never quite grasped what the 'hard' problem of consciousness is supposed to be. I know it is supposed to be something about experiences being *my* experience, but I just don't see how that is a problem. If the activity happens in my brain, then it is my experience.

Perhaps the problem is in the concept of an 'experience'? So, if a computer/robot is hooked up to the outside world, gets information from sensors about that world, has goals that direct behavior, that behavior is based also upon the sensory data from the world, etc, what else is required to say that the robot has an experience of its environment?

The only other way I can see playing this leads to the position that we cannot know if another person is conscious or not. And, sure, if you go to extreme skepticism, that may be correct. Maybe this is all an illusion and I am the only conscious entity in the universe and everything is playing out on some screen. But, truthfully, if you go that far, then there really isn't much more to say.

But if, like we do in day to day life, we assume someone is conscious because they *act* conscious, have we missed anything essential? I would say not. And I see no reason to think that we won't be able to get robots (acting in an environment and with goals) to have emotions (likes/dislikes based on goals) and points of view.
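A toy sketch of that last idea, with "emotion" reduced to a signed progress-toward-goal signal (entirely schematic; this is a stand-in for likes/dislikes, not a claim about real affect):

```python
# Toy goal-driven agent: its "valence" is positive when a sensor reading
# moves it toward its goal, negative when it moves away.
class Agent:
    def __init__(self, goal):
        self.goal = goal          # desired sensor reading
        self.last_error = None    # distance from goal on the previous step

    def step(self, sensor_reading):
        error = abs(self.goal - sensor_reading)
        if self.last_error is None:
            valence = 0.0         # first reading: no baseline yet
        else:
            # Positive when the error shrinks (approaching the goal),
            # negative when it grows.
            valence = self.last_error - error
        self.last_error = error
        return valence

agent = Agent(goal=10.0)
agent.step(4.0)                  # establish a baseline
assert agent.step(7.0) > 0       # moved toward the goal -> "likes" this
assert agent.step(2.0) < 0       # moved away -> "dislikes" this
```

Whether such a signal deserves the word "emotion" is exactly the point under dispute in this thread; the sketch only shows that a goal plus feedback yields graded likes/dislikes mechanically.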
 

ecco

Veteran Member
You could argue that evolution is a series of bugs that became features. If you wanted to compare the two. And I don't believe we are created like programs by a creator god, that you might have implied in your reply to me, so it felt off.
I didn't want to compare the two. I also never said anything about a creator god, although you have now attributed that to me twice.





previously...
You made the comment that the Civ AI was just a few sets of IFs. Isn't that how the human brain makes evaluations?
Not really, and real AIs don't either.

If the human brain doesn't make decisions by evaluating pros and cons, then please explain how it makes decisions.
If an AI doesn't make decisions by evaluating pros and cons, then please explain how it makes decisions.


If statements don't really work that way in programming.
Oh goody. Now you are going to tell me how IF statements work. I can't wait.
 

ecco

Veteran Member
However, consciousness is not amenable to mathematical formalism and Kaku’s scheme is incomplete. The hard problem of consciousness exists. So, a truly conscious machine that feels beauty or anger may not be possible.

If I had to choose between your view of the problem or Kaku's view of the problem, I would have to go with Kaku's.
 

atanu

Member
Premium Member
I've never quite grasped what the 'hard' problem of consciousness is supposed to be. I know it is supposed to be something about experiences being *my* experience, but I just don't see how that is a problem. If the activity happens in my brain, then it is my experience.

Perhaps the problem is in the concept of an 'experience'? So, if a computer/robot is hooked up to the outside world, gets information from sensors about that world, has goals that direct behavior, that behavior is based also upon the sensory data from the world, etc, what else is required to say that the robot has an experience of its environment?

The only other way I can see playing this leads to the position that we cannot know if another person is conscious or not. And, sure, if you go to extreme skepticism, that may be correct. Maybe this is all an illusion and I am the only conscious entity in the universe and everything is playing out on some screen. But, truthfully, if you go that far, then there really isn't much more to say.

But if, like we do in day to day life, we assume someone is conscious because they *act* conscious, have we missed anything essential? I would say not. And I see no reason to think that we won't be able to get robots (acting in an environment and with goals) to have emotions (likes/dislikes based on goals) and points of view.

I know your position. I repeat that in this model there is no explanation of why neural information processing observed from the outside should give rise to subjective experience on the inside.

I do not think about it too much.
 

Polymath257

Think & Care
Staff member
Premium Member
I know your position. I repeat that in this model there is no explanation of why neural information processing observed from the outside should give rise to subjective experience on the inside.

I do not think about it too much.

Why? Because it only exists internal to one brain.

An example. Suppose that a robot uses its cameras to scan an area. It then uses its CPU to process the information from that camera and determines that there is a flower in the scanned area (some function comes back with a pointer to 'flower').

Isn't that robot 'aware' of the flower at that point? In what sense is it NOT? Isn't it having an 'internal experience' of that flower?
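The scenario described above, reduced to a schematic sketch (the classifier is a made-up stub standing in for a real vision model; the names are invented):

```python
# Hypothetical stand-in for a trained vision model: pretend any image
# containing "petals" is a flower.
def classify(image):
    return "flower" if "petals" in image else "unknown"

class Robot:
    def __init__(self):
        # The robot's internal record of what it has detected -- the
        # candidate "experience" on the account argued above.
        self.world_model = []

    def scan(self, image):
        label = classify(image)
        self.world_model.append(label)
        return label

robot = Robot()
robot.scan({"petals", "stem"})
assert "flower" in robot.world_model
```

The question in the thread is whether appending 'flower' to an internal record is awareness or merely bookkeeping; the code is neutral on that, it just makes the setup concrete.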
 

charlie sc

Well-Known Member
Why? Because it only exists internal to one brain.

An example. Suppose that a robot uses its cameras to scan an area. It then uses its CPU to process the information from that camera and determines that there is a flower in the scanned area (some function comes back with a pointer to 'flower').

Isn't that robot 'aware' of the flower at that point? In what sense is it NOT? Isn't it having an 'internal experience' of that flower?
This topic is fairly convoluted. There’s something called the rouge test, which tests when infants are able to self-recognise. There’s more of a consensus on testing recognition than on self-awareness, because we aren’t entirely sure how to test the latter. Infants are placed in front of a mirror with a mark secretly put on their nose/cheek. As I recall, around 3 years old is when infants touch the mark on their face, because they have developed some concept of self. Before this age they don’t actually recognise the mark, so this type of self-recognition is missing. This might imply that toddlers are working primarily on instinct and reactivity. Similarly, if we were to think of awareness, self-awareness requires more than just consciousness.

If we were to apply consciousness in a very loose sense, a fire detector is conscious. Though, what AI proponents and idealists want/fear is consciousness that is also self-aware. ;)
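The logic of the rouge test can be caricatured in code (purely schematic; `has_self_model` is a made-up flag standing in for whatever capacity actually develops at that age):

```python
# Toy rouge test: an agent "passes" if it maps a mark seen in the mirror
# onto its own self-model, rather than treating the reflection as just
# another object in the world.
class Infant:
    def __init__(self, has_self_model):
        self.has_self_model = has_self_model
        self.appearance = {"nose": "clean"}   # the infant's self-model

    def look_in_mirror(self, reflection):
        if not self.has_self_model:
            return "ignores mark"             # sees an image, not *itself*
        if reflection != self.appearance:
            return "touches own nose"         # mismatch attributed to self
        return "no reaction"

marked = {"nose": "rouge"}
assert Infant(has_self_model=False).look_in_mirror(marked) == "ignores mark"
assert Infant(has_self_model=True).look_in_mirror(marked) == "touches own nose"
```

The sketch makes the point in the post concrete: detecting the mark requires comparing the reflection against a model of oneself, which is a capacity over and above bare perception.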
 

Polymath257

Think & Care
Staff member
Premium Member
This topic is fairly convoluted. There’s something called the rouge test, which tests when infants are able to self-recognise. There’s more of a consensus on testing recognition than on self-awareness, because we aren’t entirely sure how to test the latter. Infants are placed in front of a mirror with a mark secretly put on their nose/cheek. As I recall, around 3 years old is when infants touch the mark on their face, because they have developed some concept of self. Before this age they don’t actually recognise the mark, so this type of self-recognition is missing. This might imply that toddlers are working primarily on instinct and reactivity. Similarly, if we were to think of awareness, self-awareness requires more than just consciousness.

If we were to apply consciousness in a very loose sense, a fire detector is conscious. Though, what AI proponents and idealists want/fear is consciousness that is also self-aware. ;)

And I would agree that the robot as described would not be self-conscious. But in that case, neither would my cat. And it seems clear to me that my cat *is* conscious.

If the hard step is simply going from 'conscious' to 'self-conscious', I have to admit I see it as even less 'hard'.
 

charlie sc

Well-Known Member
And I would agree that the robot as described would not be self-conscious. But in that case, neither would my cat. And it seems clear to me that my cat *is* conscious.

If the hard step is simply going from 'conscious' to 'self-conscious', I have to admit I see it as even less 'hard'.
It may be easier for living beings, since they have emotions, senses, instincts and so on. Nonetheless, most animals' brains are still not capable of self-awareness. Our brains have developed in such a way as to allow this to happen. As I recall, the prefrontal cortex is vital in conceptualising the future, and therefore in our ability to conceptualise ourselves. I also think language, with its symbolic nature, plays a vital role. Since robots are missing all these properties, no one really knows how to create strong AI. And since your cat does not possess the same cognitive ability or self-awareness as us, I don't know why you're saying it's an easy step. I mean, it might be, but there isn't enough information to make that claim.
 