
Is Kaku correct?

atanu

Member
Premium Member
Why? Because it only exists internally to one brain.

An example. Suppose that a robot uses its cameras to scan an area. It then uses its CPU to process the information from the camera and determines that there is a flower in the scanned area (some function comes back with a pointer to 'flower').

Isn't that robot 'aware' of the flower at that point? In what sense is it NOT? Isn't it having an 'internal experience' of that flower?
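The robot example above can be sketched in a few lines of Python. The feature list and the scoring rule here are invented for illustration; a real robot would run an actual vision pipeline, but the point is the same: some function comes back with a pointer to 'flower'.

```python
# A minimal sketch of the robot described above: scan, process, label.
# scan_area and detect_flower are hypothetical stand-ins for a real
# camera driver and vision model.

def scan_area():
    # Pretend camera output: (feature, confidence) pairs.
    return [("petal", 0.9), ("stem", 0.8), ("rock", 0.2)]

def detect_flower(scan):
    # Return the label 'flower' if flower-like features dominate, else None.
    flower_features = {"petal", "stem"}
    score = sum(conf for name, conf in scan if name in flower_features)
    return "flower" if score > 1.0 else None

result = detect_flower(scan_area())   # "flower"
```

Whether holding that internal token counts as 'awareness' is exactly what the thread is debating.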

No. Surely no robot will experience the elation that I feel in a garden, since we cannot program that for the robot.

We do not know consciousness and we do not know ‘matter’. We do not know the mechanics of how physical functions of the brain become subjective experience.
 

osgart

Nothing my eye, Something for sure
An abstract, non real, consciousness field! Everything the consciousness does minus actually being alive! Even creating the alive simulations as authentically as possible, there is no experiencer.

Value judgments, creativity, intuition, critical thinking, etc., all a non real substrate.

The abstract non real consciousness field has no tangible existence and does everything life does without existing. It simply has no alternative but to simulate life and do it far better than humans.

Autonomous, with a non real heart, mind, and will. Every personality trait programmed specifically.

What makes you alive and it is non real? Functionality aside!
 

Polymath257

Think & Care
Staff member
Premium Member
No. Surely no robot will experience the elation that I feel in a garden, since we cannot program that for the robot.

It seems to me that this should be possible. Have the robot increase the activity of the circuits giving 'positive feedback' while in the garden.

We do not know consciousness and we do not know ‘matter’. We do not know the mechanics of how physical functions of the brain become subjective experience.

We know a lot more than you seem to think. But let's do it this way. How would you determine if a robot has a subjective experience?
 

Jumi

Well-Known Member
I didn't want to compare the two. I also never said anything about a creator god although now twice you attributed that to me.
Maybe you forgot, but you said this:

Yep, yep, yep. It's a good thing people never change their minds and make mistakes. That would imply that God had a few bugs in his creations.


previously...
You made the comment that the Civ AI was just a few sets of IFs. Isn't that how the human brain makes evaluations?


If the human brain doesn't make decisions by evaluating pros and cons, then please explain how it makes decisions.
If an AI doesn't make decisions by evaluating pros and cons, then please explain how it makes decisions.
Basic automation doesn't make something an AI. It's not evaluating "pros and cons"; it's running parts of a program based on criteria set by a programmer. That automation never changes or develops beyond that. It's just like a mechanical clock: it doesn't adjust itself to systematic or other errors. The only thing intelligent about a basic program is the programmer and how they managed to simplify some game element into an equation that gives players the illusion that the enemy is thinking. Why do Civilization AIs need to "cheat" when you adjust them? Because the "AI" can't change to become better.
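The kind of fixed rule-following described above can be written out explicitly. The rules and thresholds below are invented for illustration; the point is that the 'decision' is a static lookup authored by the programmer, not an evaluation:

```python
# A sketch of a 'game AI' that is nothing but condition checks written
# in advance. Nothing here is learned or weighed at runtime.

def enemy_move(distance_to_player, own_health):
    if own_health < 20:
        return "retreat"
    if distance_to_player < 5:
        return "attack"
    return "patrol"

# The same inputs always produce the same move, like the mechanical
# clock in the analogy: the program never adjusts itself.
enemy_move(3, 100)   # "attack", every single time
```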

In university AI courses we talk about how gaming AIs are, in general, a misnomer. There are even free courses about AI in my country, since it's part of our national strategy to be ahead in the field... See A free online introduction to artificial intelligence for non-experts

There's a lot of concern about companies marketing fake AIs, give it a read:
Fake AI vs. Real AI

Oh goody. Now you are going to tell me how IF statements work. I can't wait.
You can read a tutorial on programming ifs, switch statements if you like.
 

Jumi

Well-Known Member
It seems to me that this should be possible. Have the robot increase the activity of the circuits giving 'positive feedback' while in the garden.
Positive feedback, like what? Sorry to say, but these robots with feelings sound like complete woo to me. You can program one to act like it has feelings, and someone might get fooled if they didn't think too much about it.
 

ecco

Veteran Member
Even though Gandhi feels like he's made his own choices by becoming a conqueror, it's because there's a bug causing the unexpected behavior.

Yep, yep, yep. It's a good thing people never change their minds and make mistakes. That would imply that God had a few bugs in his creations.




You could argue that evolution is a series of bugs that became features, if you wanted to compare the two. And I don't believe we are created like programs by a creator god, which you might have implied in your reply to me, so it felt off.

I didn't want to compare the two. I also never said anything about a creator god although now twice you attributed that to me.
Maybe you forgot, but you said this:

Well, you are absolutely right. I forgot that I made a sarcastic response to your allegation that an AI's character flaws were the result of a programming bug. I really need to remember to make it clear when using sarcasm. It's unfortunate that the forum doesn't have a [sarcasm] this is sarcasm [/sarcasm] feature.

If you took my comments to mean I believed we are created like programs by a creator god, oh, well.
 

Polymath257

Think & Care
Staff member
Premium Member
Positive feedback, like what? Sorry to say, but these robots with feelings sound like complete woo to me. You can program one to act like it has feelings, and someone might get fooled if they didn't think too much about it.

And perhaps humans just look like they have feelings.

Positive feedback? Well, of the sort that inclines the program to do that again, to keep looking at the beautiful sunset, say.

I certainly agree that we don't have the ability *yet* to produce a true AI. But emotions (feelings) are just another aspect of how the brain works with information. I truthfully see no reason we couldn't have a machine with feelings. We are, after all, biological machines with feelings.

Let's do a little thought experiment. Suppose we take a conscious, feeling human. And then, one-by-one, we replace each neuron with a semiconductor that does exactly what the neuron does: takes in signals from connections, adds them up, and then produces a signal if a threshold is met. Suppose we even program that chip to act differently with different neurotransmitters.

After all the neurons have been replaced, would not the resulting 'robot' be conscious and have feelings?
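The replacement chip in the thought experiment can be sketched directly: sum the incoming signals and fire if a threshold is met. The neurotransmitter gains below are invented placeholders, not real biophysical values:

```python
# One 'semiconductor neuron' from the thought experiment above.
# Excitatory inputs add to the total; inhibitory inputs subtract.
NEUROTRANSMITTER_GAIN = {"glutamate": 1.0, "GABA": -1.0}  # assumed values

def artificial_neuron(inputs, threshold=1.0):
    # inputs: list of (signal_strength, neurotransmitter) pairs.
    total = sum(s * NEUROTRANSMITTER_GAIN[nt] for s, nt in inputs)
    return 1 if total >= threshold else 0

artificial_neuron([(0.8, "glutamate"), (0.5, "glutamate")])  # fires: 1
artificial_neuron([(0.8, "glutamate"), (0.5, "GABA")])       # silent: 0
```

The thought experiment asks: if a hundred billion of these replace the originals one at a time, at what point (if any) does the feeling stop?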
 

ecco

Veteran Member
Basic automation doesn't make something an AI. It's not evaluating "pros and cons"; it's running parts of a program based on criteria set by a programmer. That automation never changes or develops beyond that. It's just like a mechanical clock: it doesn't adjust itself to systematic or other errors. The only thing intelligent about a basic program is the programmer and how they managed to simplify some game element into an equation that gives players the illusion that the enemy is thinking.

You made a segue there from a "basic" program to AI.
I agree with what you said about basic programs if you are using that term in the sense of an accounting suite. Accounting systems have no "rewards" built into them.

If something is programmed "to give the illusion to players that the enemy is thinking" then we are looking at an AI, however rudimentary it may be. If an "enemy" determines that it makes more sense to go around a hill than over a hill, then that AI is making decisions based on risk and reward, just like we do.
 

ecco

Veteran Member
Why do Civilization AIs need to "cheat" when you adjust them? Because the "AI" can't change to become better.
What do you mean "cheat"?

I just looked through a bunch of message boards and forums discussing CIV and cheating. The comments come down on both sides of the issue. Here are parts of an exchange...

  • Meanwhile at turn 50 the AI's population has grown to 3 while mine is at 2 and he has produced a settler and he has produced a warrior. This is at prince difficulty. How is this even fair?
  • At prince the AI and you are completely even. If an AI is doing better than you it has some kind of advantage in its terrain.
But I can empathize. I have often wished for the ability to "see" what and how some AI got so much so fast.

Nevertheless, I look at these as difficulty levels, not as cheating.

I suck at combat flying (IL2 1946) but I can hold my own against a Rookie AI. Any higher level and I don't last long. I don't consider that to be cheating on the part of the AI, just better tactics.

I have also watched youtube videos of human players vs human players. The victorious players are on a level so much higher than me it's ridiculous.
 

atanu

Member
Premium Member
It seems to me that this should be possible. Have the robot increase the activity of the circuits giving 'positive feedback' while in the garden.

Are positive feedback and subjective experience the same? You will write the code for the positive feedback.



We know a lot more than you seem to think. But let's do it this way. How would you determine if a robot has a subjective experience?

Ahh. We know a lot more and we impart that programmatically to a robot. But we do not know the origin of life-consciousness.

Regarding testing a robot, there is the Turing test, and there is its refutation in the form of the Chinese Room and other thought experiments. The ability to compute and mimic by following some algorithm is not equal to having subjective experience.

And who will certify that a robot has passed the Turing test? We have to do it. Turing forgot to point this out.
 

charlie sc

Well-Known Member
What do you mean "cheat"?

I just looked through a bunch of message boards and forums discussing CIV and cheating. The comments come down on both sides of the issue. Here are parts of an exchange...

  • Meanwhile at turn 50 the AI's population has grown to 3 while mine is at 2 and he has produced a settler and he has produced a warrior. This is at prince difficulty. How is this even fair?
  • At prince the AI and you are completely even. If an AI is doing better than you it has some kind of advantage in its terrain.
But I can empathize. I have often wished for the ability to "see" what and how some AI got so much so fast.

Nevertheless, I look at these as difficulty levels, not as cheating.

I suck at combat flying (IL2 1946) but I can hold my own against a Rookie AI. Any higher level and I don't last long. I don't consider that to be cheating on the part of the AI, just better tactics.

I have also watched youtube videos of human players vs human players. The victorious players are on a level so much higher than me it's ridiculous.
As quite an avid gamer, I'm unsure you've played many games or know much about this topic. In nearly all strategy games I've played (and I've played a lot), when you increase the difficulty they give the AI opponent more stuff and will even handicap the player. The reason I say nearly is because I don't always put the difficulty on insane, nor do I read up on what the changes consist of. Even if I do a quick search on the wiki for the difference between difficulty levels in a Civ game, I'm almost certain they give the AI more stuff and handicap the player. Should we verify who is right, or would this ruin your assumptions? ;)
 

charlie sc

Well-Known Member
What do you mean "cheat"?

I just looked through a bunch of message boards and forums discussing CIV and cheating. The comments come down on both sides of the issue. Here are parts of an exchange...

  • Meanwhile at turn 50 the AI's population has grown to 3 while mine is at 2 and he has produced a settler and he has produced a warrior. This is at prince difficulty. How is this even fair?
  • At prince the AI and you are completely even. If an AI is doing better than you it has some kind of advantage in its terrain.
But I can empathize. I have often wished for the ability to "see" what and how some AI got so much so fast.

Nevertheless, I look at these as difficulty levels, not as cheating.

I suck at combat flying (IL2 1946) but I can hold my own against a Rookie AI. Any higher level and I don't last long. I don't consider that to be cheating on the part of the AI, just better tactics.

I have also watched youtube videos of human players vs human players. The victorious players are on a level so much higher than me it's ridiculous.
So, errrr :p this is where he tells you about the changes in difficulty in your old Civ1 game: Difficulty level (Civ1)

Humorously, the author says, “Since the computer players are controlled not by actual AI (like Siri or self-driving cars), but by simple sets of rules which don't result in very effective play, the game makes the AI leaders more challenging opponents by giving them advantages on higher difficulty levels. The bonuses conferred to both human and AI players on each difficulty level are listed in the table below.”

Your sense of awe is slightly misplaced. Of course, you may try to prove me wrong with other new strategy games and their difficulties. In other words, I'm asking for actual evidence. Asking for evidence is a key defining feature of an atheist. Therefore, I implore you to support claims with evidence, not wishful thinking or assertions.

I realise this may be tough, but sometimes the best decision is to admit you’re wrong. It seems like this is especially difficult for you.
---------------------------------------------------------------------------
I remembered that you don't look at links when they directly go against your world-view with evidence, which is very mature of you. So, I copy-pasted the material. I do apologise if I'm especially harsh on you, but I believe I'm being reciprocal in this ad-hoc dialogue. If someone independent wants to verify my belief, I'm perfectly willing to accept observers' critiques if my manner was not reciprocal.

Element                                          | Chieftain | Warlord | Prince  | King    | Emperor
Endgame year                                     | 2100 AD   | 2080 AD | 2060 AD | 2040 AD | 2020 AD
Starting Cash                                    | 50        | 0       | 0       | 0       | 0
Content Citizens [1]                             | 6         | 5       | 4       | 3       | 2
CP Rows of Food [2]                              | 16        | 14      | 12      | 10      | 8
CP Resource Cost Multiplier [3]                  | 1.6       | 1.4     | 1.2     | 1.0     | 0.8
CP Lightbulb Increment Per Advance [4]           | +14       | +13     | +12     | +11     | +10
Human Player Lightbulb Increment Per Advance [4] | +6        | +8      | +10     | +12     | +14
Barbarian Unit Attack Strength Multiplier        | 0.25      | 0.50    | 0.75    | 1.00    | 1.25
Parley Coin Demands Multiplier [5]               | 0.25      | 0.50    | 0.75    | 1.00    | 1.25
Civilization Score Multiplier [6]                | 0.02%     | 0.04%   | 0.06%   | 0.08%   | 1.00% (?)
[1] The number of people who will be created in a city who are born content.
[2] The number of rows in the computer player's food storage box (i.e. how long it will take for a city to grow). The number of rows in the human player's box is always 10.
[3] All computer players have their costs to build units and city improvements multiplied by this amount.
[4] Each time an advance is discovered, the cost (in lightbulbs) of acquiring the next increases by this amount.
[5] In a parley with another leader, enemy leaders will often demand payment for peace. Note that they will always offer peace at the Chieftain difficulty level, as if the player permanently has the Great Wall or United Nations.
[6] The score is calculated the same on all difficulty levels. However, when it comes to high score ranking, the final score is converted to a percent in order to account for the difficulty.
 

Jumi

Well-Known Member
You made a segue there from a "basic" program to AI.
I agree with what you said about basic programs if you are using that term in the sense of an accounting suite. Accounting systems have no "rewards" built into them.

If something is programmed "to give the illusion to players that the enemy is thinking" then we are looking at an AI, however rudimentary it may be. If an "enemy" determines that it makes more sense to go around a hill than over a hill, then that AI is making decisions based on risk and reward, just like we do.
There's no risk and reward there. Just a set of rules, an equation that the programmer has made. There is no learning or thinking. It's like an automated breaker in your house: under conditions set by the manufacturer, it shuts off the electricity. There's no reward or risk for the breaker. It always reacts the same way.

Fake AIs are easy to fool once you know what they always do. That's why people beat games like Civilization on the highest difficulty with routine efficiency. You know the enemy doesn't think, even if it's given material advantages, and you know its faults.
 

Jumi

Well-Known Member
And perhaps humans just look like they have feelings.
Are you arguing that all the research spent in psychology and related fields is false?

Positive feedback? Well, of the sort that inclines the program to do that again, to keep looking at the beautiful sunset, say.
There is no beautiful sunset. For a modern machine there is never a reward, the reward is only in your mind, not the robot's.

I certainly agree that we don't have the ability *yet* to produce a true AI. But emotions (feelings) are just another aspect of how the brain works with information. I truthfully see no reason we couldn't have a machine with feelings. We are, after all, biological machines with feelings.
It's interesting that you think that way, it's more of a philosophical position than based on what we know. And biological machines with feelings, I thought you said "perhaps humans just look like they have feelings?"

Let's do a little thought experiment. Suppose we take a conscious, feeling human. And then, one-by-one, we replace each neuron with a semiconductor that does exactly what the neuron does: takes in signals from connections, adds them up, and then produces a signal if a threshold is met. Suppose we even program that chip to act differently with different neurotransmitters.

After all the neurons have been replaced, would not the resulting 'robot' be conscious and have feelings?
I do believe that given sufficient structure it will "have feelings", but I don't have evidence for it.

I'm skeptical that we can reproduce things that behave "exactly" like neurons, though it's a subject of considerable interest to me as a cyberpunk fan. We can mimic to a large degree and even make what to us are improvements in our body, but we need to have some realism as to what can be done.
 

Polymath257

Think & Care
Staff member
Premium Member
Are you arguing that all the research spent in psychology and related fields is false?

No, I don't. I think we have feelings and that those feelings are a result of processes in the brain. I see no reason why similar processes can't happen in a robot.

There is no beautiful sunset. For a modern machine there is never a reward, the reward is only in your mind, not the robot's.

Well, what is 'reward' for a human? It is a signal that gets transmitted to the reward center of our brain, which has connections to a number of other centers of the brain, including ones dealing with body sensations (so we 'feel giddy').

It's interesting that you think that way, it's more of a philosophical position than based on what we know. And biological machines with feelings, I thought you said "perhaps humans just look like they have feelings?"

I said that about human feelings to counter the claim that robots could only 'look like' they have feelings. What we *know* is that emotions, thoughts, feelings, etc are ALL mediated by the brain. It is the connections between different regions of the brain, the feedback between those regions, the pathways of sensory information and motor function, etc, that determine what we think and feel. Among those who study the brain, there is no doubt about this issue.

I do believe that given sufficient structure it will "have feelings", but I don't have evidence for it.

My main reason is that we are chemical machines interacting with our environment and with an evolutionary history. I don't see anything substantial that can't be done in a non-biological machine given sufficient sophistication.

I'm skeptical that we can reproduce things that behave "exactly" like neurons, though it's a subject of considerable interest to me as a cyberpunk fan. We can mimic to a large degree and even make what to us are improvements in our body, but we need to have some realism as to what can be done.


My basic intuition is that we won't achieve true AI by simply trying to program it. It is way too complicated a system. Instead, I think we need to have *generations* of machines competing in an environment with 'reproduction', mutation, and selection. Ultimately, my intuition is that consciousness is something that happens via interaction with an external world.
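The 'generations with reproduction, mutation, and selection' idea above is the basic loop of an evolutionary algorithm. Here is a toy version, evolving numbers toward an arbitrary target; the fitness function, population size, and mutation scale are all invented for illustration, not a claim about how real AI would be evolved:

```python
# A toy evolutionary loop: selection keeps the fitter half, reproduction
# copies survivors with random mutation, repeat for many generations.
import random

def fitness(x, target=42.0):
    return -abs(x - target)          # closer to the target = fitter

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]          # selection
        population = survivors + [
            parent + rng.gauss(0, 1.0) for parent in survivors  # mutation
        ]
    return max(population, key=fitness)

best = evolve()   # ends up close to the target, 42
```

Nothing is programmed to know the answer; the environment (here, the fitness function) does the shaping, which is the intuition the post expresses.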
 

ecco

Veteran Member
There's no risk and reward there. Just a set of rules, an equation that the programmer has made. There is no learning or thinking. It's like an automated breaker in your house: under conditions set by the manufacturer, it shuts off the electricity. There's no reward or risk for the breaker. It always reacts the same way.

The term AI is tossed around quite freely. However, there is a world of difference between the AI in Civ and the AI in current versions of IBM's Watson.

Your comment is reasonably accurate for Civ. It is not correct for Watson and the AI in the advanced chess and Go engines.

The AI in Civ is not intended to "learn" from playing. Playing at a certain level should provide relatively the same experience time after time.

Fake AIs are easy to fool once you know what they always do. That's why people beat games like Civilization on the highest difficulty with routine efficiency. You know the enemy doesn't think, even if it's given material advantages, and you know its faults.

I haven't seen too many posts of people bragging that they can consistently beat Civ on the highest difficulty levels.
 

Jumi

Well-Known Member
The term AI is tossed around quite freely. However, there is a world of difference between the AI in Civ and the AI in current versions of IBM's Watson.

Your comment is reasonably accurate for Civ. It is not correct for Watson and the AI in the advanced chess and GO games.
Yes, because IBM's projects are actual serious attempts at AI, but Civ's is "fake AI", like most things people call AI, i.e. not real AI.

The AI in Civ is not intended to "learn" from playing. Playing at a certain level should provide relatively the same experience time after time.
It's a decision tree. Like a board game where you throw dice and the game's rules tell you something; Civ's AI is about as complex, and about as intelligent. The programmer was intelligent in making rules that made for an enjoyable experience.

I haven't seen too many posts of people bragging that they can consistently beat Civ on the highest difficulty levels.
I don't know why it would be bragging as Civ is much easier than chess. There are people who do win consistently on the highest difficulty level. The basic strategy to win most of the time is easier to memorize than common chess openings.

Search Google for a site called CivFanatics if you're skeptical.
 

Jumi

Well-Known Member
No, I don't. I think we have feelings and that those feelings are a result of processes in the brain. I see no reason why similar processes can't happen in a robot.
Similar, maybe, but people tend to anthropomorphize animals too. It's territory to walk carefully on.

Well, what is 'reward' for a human? It is a signal that gets transmitted to the reward center of our brain, which has connections to a number of other centers of the brain, including ones dealing with body sensations (so we 'feel giddly').
It's a philosophical question and answer. The fact remains that a modern-day robot can't be rewarded; we can only make it hum or something to make us feel like it's alive. The Japanese are pretty good at this with their robots used to train healthcare professionals. The realistic feel is just that. There is no pleasure for a robot any more than there is pleasure for a car that gets "rewarded" with gasoline and starts running again.

I said that about human feelings to counter the claim that robots could only 'look like' they have feelings. What we *know* is that emotions, thoughts, feelings, etc are ALL mediated by the brain. It is the connections between different regions of the brain, the feedback between those regions, the pathways of sensory information and motor function, etc, that determine what we think and feel. Among those who study the brain, there is no doubt about this issue.
A reiteration of "humans only look like they have feelings"?

Yes, we see that things happen in the brain and CNS when we observe them. It would be hubris to say that we know completely what goes on. As someone who has suffered chronic pain and faced neurologists completely clueless as to what's going on, I'd say that everything is not "simple"; it's not up to the level of, say, physics.

My main reason is that we are chemical machines interacting with our environment and with an evolutionary history. I don't see anything substantial that can't be done in a non-biological machine given sufficient sophistication.
That's why I think you are taking a philosophical approach. We don't know what the sufficient sophistication might be or whether it's achievable. We may produce something that emulates the real deal, but what are the tests it can pass?

My basic intuition is that we won't achieve true AI by simply trying to program it. It is way too complicated a system. Instead, I think we need to have *generations* of machines competing in an environment with 'reproduction', mutation, and selection. Ultimately, my intuition is that consciousness is something that happens via interaction with an external world.
I believe consciousness is there when there is something capable of holding it (somewhat similar to "sufficient complexity", but also different, because we might have more complex systems that aren't self-conscious AIs), and it requires more introspection than environment. The environment only provides feedback for things like skills.
 

Polymath257

Think & Care
Staff member
Premium Member
It's a philosophical question and answer. The fact remains that a modern-day robot can't be rewarded; we can only make it hum or something to make us feel like it's alive.

This seems to be the key issue. But what does it mean to say there is a 'reward'? Isn't it ultimately just a feedback that increases the likelihood of seeking that or similar situations again? The reason we 'feel good' is because the pleasure centers of our brains are stimulated. That makes us want to do the thing again.

Now, in living things, those rewards are, in early forms, related to survival essentials: food, water, shelter, mating, etc. They feel good to us *because* those ancestors that wanted to do them again were able to pass on their genes. That squirt of dopamine *is* the reward.

I see no reason why, in theory at least, we could not program a feedback loop to encourage certain behaviors and discourage others. And that feedback loop *would* be the reward.
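The feedback loop described above can be sketched concretely: an action that triggers 'reward' becomes more likely to be chosen again. The action names and reward numbers are invented for illustration:

```python
# A minimal reward loop: reinforced actions are chosen more often.
import random

preferences = {"look_at_sunset": 1.0, "look_at_wall": 1.0}

def choose_action(rng):
    # Pick an action with probability proportional to its preference.
    actions, weights = zip(*preferences.items())
    return rng.choices(actions, weights=weights)[0]

def give_feedback(action, reward=0.5):
    preferences[action] += reward    # the 'circuit activity' increase

rng = random.Random(1)
for _ in range(100):
    action = choose_action(rng)
    if action == "look_at_sunset":   # the behavior being encouraged
        give_feedback(action)
# Afterward, 'look_at_sunset' dominates the preference table: the
# program now 'wants' to keep looking at the sunset, in the functional
# sense the post describes.
```

Whether that functional loop amounts to the robot being rewarded, or only to us describing it that way, is the disagreement in the following posts.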
 

Jumi

Well-Known Member
This seems to be the key issue. But what does it mean to say there is a 'reward'? Isn't it ultimately just a feedback that increases the likelihood of seeking that or similar situations again? The reason we 'feel good' is because the pleasure centers of our brains are stimulated. That makes us want to do the thing again.
Behaviorism notwithstanding, rewards and pleasure have a combination set that doesn't include all rewards and pleasures. A robot isn't introspective or instinctual, so abstract/intuitive concepts will only mean something to us, not to the robot "watching the sunset".

Now, in living things, those rewards are, in early forms, related to survival essentials: food, water, shelter, mating, etc. They feel good to us *because* those ancestors that wanted to do them again were able to pass on their genes. That squirt of dopamine *is* the reward.
Yes, that was the behavioristic argument. It fits some observations, but lacks in others.

I see no reason why, in theory at least, we could not program a feedback loop to encourage certain behaviors and discourage others. And that feedback loop *would* be the reward.
Yes, an observer might think it's a reward. I can make a loop that stores happiness as an object, with various values stored in it that drop over time and increase with events. That doesn't make the program feel any reward, though. Our object is imaginary. The real execution inside the CPU doesn't "feel" anything. Sure, we can increase voltage or amperage, but the chip running it won't work like a brain. It all depends on us creating abstract concepts and nodding to ourselves about it. Only once the robot develops instincts or introspective capabilities will we notice anything worth considering a "reward" for it.
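The happiness-object program described above, written out (names and numbers are invented for illustration):

```python
# A 'happiness' object whose value decays over time and rises on events.
# The label is ours; the machine only updates a number.

class Happiness:
    def __init__(self):
        self.level = 10.0

    def tick(self):
        self.level = max(0.0, self.level - 1.0)   # drops as time passes

    def event(self, boost):
        self.level += boost                        # rises on an event

h = Happiness()
h.tick(); h.tick()   # time passes: 10 -> 8
h.event(5.0)         # something 'good' happens: 8 -> 13
```

The variable moved from 10 to 13; whether anything was 'felt' is exactly the question the thread is disputing.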
 