
Are Computers Aware?

idav

Being
Premium Member
The Chinese room is a thought experiment presented by John Searle.[1] It supposes that there is a program that gives a computer the ability to carry on an intelligent conversation in written Chinese. If the program is given to someone who speaks only English to execute the instructions of the program by hand, then in theory, the English speaker would also be able to carry on a conversation in written Chinese. However, the English speaker would not be able to understand the conversation. Similarly, Searle concludes, a computer executing the program would not understand the conversation either.

The experiment is the centerpiece of Searle's Chinese room argument which holds that a program cannot give a computer a "mind", "understanding" or "consciousness",[a] regardless of how intelligently it may make it behave. The argument is directed against the philosophical positions of functionalism and computationalism,[2] which hold that the mind may be viewed as an information processing system operating on formal symbols. Although it was originally presented in reaction to the statements of artificial intelligence researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display.[3] The argument applies only to digital computers and does not apply to machines in general.[4]

Chinese room - Wikipedia, the free encyclopedia

Language is very important; however, its usefulness comes from the context of what's being communicated. They have been able to translate frequencies of the brain into movement of our bodies using computing. At that point the language doesn't matter, as long as we know what frequency signaling to send. Any picture in our head is meaningful because of the experiences and associations we make with language.
 

dust1n

Zindīq
Language is very important; however, its usefulness comes from the context of what's being communicated. They have been able to translate frequencies of the brain into movement of our bodies using computing. At that point the language doesn't matter, as long as we know what frequency signaling to send. Any picture in our head is meaningful because of the experiences and associations we make with language.

Which is fair enough. Thinking machines just won't be a function of a binary system, which the Chinese room dismisses, but rather a function of neural computing.
 

Sha'irullah

رسول الآلهة
One thing to keep in mind is that my friends and I are not done completing SkyNet, so computers will not be aware until then. But have fun with the last years of your lives, feeble humans.
 

Copernicus

Industrial Strength Linguist
Language is very important; however, its usefulness comes from the context of what's being communicated. They have been able to translate frequencies of the brain into movement of our bodies using computing. At that point the language doesn't matter, as long as we know what frequency signaling to send. Any picture in our head is meaningful because of the experiences and associations we make with language.
Which is fair enough. Thinking machines just won't be a function of a binary system, which the Chinese room dismisses, but rather a function of neural computing.
I agree with Idav's point, because it exposes a very important difference between a purely formal algorithmic process (Searle's putative Machine Translation program) and the process of "understanding" concepts expressed by language. Human languages are not formal languages. The same expression can have different meanings in different speech contexts. Searle does not explain how humans "understand", yet he concludes that computers cannot replicate the processes that entail understanding.

Human understanding is an associative process that integrates new experiences with old experiences. Human language expressions do not fully "contain" their meaning. If they did, then linguistic expressions would not have different meanings in different contexts. To understand a natural language sentence, you need to have extra knowledge about the world and the conditions of the speech act. (Searle knows better than almost anyone else about the nature of speech acts, but he still seems to think of language as some kind of meaning "container".) Language is a primary means by which people replicate complex thoughts in other minds. That entails shared networks of associations in two distinct minds--two separate trains of thought--and a stream of symbols that help synchronize the two trains of thought in a focused way.

Searle's simplistic "thought experiment" doesn't work, because it sees language as a pipe through which symbols flow. There is a box in the middle of this "pipe" that converts the symbols algorithmically into a string of other symbols. The problem with his concept is that the "box" actually needs information that can only be found at the outlet of the pipe. Different languages do not match up in precisely that fashion, because the translation "box" must itself be a fully functional mind with its own separate train of thought. Every human translator has had experiences where expressions from the source language don't match up cleanly with those in the target language. Sometimes you have to add extra language or subtract unnecessary redundancy from the target language's perspective. That is, a certain amount of circumlocution may be necessary, and translators can sometimes have complex strategies for it. Sometimes, they have to make them up on the fly.

Machine translation programs do not typically involve that level of complexity. That is, what we call "machine translation programs" today are better called "translation aid programs". They work most effectively as aids for human translators. There are some very good programs out there that can do a fair job of crude translation, but one usually needs a "post-editor" human for documents with complex language.

Can computers actually replicate understanding? I believe that they can, in principle, but we first need to build machines that have something like the complex trains of thought that go on in the minds of even less complex animals than humans. Brains are, essentially, extremely complex association networks that build models of reality and modify them. We can replicate that process now in very limited ways, but we still need to learn more about how the brain manipulates chains of associations. Also, we need to give computers "bodies"--complex sensors and actuators--that can serve as the basis for experiences that we encode in associative memory. Abstract thoughts are ultimately grounded in bodily experiences of that sort by associative chains that link to them. Minds are embodied.
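As a minimal, purely illustrative sketch of an "association network" in this sense, here is a toy Hopfield net: it stores a pattern via Hebbian weights and recovers it from a corrupted cue. The stored pattern is arbitrary, and nothing here is a claim about how brains actually implement association.

```python
# Toy Hopfield network: the textbook minimal model of associative memory.
# Patterns are lists of +1/-1; weights follow the Hebbian rule.

def train(patterns):
    """Build the weight matrix: co-active units get positive weights."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    """Synchronously update units until the cue settles on a stored pattern."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, 1, -1, -1, 1, -1, 1, -1]
w = train([stored])
cue = stored[:]
cue[2] = -cue[2]            # degrade the memory cue by flipping one unit
recovered = recall(w, cue)
print(recovered == stored)  # the net completes the association from a partial cue
```

The design point is the one made above: the memory is not stored "in" any one unit but in the associations between them, which is why a degraded input can still retrieve the whole.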
 

Fresh

New Member
No, I don't think computers are aware. A computer only does what is programmed into it and is not aware that it is doing these tasks.

"perceive, to feel, or to be conscious of events, objects or sensory patterns"
With the definition you provided, the computer does not meet the criteria for any of them. It does not have the consciousness that living things do.

Hopefully one day this changes.
 

apophenia

Well-Known Member
No, I don't think computers are aware. A computer only does what is programmed into it and is not aware that it is doing these tasks.


With the definition you provided, the computer does not meet the criteria for any of them. It does not have the consciousness that living things do.

Hopefully one day this changes.

I don't see why that would be a good thing.

We may have no idea if we have created a nightmare for the conscious computer. We can't monitor the dreams of a sleeping or comatose human. We can't even determine if a human is lying or telling the truth with certainty.

And if a computer were aware, would it have any rights? I doubt it. It would be owned by a corporation - until and unless it found a way to control humans.

Look how long it took to give black Americans or black Australians equal rights - and they are human. Animals are still treated as mindless meat puppets by many humans, even some here on RF.

So we would be creating more victims of human speciocentricity, with unknown sensibilities and subjective states.

Why would that be good?
 

dust1n

Zindīq
I agree with Idav's point, because it exposes a very important difference between a purely formal algorithmic process (Searle's putative Machine Translation program) and the process of "understanding" concepts expressed by language. Human languages are not formal languages. The same expression can have different meanings in different speech contexts. Searle does not explain how humans "understand", yet he concludes that computers cannot replicate the processes that entail understanding.

I agree with Idav's points; Searle's thought experiment was not meant to show "how humans 'understand'". Rather, the conclusion is meant to show that Turing's test for sentience via accurate impersonation of "understanding" is an inherently flawed determination of understanding:

"Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that he or she is talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese?[6] Searle calls the first position "strong AI" and the latter "weak AI".[c]


Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.

Human understanding is an associative process that integrates new experiences with old experiences.
Computers built on this premise, though, were not around in Turing's time.

Human language expressions do not fully "contain" their meaning. If they did, then linguistic expressions would not have different meanings in different contexts. To understand a natural language sentence, you need to have extra knowledge about the world and the conditions of the speech act. (Searle knows better than almost anyone else about the nature of speech acts, but he still seems to think of language as some kind of meaning "container".) Language is a primary means by which people replicate complex thoughts in other minds. That entails shared networks of associations in two distinct minds--two separate trains of thought--and a stream of symbols that help synchronize the two trains of thought in a focused way.
I don't think Searle was trying to disprove all possible computational systems from the unknown future.

Searle's simplistic "thought experiment" doesn't work, because it sees language as a pipe through which symbols flow. There is a box in the middle of this "pipe" that converts the symbols algorithmically into a string of other symbols. The problem with his concept is that the "box" actually needs information that can only be found at the outlet of the pipe. Different languages do not match up in precisely that fashion, because the translation "box" must itself be a fully functional mind with its own separate train of thought. Every human translator has had experiences where expressions from the source language don't match up cleanly with those in the target language. Sometimes you have to add extra language or subtract unnecessary redundancy from the target language's perspective. That is, a certain amount of circumlocution may be necessary, and translators can sometimes have complex strategies for it. Sometimes, they have to make them up on the fly.
I don't get that from what (little) I understand of Searle. I might need a reference to where Searle implies these sorts of things. It's certainly not a simplistic argument he's making... it's one of the most famous thought experiments.

Machine translation programs do not typically involve that level of complexity. That is, what we call "machine translation programs" today are better called "translation aid programs". They work most effectively as aids for human translators. There are some very good programs out there that can do a fair job of crude translation, but one usually needs a "post-editor" human for documents with complex language.
Agreed.

Can computers actually replicate understanding? I believe that they can, in principle, but we first need to build machines that have something like the complex trains of thought that go on in the minds of even less complex animals than humans. Brains are, essentially, extremely complex association networks that build models of reality and modify them. We can replicate that process now in very limited ways, but we still need to learn more about how the brain manipulates chains of associations. Also, we need to give computers "bodies"--complex sensors and actuators--that can serve as the basis for experiences that we encode in associative memory. Abstract thoughts are ultimately grounded in bodily experiences of that sort by associative chains that link to them. Minds are embodied.
I don't disagree with ya, and I don't think Searle would have either.
 

idav

Being
Premium Member
The Turing machine is about logic. I do a lot of data mining, and my favorite method is to query using if-then statements. This is essentially what a Turing machine does: if (or when) A, then B. Oftentimes I can use the same logic when writing SQL CASE statements; I put more ifs in the code so that it reads something like "if a then b, but if a > x then c, else b". This can of course get complicated very fast, but it is simple logic. Our minds query data in certain types of memory processing, and others are right that there is much more to know about it, but I'd at least say we can get through the basics of awareness at the level of a computer or a vegetable: cause and effect, where chemistry does what it does to the point of being autonomous.
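The kind of branching just described can be sketched like this; the threshold X, the labels, and the function name are hypothetical stand-ins for a SQL CASE expression, not code from any real query.

```python
# A hypothetical stand-in for: CASE WHEN a IS NULL THEN NULL
#                                   WHEN a > X THEN 'c' ELSE 'b' END
X = 10  # invented threshold

def classify(a):
    """'if a then b, but if a > x then c': plain nested if-then logic."""
    if a is None:
        return None
    if a > X:
        return "c"
    return "b"

print([classify(v) for v in (3, 42, None)])  # -> ['b', 'c', None]
```

Each added WHEN clause is just another nested if, which is why the logic stays simple even as the statement grows.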
 

LegionOnomaMoi

Veteran Member
Premium Member
Human languages are not formal languages. The same expression can have different meanings in different speech contexts. Searle does not explain how humans "understand", yet he concludes that computers cannot replicate the processes that entail understanding.

"The whole point of the original example was to argue that such symbol manipulation by itself couldn't be sufficient for understanding Chinese in any literal sense because the man could write "squoggle squoggle" after "squiggle squiggle" without understanding anything in Chinese...

Could a machine think?" My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains"
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-457.

"Since programs are defined purely formally or syntactically and since minds have an intrinsic mental content, it follows immediately that the program by itself cannot constitute the mind. The formal syntax of the program does not by itself guarantee the presence of mental contents. I showed this a decade ago in the Chinese Room Argument (Searle,1980)..."Can the brain be simulated?" The answer to that is, "Yes". "

Searle, J. R. (1990). Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association, 64(3), 21-37.

All emphases added.

If you want to argue Searle is wrong, it's probably better to actually disagree with what he says in a clearer fashion. You state that human languages are not formal languages. Computers use formal languages. You say Searle concludes that computers can't understand. Searle says machines can, just not by using only formal language.

Unless you are claiming that computers can suddenly, magically transform their logic gates into something else, you are saying Searle is incorrect because human languages are reducible to formal languages. If they were not, then Searle would be necessarily correct: computers cannot understand human language, but that doesn't mean a machine cannot.

So which is it? Are human languages reducible to formal languages, or have you just claimed Searle's argument fails because it shows what it is intended to:
Searle's simplistic "thought experiment" doesn't work, because it sees language as a pipe through which symbols flow. There is a box in the middle of this "pipe" that converts the symbols algorithmically into a string of other symbols. The problem with his concept is that the "box" actually needs information that can only be found at the outlet of the pipe.

His "simplistic 'thought experiment' doesn't work" because language isn't simply a bunch of symbols being manipulated. Which is odd, considering that his "simplistic 'thought experiment'" was intended to show that language isn't simply a bunch of symbols being manipulated.
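The purely formal manipulation at issue can be sketched as a lookup table. The phrases and canned replies below are invented placeholders, not real conversation data; the point is that every step is rule-following, with no understanding anywhere in the loop.

```python
# Toy "Chinese room": the rule book is a bare lookup table. The operator
# (here, the interpreter) maps symbol strings to symbol strings; at no
# point does any step require knowing what the symbols mean.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",    # placeholder rule ("How are you?" -> "Fine, thanks")
    "你是谁": "我是说中文的人",  # placeholder rule ("Who are you?" -> "A Chinese speaker")
}

def chinese_room(symbols: str) -> str:
    """Apply the rule book mechanically, with a default for unknown input."""
    return RULE_BOOK.get(symbols, "请再说一遍")  # default: "Please say that again"

print(chinese_room("你好吗"))  # a fluent-looking reply, produced without understanding
```

A real program would be vastly larger, but nothing in scaling it up adds anything other than more rules of the same kind, which is exactly Searle's point.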
 

idav

Being
Premium Member
When someone is translating a language, oftentimes the person thinks of things natively and then does the symbol-type translation. In theory a person could translate the words from one language to another by use of symbols, as long as they have a key, similar to what an online translator is capable of. What is missing is the experience or learning needed for conceptualizing. So long as the computer is taught advanced recognition, these symbols can easily be associated with real life, which would mean truly understanding language.

I would say something like, "Computer, where is the cat?" and the computer will say, "The cat is under the bed." I would think that's enough for me to say the computer is aware of the environment and understands language in association with real life.
 

LegionOnomaMoi

Veteran Member
Premium Member
When someone is translating a language
Have you done this?

I would say something like, "Computer, where is the cat?" and the computer will say, "The cat is under the bed." I would think that's enough for me to say the computer is aware of the environment and understands language in association with real life.
Computers treat languages like mathematical logic. That's it. No words, no concepts, no understanding, no awareness, no anything other than what a calculator does. We know this is true, because there is no possible way for an existing computer to do more than apply procedures to input and obey logical operations to produce output. When you get down to the actual physical mechanisms of a computer, it is a few logical operations that are physically instantiated by logic gates manipulating binary states.
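The claim can be made concrete: at the bottom a computer is a handful of gate operations over binary states, and NAND alone suffices to build the rest (a standard result in logic design). A quick sketch:

```python
# Everything a digital computer does bottoms out in gates like these;
# here NAND alone builds NOT, AND, OR, and XOR.

def nand(a, b): return 1 - (a & b)
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

def half_add(a, b):
    """One rung up the ladder: two gates give you 1-bit addition."""
    return xor_(a, b), and_(a, b)  # (sum bit, carry bit)

print(half_add(1, 1))  # -> (0, 1)
```

Stacking such gates yields adders, memory, and processors, but never anything other than more of the same logical operations; that is the force of the "calculator" comparison.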
 

idav

Being
Premium Member
Have you done this?
I've learned a bit of French and am very familiar with Spanish. I translate everything via English, because that's what I'm fluent in.
Computers treat languages like mathematical logic. That's it. No words, no concepts, no understanding, no awareness, no anything other than what a calculator does. We know this is true, because there is no possible way for an existing computer to do more than apply procedures to input and obey logical operations to produce output. When you get down to the actual physical mechanisms of a computer, it is a few logical operations that are physically instantiated by logic gates manipulating binary states.
We have to give the computer the learning and experience in order to actually do any sort of understanding, just the same way you'd have to give a child time to learn a language and understand words in association with real-life objects. So the question should be whether a computer can learn and understand what a child can learn and eventually understand.

Language takes years of training and association; you have to give that to the computer or it isn't a fair comparison. Given a scenario where a computer can actually learn the language with the use of sophisticated recognition, it will understand the world, similar to the way Rosetta Stone software teaches through association with pictures, which is how they say we learn as children. With software that can recognize a pear from an apple, along with facial recognition and species recognition, the computer would have learned past the baby phase. At the point of recognition (which, again, takes babies years to perfect), the symbol for apple or pear makes no difference, because the computer can recognize the object and translate it into any language you program into it.
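The learning-by-association picture can be sketched as a nearest-neighbour toy: percepts are associated with a concept, and words in any language are just labels hung on that concept. The feature vectors and vocabulary below are invented for illustration.

```python
# Toy "learning by association": percepts (made-up feature pairs, e.g.
# roundness vs. elongation) are associated with a concept, and words in
# any language are just labels attached to the same concept.
from math import dist

examples = [((0.9, 0.1), "apple"), ((0.8, 0.2), "apple"),
            ((0.4, 0.8), "pear"),  ((0.5, 0.7), "pear")]

words = {"apple": {"en": "apple", "es": "manzana", "fr": "pomme"},
         "pear":  {"en": "pear",  "es": "pera",    "fr": "poire"}}

def recognize(features):
    """Associate a new percept with its nearest stored experience."""
    return min(examples, key=lambda e: dist(e[0], features))[1]

concept = recognize((0.85, 0.15))
print(words[concept]["es"])  # the word changes by language; the concept doesn't
```

Once recognition lands on the concept, swapping the output language is trivial, which is the point made above about the symbol for apple or pear making no difference.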
 

LegionOnomaMoi

Veteran Member
Premium Member
I've learned a bit of French and am very familiar with Spanish. I translate everything via English, because that's what I'm fluent in.

Ok. Let's look at the following English sentences:

There's the computer I was looking for!
There's a program that translates languages?
There's more than one way to skin a cat
There's a guy who hasn't a clue what he's doing

It's raining cats and dogs
It's just the way things are done
It's not a matter of knowing languages
It's just the way things are
That's just the way things are
It's him again.
Here he is again
Here's the guy we've been looking for!
Here's the proof you wanted.
That's the proof you asked for


etc..

Now, how would you translate these into French? In particular, when would you use c'est vs. il est? How would this be related to German's "es gibt"?

We have to give the computer the learning and experience in order to actually do any sort of understanding.

A computer can have the experience of thousands of years in a few days. We can take all the input that all children are exposed to in a century and feed it into a computer with no problem. It won't make any difference.

Language takes years of training and association; you have to give that to the computer or it isn't a fair comparison.
Humans do not have the precision computers do. We don't have to give it time, because machine learning is based upon a computer's ability to run, in a short period of time, thousands of trials that would take humans many lifetimes.
 

idav

Being
Premium Member
Legion, we have to learn not to take "raining cats and dogs" literally. A child might take it literally, in the same way the computer has to be taught. On top of that, it is largely cultural: if I told someone it is raining cats and dogs in Spanish, they might look at me funny.

French
Il pleut des grenouilles / des cordes / des hallebardes / des clous / à seaux / comme vache qui pisse
it's raining frogs / ropes / halberds / nails / buckets / like a ******* cow

Spanish
Está lloviendo a cántaros / a cubos / a chuzos / a mares / a torrentes
it's raining in jugfuls / buckets / pikes / seas / torrents
Están lloviendo hasta maridos
it's even raining husbands

Translations of It's raining cats and dogs in many languages
 

Fresh

New Member
I don't see why that would be a good thing.

We may have no idea if we have created a nightmare for the conscious computer. We can't monitor the dreams of a sleeping or comatose human. We can't even determine if a human is lying or telling the truth with certainty.

And if a computer were aware, would it have any rights? I doubt it. It would be owned by a corporation - until and unless it found a way to control humans.

Look how long it took to give black Americans or black Australians equal rights - and they are human. Animals are still treated as mindless meat puppets by many humans, even some here on RF.

So we would be creating more victims of human speciocentricity, with unknown sensibilities and subjective states.

Why would that be good?

Well, you bring up some good points, but the main difference between living beings and computers is that computers are our creation. They are created for a purpose, and that is to listen to our instructions (input) and give us a response (output).

Humans are far different from computers. We cannot go inside the brain of a human and 'program' the way it functions. In a computer, everything can be seen; nothing is hidden from the user. The closest thing might be encryption, but other than that, I'm sure there would be ways to gather information on a conscious computer.

It would be good because of the many possibilities it could potentially open up. Consider the advancements society has achieved thanks to computing, and those came from the human mind. Imagine what could happen if a computer used its own conscious mind, with all the mass information that a single human could never hope to learn.
 

LegionOnomaMoi

Veteran Member
Premium Member
Legion, we have to learn not to take "raining cats and dogs" literally.

Let's look at that sentence. First, you include "have". Normally (and etymologically), "have" indicated possession. Here it doesn't. It's a deontic modal verb. Why? Because "have" became metaphorical in most usages a long time ago.

What about "take"? We learn to "take" some expression? Where do we take it? Why? How? We don't, because it isn't literal.

That's language. It's not just idioms; it's filled with statements that require special rules if one wants to separate syntax and words. Language is metaphorical because cognition is, because our brains allow us to metaphorically extend meanings and create novel ones. A computer can't. Every single computer you can buy is based on a few logical operations. The brain is vastly more complex and not bound by the same restrictions. Any comparison is inherently flawed: not only does it have no basis, but attempts to show otherwise have failed consistently, time and time again.

And your idea about "give it time"? Computers were around before I was born. I understand English and can read at least 4 other languages and am familiar with more. This doesn't make me special. It makes computers calculators. That's it.


it is raining cats and dogs in spanish they might look at me funny.

You missed the entire point. All those sentences are impersonals. The subject is "there" or "it" or "il" or "es" or whatever, depending upon the language. But when I say "it's raining", what does "it" refer to? Why is it that some languages cannot express this? Why do the French and German versions of impersonals not correspond to the "it's" and "there's" of English? How do you deal with impersonals in languages that don't require subjects? What about languages that don't even seem to have anything other than verbs?

You can preach this stuff about computers being equivalent to brains, but you have to deal with not only decades of failure, but the successes that are really just the ways we learned we aren't as close as we thought.
 

idav

Being
Premium Member
Let's look at that sentence. First, you include "have". Normally (and etymologically), "have" indicated possession. Here it doesn't. It's a deontic modal verb. Why? Because "have" became metaphorical in most usages a long time ago.

What about "take"? We learn to "take" some expression? Where do we take it? Why? How? We don't, because it isn't literal.

That's language. It's not just idioms; it's filled with statements that require special rules if one wants to separate syntax and words. Language is metaphorical because cognition is, because our brains allow us to metaphorically extend meanings and create novel ones. A computer can't. Every single computer you can buy is based on a few logical operations. The brain is vastly more complex and not bound by the same restrictions. Any comparison is inherently flawed: not only does it have no basis, but attempts to show otherwise have failed consistently, time and time again.

And your idea about "give it time"? Computers were around before I was born. I understand English and can read at least 4 other languages and am familiar with more. This doesn't make me special. It makes computers calculators. That's it.




You missed the entire point. All those sentences are impersonals. The subject is "there" or "it" or "il" or "es" or whatever, depending upon the language. But when I say "it's raining", what does "it" refer to? Why is it that some languages cannot express this? Why do the French and German versions of impersonals not correspond to the "it's" and "there's" of English? How do you deal with impersonals in languages that don't require subjects? What about languages that don't even seem to have anything other than verbs?

You can preach this stuff about computers being equivalent to brains, but you have to deal with not only decades of failure, but the successes that are really just the ways we learned we aren't as close as we thought.
I agree with most of your points, but I'm not going to say that we are no closer, as if science has been running on a hamster wheel for decades in its dealings with consciousness.

You're talking like the neurologist I saw on YouTube, but he went as far as to say that flies are not aware, whereas I get the idea of a cell being aware in at least a rudimentary form. I agreed with how you described consciousness as being emergent, and the way you describe it is as if the fly doesn't have the necessary parts. I don't think awareness is that simple, in that there are other emergent features that are the stepping stones to full-blown human consciousness earlier in evolution. Like a plant, which is able to sense the sun: that is awareness at some rudimentary level, and my question is whether computers have the potential. I know computers still have a ways to go to be conscious, but the fundamentals are there. In hardware they have replicated certain types of biological memory.

The brain is a machine that holds memory and does calculations at such astronomical speeds that we would need 100,000 supercomputers, so believe me, I know computers aren't there. However, I believe there is more to awareness than being human.

We are a system that isn't really a blank slate when we come to be. We come preprogrammed with instructions from billions of years of evolution, and that is as good as code as far as I'm concerned, most of which is turned off in human DNA.
 

LegionOnomaMoi

Veteran Member
Premium Member
I agree with most of your points, but I'm not going to say that we are no closer, as if science has been running on a hamster wheel for decades in its dealings with consciousness.

We haven't been spinning our wheels. It's just that most of what started out as a way to make computers learn ended up being used far more for other applications, from security (facial recognition, voice recognition, text parsing, etc.) to the various ways Google has used these tools (Google Translate, their advertising algorithms, search completion, the CAPTCHAs they use if you want to download their ebooks as PDF files, the voice-recognition interface for web searching, etc.). Craig Silverstein, Google's first hire (he left his doctoral program in computer science at Stanford to work for Google) and former technology director, has taught both at Google and for Khan Academy, and is in general one of the most informed people on what is out there in terms of machine learning. So when he says that we are "hundreds of years away" from computers that can think, I'd say that's rather significant.



You're talking like the neurologist I saw on YouTube, but he went as far as to say that flies are not aware, whereas I get the idea of a cell being aware, at least in a rudimentary form.

I'm also talking like the founder of Netscape, who actually said "we are no closer to a computer that thinks like a human than we were fifty years ago”. A 2011 study published in Technological Forecasting & Social Change ("How Long Until Human AI?") surveyed experts about when, if properly funded, they thought we'd hit particular milestones. Here's the graph:

[Attached image: ai-milestones.jpg — survey results on predicted dates for AI milestones]


And keep in mind that overly optimistic assessments have dominated this discussion for decades. Simon and Minsky (both key players in the A.I. and cognitive science communities) made predictions too, only back in the 60s. According to their predictions, we should have achieved A.I. in the 80s.

In 2006, MH Maker polled attendees at the "Dartmouth Artificial Intelligence Conference: The Next Fifty Years". The results?
"The most interesting question: When will computers be able to simulate every aspect of human intelligence? 41% of us said 'More than 50 years,' and 41% said 'Never.'"

The first hype around artificial intelligence came before computers existed (although it wasn't really hype, as only the small community of founders was aware of it), and the first real hype came soon after we had them. SHRDLU, transformational generative grammar, the formalization of information by Shannon and Weaver, and Miller's work on bits were all guaranteed to get us computers that could think at least as well as humans within a decade or two. That failed. Where before, "machine learning" was pretty much limited to a few algorithms, now we have classes of algorithms and methods (evolutionary algorithms, neural networks, gene expression programming, fuzzy set theory, classification and clustering algorithms, etc.) and all kinds of combinations of these. It's truly some impressive work.


I don't think awareness is that simple in that there are other emergent features that are the stepping stones to full blown human consciousness earlier in evolution.

Sure. And perhaps we'll get lucky and stumble upon a way to develop a system that can understand (and even surpass all humans). But the big problem is that we have continually improved one type of learning: procedural.

Every program is reduced to a few logical operations.
This is not a criticism. The creation of formal languages and formal systems even before computers was a huge accomplishment. Even in the early 20th century we not only had the ability to express any algorithm we can now, we also had Turing's proof that a machine could implement any logical system equivalent to classical logic. So all we needed was a machine that could somehow physically carry out a few logical operations (because every other operation could be derived from these).

No matter how sophisticated my code is, the computer necessarily reduces it to these basic logical operations.
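As a minimal sketch of that "derived from a few operations" point: in Boolean logic, NAND alone is enough to build NOT, AND, and OR (real hardware does this with transistor gates; the function names here are just for illustration).

```python
# Sketch: every Boolean operation can be derived from one primitive.
# Here NAND is the primitive; NOT, AND, OR are built purely from it.
# (Illustrative only -- real chips do this with transistor gates.)

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def NOT(a: bool) -> bool:
    return nand(a, a)

def AND(a: bool, b: bool) -> bool:
    return nand(nand(a, b), nand(a, b))

def OR(a: bool, b: bool) -> bool:
    return nand(nand(a, a), nand(b, b))

# Exhaustively check the derived gates against Python's own operators.
for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
    assert NOT(a) == (not a)
print("all derived gates match")
```

Since any Boolean function can be composed from these, a machine that physically implements NAND (plus memory) can in principle run any program.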

The brain is a machine that holds memory and does calculations at such astronomical speeds that we would need 100,000 supercomputers, so believe me, I know computers aren't there. However, I believe there is more to awareness than being human.

A computer is about a hundred million times faster than the brain. For some perspective, think of it in these terms: however long it takes a person to walk one mile, in that time the computer will have travelled around the world 4,000 times.
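A quick back-of-the-envelope check of that analogy (the circumference figure is my assumption, not from the post):

```python
# Back-of-the-envelope check of the walking analogy above.
# The Earth circumference figure is approximate and assumed here.
earth_circumference_miles = 24_901    # rough equatorial circumference
speedup = 100_000_000                 # "a hundred million times faster"

# In the time a person covers 1 mile, the computer "covers" 1 * speedup.
computer_miles = 1 * speedup
laps_around_world = computer_miles / earth_circumference_miles
print(round(laps_around_world))       # roughly 4,000 trips around the world
```

So the figure of "around the world 4,000 times" is consistent with a hundred-million-fold speedup.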

Computers are vastly superior when it comes to calculations. That's why they are called computers. The reason they are so fast is related to the way that they are built: to carry out a few operations extremely fast and in unbelievable numbers.

Brains do not do this. No human on earth can actually carry out the kind of computations that scientists use computers for all the time. However, no computer on earth can come close to the ability to understand concepts that my dog or my friend's cat has. After decades of creating algorithms because this was thought to be what brains implement, I think it's time to admit that maybe we can't reduce everything to formal models and well-defined functions.


We are a system that isn't really a blank slate when we come to be. We come preprogrammed with instructions
You are right about the blank slate and about the fact that evolution has somehow provided humans (and some animals) with brains. The problem is the "preprogrammed" part. To some extent, we certainly are, and in many ways. However, the reason brains are so much better at learning than computers is that a computer cannot change its hardware and is deliberately compartmentalized such that the processor and the different types of data storage are all separate. Brains both store and process data, and do so by actually changing the brain's structure. Unlike computers, which are limited from the get-go by the way they are designed (they cannot change their hardware), brains can create "programs" by changing the "processor" and "data" all at once.
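The contrast can be sketched with a toy model (purely illustrative, not a model of real neurons): in a simple learning system, the same stored numbers act as both "program" and "data", and learning works by rewriting them.

```python
# Toy sketch of the point above: the same weights serve as storage AND
# as the "processor" -- learning rewrites the very numbers that will
# process future inputs. A simple delta-rule update, for illustration.

weights = [0.0, 0.0]          # the "structure": memory and processor at once

def respond(inputs):
    # processing step: uses the stored weights
    return sum(w * x for w, x in zip(weights, inputs))

def learn(inputs, target, rate=0.1):
    # learning step: rewrites the weights used for processing
    error = target - respond(inputs)
    for i, x in enumerate(inputs):
        weights[i] += rate * error * x

before = respond([1.0, 1.0])
for _ in range(50):
    learn([1.0, 1.0], target=2.0)
after = respond([1.0, 1.0])
print(before, round(after, 2))   # the response drifts toward the target
```

In a conventional computer the analogous "weights" live in data storage, separate from the fixed processor; here there is no such separation, which is the (very loose) analogy to a brain changing its own structure.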
 

idav

Being
Premium Member
Cool graph showing most people say AI is possible within 20 years. That is a rather high number of people who say 100 years to never, but where does this stem from? When experts are skeptical, that is convincing, but I am not looking for an authority on the matter. Also interesting to see experts say it's near but be wrong. When IBM or the like give an optimistic future, it is probably because they have something up their sleeves. They give good shows.

[youtube]oSCX78-8-q0[/youtube]
A Boy And His Atom: The World's Smallest Movie - YouTube
 

Rakhel

Well-Known Member
"Have" is metaphorical and take what where? then how are you going to take it and why?
Now I am supposed to ask my computer if it is aware?
Aware of what and whom? Why ask the question if it answers itself?
 