
Humans are like robots. Choice is determined.

Thief

Rogue Theologian
We are still comparing the physical aspects of body to mind to computers?

I've been watching recent documentaries on these topics for quite some time now.
I see that what we do know is sufficient to assess the one missing detail of this discussion as....common.

We all share the ability to one degree or another.
Some of us surrender it without a great deal of regard.

It is that quality of denial.

What we say 'no' to....defines our spirit.

Our efforts to simulate that item in our handiwork have so far failed.

We know when something defies us.
We know how to deal with it.
But we don't know how to create it.
 

LegionOnomaMoi

Veteran Member
Premium Member
Recall my statement that is free from any Anthropocentric definition
That doesn't make it accurate.

Webster defines meaning
in terms of other words. Basically, it defines "meaning" by assuming you will understand the meaning of the definition. It's also a dictionary. If you were hired to create a computer which understood meaning, and went to the customer with a product claiming that it did understand meaning because it fit your interpretation of a dictionary definition, they wouldn't care.

Because meaning is relative to a conceptual (knowledge) framework, software can interpret meaning from its codification (language).
A "conceptual framework" involves "concepts". The reason formal languages exist is to minimize the semantic (conceptual) content in propositions such that they can be implemented using some sort of programming language (which in turn is capable of being reduced to logical operations on bits). If computers worked with conceptual frameworks (and by that I do not mean frameworks with concepts that are meaningful to us because they use words from actual human languages), then we wouldn't need programming languages. We could simply talk to the computers.
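To put that concretely: here is a trivial sketch, in Python, of what is left once the semantics is stripped out (the propositions are made up, and the variable names mean something only to us):

# Purely illustrative: a "formal" proposition is just truth values combined by
# logical operations, which is all a computer ultimately evaluates.
p = True    # "it is raining" (the label means nothing to the machine)
q = False   # "the ground is wet"

implication = (not p) or q   # material implication, p -> q
conjunction = p and q        # p AND q

# The machine "decides" these without any notion of rain or ground.
print(implication, conjunction)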

It doesn't matter that the processes reduce themselves to bits, any more than it matters that biological information and behaviors boil down to spike trains.

It matters a great deal. That's why the approach to language, A.I., and natural language processing has changed over the years. If biology didn't matter, we wouldn't have artificial neural networks, which are simplified models of what neurons do. There would be no need for an approach in A.I. to try to mimic what biological systems do if biology didn't matter. Yet there is, from evolutionary algorithms to swarm intelligence to artificial neural networks. All of these are based on biological systems and are attempts at imitating biological processes using machines.
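To give a sense of just how simplified those models are, here is a bare-bones artificial "neuron" (a perceptron-style sketch, not any particular published model):

# A single artificial "neuron": a weighted sum pushed through a threshold.
# "Learning" is nothing but nudging weights when the output is wrong.
def neuron(inputs, weights, threshold=0.5):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def train_step(inputs, weights, target, rate=0.1):
    error = target - neuron(inputs, weights)
    return [w + rate * error * x for w, x in zip(weights, inputs)]

# Teach it the AND function; no semantic content is involved anywhere.
weights = [0.0, 0.0]
for _ in range(20):
    for inputs, target in [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]:
        weights = train_step(inputs, weights, target)

print([neuron(x, weights) for x in ([0, 0], [0, 1], [1, 0], [1, 1])])  # [0, 0, 0, 1]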

Furthermore, you are assuming that there is some reduction possible such that we can understand how brains create "minds" in terms of the activities of individual neurons (not to mention the component parts of these neurons). A fair amount of evidence, as well as the current epistemological frameworks adopted in the life and (to a lesser extent) physical sciences, holds this to be false.

The notion of processing and storage as separate functions isn't a deal breaker as to how a computer processes information and how a brain does.

Perhaps not. But as we don't know what the deal breaker is, we can't really assert this without acknowledging the lack of empirical support for it.

Also, software designed for vision processing, such as face recognition, most certainly can be done with OOP.
I'm aware. I've worked with this. There's a great volume on this subject, Object Categorization: Computer and Human Vision Perspectives. A project I worked on concerned the relevance of functionality to perceptual classification by humans, and feature extraction in shape analysis in the computer sciences has equivalent issues. However, no matter how related the issues are, there is still a wide gulf: with humans, we want to know how it is that they do what they do. With machines, it's how to get them to do things we want them to do.
 

idav

Being
Premium Member
That is pretty much what my point was (albeit better stated). A common methodology in NLP is the use of things which are like "objects" in OOP in an abstract sense, such as Fillmore's "frames" and the annotation used in FrameNet. But although classes and similar terminology from OOP are used here, they are not used (nor implemented) in the same way. In fact, this approach to parsing languages is (so far as I have seen) quite distinct from any approach designed to enable computers to understand language. Instead, someone who does understand designs a "lexicon" of sorts which is of maximal value for someone with the appropriate algorithm such that the algorithm can bypass semantics as much as possible with as little cost as possible. This is not an approach which will ever (at least in and of itself; perhaps the increase in our understanding of processing and language will) result in a computer capable of understanding language.
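Roughly, the kind of "lexicon" entry I mean looks like this (a made-up toy entry, not FrameNet's actual schema or data):

# A toy "frame"-style lexicon entry. Someone who understands language fills in the
# roles; the parser just matches tokens against them, bypassing semantics.
commerce_buy = {
    "frame": "Commerce_buy",
    "lexical_units": ["buy", "buys", "bought", "purchase"],
    "core_roles": ["Buyer", "Goods"],
}

def shallow_parse(tokens, frame):
    # Assign roles by crude position relative to the trigger word; no understanding involved.
    for i, tok in enumerate(tokens):
        if tok.lower() in frame["lexical_units"]:
            return {"Buyer": tokens[:i], "Goods": tokens[i + 1:]}
    return {}

print(shallow_parse("Mary bought a car".split(), commerce_buy))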


I agree, so long as it is understood that "information" here is used informally. Because machine learning, just like nonassociative learning in animals with only nervous systems rather than brains, does involve integrating new information with established information. However, it is entirely devoid of semantic content. It is akin to saying that I integrate "new information with established information" when I become startled if someone trying to scare me jumps out of a hiding place and grabs me. My response consists of an increased heart rate, loss of fine motor coordination, perhaps tunnel vision, perhaps (thanks to training) the automated adoption of a particular stance or offensive action, etc. But as soon as I realize it was my friend lying in wait to scare me as a prank, all that becomes rather marginal. We can know this using a simple thought experiment: if, instead of a friend whom I recognized shortly after being grabbed by surprise, it was someone I did not recognize with a facial expression and bodily posture that made her or him look like an attacker (rather than a prankster), I would behave entirely differently. The Nobel Prize-winning work with sea snails/slugs and memory and the resulting model of nonassociative learning lacks that differentiation. The "learning" process (which involves the integration of information with established information) showed that such "learning" really involves making mistakes. The sea slug will continue to act as if it were being shocked even when it is not, until it is poked enough times without the shock to become desensitized.

This is the learning that computers currently are capable of: nonassociative. So the question becomes "how do we make something learn concepts when the manner in which it learns not only lacks the capacity to learn any semantic content which might serve as a basis for the integration of additional conceptual knowledge, but was also based on learning models of animals that cannot ever learn concepts?"
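That kind of "learning" can be written down in a few lines (a cartoon of habituation, not a model of any actual organism):

# Nonassociative "learning" in miniature: the response simply decays with repeated
# unreinforced stimuli (habituation) and jumps back up when reinforced (sensitization).
response_strength = 1.0

def poke(shocked):
    global response_strength
    if shocked:
        response_strength = min(1.0, response_strength + 0.5)  # sensitization
    else:
        response_strength *= 0.7                               # habituation
    return round(response_strength, 2)

# Keep poking without the shock: the "slug" keeps over-reacting, then gradually stops.
print([poke(shocked=False) for _ in range(6)])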



Too true.

I don't really see a huge difference between associative and non-associative learning. The only difference is in the organism's ability to self-reflect and know what it is it's learning. I don't see any reason why computers can't do that, certainly more than just reacting to stimuli as demonstrated by a slug. Machines have been shown to solve adaptive problems beyond those of a slug, not to mention that machines can beat humans at chess and Jeopardy, which slugs certainly can't do, let alone any other organism.
 

LegionOnomaMoi

Veteran Member
Premium Member
I don't really see a huge difference between associative and non-associative learning.
Perhaps that is because you have a brain capable of the former, and one which allows you to understand concepts with an ease that makes it difficult to see the difference between the type of learning you are capable of, and that which a machine (or plant) is capable of.

The only difference is in the organism's ability to self-reflect and know what it is it's learning. I don't see any reason why computers can't do that, certainly more than just reacting to stimuli as demonstrated by a slug. Machines have been shown to solve adaptive problems beyond those of a slug, not to mention that machines can beat humans at chess and Jeopardy, which slugs certainly can't do, let alone any other organism.
The ability for machines to do things like recognize faces or learn the rules to a game comes from combining the ability of slugs, squids, etc., to "learn", with a massive amount of storage ability and hardware designed so that we can manipulate it according to well-defined rules. Currently, there is no computer in the world which can do more than approximate the very "simple" living systems current learning algorithms are largely based on.

Chess is a game. The moves are defined, the goals are defined, and in fact everything about it can be described mathematically. Which is key, because computers are calculators. The best A.I. algorithms are a waste of time when it comes to programming a computer to play chess. That's because the best chess programs don't learn anything. They are programmed with quite explicit strategies which maximize the capacity of a computer to run a lot of computations in a very short time. Learning algorithms are inefficient when it comes to environments like this. If we know all the rules, then there is no need to use any A.I.-type approach. It's when things involve ill-defined, fuzzy, abstract notions that learning algorithms become useful. But we're still back at slug-level. The difference is not that computers can do more than slugs in terms of learning, but that we can make them do things and that they have a lot of storage capacity.
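Here is what I mean by explicit strategy plus raw computation (a bare-bones sketch of the idea, nowhere near an actual chess engine):

# Bare-bones minimax: the "strategy" is a human-written evaluation function, and the
# rest is brute-force calculation over a fully defined game tree. Nothing is learned.
# evaluate(), legal_moves() and apply_move() are placeholders a programmer supplies.
def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # e.g. a hand-coded material count
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      evaluate, legal_moves, apply_move) for m in moves]
    return max(scores) if maximizing else min(scores)

The quality of play comes entirely from the human-supplied evaluation function and from how many positions per second the hardware can crunch.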
 

Leonardo

Active Member
in terms of other words. Basically, it defines "meaning" by assuming you will understand the meaning of the definition. It's also a dictionary. If you were hired to create a computer which understood meaning, and went to the customer with a product claiming that it did understand meaning because it fit your interpretation of a dictionary definition, they wouldn't care.


A "conceptual framework" involves "concepts". The reason formal languages exist is to minimize the semantic (conceptual) content in propositions such that they can be implemented using some sort of programming language (which in turn is capable of being reduced to logical operations on bits). If computers worked with conceptual frameworks (and by that I do not mean frameworks with concepts that are meaningful to us because they use words from actual human languages), then we wouldn't need programming languages. We could simply talk to the computers.

Here's the part you're not getting; your definition of "meaning" is too anthropocentric. You can't engineer anything from your ditch. I can't go any further with details, but you keep digging yourself into a hole.

I'm aware. I've worked with this. There's a great volume on this subject, Object Categorization: Computer and Human Vision Perspectives. A project I worked on concerned the relevance of functionality to perceptual classification by humans, and feature extraction in shape analysis in the computer sciences has equivalent issues. However, no matter how related the issues are, there is still a wide gulf: with humans, we want to know how it is that they do what they do. With machines, it's how to get them to do things we want them to do.

You have gotten that far and haven't been able to build knowledge frameworks yet? LOL, OK...:)
 

LegionOnomaMoi

Veteran Member
Premium Member
Here's the part you're not getting; your definition of "meaning" is too anthropocentric. You can't engineer anything from your ditch. I can't go any further with details, but you keep digging yourself into a hole.

Any definition has to deal with meaning. And the definition of meaning is no exception. There is nothing "anthropocentric" about it. But perhaps you don't mean anthropocentric:
I can understand that perhaps you didn't quite get where I'm going...wouldn't you say that your definition is a bit anthropomorphic?


Of course, it's hard to tell as you can't give any details (which is, again, surprising, given that even militaries the world over give some general ideas in projects they are developing, from unmanned aircraft to code breaking). And typically, the necessary collaboration between government, corporate, and academic groups means a good deal more than "general ideas" at least as far as something so basic as "how machine learning and the definition of meaning can be adapted to do X" is concerned.
You have gotten that far and haven't been able to build knowledge frameworks yet? LOL, OK...:)
I have built (actually, have helped build) knowledge representation models. However, as I actually know what this is, I also know that this has little to nothing to do with what computers "know".
 

Leonardo

Active Member
Any definition has to deal with meaning. And the definition of meaning is no exception. There is nothing "anthropocentric" about it.

Ah...wow you are so far from any means of building any kind of AI system if that is what you believe.

But perhaps you don't mean anthropocentric:

Your hyperbole amuses me...:p



Of course, it's hard to tell as you can't give any details (which is, again, surprising, given that even militaries the world over give some general ideas in projects they are developing, from unmanned aircraft to code breaking). And typically, the necessary collaboration between government, corporate, and academic groups means a good deal more than "general ideas" at least as far as something so basic as "how machine learning and the definition of meaning can be adapted to do X" is concerned.

Sure they do...Like "wow we broke code using sophisticated software" or "our drone uses the latest microchip, encryption and stealth technology." LOL, OK, the encryption description will have some details as to how they use multiple wavelengths to scramble the signal. :jester3:

I have built (actually, have helped build) knowledge representation models.

Perhaps, but nothing that a machine could use, your perspective keeps :banghead3
By now your head must really hurt...:rolleyes:
 

LegionOnomaMoi

Veteran Member
Premium Member
Ah...wow you are so far from any means of building any kind of AI system if that is what you believe.
My beliefs are utterly irrelevant. The lab I worked at before I had to move quite recently was not only one of the most respected in the world, it also was directed by someone whom I did not agree with. Much of the work I did was to provide support for hypotheses I did not find tenable. But the work of those I found more fruitful was no closer to A.I. than my lab or any other project in the world. The difference concerned simply what might or might not be relevant to enabling A.I.



Sure they do...Like "wow we broke code using sophisticated software" or "our drone uses the latest microchip, encryption and stealth technology." LOL, OK, the encryption description will have some details as to how they use multiple wavelengths to scramble the signal. :jester3:
Before I moved a few months ago, my neighbor was Joseph Silverman. He's a math professor at Brown University who wrote (among other things) a book on cryptography. How? Because everybody who knows anything about cryptography knows what is involved. But as no one has a solution to Riemann's hypothesis, this doesn't matter.

Perhaps, but nothing that a machine could use, your perspective keeps :banghead3
By now your head must really hurt...:rolleyes:
Can you point to (i.e., cite) a different use of knowledge representation?
 

Leonardo

Active Member
My beliefs are utterly irrelevant. The lab I worked at before I had to move quite recently was not only one of the most respected in the world, it also was directed by someone whom I did not agree with. Much of the work I did was to provide support for hypotheses I did not find tenable. But the work of those I found more fruitful was no closer to A.I. than my lab or any other project in the world. The difference concerned simply what might or might not be relevant to enabling A.I.

Sounds very productive, and I'm right: you are far from building anything remotely resembling AI.



Can you point to (i.e., cite) a different use of knowledge representation?

Here are some working with the right frame of mind:

NYU Computer Science Department > Machine Learning and Knowledge Representation
 

LegionOnomaMoi

Veteran Member
Premium Member
Sounds very productive, and I'm right: you are far from building anything remotely resembling AI.

I am. So is the lab I worked in. But so is every single other organization on the planet.

Thank you for demonstrating conclusively that your understanding of knowledge representation and A.I. approaches in general is not reflected in any actual work in the field. It is due to some sort of disconnect between what you think groups like the one you refer to do, and what they actually do. Most of my work has involved groups from Harvard, MIT, & collaborating groups from across the world. So either you are misunderstanding that which you read, or you belong to a group whose work is so sophisticated nobody on the planet is aware of anything like it, or you do not know what you are talking about.
 

Leonardo

Active Member
I am. So is the lab I worked in. But so is every single other organization on the planet.

Ah...not really from what you describe.

Thank you for demonstrating conclusively that your understanding of knowledge representation and A.I. approaches in general is not reflected in any actual work in the field.

You must not be able to read. In any case, I knew you would answer the way you did, which is why I said the work is being done by those with the "right frame of mind", meaning "perspective"...:p

It is due to some sort of disconnect between what you think groups like the one you refer to do, and what they actually do. Most of my work has involved groups from Harvard, MIT, & collaborating groups from across the world. So either you are misunderstanding that which you read, or you belong to a group whose work is so sophisticated nobody on the planet is aware of anything like it, or you do not know what you are talking about.

Strawman argument: if MIT or Harvard can't think of it then nobody can! :biglaugh:
 

LegionOnomaMoi

Veteran Member
Premium Member
You must not be able to read. In any case, I knew you would answer the way you did, which is why I said the work is being done by those with the "right frame of mind", meaning "perspective"...:p
I meant anybody, period. Your latest link involves an approach I am quite familiar with. But, for someone who doesn't understand the field, I've no doubt the descriptions can be misleading.
Strawman argument: if MIT or Harvard can't think of it then nobody can! :biglaugh:
As I said, I disagree with the basic approach of the lab I am associated with (and worked with, until I had to move). If you want proof, I will be happy to provide an email address you can test through a PM. The point is not that MIT, or Harvard, or CalTech, or Stanford, or IBM, or anybody can serve as the example for what can or can't be done (or the way in which we should approach the "mind"). It is simply that you have misunderstood whatever sources you have read, and that your latest link reveals this.
 

Leonardo

Active Member
As I said, I disagree with the basic approach of the lab I am associated with (and worked with, until I had to move). If you want proof, I will be happy to provide an email address you can test through a PM. The point is not that MIT, or Harvard, or CalTech, or Stanford, or IBM, or anybody can serve as the example for what can or can't be done (or the way in which we should approach the "mind"). It is simply that you have misunderstood whatever sources you have read, and that your latest link reveals this.

Legion you don't understand the whole issue of "meaning" from a non-anthropocentric basis. In the link, all the studies have to do with my notion of "meaning", which is machine useable. :cool: That you cite or claim the giants of computer science, machine learning and neuroscience have no better solution than you does not substantiate your argument. :(

But just so you really understand why your argument is a fallacy: it's analogous to "Since Microsoft, the leader in operating systems and desktop applications, can't make stable software, then nobody can. So either you know something that no one on the planet knows or you don't know what you're talking about!" :biglaugh:
 

LegionOnomaMoi

Veteran Member
Premium Member
Legion you don't understand the whole issue of "meaning" from a non-anthropocentric basis. In the link, all the studies have to do with my notion of "meaning", which is machine useable. :cool:
1) You have confused anthropocentric with other terms before
2) You linked to a page doing work I am quite familiar with
3) No part of "meaning" in any link you have provided relates to anything beyond the "meaning" which ants, plants, and similar biosystems can "understand".


That you cite or claim the giants of computer science, machine learning and neuroscience have no better solution than you does not substantiate your argument. :(
It is not "my" argument. The argument concerns the proper way in which A.I. should be approached (if any). You are at odds with the entire field, not just me.

But just so you really understand why your argument is a fallacy
What argument? I'm simply relating what the cutting edge research involves, not what it could amount to.
 

atanu

Member
Premium Member
Suppose some machine passed the Turing test, and suppose we ignore John Searle and assume that passing the Turing test means the presence of understanding. Then who will know the feat? Conscious beings. No?


:eek:
 

Leonardo

Active Member
1) You have confused anthropocentric with other terms before

More hyperbole Legion? Come on you're better than this...:facepalm:

2) You linked to a page doing work I am quite familiar with
3) No part of "meaning" in any link you have provided relates to anything beyond the "meaning" which ants, plants, and similar biosystems can "understand".

You didn't read the individual studies; you just read the first paragraph. And you still don't get the phrase "relative to a conceptual (knowledge) framework", nor how it can become more sophisticated. So keep :banghead3 you must enjoy it...
 

idav

Being
Premium Member
Perhaps that is because you have a brain capable of the former, and one which allows you to understand concepts with an ease that makes it difficult to see the difference between the type of learning you are capable of, and that which a machine (or plant) is capable of.


The ability for machines to do things like recognize faces or learn the rules to a game comes from combining the ability of slugs, squids, etc., to "learn", with a massive amount of storage ability and hardware designed so that we can manipulate it according to well-defined rules. Currently, there is no computer in the world which can do more than approximate the very "simple" living systems current learning algorithms are largely based on.

Chess is a game. The moves are defined, the goals are defined, and in fact everything about it can be described mathematically. Which is key, because computers are calculators. The best A.I. algorithms are a waste of time when it comes to programming a computer to play chess. That's because the best chess programs don't learn anything. They are programmed with quite explicit strategies which maximize the capacity of a computer to run a lot of computations in a very short time. Learning algorithms are inefficient when it comes to environments like this. If we know all the rules, then there is no need to use any A.I.-type approach. It's when things involve ill-defined, fuzzy, abstract notions that learning algorithms become useful. But we're still back at slug-level. The difference is not that computers can do more than slugs in terms of learning, but that we can make them do things and that they have a lot of storage capacity.

Yet lots of computations and mass storage are exactly what the brain needs to do it. That in no way discredits the machine and is one more step toward AI. I understand you believe we are cheating to mimic awareness, but I think there are several ways to be aware. The type of learning you're looking for is not only being able to learn but being aware in a cognitive sense. How was Watson not aware of all that data it is required to sift through and analyze? The computer knows whether it was programmed or it learned through experience.
 

LegionOnomaMoi

Veteran Member
Premium Member
Yet lots of computations and mass storage are exactly what the brain needs to do it.

I work in cognitive neuropsychology, and I don't know how the brain does it. I have worked with a lot of people who (unlike me) have a PhD in this or a related field, and none of them do either. As I've said before, something so basic as neural encoding still sparks considerable debate. First, much of computational neuroscience has been devoted to what is often called the "labelled-line" or "labeled-line" theory. Simplistically, each individual "receptor" neuron in the eye carries unique information to the brain that collectively allow animals (in a particularly famous study done in 1959, the animal was a frog) to "see". In other words, there is a "line" from each receptor to some specific place (or even neuron) in the brain. In this model, visual neurons are more akin to "bits" in that although it takes a lot of them, each one is somehow "meaningful".

That's no longer considered true even for neural receptors. Volume 130 of the edited series Progress in Brain Research (Advances in Neural Population Coding; 2001) represents a turning point in computational neuroscience and neuroscience in general away from this idea. But the problem (and the reason for the volume) is what to replace it with: "If the brain uses distributed codes, as certainly seems to be the case, does this mean that neurons cannot be 'labeled lines'? Clearly, to support a code of any complexity, active populations must be discriminable from one another, which means that differences among the individual cells are important. Neurons cannot respond equally well to everything and form useful representations of different things. Thus, the sharp dichotomy between distributed coding and labeled lines seems to be a false one and the critical question is 'labeled how and with what'."

That was back in 2001, before neuroimaging studies (and in particular fMRI studies) were as prevalent. Now we know more about how much farther away from understanding the "neural code" we are than previously believed. For one thing, it is now certain that the neural "bit" isn't typically based on the activity of individual neurons, but on the synchronization/correlation of their spike trains. Thus most of the time, the "minimal" meaningful information (the "bit") is a constantly changing level of correlated activity among a changing number of neurons.
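To give a rough sense of what "correlated spike trains" means computationally (a toy calculation on made-up binary trains, not how any lab actually analyzes data):

# The candidate "unit" is not a single spike but the degree of correlation between
# spike trains (here just spike/no-spike per time bin, compared with Pearson's r).
def correlation(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    sd_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (sd_a * sd_b)

neuron_1 = [1, 0, 1, 1, 0, 0, 1, 0]
neuron_2 = [1, 0, 1, 0, 0, 0, 1, 0]  # mostly synchronized with neuron_1
neuron_3 = [0, 1, 0, 0, 1, 1, 0, 1]  # anti-correlated with neuron_1

print(correlation(neuron_1, neuron_2), correlation(neuron_1, neuron_3))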

But it gets worse. In a monograph published the same year as the volume referenced above (Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems), the authors baldly state "[t]he nature of the neural code is a topic of intense debate within the neuroscience community. Much of the discussion has focused on whether neurons use rate coding or temporal coding, often without a clear definition of what these terms mean." A later volume (Dynamical Systems in Neuroscience) from the same series (MIT's Computational Neuroscience) closes with a chapter on neural "bursts" in which the author remarks that "single spikes may be noise." In other words, the way that neurons work, as described in just about every undergrad textbook on neurophysiology, is just plain wrong. These textbooks start at the level of the neuron and offer a simplistic (or distorted) model of how neurons "fire". They then typically skip how this means anything and it is just assumed that this firing is somehow the basis of the "neural code". As it turns out, this description may be describing what is only "noise", rather than part of "the neural code". And it is certain that even if there is some meaning to the all-or-nothing action potentials (firing) described so often, this is not the typical "minimal unit".

And things get even worse still. As stated in the article "Nonlocal mechanism for cluster synchronization in neural circuits" (from the journal Europhysics Letters), "one of the main enigmas in neuroscience" is not about the neural code per se (in that it isn't about how bursts or correlations of spike trains and so forth can be "units" of information), but about the level and speed of correlations of nonlocal neural populations. In other words, not only do we find that the "minimal unit" doesn't really exist except as a concept (in that the minimal unit is described as something which changes in size and nature), but the same coordinated activity which can make up a "minimal unit" within a neural population can be found among neural populations themselves. Moreover, this synchronization between different cortical neural populations occurs almost instantaneously. Which means that the "minimal unit" can not only be correlations among various neurons, but even correlations between correlated neural populations themselves.

One major theory about how brains can deal with concepts concerns such massively parallel and quantum-like (or actually quantum) properties (e.g., nonlocal correlations) of the brain. The theory goes something like this: we know that concepts are not Platonic ideals. That is, there isn't a concept "tree" which corresponds to any single representation of neural activity in our brain because there isn't any single concept "tree". A "web" can be a spider web, a method for knowledge representation, a "web of deceit" or of lies, the internet, etc. Single concepts are really not single at all: they are networks of abstractions which share certain relationships in terms of particular semantic content. For example, the interconnectedness and structure of a spider web is metaphorically mapped onto the idea of something like lots of intricate lies also "organized" to deceive, or a bunch of connected computers. It may be that the seemingly impossible level of coordination between and within neural populations allows us to process concepts by allowing us to represent related conceptual content in distinct cortical regions which are nonetheless strongly connected.
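A crude way to picture "a network of abstractions sharing semantic content" (illustrative only; obviously nothing like how a brain stores a concept):

# The concept "web" as a network of related uses tied together by shared relational
# structure, rather than a single Platonic entry.
web_senses = {
    "spider web":     {"interconnected", "radiating structure", "traps things"},
    "web of lies":    {"interconnected", "traps things", "deception"},
    "world wide web": {"interconnected", "nodes and links", "information"},
}

def shared_content(sense_a, sense_b):
    return web_senses[sense_a] & web_senses[sense_b]

print(shared_content("spider web", "web of lies"))  # what the metaphor maps over
print(shared_content("spider web", "world wide web"))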

We can't even accurately model this level of coordination on a computer, let alone build computers capable of it. And it may very well be that no digital computer will ever be capable of what enables brains to deal with concepts rather than just "syntax" or formal rules.

I understand you believe we are cheating to mimic awareness, but I think there are several ways to be aware.
Not at all. For one thing, machine learning has produced a great deal. I am simply distinguishing (as everyone who works in the cognitive sciences does) between qualitatively different types of awareness. More importantly, I am suggesting that the current work in A.I. cannot result in associative learning. It was largely based on how simple organisms which are purely reflexive "learn", and thus based on non-associative learning. That's not what we want if we want A.I. To continue to hope that more of the same (i.e., increasingly sophisticated neural network algorithms or pattern recognition algorithms) will somehow get us from non-associative to associative learning seems foolish. This is not to say we can't make this leap, or even that we can't do it on computers. Just that I don't think the current approach will get us anywhere and something else is needed.

How was Watson not aware of all that data it is required to sift through and analyze?
The same way a pocket calculator isn't aware of things like algebra. Computers compute (hence the name). They were built to carry out mathematical operations. The fundamental method for doing this is the design of logic gates which allow us to carry out basic logical operations automatically. As a lot of mathematics can be reduced to these logical operations, as long as we can come up with well-defined mathematical functions, we can (at least approximately) implement these on a computer.
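For instance, here is the sense in which even arithmetic reduces to logic gates (a textbook half adder sketched in code):

# A half adder: adding two bits built entirely out of logical operations.
# This is the full extent of a computer's "awareness" of arithmetic.
def half_adder(a, b):
    total = a ^ b   # XOR gate produces the sum bit
    carry = a & b   # AND gate produces the carry bit
    return total, carry

print(half_adder(1, 1))  # (0, 1): one plus one is binary 10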

Watson did that. It did a lot of automated mathematics. In order to make Watson capable of answering anything, humans had to build specialized databases which were annotated (or labeled) in a particular way to enable mathematical procedures to sort through them without understanding anything. To say that Watson was "aware" of the data is like saying your calculator is aware of addition, pi, trig, etc., simply because you can make it calculate the answers to math problems.

You might think of Watson in terms of the word problems from high school mathematics which almost everyone hates. They hate these because there is an extra step: turning the question into a mathematical equation, or equations, or mathematical expression or expressions. Once this is done, the word problem is no longer a word problem but is like the other math problems. With Watson, people who actually understood language built databases so that the "word problems" could be reduced to a bunch of equations.
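In miniature, "sorting through annotated data without understanding anything" looks something like this (a toy keyword-overlap scorer, not anything IBM actually built):

# Toy "question answering" over a hand-annotated database: humans supplied the text
# and the labels, and the program just counts overlapping tokens.
documents = {
    "Mount Everest": "highest mountain above sea level located in the Himalayas",
    "K2": "second highest mountain on earth located in the Karakoram range",
}

def best_match(question):
    q_tokens = set(question.lower().replace("?", "").split())
    scores = {label: len(q_tokens & set(text.lower().split()))
              for label, text in documents.items()}
    return max(scores, key=scores.get)

print(best_match("What is the highest mountain above sea level?"))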
 

Leonardo

Active Member
One major theory about how brains can deal with concepts concerns such massively parallel and quantum-like (or actually quantum) properties (e.g., nonlocal correlations) of the brain.

Non-local effects exist in digital computers as well. Multi-core CPUs, physics processors, and graphics processors all work in coordination from non-local effects of the thematic or high-level objective of the software! Non-locality can even be observed in single CPUs! The location of what software does is scattered throughout memory, not just RAM but any form of storage, from cloud computing to disks to cell phones, etc. Yet software acts as a single entity, coordinating in a metaphysical way all the disparate hardware located anywhere in the world!

We can't even accurately model this level of coordination on a computer, let alone build computers capable of it. And it may very well be that no digital computer will ever be capable of what enables brains to deal with concepts rather than just "syntax" or formal rules.

You still dig that ditch and fail to realize concepts are aggregates of rules, networked through relational features and attributes. Even novel actions are based on past experiences that can relate to features of a current stimulus.

The same way a pocket calculator isn't aware of things like algebra. Computers compute (hence the name). They were built to carry out mathematical operations. The fundamental method for doing this is the design of "logic gates" which allow us to carry out basic logical operations automatically. As a lot of mathematics can be reduced to these logical operations, as long as we can come up with well-defined mathematical functions, we can (at least approximately) implement these on a computer.

Here again you simply look at the hardware and not the software. Software must be aware of internal and external states in a metaphysical or thematic way. Software is an entity whose physicality, as an objective, doesn't exist, any more than the theme of a book physically exists. Themes are collections of data that human brain neurons interpret through the execution of neural systems formed by neural populations, locally and non-locally, that exchange spike trains. Computer software is a collection of data that CPU hardware executes locally and non-locally. Both can encode many layers of abstraction and network data.

Watson did that. It did a lot of automated mathematics. In order to make Watson capable of answering anything, humans had to build specialized databases which were annotated (or labeled) in a particular way to enable mathematical procedures to sort through them without understanding anything. To say that Watson was "aware" of the data is like saying your calculator is aware of addition, pi, trig, etc., simply because you can make it calculate the answers to math problems.

Watson is aware of internal and external states; its software has to be aware, from a metaphysical perspective or thematic objective, or it would not be able to operate.

You might think of Watson in terms of the "word problems" from high school mathematics which almost everyone hates. They hate these because there is an extra step: turning the question into a mathematical equation, or equations, or mathematical expression or expressions. Once this is done, the word problem is no longer a word problem but is like the other math problems. With Watson, people who actually understood language built databases so that the "word problems" could be reduced to a bunch of equations.

Watson did some mathematics, but in fact if you play Jeopardy at home and just type the answer for each category into Google, the first few hits, actually just a page of results, get you the answer that you then must transform into a question. So Watson is more rule-based than something applying mathematical equations, like a 3D graphics program.
 

Copernicus

Industrial Strength Linguist
The problem, as I see it, is that we have no middle ground for our theoretical models of cognition. That is, we can understand low-level processing--suppression and facilitation of "neurons"--and we can describe very high level processes--decision-making behavior. What we don't have is a good way to connect the two. The brain is far too complex for us to come up with simple solutions--to take that giant leap that Legion talked about. In other words, there is a "scalability" problem. Our low-level models can't scale up to the high level ones, and that is why Legion said that we can't get there from here. (I'm not totally on board with him on that, because I think we need to model low-level life forms to get to the "higher" ones. I.e. we need to crawl before we can walk.)

There are lots of seemingly simple high-level behaviors that we have not captured well in our AI modeling. For example, object recognition is a huge problem. There are lots of impressive demos of object-recognition, but they don't hold much hope for scaling up. Myself, I favor using humans as seeing-eye dogs for robots in the near term. That is, we can label objects in their environment, and they can then be instantly smarter about how to interact with those objects. If we just used that trick, we could get better functionality out of robots. Of course, you need speech understanding systems for that, so salaries should go up for people with NLP skills. :)

Another high-level functionality is to build multiple world models that contradict each other and quickly adopt the one that gets the most positive feedback from multiple sensor-data. We already do this on a limited scale, but with nowhere near the ability of animals to change their minds about what they think the world is like. Robots don't change their minds easily. They are too prone to Obsessive-Compulsive Disorder. The ability to handle contradictions is paramount to successful navigation in the real world, and I don't see any solutions to this problem out there that show promise of scaling up to complex behavior.
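In miniature, I mean something like this (a toy sketch under my own assumptions, not a claim about any existing robot):

# Keep several contradictory world models and adopt whichever best predicts the
# incoming sensor data; decay old evidence so the robot can change its mind.
world_models = {
    "door_is_open":   lambda sensor: 1.0 if sensor["lidar_gap"] else 0.0,
    "door_is_closed": lambda sensor: 0.0 if sensor["lidar_gap"] else 1.0,
}
scores = {name: 0.0 for name in world_models}

def update(sensor_reading, decay=0.5):
    for name, predict in world_models.items():
        scores[name] = decay * scores[name] + predict(sensor_reading)
    return max(scores, key=scores.get)  # the currently adopted model

print(update({"lidar_gap": True}))    # door_is_open
print(update({"lidar_gap": False}))   # evidence changes, so should the "mind"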
 