
Humans are like robots. Choice is determined.

atanu

Member
Premium Member
Evolution proves it. The further back we go, the less free will we see. To a highly advanced species, humans' "free" will would be laughable.

Well, the argument was about whether a machine can understand the meaning of information. What OOP provides is a means to encode knowledge in which the information at every level, from semantics to executing and/or acting on data, is used meaningfully. This means that, theoretically, a machine can sense an external environment, formalize information in terms of categories of features, properties and associated data, evaluate that information as an object, relate it to some hierarchy of knowledge, and know how to act on it.

That said, acting on information is subject to the agenda of a machine's intent! Animals, humans included, are machines with the intent of surviving. However, at least in mammalian brains, the art of survival takes on a whole new dimension: neurological systems classify experiences and actions in terms of signals that are interpreted as positive or negative influences, rewards or punishments, where rewards reinforce behaviors that bring about positive signaling and punishments reinforce behaviors that avoid negative signaling. I can see how a class hierarchy could be developed that capitalizes on generalized reward/punishment features and allows a machine to know how to act on the information as well as understand it in the context of a larger hierarchical framework of knowledge, no different from the biological taxonomy class framework I described earlier.
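
For illustration only, here is a minimal Python sketch of the kind of class hierarchy described above, in which each level carries both data and the behaviour for acting on it. All class and attribute names are hypothetical, not taken from any real system.

Code:
class Stimulus:
    """Base category: any sensed feature set, with a neutral default evaluation."""
    def __init__(self, features):
        self.features = features      # raw data sensed from the environment

    def evaluate(self):
        return 0.0                    # neutral by default

class Reward(Stimulus):
    def evaluate(self):
        return +1.0                   # reinforces the behaviour that produced it

class Punishment(Stimulus):
    def evaluate(self):
        return -1.0                   # reinforces avoidance

def act_on(stimulus):
    # The same call works for every subclass: the hierarchy, not the caller,
    # decides how the information is treated.
    return "approach" if stimulus.evaluate() > 0 else "avoid"

print(act_on(Reward({"taste": "sweet"})))     # approach
print(act_on(Punishment({"pain": "sharp"})))  # avoid
(Python)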

Unfortunately both of you are stuck with the lowest qualities of what humanness means.

Especially for idav:

You are actually being funny. What higher being are you talking about, while in the same breath espousing evolution? Does science recognize any higher being?

OTOH, the human who thinks that he is a body is the lower one, bound to the dictates of the mind-body. The human who has known his true nature as awareness, not localized, is a higher being, not bound at all.

Evolution has no way of knowing or discriminating between these two kinds of humans.
 

Leonardo

Active Member
So, without revealing anything personal, can you provide any sources to support the idea that OOP somehow makes anything "meaningful" for computers?


Yes me! LOL, actually I do have the qualifications to assert a proposal and that is what I'm doing. I define meaning as the ability to react purposefully to information, and as such meaning is relative to a conceptual framework.

So can slugs and plants.

Well, I think you'd be hard-pressed to find any ability of a slug or plant to categorize objects. They can sense their environment and trigger processes as a reaction, but in no way can they formalize information into any kind of hierarchy of knowledge.


And once again, this behaviorist paradigm was abandoned decades ago.

Your discrediting of what I'm describing isn't helping your argument. You aren't understanding what I'm proposing. Reward and punishment signaling is similar to animal conditioning, but it extends the signaling process to form an arbitration scheme, in which the qualia of outcomes compete to force a decision. Qualia can be manipulated like any other form of information. So, for instance, the reason you decide to eat spinach for dinner rather than ice cream, which on its face value has a higher quale than spinach, is the conditioning you've experienced, which desensitizes or lowers the quale memory of ice cream when you prioritize nutrition. Yes, you were taught to prioritize nutrition, and I have a five-year-old to prove it! :D

Note that this approach also allows for ad hoc behaviors, meaning novel or invented actions can be motivated by this arbitration process.
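
A toy Python sketch of that arbitration idea, using the spinach/ice-cream example; the numbers and names are invented purely for illustration.

Code:
# Remembered "quale" values compete, and learned conditioning re-weights
# them before the choice is forced.
quale_memory = {"ice cream": 0.9, "spinach": 0.4}       # face-value appeal
nutrition_bias = {"ice cream": -0.6, "spinach": +0.5}   # learned conditioning

def choose(options, prioritize_nutrition=True):
    def score(option):
        s = quale_memory[option]
        if prioritize_nutrition:
            s += nutrition_bias[option]  # conditioning desensitizes the raw quale
        return s
    return max(options, key=score)

print(choose(["ice cream", "spinach"]))                               # spinach
print(choose(["ice cream", "spinach"], prioritize_nutrition=False))   # ice cream
(Python)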
 
Anyone ever wonder if people don't realize they were programmed? You were raised in a culture and according to a particular way of life. Maybe you don't remember what happened from when you were a fertilized egg to when you were eventually born and eventually able to type and post here, but I am betting the events that happened between the time your egg was fertilized and your last post had a lot more to do with your response here than your idea of free will did. Maybe people don't realize how improbable it is that they would even be able to make a post in the first place...
 
That's an interesting idea, but since god apparently favors honesty, I don't think he would create us to be entirely controlled by outside forces and then deceive us into believing that we have free will to some degree.
 

LegionOnomaMoi

Veteran Member
Premium Member
Yes me! LOL, actually I do have the qualifications to assert a proposal and that is what I'm doing.

So you are unaware of any other group on the planet doing what you describe with OOP, such that you can link to anything to substantiate what you have claimed?

Well, I think you'd be hard-pressed to find any ability of a slug or plant to categorize objects. They can sense their environment and trigger processes as a reaction, but in no way can they formalize information into any kind of hierarchy of knowledge.
They absolutely can (at least insofar as computers can). Machine learning largely tries to imitate what things like bugs, squids, etc., can do. That's how it was developed. But as none of these, from machines to squids to plants, can understand anything, all the learning, information, and organization is at the same level. The difference with machines is that, because humans program them, determine the input and the methods for dealing with it, and are able to interpret the output, it is meaningful to us. But the computer's understanding of these processes is no greater than that of a slug. And considerably less complex.




Your discrediting of what I'm describing isn't helping your argument.
I don't need to discredit what you are describing. It was abandoned, thanks to a plethora of work which thoroughly discredited it, before I was born. You can mash together learning paradigms from the '20s, '30s, '40s, and '50s with philosophical/cog. sci. terms like "quale" all you want, just as you have mashed OOP concepts together with actual semantic/conceptual classification. But this merely confuses contradictory approaches, terms, and fields.

In the end, reducing everything to "rewards and punishments" will inevitably fail for the very reasons it did 60 years ago. Incorporating more modern terminology doesn't change this.
 

Leonardo

Active Member
They absolutely can (at least insofar as computers can).

Ah, no they can't. You keep using the term computer without the context of the software. The computer as hardware is completely unaware of anything; so too are neurons! But software is aware of information and processes. It has to be in order for it to work. Software has the ability to sense its environment and act on it; software is what interprets memory and external information. Software is what can articulate information into a hierarchy, not the hardware in a computer.

In the end, reducing everything to "rewards and punishments" will inevitably fail for the very reasons it did 60 years ago. Incorporating more modern terminology doesn't change this.

No theory of the sensory nervous and limbic systems as a signaling scheme that arbitrates choices in terms of positive or negative states of emotional gratification and/or sensual feedback has been proposed. So again, you don't understand what I'm talking about...
 

LegionOnomaMoi

Veteran Member
Premium Member
Ah, no they can't. You keep using the term computer without the context of the software. The computer as hardware is completely unaware of anything; so too are neurons! But software is aware of information and processes. It has to be in order for it to work. Software has the ability to sense its environment and act on it; software is what interprets memory and external information. Software is what can articulate information into a hierarchy, not the hardware in a computer.




No theory of the sensory nervous and limbic systems as a signaling scheme that arbitrates choices in terms of positive or negative states of emotional gratification and/or sensual feedback has been proposed. So again, you don't understand what I'm talking about...
The problem isn't so much that I don't understand what you are talking about. It's more that nobody whose field concerns the brain, cognitive science, or A.I. seems to agree with you at all. And as you refuse to supply anything which might substantiate your claims (other than your claim that you can serve in this capacity), it's a bit difficult for anybody, not just me, to determine the extent to which there is merit in what you state. In this thread, there are two other members who I know are quite familiar with certain aspects of this discussion: Copernicus and PolyHedral (I'm not saying others are not, simply that I know these two are). PolyHedral has already made some statements concerning your description of programming, but I would be very interested to hear what both Copernicus and PolyHedral (and others) have to say with respect to your differentiation between software and hardware, and how both relate to awareness and environment as you have stated. Perhaps they agree, and might be able to explain your statements in a way that makes sense to me.
 

idav

Being
Premium Member
The problem isn't so much that I don't understand what you are talking about. It's more that nobody whose field concerns the brain, cognitive science, or A.I. seems to agree with you at all. And as you refuse to supply anything which might substantiate your claims (other than your claim that you can serve in this capacity), it's a bit difficult for anybody, not just me, to determine the extent to which there is merit in what you state. In this thread, there are two other members who I know are quite familiar with certain aspects of this discussion: Copernicus and PolyHedral (I'm not saying others are not, simply that I know these two are). PolyHedral has already made some statements concerning your description of programming, but I would be very interested to hear what both Copernicus and PolyHedral (and others) have to say with respect to your differentiation between software and hardware, and how both relate to awareness and environment as you have stated. Perhaps they agree, and might be able to explain your statements in a way that makes sense to me.

Software and hardware power is needed to get to the level of complexity of a human, but we know this would be a complexity of networking and processing on the order of the number of stars in the universe. I don't think we need to take it to that level to achieve AI; we can simulate a ton with software. Software itself seems to be the means of awareness, even to the point of knowing the speeds and temperatures of hardware components and reacting according to programming. Learning and adaptation is the test, and machines are excelling past any organism and even rivaling humans. There are many examples of how machines can be as intelligent or more so in many different areas.
 

Me Myself

Back to my username
I think it's only a simplification if you oversimplify what a robot is.

This. I don't really get why people think they have more "free will" than a tornado. A tornado happens because of the manipulations of the wind that forms it. Human decisions happen likewise. Human actions happen likewise.
 

PolyHedral

Superabacus Mystic
DOM as in Dynamic Object Model. With exception handling in modern languages you're not limited to just errors; you can use the exception handling classes for customized procedures where you force or throw an exception for any rules you implement. Using this with reflection and/or Bayesian trees and/or ANNs and/or genetic programming allows one to mature a rule set with experience.
Oh. That is equivalent to a language which allows unions as return values and has no exception handling at all.
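
A small Python sketch of the equivalence PolyHedral is pointing to: the same made-up rule enforced once by raising an exception and once by returning a union-like tagged value.

Code:
class RuleViolation(Exception):
    pass

def check_with_exception(value):
    if value < 0:                    # the hypothetical "rule" being enforced
        raise RuleViolation("negative value")
    return value

def check_with_union(value):
    # The same information expressed as a tagged return value instead of a throw.
    if value < 0:
        return ("error", "negative value")
    return ("ok", value)

try:
    check_with_exception(-1)
except RuleViolation as e:
    print("caught:", e)

print(check_with_union(-1))          # ('error', 'negative value')
(Python)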

Yeah, like SIMULA is anywhere near as sophisticated as Java or .NET. LOL
My screwdriver is more sonic than yours! :p

Like I said, the generalization of data, where it can be treated similarly, has benefits: you don't have to code repeatedly for every type of data. But OOP goes further, since I can treat data as an object that has the ability to store information and process it as a single unit or source!
In terms of A.I., this is irrelevant. The core of intelligent software is the meaning attached to data, and that meaning remains the same whether or not the data and the code working on that data appear as one unit.

Objects have properties, events, methods and data encapsulation. In the structured programming of yesteryear, the best you could build was a function library.
But everything you can do in OOP, you can do in a sufficiently advanced procedural language.

Leo, you are bluffing, I say. What is intelligent about re-usability of functions, components, or objects, beyond the fact that human programmers envisage and programme these? Limitations will be as per the code. Nothing more and nothing less.

And what is AI about that?
Code-in-general has no relevant limitations.
 

PolyHedral

Superabacus Mystic
I had to look up p-zombies to know what you were talking about. I think you are attempting to say that if we build a machine that mimics human behavior we have created a human. Is this your argument?
We have created a human mind, yes. Or something greater than it. Whether or not that human mind has a human body is another question entirely.

No, a computer can't.
How do you think the designers do it? With tweezers? They hand their design to a computer. There's no theoretical reason that the computer can't invent designs on its own. (It already compiles them from extremely high-level constructs into actual instructions for the FPGA.)

That's my point. Computers can't either. They are just like a screwdriver with more functions. More like a desk lamp, really. That's why I always say that.
Computers can't be autonomous? Von Neumann replicators aren't a thing? As far as computability theory is concerned, the difference between von Neumann's device and grey-goo nanotechnology is a matter of engineering - it certainly can be done; the question is how to do it.

Desk lamps and screwdrivers have a defined function - contrariwise, sufficiently advanced robotics can do anything.

You'll have to run that by me again.
"If A, B, else C."
That's a decision - but it's not really a choice, because "A" is something that's as obvious (to the computer) as "The sun is yellow" is to us.

The fact that it performs as designed is irrelevant. We can't turn screws as well as screwdrivers, either. But they don't really turn screws, do they? We do. With screwdrivers.
What sort of design applies to a machine that can rewrite its own instructions and rebuild its own processes?

Haha, okay. All the stuff I said barring inexplicable stellar events, divine intervention, alien co-habitation, or whatever it was that monolith was supposed to be... :p
The monolith is an autonomous von Neumann replicator. Amongst other things.

Sure. It's inferred from basic quantum mechanics (although not necessarily true). But it's also irrelevant. Because the whole point is to be able to identify a relevant "state" in a finite state machine. The question is whether or not we can simulate the brain on a finite state machine.
The universe is technically a finite state machine, since there are only a finite number of ways to arrange the universe's contents. (Remembering that energy appears to be quantized.) The asynchronicity can be dealt with by either making the carrier fields part of the machine, or simply accepting a somewhat bizarre view of what a state of the universe is.

It's impossible to know, yes. But the more we learn about the brain, the more we do know what is at least involved in information storage. We know, for example, that unlike a computer there is no distinction between "storage" and "processing".
With any sort of non-trivial model, that's true in computers as well. That is, writing or reading of data requires significant processing.

Both are constant and dynamic (the brain is never at rest). We also have no idea how much can be represented by integers (even allowing for the infinite set), because from the physical to the life sciences everything is approximation.
Integers cannot represent all real numbers to infinite precision, but they can represent real numbers to arbitrary precision. All physical quantities have "smallest meaningful values", e.g. the Planck length, so integers suffice.
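
A short Python sketch of that "integers suffice" claim: a length stored as an integer count of a smallest unit. The Planck length is used purely as an example of such a unit.

Code:
PLANCK_LENGTH = 1.616255e-35  # metres; illustrative choice of smallest unit

def to_planck_units(length_m):
    return round(length_m / PLANCK_LENGTH)   # an (enormous) integer count

def from_planck_units(count):
    return count * PLANCK_LENGTH

metre_as_int = to_planck_units(1.0)
print(metre_as_int)                      # ~6.19e34, held exactly as an integer
print(from_planck_units(metre_as_int))   # back to roughly 1.0 m (limited here by float precision)
(Python)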

But they can't deal with concepts at all. So it's irrelevant.
You earlier admitted that you can't define what a concept is. So how do you know that computers can't deal with them? What would dealing with them look like?

Computers deal well with "generalities" only insofar as there is a human there to understand what these mean. It is incredibly easy to create objects like "tree" and have things like "oak", "spruce", etc., inherit from "tree". It's also meaningless to the computer.
But it's not quite meaningless - it is intrinsic to the program that oaks, spruces, etc., all fulfill certain properties, and the intersection of those is at least the generality "tree." If you write the program in a language such as Spec# or (to a lesser extent) Haskell, this becomes clear, because the language requires you to define that meaning. Compare:
Code:
def add(a,b):
    return a+b
(Python)
to
Code:
Number add(Number a, Number b)
    requires 0 <= a <= (1 << 32)
    requires 0 <= b <= (1 << 32)
    ensures 0 <= Return <= (1 << 33)
    ensures Return >= a
    ensures Return >= b
{
    return a+b;
}
(pseudo-Spec#)

Though they perform the same function, the latter is far more meaningful, both to a human and, importantly, to the computer. It is meaningful to the computer because of reflection, which exposes its own inner workings as data - in this case, its own expectations of how things behave! Our software is capable of navel-gazing! Now that the computer knows what it's expecting, it can invent data to violate those expectations and work through the model to determine a new answer. For instance, it can then build the more general function:
Code:
Number add(Number a, Number b)
    requires -(1 << 32) <= a <= (1 << 32)
    requires -(1 << 32) <= b <= (1 << 32)
{
    return a+b;
}
It can then invent the data (-1, -1), feed it into the function above, and get -2. However, because the process in this new function is the same as the old one (obviously any IRL system will have something more complex and non-circular), the computer can logically deduce more meaning than was initially provided. For instance, it might come up with:
Code:
Number add(Number a, Number b)
    requires -(1 << 32) <= a <= (1 << 32)
    requires -(1 << 32) <= b <= (1 << 32)
    ensures (Return >= a && Return >= b) || a < 0 || b < 0

As far as I'm concerned, this is 1) learning, and 2) the concept of addition. (The bit missing from this being fully intelligent is that we haven't covered comparing the improved function to real-world evidence yet.)

CAPTCHAs and facial recognition are incredibly difficult for computers because computers are designed from the ground up for "particulars." Lots of things can be stored, but they are stored specifically in well-defined ways.
[image: a LaTeX-rendered formula quantifying over an infinite set]

That's one of the most general statements you can get - it quantifies over an infinite set! - yet it's a string less than 60 bytes long.
 

LegionOnomaMoi

Veteran Member
Premium Member
Software and hardware power is needed to get to the level of complexity of a human, but we know this would be a complexity of networking and processing on the order of the number of stars in the universe. I don't think we need to take it to that level to achieve AI; we can simulate a ton with software. Software itself seems to be the means of awareness, even to the point of knowing the speeds and temperatures of hardware components and reacting according to programming. Learning and adaptation is the test, and machines are excelling past any organism and even rivaling humans. There are many examples of how machines can be as intelligent or more so in many different areas.
The issue I had in mind (at least with respect to the differentiation between software and hardware) was not whether A.I. is possible to create using computers (rather than requiring, for example, biocomputers or quantum computers or something else). Rather, it was about this statement in particular:
Software is what can articulate information into a hierarchy, not the hardware in a computer.

This flies in the face of just about the entire science and philosophy of computing, not just A.I. It is akin to a basic assumption that one can still find (and which was the assumption behind classical A.I. and cognitive science): that the implementation device is mainly irrelevant, in that it could be a brain or a calculator, and what matters is the algorithm. However, the algorithms (rules) behind arithmetic operations used by a computer, a brain, or an abacus are very much related to what that device can do once they are implemented. Brains cannot compute like a computer, and therefore the same algorithms may not be practical or even possible for a brain, yet possible for a computer.

Even more important is that the entire idea behind creating a computer which is self-aware is that one can develop algorithms which can be implemented using computer hardware. If this is done, the "software" or "code" is basically the "rules" (physics) behind neural interactions. It is the entire machine, the computer itself, which can now "understand" concepts including the concept of itself as a unified entity (as an "I/me").
 

idav

Being
Premium Member
The issue I had in mind (at least with respect to the differentiation between software and hardware) was not whether A.I. is possible to create using computers (rather than requiring, for example, biocomputers or quantum computers or something else). Rather, it was about this statement in particular:


This flies in the face of just about the entire science and philosophy of computing, not just A.I. It is akin to a basic assumption that one can still find (and which was the assumption behind classical A.I. and cognitive science): that the implementation device is mainly irrelevant, in that it could be a brain or a calculator, and what matters is the algorithm. However, the algorithms (rules) behind arithmetic operations used by a computer, a brain, or an abacus are very much related to what that device can do once they are implemented. Brains cannot compute like a computer, and therefore the same algorithms may not be practical or even possible for a brain, yet possible for a computer.

Even more important is that the entire idea behind creating a computer which is self-aware is that one can develop algorithms which can be implemented using computer hardware. If this is done, the "software" or "code" is basically the "rules" (physics) behind neural interactions. It is the entire machine, the computer itself, which can now "understand" concepts including the concept of itself as a unified entity (as an "I/me").
What I see in that statement is that it is software that gives the data any meaning, which would be true from basic to advanced levels. Looking at how Watson recalls information, it is the software that does the comparisons and decision making. The interface that gives the data actual meaning is a central point that gives awareness. You could have a billion computers feeding one central point that is aware of all the billions of processes.

The hardware is important, especially in brains, where the hardware is literally integrated with the brain's data and operating procedures. With computers, that amount of redundancy is overkill; awareness is likely possible using different methods, and it all goes back to being aware of the environment and being able to learn and make choices from what is experienced.
 

LegionOnomaMoi

Veteran Member
Premium Member
The universe is technically a finite state machine, since there are only a finite number of ways to arrange the universe's contents.

A finite state machine is not simply something which has finite states, or even finite possible states. In fact, the origins of FSMs (and much of computability theory) lie in Turing's work and the so-called "Turing machine," with its infinitely long "tape" (although not infinite states). The main idea revolved around particular (and well-defined) transitions from one discrete state to another. For the "original" finite state machine (i.e., Turing's), the "head" which stamps the states can be configured in n different ways. These are the states (as I'm sure you know). But the state changes depend upon discrete, well-defined input (even given an infinitely long tape) and well-defined rules, such that for any given input x the machine will re-configure to a corresponding state y. The idea that a universal Turing machine can do whatever a brain does does not rest on the assumption that the brain is a finite state machine (it clearly isn't) but upon computability. If everything which the "mind" can do, from understanding concepts to being self-aware, can be reduced to computable functions, then a UTM can do what the mind does. However, this does not entail that the brain (or mind) is an FSM.
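
For concreteness, a minimal Python sketch of a finite state machine in this sense: a finite set of states and a well-defined transition for every (state, input) pair. The turnstile example is hypothetical.

Code:
TRANSITIONS = {
    ("locked",   "coin"): "unlocked",
    ("locked",   "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(inputs, state="locked"):
    for symbol in inputs:
        # Each well-defined input deterministically yields the next state.
        state = TRANSITIONS[(state, symbol)]
    return state

print(run(["coin", "push", "push"]))   # locked
(Python)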

The asynchronicity can be dealt with by either making the carrier fields part of the machine, or simply accepting a somewhat bizarre view of what a state of the universe is.

Apart from the problems at the quantum level (where well-defined "states" appear to disappear altogether), the entire point behind computability theory and finite state machines is formal definitions. If we accept "a somewhat bizarre view" of states, whatever the system is, then we are no longer talking about finite state machines.


With any sort of non-trivial model, that's true in computers as well. That is, writing or reading of data requires significant processing.

But this is not the same as what is going on in the brain. Of course you are correct: being able to store data is pretty meaningless as far as running any program or software is concerned. But there is still a well-defined distinction between data storage (i.e., the states of "bits") and processing. They do go hand in hand in order for a computer to do anything, but they are distinct. This is not true of the brain. Processing corresponds in some largely unknown way to correlations between spike trains among neurons and neural populations. That's also what storage is. The same physical processes, physical parts, and often (to an unknown degree) the same places are literally both "storing" and "processing."


Integers cannot represent all real numbers to infinite precision, but they can represent real numbers to arbitrary precision.

Of course. But one issue here is whether any set would suffice, because the numbers alone are just that: numbers. The functions are key. And if, for example, we can measure (and actually have measured) a quantum system in two completely different states at the same time, then we require a many-valued logic which violates the basic logic behind computers (rules like tertium non datur no longer apply). And remember, it is the system itself which exists in multiple states at once, so noting that we make use of fuzzy set theory with computers doesn't help, because the computer itself still follows classical logic.

All physical quantities have "smallest meaningful values", e.g. the Planck length, so integers suffice.
Even if this is true, it doesn't necessarily matter. It assumes that all physical quantities can be reduced to and totally explained by the parts which make them up (i.e., the interactions between constituents and the constraints of the laws of physics). This doesn't seem to be the case for living systems.


You earlier admitted that you can't define what a concept is. So how do you know that computers can't deal with them? What would dealing with them look like?

Sorry, that was a problem with tense when it comes to modals like "can". I meant "can't now". And the reason we know this is because currently, in order for a computer to do anything, we must supply formal definitions. Whether or not a finite state machine with a suitable amount of interaction with an environment (real or virtual) and a sufficiently complex learning algorithm could deal with concepts is a different question.


But it's not quite meaningless - it is intrinsic to the program that oaks, spruces, etc., all fulfill certain properties, and the intersection of those is at least the generality "tree."

The properties have no meaning to the computer. Computers deal with syntax. They have only rules. The labels we give objects in OOP (or the labels we give functions, variables, or anything else in any code) are simply higher-level ways of interacting with the process, but they are eventually reduced to operations with logic gates and storage. If I say the class "tree" has certain properties such as being a plant, having limbs, having roots, etc., and that something which inherits from "tree", like "oak", will also have properties like leaves, these have meaning only to those who understand English. For the computer, they have no meaning. This is why, when we actually try to get computers to parse language or recognize faces, we don't use OOP but adaptive algorithms in which the representation of a face does not correspond to particular variables (e.g., "Scott's face", "Jane's face", etc.) but to a pattern distribution, in much the same way that mollusks "learn" to adapt to being poked.
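
A minimal Python sketch of that point about labels: the same class structure with English names and with meaningless names behaves identically, because only the structure is executed. The classes are hypothetical.

Code:
class Tree:
    has_roots = True

class Oak(Tree):
    has_leaves = True

class X0:            # the same structure, with names that mean nothing to us
    a0 = True

class X1(X0):
    a1 = True

# Both hierarchies behave identically; the English words carry meaning
# only for the humans reading the source.
print(Oak.has_roots, Oak.has_leaves)   # True True
print(X1.a0, X1.a1)                    # True True
(Python)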
 

idav

Being
Premium Member
The properties have no meaning to the computer. Computers deal with syntax. They have only rules. The labels we give objects in OOP (or the labels we give functions, variables, or anything else in any code) are simply higher-level ways of interacting with the process, but they are eventually reduced to operations with logic gates and storage. If I say the class "tree" has certain properties such as being a plant, having limbs, having roots, etc., and that something which inherits from "tree", like "oak", will also have properties like leaves, these have meaning only to those who understand English. For the computer, they have no meaning. This is why, when we actually try to get computers to parse language or recognize faces, we don't use OOP but adaptive algorithms in which the representation of a face does not correspond to particular variables (e.g., "Scott's face", "Jane's face", etc.) but to a pattern distribution, in much the same way that mollusks "learn" to adapt to being poked.
Brains deal with rules too, and all the objects are just coded. Any system that depends on memory is essentially object oriented. The interface is important because that would be equal to perception: the central point where everything comes together and is perceived as one, just like the cerebral cortex in the brain gets the data, or perceives it, as the neurons' signals are decoded into something of use to us at the conscious level.
 

Leonardo

Active Member
The properties have no meaning to the computer. Computers deal with syntax. They have only rules. The labels we give objects in OOP (or the labels we give functions, variables, or anything else in any code) are simply higher-level ways of interacting with the process, but they are eventually reduced to operations with logic gates and storage. If I say the class "tree" has certain properties such as being a plant, having limbs, having roots, etc., and that something which inherits from "tree", like "oak", will also have properties like leaves, these have meaning only to those who understand English. For the computer, they have no meaning. This is why, when we actually try to get computers to parse language or recognize faces, we don't use OOP but adaptive algorithms in which the representation of a face does not correspond to particular variables (e.g., "Scott's face", "Jane's face", etc.) but to a pattern distribution, in much the same way that mollusks "learn" to adapt to being poked.

Recall my statement, which is free from any anthropocentric definition:
Meaning is the ability to react purposefully to information and as such meaning is relative to a conceptual framework.


Webster defines meaning as:
http://www.merriam-webster.com/dictionary/meaning
1 a: the thing one intends to convey especially by language : purport
  b: the thing that is conveyed especially by language : import
2 : something meant or intended : aim <a mischievous meaning was apparent>
3 : significant quality; especially : implication of a hidden or special significance <a glance full of meaning>
4 a: the logical connotation of a word or phrase
  b: the logical denotation or extension of a word or phrase

Every one of those definitions falls in line with my definition. Because meaning is relative to a conceptual (knowledge) framework, software can interpret meaning from its codification (language). It doesn't matter that the processes reduce themselves to bits, any more than it matters that biological information and behaviors boil down to spike trains.

The notion of processing and storage as separate functions isn't a deal breaker for how a computer processes information versus how a brain does. The differences are the active elements: neurons as opposed to logic gates. That the brain can't process information like a computer is not true, and many savants have the ability to perform arithmetic in a very computer-like way.

Also, software designed for vision processing, such as face recognition, most certainly can be done with OOP. Just look up the OpenCVDotNet project. Oh, and buy a Kinect and knock yourself out; believe me, what can be done is awesome! :flirt:
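
As a rough sketch of what OOP-wrapped vision code can look like, here is face detection behind a class interface. It uses the opencv-python bindings (cv2) rather than OpenCVDotNet, and the image file name is hypothetical.

Code:
import cv2

class FaceDetector:
    def __init__(self):
        # Haar cascade file shipped with the opencv-python package
        path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        self._cascade = cv2.CascadeClassifier(path)

    def detect(self, image_path):
        image = cv2.imread(image_path)                  # BGR image from disk
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # Returns one (x, y, w, h) bounding box per detected face
        return self._cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# faces = FaceDetector().detect("group_photo.jpg")      # hypothetical file name
(Python)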
 

Copernicus

Industrial Strength Linguist
It seems to me that the discussion about OOP largely confuses a programming methodology designed to facilitate categorization and manipulation of objects with cognition--a much more complex phenomenon. I think of "understanding" as the cognitive process of integrating new information with established information. We understand new concepts in terms of how they are similar to, or different from, other things we know about. Human cognition is primarily associative, and we need to be able to model how we integrate information on an associative level before we can make more progress in understanding cognition. There are a lot of interesting proposals out there, but we are really still in the early stages of understanding how brains work.

Any programming language, including classical BASIC, can be used to implement OOP. It is just a matter of setting up the right data structures and accessor functions. There is nothing special about it. OOP is a very interesting programming technique, but it has its problems, not the least of which is the "black box effect" that makes debugging OOP programs so difficult. It can be tremendously useful for certain types of applications, but it really has little to do with AI research itself.
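
A small Python sketch of Copernicus's point (Python standing in for any procedural language): an "object" built from a plain data structure and accessor functions, with no class syntax at all. The account example is invented.

Code:
def make_account(owner, balance=0):
    return {"owner": owner, "balance": balance}   # the "object" is just a record

def deposit(account, amount):                     # accessor/mutator functions
    account["balance"] += amount
    return account

def get_balance(account):
    return account["balance"]

acct = make_account("Ada")
deposit(acct, 100)
print(get_balance(acct))   # 100
(Python)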
 

Copernicus

Industrial Strength Linguist
You're the competition (Wire Illuminator; it does some of the wire tracing our tools do, but not the simulations)? :) Well, you're retired, so I guess that's OK...
I'm an NLP guy. I worked down the hall from our celebrated intelligent graphics mavens. They made some tremendous advances in the company's ability to handle wiring diagrams. Generally speaking, what we did for the company was to use AI techniques to make engineering solutions cheaper and more accurate. But it made those outside the field nervous to hear our work described in academic terms, so they kept changing the name of our organizations and departments until they were able to eliminate almost all mention of what it was that we really did. Now they are having trouble figuring out how to replace us with younger talent, because the company isn't supposed to do what we did. ;)
 

Leonardo

Active Member
It seems to me that the discussion about OOP largely confuses a programming methodology designed to facilitate categorization and manipulation of objects with cognition--a much more complex phenomenon. I think of "understanding" as the cognitive process of integrating new information with established information. We understand new concepts in terms of how they are similar to, or different from, other things we know about. Human cognition is primarily associative, and we need to be able to model how we integrate information on an associative level before we can make more progress in understanding cognition. There are a lot of interesting proposals out there, but we are really still in the early stages of understanding how brains work.

Associative cognition is what ANNs are great for. Integrating ANNs with OOP has proven successful for us.
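
A rough sketch of what integrating an ANN with OOP can look like: a tiny perceptron hidden behind a class interface and trained on an invented AND-like rule. This is purely illustrative, not the system described above.

Code:
class Perceptron:
    def __init__(self, n_inputs, lr=0.1):
        self.weights = [0.0] * n_inputs
        self.bias = 0.0
        self.lr = lr

    def classify(self, x):
        s = sum(w * xi for w, xi in zip(self.weights, x)) + self.bias
        return 1 if s > 0 else 0

    def train(self, x, target):
        # Classic perceptron update: adjust weights in proportion to the error.
        error = target - self.classify(x)
        self.weights = [w + self.lr * error * xi for w, xi in zip(self.weights, x)]
        self.bias += self.lr * error

p = Perceptron(2)
for _ in range(20):
    for x, t in [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]:
        p.train(x, t)

print([p.classify(x) for x in ([0, 0], [0, 1], [1, 0], [1, 1])])   # [0, 0, 0, 1]
(Python)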

Any programming language, including classical BASIC, can be used to implement OOP. It is just a matter of setting up the right data structures and accessor functions. There is nothing special about it. OOP is a very interesting programming technique, but it has its problems, not the least of which is the "black box effect" that makes debugging OOP programs so difficult. It can be tremendously useful for certain types of applications, but it really has little to do with AI research itself.

Well, a good platform should have a rich framework, which is what object-code platforms like Java and .NET have. If you use just straight OOP, as in C++, you are in for some headaches, but Java and .NET really allow for some sweet tricks for integrating third-party software seamlessly. Troubleshooting under those two platforms hasn't been a big problem.
 

LegionOnomaMoi

Veteran Member
Premium Member
It seems to me that the discussion about OOP largely confuses a programming methodology designed to facilitate categorization and manipulation of objects with cognition--a much more complex phenomenon.
That is pretty much what my point was (albeit better stated). A common methodology in NLP is the use of something like the "objects" of OOP in an abstract sense, such as Fillmore's "frames" and the annotation used in FrameNet. But although classes and similar terminology from OOP are used here, they are not used (nor implemented) in the same way. In fact, this approach to parsing languages is (so far as I have seen) quite distinct from any approach designed to enable computers to understand language. Instead, someone who does understand designs a "lexicon" of sorts which is of maximal value for someone with the appropriate algorithm, such that the algorithm can bypass semantics as much as possible with as little cost as possible. This is not an approach which will ever (at least in and of itself; perhaps the increase in our understanding of processing and language will) result in a computer capable of understanding language.

I think of "understanding" as the cognitive process of integrating new information with established information.
I agree, so long as it is understood that "information" here is used informally. Machine learning, just like nonassociative learning in animals with only nervous systems rather than brains, does involve integrating new information with established information. However, it is entirely devoid of semantic content. It is akin to saying that I integrate "new information with established information" when I become startled because someone trying to scare me jumps out of a hiding place and grabs me. My response consists of an increased heart rate, loss of fine motor coordination, perhaps tunnel vision, perhaps (thanks to training) the automated adoption of a particular stance or offensive action, etc. But as soon as I realize it was my friend lying in wait and scaring me as a prank, all of that becomes rather marginal. We can see this with a simple thought experiment: if, instead of a friend whom I recognized shortly after being grabbed by surprise, it was someone I did not recognize, with a facial expression and bodily posture that made her or him look like an attacker rather than a prankster, I would behave entirely differently. The Nobel-prize-winning work with sea snails/slugs and memory, and the resulting model of nonassociative learning, lacks that differentiation. The "learning" process (which involves the integration of information with established information) showed that such "learning" really involves making mistakes: the sea slug will continue to act as if it were being shocked even when it is not, until it is poked enough times without the shock to become desensitized.

This is the learning that computers are currently capable of: nonassociative. So the question becomes: how do we make something learn concepts when the manner in which it learns not only lacks the capacity to acquire any semantic content which might serve as a basis for the integration of additional conceptual knowledge, but was also based on learning models of animals that cannot ever learn concepts?
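
A toy Python sketch of nonassociative learning in this sense, loosely modelled on the sea-slug example: the withdrawal response to repeated pokes fades when no shock follows and resets when one does. The numbers are invented.

Code:
def habituate(stimuli, rate=0.3):
    strength, responses = 1.0, []
    for shock in stimuli:
        # A shock re-sensitizes; a harmless poke desensitizes.
        strength = 1.0 if shock else strength * (1 - rate)
        responses.append(round(strength, 3))
    return responses

print(habituate([False] * 5))            # response fades: [0.7, 0.49, 0.343, 0.24, 0.168]
print(habituate([False, False, True]))   # a shock resets it: [0.7, 0.49, 1.0]
(Python)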

There are a lot of interesting proposals out there, but we are really still in the early stages of understanding how brains work.

Too true.
 