
Is the internet conscious of itself yet ?

LegionOnomaMoi

Veteran Member
Premium Member
You're taking for granted all the things we are preprogrammed to do. It doesn't matter that there has to be a programmer. In nature, nature is the programmer. We are programmed to have fear, love, lust, hunger, as well as the brain being programmed to run our heart, lungs, digestive system, etc.

As I said, it seems to do a good enough job of explaining the mind in terms of algorithms.




That seems very unlikely, since algorithms can rewrite and evaluate themselves just fine, and I've seen no reason to think "consciousness" is anything beyond that.

"Program" is still very meaningful. It's a method, an algorithm, a guide to achieve something. Just because it's easy to describe what it is, doesn't mean that they're limited by anything. "Being human" is a goal you need an algorithm for.

Rather than respond in my own words, I'll use the following excerpt from the neuroscientist R. J. MacGregor, who is and has been an active researcher in the field for at least 50 years. From his latest book on consciousness and the brain (World Scientific, 2006):

Although a simplified computer-like model can provide a limited interpretative view of part of the dynamic signaling of some brain networks in certain circumstances, the overreaching applications of such partial models as a realistic view of the brain as a whole is completely unjustified…
In the 1950s and 1960s artificial intelligence was defining itself with a strong sense of importance for the cold war, and the associated philosophy of computer intelligence, including the famous Turing test for “thought” in computing machines became prominent. In the 1970s, the field was adopted within cognitive psychology groups all over the country forming the restructured and heavily AI-and computer-oriented field of “cognitive science.” In the 1980s a second and highly pervasive wave of ‘computational neural networks’ surged providing many kinds of ‘smart’ and sophisticated adaptive ‘hi-tech’ electronic modules with many widespread practical applications. In the 1990s the computational view of brain function, with its discrete variables, algorithms, and modules became sacrosanct. The present decade has seen the field turn to consciousness study. Two current representative and complementary statements of the computer metaphor are the overall discussion of the current computational litany How the Mind Works, by Steven Pinker, and The Quest for Consciousness, by Cristof Koch. These books, though entertaining and useful for limited purposes, are, like the field they represent, fundamentally flawed and misleading as overall models of the human mind and brain.
Neural theorists have long voiced critical views of this field. The computer metaphor is groundless—resting on an overly reduced interpretation of the brain operations, unjustifiable analogy, and hollow labels. It allows only later, analogical thinking. It incorporates only the surface level of the intrinsic multi-level hierarchy of the brain’s neurophysiology and its natural continuous psychological processes. It has no relation to underlying causes nor any predictive abilities. It gives no guidance regarding the ultimate nature and relations of consciousness and brain. The view imposes a biased, self-limiting overreduction of human consciousness restricted to cognitive information-processing terms, whereas in fact, and this is perhaps the blindness of the computer metaphor, our extracognitive sensibilities are in themselves much more than our secondary conscious cognitive representations and interpretations of them.
 

apophenia

Well-Known Member
The Persinger helmet experiment from the video just posted by shawn001 actually makes my earlier point that there is a 'godspot' in the brain. Why do people see images of god so commonly when the Persinger helmet is used? Why does ketamine produce 'contact with god' or angels, and not random images? Because there is a part of the brain which is all about religious/mystical experience.

So my suggestion that the godspot may have evolved for reasons of behaviour modification is quite realistic.

That would mean that 'god' is real - a real biological attribute, with a crucial function.

Perhaps there is variation in how the godspot works, just as there is variation in human erotic behaviour, but the essential function (conscience in the case of the godspot) remains the same.

Jiminy Cricket lives!
 

apophenia

Well-Known Member
The video also repeats the classic error BTW - that computers have power similar to humans, based on the dubious 1 neurone = 1 transistor model. That is just fascination with our latest toy.
 

idav

Being
Premium Member
The computer metaphor is groundless—resting on an overly reduced interpretation of the brain operations, unjustifiable analogy, and hollow labels. It allows only later, analogical thinking. It incorporates only the surface level of the intrinsic multi-level hierarchy of the brain’s neurophysiology and its natural continuous psychological processes. It has no relation to underlying causes nor any predictive abilities. It gives no guidance regarding the ultimate nature and relations of consciousness and brain.

The video also repeats the classic error BTW - that computers have power similar to humans, based on the dubious 1 neurone = 1 transistor model. That is just fascination with our latest toy.

The computer metaphor is only inefficient when compared to a human brain. The analogy Apophenia mentions is not correct either, because one neuron or single-celled organism is more like one entire computer. The Jeopardy machine had to use something equivalent to 10,000 home computers, which only scratched the surface of what the human mind is already capable of. This is why it takes a network, until we can get one machine that is more powerful and smaller. We've got the basics, so I don't think it will be long. I know people have said this before, but they didn't know as much about the brain as we do now. It took us a long time just to beat the chess champions and even longer to beat Jeopardy champions, so I wonder what's next.
 

PolyHedral

Superabacus Mystic
The computer metaphor is only inefficient when compared to a human brain. The analogy Apophenia mentions is not correct either, because one neuron or single-celled organism is more like one entire computer. The Jeopardy machine had to use something equivalent to 10,000 home computers, which only scratched the surface of what the human mind is already capable of. This is why it takes a network, until we can get one machine that is more powerful and smaller. We've got the basics, so I don't think it will be long. I know people have said this before, but they didn't know as much about the brain as we do now. It took us a long time just to beat the chess champions and even longer to beat Jeopardy champions, so I wonder what's next.
I think Watson's memory bank is so large basically because IBM haven't worked out how associative memory works. (And also so it has the biggest advantage it can reasonably get)
 

Otherright

Otherright
I have been involved in a debate about whether or not science can account for self-awareness (The Debate of God). By self-awareness I mean the experience of 'being', not merely stimulus-response mechanisms.

I have encountered many times (in that thread and on other forums) the suggestion that science has explained self-awareness, and that it is 'merely' emergent behaviour which results from the complexity of interactions.

So, a very rough, quick analysis of the net gives me the following numbers - with four-byte (IPv4) addresses, the net can address about 4 billion computers, and apparently that number of addresses will soon be reached. So we are in that order of magnitude. A typical current laptop using a Core i5 processor has a CPU with around 380 million transistors, plus all the support chips, say 500 million transistors on the motherboard. Plus memory, somewhere around 4 gigabytes. So let's just say for argument's sake that each computer has about 10 billion 'brain cells'.
(I did say very rough quick analysis, but it will do for the sake of argument).

4 billion times 10 billion is 40 billion billion, or 40 × 10^18.

Something like that anyway. Way more than the number of cells in a human body ( around 100 trillion ), and hugely more than the number of neurones in the brain ( around 100 billion is a common estimate, plus 100 billion glial cells).

Also, countless interfaced devices (cameras, microphones and all kinds of scientific equipment).

In other words, the internet is a very complex system, with computational power exceeding a human brain, a staggering array of electronic sensors, and a store of information which is orders of magnitude beyond encyclopedic.
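
(A quick Python sketch of the back-of-the-envelope arithmetic above, using the poster's own rough figures; none of these are measured values.)

```python
# Rough figures quoted above -- order-of-magnitude guesses only.
computers = 4e9            # ~4 billion IPv4 addresses
parts_per_computer = 1e10  # ~10 billion transistors/"brain cells" per machine

total_parts = computers * parts_per_computer
print(f"{total_parts:.0e}")             # 4e+19, i.e. 40 billion billion

neurons_in_brain = 1e11                 # ~100 billion (common estimate)
cells_in_body = 1e14                    # ~100 trillion
print(total_parts / neurons_in_brain)   # ~4e8 times more "parts" than neurons
print(total_parts / cells_in_body)      # ~4e5 times more than cells in a body
```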


So is there any evidence that this mind-boggling complexity has produced a self-aware system ? Does the internet know it exists ?

And perhaps more importantly, is there any way to determine if the net is conscious ?

Because if there isn't a way to determine this, then science has no real understanding of self-awareness, only the vague notion of 'emergent behaviour', and this vague notion cannot currently determine whether or not a complex system is aware of its existence.

BTW ... the numerical analysis of number of computers, number of transistors etc is not something I could possibly calculate correctly, so please focus on the general argument rather than disputing those numbers.

No, it isn't. Self-awareness has nothing to do with simple computation.
 

PolyHedral

Superabacus Mystic
That's what classical cognitive science thought as well. I'm not saying the whole field now believes that the "mind" is indeterministic, nondeterministic (or non-computable, a term used in the Penrose-Hameroff model), but the algorithmic approach has definitely been abandoned in the classical computation sense (e.g., Turing and von Neumann), and there are a number of specialists in fields related to A.I. and cognitive science who do not believe that even the types of algorithms used in ANNs (or any type of algorithm) are sufficient.
The result of an ANN is still an algorithm, just one that's very hard to generate in an understandable manner.
Actually a form of his argument, or at least its implication, is now widely accepted. Once we started to build computers and programs which could learn and exhibit extremely complex behavior, and yet not even come close to consciousness, scientists across fields began to pay more attention to what it means to understand.
I still see no reference anywhere for what it would mean for a computer to "understand." The goal has not been defined rigorously; it's therefore unfair to say it hasn't been achieved.

And they were late to the game. The work on metaphor in cognition by Lakoff and Johnson (1980) and Lakoff (1986) should have already made cognitive scientists aware of what is involved in understanding, concepts, etc. Categorization, generalization, prototypicality, embodied cognition, etc., were all around, but these ideas were coming from linguists, and as the Chomskyan paradigm still dominated both linguistics and cognitive science, it took some time for the cognitive linguistic framework to gain wider acceptance.
It'd be silly to deny that there is a lot of meaning and structure in language, and the brain's ability to parse it is probably very badly organised, but why would that suggest it's impossible to implement as a logic machine or some other sort of deterministic procedure?


First, the original argument was that the room could process symbols (like a computer) without understanding. The first main counter-argument (or at least the one Searle considered valid enough to change his argument in response) was to take the human as the machine, and then his critique holds. The issue is that processing and understanding are not the same. Or, more technically, pattern recognition and conceptual representation are not the same.
Searle fails to define what "understanding" constitutes. By definition, the Chinese Room behaves identically to an actual Chinese speaker. Searle is perhaps correct in the opinion that the human cannot understand Chinese, since he is just mechanically executing instructions. However, for consistency, some component of the room must; the most logical choice is the "program" - the book's contents.

Because, as we learned when we started writing programs (and even building machines which took much more advantage of the massive connectionism of the neural system rather than simply simulating it through a program), there is a very large gap between recognition and understanding. Our most sophisticated learning machines/programs allow advanced responses, but (as we quickly learned) despite their ability to behave chaotically, adapt, and so on, conceptual representation and semantic memory are a whole different ballgame. So currently the issue among A.I. researchers is what a machine capable of "understanding" might even involve, if it is possible at all.
What do you call data structures, if not "conceptual representation?" What do you call computer algebra systems, or logical inference engines? In this case, we have a "duck" that can do algebra, and the computer can quack just as well as a mathematician with a pad of paper. What's the magic thing the mathematician is doing that the computer isn't?
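
To make the phrase "logical inference engine" concrete, here is a toy forward-chaining sketch in Python (the predicates are invented for illustration); whether deriving facts this way counts as "understanding" is exactly what is in dispute:

```python
# Toy forward-chaining inference: derive new facts from rules until nothing
# new can be added. Purely illustrative; not any particular production system.
rules = [
    ({"is_hawk"}, "is_bird"),
    ({"is_bird"}, "is_animal"),
    ({"is_bird"}, "can_fly"),
]

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # a "new fact", derived purely formally
                changed = True
    return facts

print(infer({"is_hawk"}, rules))
# {'is_hawk', 'is_bird', 'is_animal', 'can_fly'} -- in some order
```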

Rather, what I was trying to say is that if we can create a conscious entity, it will mean creating something which is self-determining and has "free will" in that its "mind" will allow it to choose actions which are at least partially determined by the "mind" itself in a non-computable manner.
There has been no physical suggestion of the universe being non-computable in any manner. Any variant of "downward causation" doesn't make sense without basically re-writing physics. Feel free to do that, if you can, but I've never seen anyone attempt it.
There are embodied accounts of mathematics. But I'm still not sure you understand what I mean (which is my fault as I haven't really explained it; then again, the subject is complex and is hotly debated). For example, at very basic levels of language we use spatial and temporal notions to illustrate abstract notions.
Like I said, the abstractions are abstractions of things we directly have experience with - mostly space and time.
fMRI studies also seem to indicate that when we store concepts like "hammer" or "cup" at least part of this storage involves a motor program. Other studies indicate that abstract concepts have spatial directionality: sad is down, hope is up, etc.
I'd really like to see the citations for those.

Embodied cognition isn't just the notion that our thought is influenced by our environment, but that highly abstract levels of conceptual representation and categorization are extensions of concepts based in perceptual-motor experience.
Since we aren't self-reflexive enough to consciously invent conceptions from scratch, it seems almost tautological that that's true.


There is disagreement. But these are more details than anything else. They wouldn't prevent us from modeling consciousness if we had any idea how the brain does what it does which allows us to be self-aware, conscious, store abstract generalized concepts, categorize, etc. Things that 50 years ago (even 30 years ago) were thought to be straightforward and simple (like categorization) have since the 1990s attracted much more attention because of their complexity.
You're trying to engineer a device based on a vague specification. ("Details" are important; very minor differences differentiate Earth from Mars.) It should be obvious that that will never work.

It is. DeepQA (the underlying algorithms) is a "learning" connectionist network (neural network). It's a supervised ANN, which learns by adjusting weights. During the actual game, the way it "decides" to answer a question is whether or not the weighted sum reaches the "neural threshold."
Watson has to parse free-form text; that's basically impossible with a neural network. See next comment.

Bayesian models, fuzzy logic, etc., are ALL used in ANNs.
This and the previous comment don't agree with anything I have ever read on the subject or my own common sense as an engineer. Once you've got the actual details of Bayesian probability, fuzzy logic, or any other type of hypothesis engine (which I am informed by IBM is what DeepQA actually uses) you don't introduce order-of-magnitude inefficiencies by then running that inside an ANN; you just work through the logic on its own and read off the answer. This also allows you to deliver content in almost any form, as opposed to an ANN that is limited to a pre-defined number of semi-linear inputs. (i.e. the ANN is not automatically aware of the spatial relationship between pixels)
Actually Being Human is a BBC show. In all seriousness, you're assuming this, and from what I can tell the basis of your beliefs about consciousness is an outdated and largely abandoned view. I could be wrong, of course, but so far you've mentioned a 30-year-old book and a website.
A website written by a professional cognitive scientist (and Yudkowsky's cohorts), not to mention Hofstadter's PhD in cog. sci. The details of the brain's software have not changed significantly, as far as I know.

Not by definition. That's simply the limit of our capacity to model systems.
At the point the initial conditions are relevant, nothing has happened yet. Of course it's impossible for a system to affect a state before anything has happened.

That's true. But there is good reason to think that there aren't.
Which is? I think we're talking about subtly different ideas of what an algorithm is.
 

LegionOnomaMoi

Veteran Member
Premium Member
The computer metaphor is only inefficient when compared to a human brain.

It's actually not just inefficient, but misleading at best and fundamentally flawed and baseless at worst, even when it comes to animals with cortices.


It took us a long time just to beat the chess champions and even longer to beat Jeopardy champions, so I wonder what's next.
The chess champion thing was pretty unimpressive from an A.I. standpoint. Chess follows very specific rules. Computers are great at following very specific rules. That's why even basic chess games can have difficulty levels which most good chess players can't beat.

Watson is a bit different. Now we're moving into ANN territory with supervised training. But again, while the learning algorithms behind Watson's code (and some advanced hardware) allowed programmers to "train" it to weigh possible interpretations of a question and use an interpretation if it matched a threshold level of probabilistic success, Watson doesn't understand anything. The program can return an answer to a question like "who was the first president of the United States," but it does not know what "first" or "president" or any of the other concepts involved in any question actually mean. This type of research is very useful (the better our natural language processing algorithms, the better, for example, our search engines will be). It isn't getting us near consciousness or artificial programs with "understanding/knowledge."
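
To illustrate the kind of supervised, threshold-based weight adjustment described above, here is a minimal perceptron-style sketch in Python. The "evidence scores" are invented, and this is not how Watson/DeepQA is actually implemented; it only shows what "adjusting weights until a threshold is reached" means:

```python
# Minimal supervised threshold unit: adjust weights until the weighted sum of
# the evidence for a candidate answer crosses a decision threshold (0 here).
def train(examples, epochs=20, lr=0.1):
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction          # supervised signal
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Hypothetical training data: two evidence scores per candidate, 1 = correct.
examples = [([0.9, 0.8], 1), ([0.2, 0.1], 0), ([0.7, 0.9], 1), ([0.1, 0.3], 0)]
weights, bias = train(examples)

candidate = [0.8, 0.7]
score = sum(w * x for w, x in zip(weights, candidate)) + bias
print("answer" if score > 0 else "pass")   # answers only above the threshold
```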
 

apophenia

Well-Known Member
No, it isn't. Self-awareness has nothing to do with simple computation.

I recommend you read the whole thread if you have the time and inclination.
LegionOnomaMoi is a researcher into consciousness/awareness and idav is a network programmer (PolyHedral is also a computer professional, I think), and the dialogue between them has been one of the most illuminating conversations on the subject you are likely to read outside of academia.
 

apophenia

Well-Known Member
BTW .. that post about the 'godspot' did not belong in this thread, I thought I was posting in the 'Debate of God' thread. Hadn't had my morning cup of tea yet ...:sleep:
 

religion99

Active Member
I have been involved in a debate about whether or not science can account for self-awareness (The Debate of God). By self-awareness I mean the experience of 'being', not merely stimulus-response mechanisms.

I have encountered many times (in that thread and on other forums) the suggestion that science has explained self-awareness, and that it is 'merely' emergent behaviour which results from the complexity of interactions.

So, a very rough, quick analysis of the net gives me the following numbers - with four-byte (IPv4) addresses, the net can address about 4 billion computers, and apparently that number of addresses will soon be reached. So we are in that order of magnitude. A typical current laptop using a Core i5 processor has a CPU with around 380 million transistors, plus all the support chips, say 500 million transistors on the motherboard. Plus memory, somewhere around 4 gigabytes. So let's just say for argument's sake that each computer has about 10 billion 'brain cells'.
(I did say very rough quick analysis, but it will do for the sake of argument).

4 billion times 10 billion is 40 billion billion, or 40 × 10^18.

Something like that anyway. Way more than the number of cells in a human body ( around 100 trillion ), and hugely more than the number of neurones in the brain ( around 100 billion is a common estimate, plus 100 billion glial cells).

Also, countless interfaced devices (cameras, microphones and all kinds of scientific equipment).

In other words, the internet is a very complex system, with computational power exceeding a human brain, a staggering array of electronic sensors, and a store of information which is orders of magnitude beyond encyclopedic.


So is there any evidence that this mind-boggling complexity has produced a self-aware system ? Does the internet know it exists ?

And perhaps more importantly, is there any way to determine if the net is conscious ?

Because if there isn't a way to determine this, then science has no real understanding of self-awareness, only the vague notion of 'emergent behaviour', and this vague notion cannot currently determine whether or not a complex system is aware of its existence.

BTW ... the numerical analysis of number of computers, number of transistors etc is not something I could possibly calculate correctly, so please focus on the general argument rather than disputing those numbers.

Answer is no.

Reason is "Biological Substance" and "Physical Substance" are fundamentally incompatible.
 

apophenia

Well-Known Member
Answer is no.

Reason is "Biological Substance" and "Physical Substance" are fundamentally incompatible.

Biological substances are physical substances. Also, there have already been successful experiments involving digital interfacing to CNS cells - and technology such as bio-gates, which are like transistors that are triggered by the presence of specific organic molecules. Bio-gates are used to sense organic molecules such as phosphates in the water supply, among other uses.

These issues are more related to a discussion about the possible development of cyborgs (cybernetic organisms - hybrids) than consciousness.

I found these pages for you to look at -

Brain
The Neuron–Semiconductor Interface - Bioelectronics: From Theory to Applications - Fromherz - Wiley Online Library
'BrainGate' Brain-Machine-Interface takes shape
 

religion99

Active Member
Biological substances are physical substances. Also, there have already been successful experiments involving digital interfacing to CNS cells - and technology such as bio-gates, which are like transistors that are triggered by the presence of specific organic molecules. Bio-gates are used to sense organic molecules such as phosphates in the water supply, among other uses.

These issues are more related to a discussion about the possible development of cyborgs (cybernetic organisms - hybrids) than consciousness.

I found these pages for you to look at -

Brain
The Neuron–Semiconductor Interface - Bioelectronics: From Theory to Applications - Fromherz - Wiley Online Library
'BrainGate' Brain-Machine-Interface takes shape

You cannot create a single living cell without at least one living cell as input.
 

LegionOnomaMoi

Veteran Member
Premium Member
The result of an ANN is still an algorithm, just one that's very hard to generate in an understandable manner.
The result is not an algorithm. Rather, the internal adjustments which return the results are specified by algorithms.

I still see no reference anywhere for what it would mean for a computer to "understand." The goal has not been defined rigorously; it's therefore unfair to say it hasn't been achieved.
Actually there are a number of very rigorously defined concepts related to understanding (including consciousness, conceptual representation, self-awareness, intelligence, etc.). There isn't agreement on all the related concepts, but one thing which is clear is that for a computer to understand, it must have an internal representation of concepts. I provided a number of links to articles I made available.


It'd be silly to deny that there is a lot of meaning and structure in language, and the brain's ability to parse it is probably very badly organised, but why would that suggest it's impossible to implement as a logic machine or some other sort of deterministic procedure?
First, the brain's ability to "parse it" is excellent. The problem isn't that our brain or language is "badly organized" but that concepts are both abstract and usually very general. That's why even the most advanced computers with the most state-of-the-art A.I. programming are so very far from "understanding." Computers are excellent at following very well-defined procedures. They are terrible at abstraction, inference, generalization, etc. When I define something in computer code, it refers to something specific, is stored specifically, and unless some user or programmer changes it, it will refer only to that very specific thing. That isn't conceptual representation. What's so vital to language and thought is the capacity for graded membership in categories, abstraction, and prototypicality. I can refer to an animal, a bird, and a hawk, and understand that the hawk is also both a bird and an animal. What's more, I don't need to store some specific "hawk" concept, but a prototypical "hawk" which I can then apply to specific instantiations. That's fundamental to thought and language. Otherwise, for every object, action, etc., we observed, we would have to store that specific observation separately. I couldn't talk about anything. "Understanding" involves the ability to represent abstractions from specific instantiations. It allows me to refer to cars, people, faces, and everything else without specifically defining the exact nature of "car," which is essential because no such exact definition exists.
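
A rough Python sketch of the contrast being drawn here: exact, all-or-nothing lookup versus graded similarity to a stored prototype. The feature vectors are invented purely for illustration:

```python
# Graded membership: how close is an instance to a stored prototype?
def similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

prototypes = {
    # hypothetical features: [has_feathers, flies, has_beak, size]
    "bird": [1.0, 1.0, 1.0, 0.3],
}

hawk    = [1.0, 1.0, 1.0, 0.4]
penguin = [1.0, 0.0, 1.0, 0.6]

# Exact lookup: the penguin either is or is not the stored "bird" entry.
print(penguin == prototypes["bird"])            # False -- all or nothing

# Graded membership: both are birds, one is a more prototypical fit.
print(similarity(hawk, prototypes["bird"]))     # ~1.0
print(similarity(penguin, prototypes["bird"]))  # ~0.8
```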


Searle fails to define what "understanding" constitutes. By definition, the Chinese Room behaves identically to an actual Chinese speaker. Searle is perhaps correct in the opinion that the human cannot understand Chinese, since he is just mechanically executing instructions. However, for consistency, some component of the room must; the most logical choice is the "program" - the book's contents.
The whole paper IS Searle defining understanding. That's the point. I can't read Chinese. Somebody creates an exhaustive manual for me so that whenever I am given a series of Chinese characters, I can go to my manual, find which Chinese characters constitute a meaningful response, write them down, and then hand my response back. However, I don't have a clue what the question I was asked was, nor do I have any idea what my response meant. That's how Searle is defining understanding in terms of the Turing test. It is possible, given a sophisticated enough set of specific processing procedures, for a computer to take in audio input, find the proper response procedure, return it, and have no idea what the input or output meant. The point is that understanding language involves understanding concepts, not just reading input and following specified procedures to return output.
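
A toy Python version of the "manual" in this scenario: a pure lookup from input symbols to prescribed output symbols, with nothing anywhere in the program that represents what any of them mean (the strings are placeholders):

```python
# The "rule book": input symbols mapped to the symbols to hand back.
rule_book = {
    "question_A": "response_A",
    "question_B": "response_B",
}

def room(symbols):
    # Find the matching entry and hand back the prescribed symbols.
    return rule_book.get(symbols, "default_response")

print(room("question_A"))  # "response_A" -- produced without any semantics
```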


What do you call data structures, if not "conceptual representation?"
I'm working on code right now to present stimuli to fMRI subjects. Matlab actually has a terrible approximation of objects (in the OOP sense) called structures. In any event, I need my script to record latencies, response times, etc., for any subject. So I create a function which reads in a list of stimuli, and create variables to store different input responses. This is not conceptual representation, nor is any much more complicated data structure. For any datum, the computer uses a specific sequence of bits corresponding to that specific datum. And this is true for all data in a computer. When I store the concept of "car," however, I'm storing a concept rather than a memory of a specific car. I went over this in more detail earlier:
I don't store memory of particular faces. Or rather, it's not my memory of particular faces which allows me to easily recognize another human face, these :sarcastic:eek::(;):cool: faces, animal faces, or Picasso faces. The same is true for ALL concepts. In my memory, I store an abstract, conceptual "face" which allows me to recognize instances of actual faces, even ones as different as my brother's face and this : ) face. That's fundamentally different from how computers remember things. There's no computer program in the world that can be trained to recognize human faces, and then can be shown a raccoon face or this :p face and identify it as a face. In fact, even if our facial recognition software were much, much better, it still couldn't do that. And this realization goes back to the Platonic notion of "forms" (eidea): our ability to store not just abstract notions like addition or beauty, but abstractions of physical things (car, face, horse, chair, etc.). It isn't "recognition," because we can apply these definitions to things we have never encountered. The first time a person sees a cartoon face, they recognize it as a face. A child of a fairly young age who has never seen an elephant before will still call it an animal.



There has been no physical suggestion of the universe being non-computable in any manner. Any variant of "downward causation" doesn't make sense without basically re-writing physics. Feel free to do that, if you can, but I've never seen anyone attempt it.

I thought I already provided such an attempt, actually more than one. But again, here's the Penrose-Hameroff model as explained in Hameroff's journal article.



I'd really like to see the citations for those.

I'll do you one better. I'll upload actual research articles for you:

Surface-based information mapping reveals crossmodal vision-action representations in human parietal and occipitotemporal cortex

Somatotopic Representation of Action Words in Human Motor and Premotor Cortex

Action word meaning representations in cytoarchitectonically defined primary and premotor cortices



Since we aren't self-reflexive enough to consciously invent conceptions from scratch, it seems almost tautological that that's true.
The idea that things like verb tenses, modal verbs, and similar linguistic devices (and that's limiting embodied cognition to language, which it isn't) "seems almost tautological[ly] true" ?
 

LegionOnomaMoi

Veteran Member
Premium Member
You're trying to engineer a device based on a vague specification. ("Details" are important; very minor differences differentiate Earth from Mars.) It should be obvious that that will never work.

I'm not sure what you mean. First of all, the fact that I'm not clearly defining a lot of things is because I'd need a book to do so. That's what intro to cog sci textbooks are for. Not forum posts. It doesn't mean the field is using some "vague specification" for things I'm talking about.

Watson has to parse free-form text; that's basically impossible with a neural network. See next comment.
On what are you basing your understanding of what an ANN is?


Once you've got the actual details of Bayesian probability, fuzzy logic, or any other type of hypothesis engine (which I am informed by IBM is what DeepQA actually uses)
See, this is why I don't think you are understanding what I'm talking about. Koski and Noble wrote an entire book on Bayesian networks, Bayesian Networks: An Introduction (2009), and there are similar books (e.g., Approximation Methods for Efficient Learning of Bayesian Networks). The same is true of fuzzy logic (see, e.g., Fuzzy Logic and Neural Networks: Basic Concepts and Application, Fuzzy Neural Network Theory and Application, Flexible Neuro-fuzzy Systems: Structures, Learning, and Performance Evaluation).



A website written by a professional cognitive scientist (and Yudkowsky's cohorts)
LessWrong? Which professional cognitive scientist? (Yudkowsky provided the seed, but from what I can tell it's open to anyone.) Also, it's neither an academic source nor is it even mainly about consciousness or cognitive science. It's literally about what it says: "the art of finding truth and becoming less wrong about things."

not to mention Hofstadter's PhD in cog. sci. The details of the brain's software have not changed significantly, as far as I know.
But how would you know, unless you are up to date on cognitive science research? Or even that talking about brain software has any meaning? Again:
the following excerpt from the neuroscientist R. J. MacGregor, who is and has been an active researcher in the field for at least 50 years. From his latest book on consciousness and the brain (World Scientific, 2006):
The computer metaphor is groundless—resting on an overly reduced interpretation of the brain operations, unjustifiable analogy, and hollow labels. It allows only later, analogical thinking. It incorporates only the surface level of the intrinsic multi-level hierarchy of the brain’s neurophysiology and its natural continuous psychological processes. It has no relation to underlying causes nor any predictive abilities. It gives no guidance regarding the ultimate nature and relations of consciousness and brain.




At the point the initial conditions are relevant, nothing has happened yet. Of course it's impossible for a system to affect a state before anything has happened.

"Initial conditions" can refer to any point in a system's state; it is just the point at which one begins to predict the system's future behavior. For example, when modeling the activity of a particular neuron, the state at T0 isn't when the neuron began to exist, it's simply when we begin to speak of its activity through time. I can talk about the initial conditions of my brain right now. And if I am running some scanning procedure on a human brain, the initial conditions are simply when I start the procedure. In neurocomputational models, whether of an individual neuron or neural populations, initial conditions don't refer to a period before anything happened in the brain. And as I model the activity of neural populations, they exhibit behavior which appears to indicate self-determinism.

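As a concrete (and deliberately over-simplified) example of "initial conditions" in this sense, here is a toy leaky integrate-and-fire neuron in Python, integrated forward from whatever membrane state it happens to be in at T0. The parameter values are arbitrary:

```python
# Toy leaky integrate-and-fire neuron. The "initial condition" v0 is just the
# state when we start observing (T0), not the moment the neuron came to exist.
def simulate(v0, current=1.5, tau=10.0, v_rest=0.0, v_thresh=1.0,
             dt=0.1, steps=500):
    v = v0
    spikes = []
    for step in range(steps):
        dv = (-(v - v_rest) + current) / tau   # leaky integration
        v += dv * dt
        if v >= v_thresh:                      # threshold crossed: spike, reset
            spikes.append(step * dt)
            v = v_rest
    return spikes

# Two different initial conditions give different spike timings.
print(len(simulate(v0=0.0)), len(simulate(v0=0.9)))
```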


Which is? I think we're talking about subtly different ideas of what an algorithm is.
An algorithm is an explicitly defined set of rules/operations.
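
A textbook example in exactly this sense: an explicitly defined set of rules applied until a stopping condition is met (Euclid's method for the greatest common divisor):

```python
def gcd(a, b):
    while b != 0:        # repeat a fixed, fully specified rule...
        a, b = b, a % b  # ...until the stopping condition is met
    return a

print(gcd(1071, 462))    # 21
```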
 

shawn001

Well-Known Member
You cannot create a single living cell without at least one living cell as input.

Life As We Know It Nearly Created in Lab


Now scientists have created something in the lab that is tantalizingly close to what might have happened. It's not life, they stress, but it certainly gives the science community a whole new data set to chew on.

The researchers, at the Scripps Research Institute, created molecules that self-replicate and even evolve and compete to win or lose. If that sounds exactly like life, read on to learn the controversial and thin distinction.


Life As We Know It Nearly Created in Lab | LiveScience

We will be able to pretty soon, it seems.
 

idav

Being
Premium Member
The result is not an algorithm. Rather, the internal adjustments which return the results are specified by algorithms.



I'm working on code right now to present stimuli to fMRI subjects. Matlab actually has a terrible approximation of objects (in the OOP sense) called structures. In any event, I need my script to record latencies, response times, etc., for any subject. So I create a function which reads in a list of stimuli, and create variables to store different input responses. This is not conceptual representation, nor is any much more complicated data structure. For any datum, the computer uses a specific sequence of bits corresponding to that specific datum. And this is true for all data in a computer. When I store the concept of "car," however, I'm storing a concept rather than a memory of a specific car. I went over this in more detail earlier:
This concept in the brain is stored as an algorithm, is it not?
 

idav

Being
Premium Member
It's actually not just inefficient, but misleading at best and fundamentally flawed and baseless at worst, even when it comes to animals with cortices.
It is only flawed when trying to achieve AI. I've suggested that achieving AI and awareness are two different things. At the basic level, despite how we analyze information in our brains, I'm interested in what the data in our minds is actually made of. I would think it is no different than any other data, nonmaterial yet based on physical constructs.
It isn't getting us near consciousness or artificial programs with "understanding/knowledge."
Really? Yet Watson gets us "more into ANN territory"?
 

LegionOnomaMoi

Veteran Member
Premium Member
This concept in the brain is stored as an algorithm, is it not?
Let me start my answer with a fairly recent issue in mathematics that relates because of computers. The basic idea of a function such as f(x) is fairly simple. Basically, for each acceptable input (the domain) the function specifies a "rule" (or algorithm) which takes that input and returns a single output (an element in the codomain). Not long ago, mathematicians spoke of "multi-valued functions" which could return multiple outputs for the same input. Well, great! Certainly humans don't have a problem assigning multiple "outputs" to a single "input." But computers... let's just say the idea of multi-valued functions ran into problems when it came to computer science. If I specify a rule for a computer to follow (for every input x, return y), I have to be extremely specific. So, for example, although the domain of the sine function is -infinity to infinity, the codomain is not (it's between -1 and 1). Which means that for every computer which computes, or is capable of computing, outputs for sine, someone had to specify exactly how each input maps to a single output in that range, because a computer can't handle ambiguity. And this is mathematics, something computers are great at. But while we can easily understand periodic functions which are defined for any value x but only return a value y within a particular range, a computer must have that spelled out completely. In fact, if memory serves, one of the reasons the square root function is now associated only with the positive (principal) root is because of computers; some time ago the square root of, say, 100 could be +10 or -10, as both (when squared) equal 100. Computers, however, can't handle that kind of simple ambiguity.
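
A small Python illustration of the point: as implemented on a computer, these are single-valued functions with an explicitly specified output, and any "multi-valued" answer has to be spelled out as a specific data structure:

```python
import math

print(math.sqrt(100))          # 10.0 -- only the principal (positive) root
print(math.sin(12345.678))     # always lands in [-1, 1], whatever the input

# A "multi-valued" answer must be made explicit and specific, e.g. by
# returning both candidates in a container:
def both_square_roots(x):
    r = math.sqrt(x)
    return (r, -r)

print(both_square_roots(100))  # (10.0, -10.0)
```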

That's an algorithm. It's what computers are great with. The reason that, after years and years of computational linguistics, the best computers are still worse than children at language is that they don't handle ambiguity well. Humans are fantastic at it.

Concepts lack specificity. The concept "car" can be associated with any number of outputs for a human, because our brains are fundamentally different from computers.
Really? Yet Watson gets us "more into ANN territory"?
ANNs aren't intelligent and they don't store concepts. They are better than classical programming when it comes to certain tasks, because they mimic the structure of the brain. But they do so in such a simplistic fashion that they don't get us anywhere near "knowledge/understanding."
 