That's what classical cognitive science thought as well. I'm not saying the whole field now believes that the "mind" is indeterministic, nondeterministic, or non-computable (a term used in the Penrose-Hameroff model), but the algorithmic approach in the classical computational sense (e.g., Turing and von Neumann) has definitely been abandoned, and there are a number of specialists in fields related to A.I. and cognitive science who believe that not even the types of algorithms used in ANNs (or any type of algorithm) are sufficient.
The result of training an ANN is still an algorithm, just one that's very hard to express in an understandable form.
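To make that concrete, here's a minimal sketch (plain Python, with made-up weights) of what a trained network reduces to: a fixed, deterministic sequence of arithmetic steps, i.e., an algorithm, even if no human wrote those numbers or can easily read meaning off them.

```python
import math

# A tiny "trained" network. The weights are fixed numbers (invented
# here for illustration), so evaluating the net is just an ordinary
# deterministic procedure: the same input always gives the same output.
W_HIDDEN = [[0.8, -0.3], [0.1, 0.9]]   # input -> hidden weights
W_OUT = [1.2, -0.7]                    # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    """One forward pass: weighted sums plus a squashing function."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
              for row in W_HIDDEN]
    return sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)))

print(forward([1.0, 0.5]))  # deterministic: prints the same value every run
```

The "hard to understand" part is just that the weights carry no human-readable meaning, not that anything non-algorithmic is going on.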
Actually, a form of his argument, or at least its implication, is now widely accepted. Once we started to build computers and programs which could learn and exhibit extremely complex behavior, yet not come even close to consciousness, scientists across fields began to pay more attention to what it means to understand.
I still see no reference anywhere for what it would
mean for a computer to "understand." The goal has not been defined rigorously; it's therefore unfair to say it hasn't been achieved.
And they were late to the game. The work on metaphor in cognition by Lakoff and Johnson (1980) and Lakoff (1987) should already have made cognitive scientists aware of what is involved in understanding, concepts, etc. Categorization, generalization, prototypicality, embodied cognition, and so on were all around, but these ideas were coming from linguists, and as the Chomskyan paradigm still dominated both linguistics and cognitive science, it took some time for the cognitive linguistic framework to gain more widespread acceptance.
It'd be silly to deny that there is a lot of meaning and structure in language, and the brain's machinery for parsing it is probably very badly organised, but why would that suggest it's impossible to implement as a logic machine or some other sort of deterministic procedure?
First, the original argument was that the room could process symbols (like a computer) without understanding. The first main counter-argument (or at least the one Searle felt was serious enough to merit changing his argument) was to take the human as the whole machine, and there his critique still holds. The issue is that processing and understanding are not the same. Or, more technically, pattern recognition and conceptual representation are not the same.
Searle fails to define what "understanding" constitutes. By definition, the Chinese Room behaves identically to an actual Chinese speaker. Searle is perhaps correct that the human cannot understand Chinese, since he is just mechanically executing instructions. However, for consistency, some component of the room must; the most logical choice is the "program" - the book's contents.
Because, as we learned when we started writing programs (and even building machines which took much more advantage of the massive connectionism of the neural system rather than simply simulating it through a program), there is a very large gap between recognition and understanding. Our most sophisticated learning machines/programs allow advanced responses, but (as we quickly learned) despite their ability to behave chaotically, adapt, and so on, conceptual representation and semantic memory are a whole different ballgame. So currently the issue among A.I. researchers is what a machine capable of "understanding" might even involve, if it is possible at all.
What do you call data structures, if not "conceptual representation?" What do you call computer algebra systems, or logical inference engines? In this case, we have a "duck" that can do algebra, and the computer can quack just as well as a mathematician with a pad of paper. What's the magic thing the mathematician is doing that the computer isn't?
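For what it's worth, here's a toy sketch of what I mean by an inference engine: facts and rules are ordinary data structures, and "drawing a conclusion" is mechanical pattern matching over them. (Everything here is invented for illustration.)

```python
# A toy forward-chaining inference engine. Facts and rules are plain
# data structures, and deriving a conclusion is mechanical pattern
# matching over them. All names here are invented for illustration.
facts = {("socrates", "is", "human")}

# Each rule: if (?x, rel, obj) holds, conclude (?x, rel2, obj2).
rules = [
    (("?x", "is", "human"), ("?x", "is", "mortal")),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (_, pv, po), (cs, cv, co) in rules:
            for (fs, fv, fo) in list(derived):
                # The premise matches if relation and object agree;
                # the "?x" slot binds to the fact's subject.
                if pv == fv and po == fo:
                    conclusion = (fs if cs == "?x" else cs, cv, co)
                    if conclusion not in derived:
                        derived.add(conclusion)
                        changed = True
    return derived

print(forward_chain(facts, rules))
# -> {('socrates', 'is', 'human'), ('socrates', 'is', 'mortal')}
```

If that doesn't count as conceptual representation, it would help to hear what the extra ingredient is supposed to be.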
Rather, what I was trying to say is that if we can create a conscious entity, it will mean creating something which is self-determining and has "free will" in that its "mind" will allow it to choose actions which are at least partially determined by the "mind" itself in a non-computable manner.
Nothing in physics suggests that the universe is non-computable in any manner. Any variant of "downward causation" doesn't make sense without basically rewriting physics. Feel free to do that, if you can, but I've never seen anyone attempt it.
There are embodied accounts of mathematics. But I'm still not sure you understand what I mean (which is my fault, as I haven't really explained it; then again, the subject is complex and hotly debated). For example, at very basic levels of language we use spatial and temporal notions to illustrate abstract notions.
Like I said, the abstractions are abstractions of things we have direct experience with - mostly space and time.
fMRI studies also seem to indicate that when we store concepts like "hammer" or "cup" at least part of this storage involves a motor program. Other studies indicate that abstract concepts have spatial directionality: sad is down, hope is up, etc.
I'd really like to see the citations for those.
Embodied cognition isn't just the notion that our thought is influenced by our environment, but that highly abstract levels of conceptual representation and categorization are extensions of concepts based in perceptual-motor experience.
Since we aren't self-reflexive enough to consciously
invent conceptions from scratch, it seems almost tautological that that's true.
There is disagreement. But these are more details than anything else. They wouldn't prevent us from modeling consciousness if we had any idea how the brain does whatever it is that allows us to be self-aware, conscious, store abstract generalized concepts, categorize, etc. Things that 50 years ago (even 30 years ago) were thought to be straightforward and simple (like categorization) have since the 1990s attracted much more attention because of their complexity.
You're trying to engineer a device based on a vague specification. ("Details" are important; very minor differences differentiate Earth from Mars.) It should be obvious that that will never work.
It is. DeepQA (the underlying algorithms) is a "learning" connectionist network (neural network). It's a supervised ANN, which learns by adjusting weights. During the actual game, the way it "decides" to answer a question is whether or not the weighted sum of its inputs reaches the "neural threshold."
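As an illustration of that decision rule (this is not IBM's code; the weights and threshold are invented for the sketch), the buzz-in decision amounts to comparing a weighted sum against a threshold:

```python
# Sketch of a threshold decision like the one described above: the
# system "decides" to answer only if the weighted evidence clears a
# confidence threshold. Weights and threshold are made-up values,
# not anything from the actual DeepQA implementation.
EVIDENCE_WEIGHTS = [0.6, 0.25, 0.15]   # hypothetical per-scorer weights
THRESHOLD = 0.5

def should_answer(scores):
    confidence = sum(w * s for w, s in zip(EVIDENCE_WEIGHTS, scores))
    return confidence >= THRESHOLD

print(should_answer([0.9, 0.7, 0.4]))  # True:  0.775 >= 0.5
print(should_answer([0.3, 0.2, 0.1]))  # False: 0.245 <  0.5
```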
Watson has to parse free-form text; that's basically impossible with a neural network. See next comment.
Bayesian models, fuzzy logic, etc., are ALL used in ANNs.
This and the previous comment don't agree with anything I have ever read on the subject or with my own common sense as an engineer. Once you've got the actual details of Bayesian probability, fuzzy logic, or any other type of hypothesis engine (which I am informed by IBM is what DeepQA actually uses), you don't introduce order-of-magnitude inefficiencies by then running that inside an ANN; you just work through the logic on its own and read off the answer. This also allows you to deliver content in almost any form, as opposed to an ANN that is limited to a pre-defined number of semi-linear inputs (i.e., an ANN is not automatically aware of the spatial relationships between pixels).
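Here's a minimal sketch of what I mean by working through the logic directly (the priors and likelihoods are invented for illustration): score the candidate hypotheses with Bayes' rule and read off the winner, with no ANN wrapped around it.

```python
import math

# Score candidate hypotheses directly with Bayes' rule and read off
# the winner, rather than wrapping the computation in an ANN.
# Priors and per-evidence likelihoods are invented for illustration.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}
likelihoods = {          # P(evidence_i | hypothesis)
    "H1": [0.9, 0.2],
    "H2": [0.4, 0.8],
    "H3": [0.1, 0.1],
}

def posterior_scores(priors, likelihoods):
    # Unnormalized log posteriors: log P(H) + sum_i log P(e_i | H)
    return {
        h: math.log(priors[h]) + sum(math.log(p) for p in likelihoods[h])
        for h in priors
    }

scores = posterior_scores(priors, likelihoods)
best = max(scores, key=scores.get)
print(best)  # the highest-posterior hypothesis, read off directly
```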
Actually, Being Human is a BBC show. In all seriousness, you're assuming this, and from what I can tell the basis of your beliefs about consciousness is an outdated and largely abandoned view. I could be wrong, of course, but so far you've mentioned a 30-year-old book and a website.
A website written by a professional cognitive scientist (and Yudkowsky's cohorts), not to mention Hofstadter's PhD in cognitive science. The details of the brain's software have not changed significantly, as far as I know.
Not by definition. That's simply the limit of our capacity to model systems.
At the point the initial conditions are relevant, nothing has happened yet. Of course it's impossible for a system to affect a state before anything has happened.
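To illustrate the point about modeling limits, here's a toy example using the logistic map (a standard textbook case; the starting values are arbitrary): a system that is completely deterministic yet quickly becomes impossible to predict from finitely precise initial conditions.

```python
# A fully deterministic map whose outcomes we still cannot predict in
# practice, because tiny errors in the initial condition blow up.
# Illustrates "deterministic" vs. "modelable in practice."
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10   # two initial conditions differing by 1e-10
for _ in range(60):
    a, b = logistic(a), logistic(b)

print(abs(a - b))  # after 60 steps the trajectories have fully diverged
```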
That's true. But there is good reason to think that there aren't.
Which is? I think we're talking about subtly different ideas of what an algorithm is.