Ok, even I don't believe my title. And yes, this post presents arguments that are simplistic and easily answered. But the problem isn't trivial at all. Every argument here that has a simple answer has one because I've avoided the complex version; for the rest, the seemingly simple solutions aren't adequate. So here goes:
Assume that for every thought, concept, memory, cognitive process, etc., there exists a physical representation of it. Or, more simply, do what is so often done and think of the brain as similar to a computer. There is a limited amount of storage space. Once you fill your available memory (I don't mean just RAM; I mean RAM, hard drive space, CPU cache, etc.), then either you can't store or process any new information, or you have to overwrite some data.
This seems reasonable enough for some simplistic premises. Far less has been used in the sciences and elsewhere to show far more. But here we immediately run into problems. Computers can't deal well with infinities (how could they? They can only store finite amounts of information, perform finite steps, etc.). Consider the "counting numbers" 1, 2, 3, 4, … These never end. Yet aren't you, in some sense, familiar with every counting number?
No. Not really. After all, you may be familiar with the number trillion, even googol, but you’ve never sat there and counted to a trillion, ensuring that your brain “stored” every one of them. But even so, what if I were to provide you the counting number 100,000,000,002? Can you provide me the next counting number? Sure. All you need to do is change the “2” to a “3”. So even though you may not have infinitely many “numbers” stored in your brain, given any one of infinitely many numbers you can (even if you’ve never thought of the number before) apply many rules to it: you can add 2, subtract 5, change a digit by +5, etc.
So what does this mean? Well, somehow your finite "hard drive" is capable of taking any one of infinitely many possible "inputs" of a particular kind and applying many operations to it. In other words, you are able to take any one of an infinite range of data and not only understand it, but behave as if you knew it beforehand (you can, after all, accurately manipulate it arithmetically).
Yet still we can write this off as a combination of the concept of counting numbers and some set of finite rules. So, no need for infinite storage space. All that happens is that I give you a counting number, and because you have stored the concept "counting number" as well as arithmetic rules, you can manipulate infinitely many counting numbers without storing them. But this notion of a concept brings up a problem.
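The "finite rules, infinite inputs" idea can be made concrete. Here is a minimal sketch in Python (purely illustrative, not anyone's actual theory of cognition): a rule occupying a few fixed lines of code that nonetheless applies to every one of the infinitely many counting numbers.

```python
def successor(n: int) -> int:
    """A finite rule: given any counting number, return the next one."""
    return n + 1

# The rule itself takes a fixed, tiny amount of storage, yet it handles
# inputs it has never "seen" before, no matter how large.
print(successor(100_000_000_002))  # 100000000003
```

The rule never stores the numbers themselves; it only stores the procedure. That is exactly the escape hatch the paragraph above describes for the counting-number case.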
Imagine walking through a forest. Clearly, you are surrounded by trees. After all, it's a forest. But no two of these trees look alike. I know, who cares? Well, everything you see is information that must be processed/stored in your head. It's a TON of information. Imagine that you've never walked in this forest before. That means every tree is new to you. How do you know they are trees?
Let me explain. Obviously you know they are trees, because you have a concept of what trees are. But you've never seen these particular objects in this forest you call "trees". Ever. So there is something stored in your head, the concept "tree", that allows you to apply the "identify-as-a-tree" rule to instances of "trees" you've never seen. Infinitely many. How? Somehow you have stored enough information about "trees" such that, given any one out of infinitely many, you can apply the rule. Are you storing all infinitely many possible trees in your single "tree" concept to do this? Impossible (given the initial assumption). That would require infinite "hard drive" space in your brain, and that would literally mean infinite space.
Perhaps this doesn't seem to follow. So let's return to computers. How would a computer recognize input as being a "tree" or not? It would have to have a specific, exact, immutable representation of "tree" that it would apply to any input in order to calculate whether it qualified. Another way of saying this is that there is no "tree recognition" algorithm, facial recognition algorithm, etc., that can do what the brain can: given some internally stored "concept" X, recognize any possible instance of X. Alternatively, given the best "tree recognition" algorithm, I can show my "tree recognizing" program a tree that it will fail to recognize, because it must use one exact representation of all trees.
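To see the picture of a computer that this argument has in mind, here is a deliberately naive sketch (names and feature values are invented for illustration): a recognizer built from one exact stored representation, which is precisely why it fails on any tree that deviates from its template.

```python
# A deliberately naive "exact representation" recognizer, as the
# argument describes: one stored template, matched literally.
STORED_TREE = {"trunk": 1, "branches": 12, "leaves": 40_000}

def is_tree(observed: dict) -> bool:
    # Recognition succeeds only if the input matches the template exactly.
    return observed == STORED_TREE

print(is_tree({"trunk": 1, "branches": 12, "leaves": 40_000}))  # True
print(is_tree({"trunk": 1, "branches": 13, "leaves": 39_500}))  # False: a real
# but slightly different tree goes unrecognized
```

(Actual recognition systems use statistical generalization rather than exact matching, so they don't fail quite this way; the sketch only illustrates the strawman-computer the argument is contrasting with the brain.)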
But it gets worse. The brain can not only apply rules like "recognition" to infinitely many possible instances of some concept, it can do this for a novel stimulus without any known concept:
“Some time ago I was in a local pub with friends when the following brief conversation occurred as a late arrival approached the group.
Greg: “Hi, is Nicole still here?”
Megan: “She left about two beers ago.”
When Megan gave her reply to Greg’s question, everyone present seemed to have a reasonable understanding about the approximate time that Nicole left the pub (about 30 minutes ago) as I immediately inquired as to how they interpreted Megan’s indirect response.”
Gibbs, R. Jr. (2007). Experimental tests of figurative meaning construction. In Radden, G., Köpcke, K.-M., Berg, T., & Siemund, P. (Eds.), Aspects of Meaning Construction (pp. 19–32). John Benjamins.
There is nothing that we store in our "hard drives" about beer that tells us it can serve as a unit of time. Likewise, if one were a server at a restaurant assigned table 10, and a fellow server said "Table 10 wants their check", we would immediately realize that the table itself wanted nothing, but that the people seated there (whom we don't know and have never met before tonight) wanted it.
The first case gives an example of infinitely many objects that our brains can interpret. We can say this doesn't require infinite "hard drive" space because there is one category (counting numbers) governed by finite rules (even if those finite rules somehow handle infinitely many unknowns).
Next, we have one, specific concept “tree” that can handle infinitely many unknown instances. Given any information in some forest, this stored “concept” corresponds to infinitely many instances. Yet somehow it is merely one concept “stored” in our “hard drive”.
Finally, there are infinitely many instances of concepts we don't know that we can nonetheless understand, because some relationship between new information and stored information allows us to grasp new concepts without rules.
Ok, no problem. Except one. In a computer, every bit of information is stored in exactly one place and represents precisely one thing. There is no way for a computer to recognize the potentially infinitely many examples of a concept, and we haven't a clue how WE can understand novel examples of concepts without rules. If all information is literally stored, physically, in our brains, then why don't our brains extend infinitely in space?