Let’s play a game. Let’s call it a “thought experiment” — an experiment, like in science, that you only carry out in your mind.
Imagine, if you will, that you had suffered brain damage: perhaps a bullet wound to the brain, or perhaps a tiny aneurysm. The cause of the damage is unimportant; the damage is what matters. Suppose this damage blew away the proprioception center of your brain, the part responsible for sensing hand position. As a result, grasping things precisely becomes nearly impossible; without a sense of your hand's position in space and how much force you're exerting, you'd crush things, or drop them.
Now, due to some miracle of modern medicine, not only do we know where this chunk of nerves in the brain is, we know exactly how it works and — wonder of wonders — we can craft an exact replacement for it out of silicon. This may sound outlandish, but scientists are understanding more and more of the brain's pathways, and images like the one above (of the rat with wires implanted in its brain) aren't from science fiction, but from science. Like all electronics, version 1.0 will be buggy, huge, and inefficient, but it will keep getting smaller, cheaper, and more reliable. Meanwhile, brain science is proceeding by leaps and bounds with tools like fMRI and TMS (transcranial magnetic stimulation). Before too many years, these two trends must meet, and it will be possible to replace chunks of the brain with silicon (or whatever material is used). It's just a matter of time.
Remember, this is just an experiment. A hypothetical: the technology exists to replace the damaged chunk of brain tissue that makes your hand shake and clumsily drop things. So hey, we do it, and your hand works again! Better than works: it's now incredibly precise. Better than human! More human than human, you might say.
Ok, now due to incredibly bad luck, your brain is damaged again, and this time it’s the parts responsible for your balance and gait. You can’t walk straight, you stumble like a drunkard, you fall. Again, science to the rescue! Your new electronic brain bits let you balance and leap like an acrobat. Huzzah!
A series of unfortunate events later, and more and more of your brain is getting rewired with silicon. Now it's bits affecting your speech, your mannerisms. A stutter is corrected. Facial tics are removed. A wild depression and fits of rage due to a bad nail gun accident (à la Phineas Gage) are fixed, and you are once again calm, loving, and level-headed, through the wonders of science and medicine. All of that is the hypothesis, the preconditions. The experiment — the question — is this: how much of your brain would you have to replace electronically before you stopped being human… before you stopped being you?
Now, this has all been a trick, you see. The philosophical debate involved (issues of mechanism and biological determinism are at play) is far beyond the scope of this blog (but do feel free to comment if you think you can solve problems that philosophers since Descartes, and before him, have been struggling with). What interests me are your reactions to the question.
You see, Descartes, like many Christians (and folks of other stripes), felt that no matter how perfectly you emulated the replaced tissue, it wouldn't be you. In other words, they claim that there's some fundamental "thing" — call it a "soul" — that makes you you. Science-minded believers in souls won't like this experiment, because it eventually forces them to ask: where in the brain does the "soul" (or essence of humanity, call it what you will) live? In other words, which part of the brain, when replaced with a machine, will make "you" stop being "you"? Asking that produces uncomfortable cognitive dissonance: you cannot simultaneously believe that the brain is the origin of all human traits and believe that there is something quintessentially "other" or non-physical about the brain that cannot be replicated by a sufficiently sophisticated machine.
These people tend to point at, say, the woefully inadequate state of computer technology, or our poor understanding of brain structures. Both are valid arguments for why such a thing isn't possible today, but they don't answer the question for tomorrow (or whenever the technical problems are resolved); they are a form of deflection.
Similarly, some folks will tend to say "well, we don't understand what makes us human, so it's a silly question" — a form of argument from ignorance ("I don't know this right now, so I'll never know, and neither will anyone else."). In other words, the rebuttals to this thought experiment mostly involve rhetorical fallacies — bad ways of arguing. We have a conclusion that we want to reach — that there's something ineffable, undefinable, irreplaceable about humans which a machine could never replicate — and so we pick and choose evidence and arguments to support that conclusion. We don't look at the world as a scientist should — devoid of biases, following wherever the evidence leads — instead, we use the trappings of science and learned discourse simply to reinforce our own biases. Lest anyone think I'm claiming to be above all this, I'm not; this tendency, this confirmation bias, is a human trait. We all do it instinctively: we point at things we believe support our preconceptions, and ignore things that refute them.
Does any of this matter, or is it all dry argument in a void? Yes, it matters very much. Our ability to learn and grow is limited precisely by how much we’re open to new ideas, even when those ideas threaten or contradict our preconceptions. If we are fooled and blinded by our own confirmation biases, we are closed off to new ideas, new possibilities. Critical thinking — the art of learning how to think carefully about things — isn’t just an academic sport practiced by pedantic arseholes like me, it’s a fundamental skill that we all need to practice in order to lead fuller, happier lives.
And that, dear readers, is my point.
PS: for the record, my bias is towards biological determinism. I don’t think there’s anything “special” about human cognition; replicate the brain perfectly in silicon or software, and you’d have a human mind. Mind and brain are one and the same; inseparable. That’s my bias, that’s what I instinctively seek to confirm.