Daniel Dennett sorta zombies
In The Atlantic: “‘A Perfect and Beautiful Machine’: What Darwin’s Theory of Evolution Reveals About Artificial Intelligence”, an adapted excerpt from Daniel Dennett’s new book about the work of Alan Turing. I’ve mentioned Dennett before in a ZombieLaw post on Zombies in Philosophy of Mind. In this Atlantic piece, Dennett updates his explanation of anti-essentialism by pairing Darwinian theory with Turing’s computing machines.
To this day many people cannot get their heads around the unsettling idea that a purposeless, mindless process can crank away through the eons, generating ever more subtle, efficient, and complex organisms without having the slightest whiff of understanding of what it is doing.
Notably, Dennett does not use the word “zombie” or even refer back to the old philosophical-zombie debates in which he took part. He was never pleased to see zombies admitted into professional philosophy, and his previous writings on the subject aimed to deny their conceivability. Here, Dennett favors a notion of “sorta understanding” and gradual consciousness. He writes:
The Pre-Turing world was one in which computers were people, who had to understand mathematics in order to do their jobs. Turing realized that this was just not necessary: you could take the tasks they performed and squeeze out the last tiny smidgens of understanding, leaving nothing but brute, mechanical actions.
So in the Post-Turing world:
we see the reduction of all possible computation to a mindless process.
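Turing’s reduction can be made concrete with a minimal sketch (my illustration, not Dennett’s): a table-driven Turing-style machine that adds two unary numbers. The state names and rules here are invented for the example; the point is that nothing in the machine represents “addition” — it only matches (state, symbol) pairs and blindly follows the table.

```python
# A minimal Turing-machine sketch: unary addition by blind table lookup.
# The machine doesn't "know" it is adding; it just applies rules.

# Transition table: (state, symbol) -> (write, move, next_state)
RULES = {
    ("scan", "1"): ("1", +1, "scan"),   # walk right over the first number
    ("scan", "+"): ("1", +1, "pass"),   # turn the '+' into a '1'
    ("pass", "1"): ("1", +1, "pass"),   # walk right over the second number
    ("pass", "_"): ("_", -1, "trim"),   # hit the trailing blank, step back
    ("trim", "1"): ("_", 0, "halt"),    # erase one surplus '1' and halt
}

def run(tape_str):
    tape = dict(enumerate(tape_str))    # sparse tape; '_' is the blank symbol
    state, head = "scan", 0
    while state != "halt":
        write, move, state = RULES[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    cells = [tape.get(i, "_") for i in range(max(tape) + 1)]
    return "".join(cells).strip("_")

print(run("111+11"))   # 3 + 2 in unary -> "11111"
```

Every smidgen of understanding has been squeezed out: the “competence” lives entirely in the rule table, exactly the kind of brute mechanical action Dennett describes.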
Dennett cites Turing’s 1950 paper “Computing Machinery and Intelligence” as demonstrating that non-human computers could learn, because in this view of what it means to learn, comprehension is not required.
Call this the bubble-up theory of mind, and contrast it with the various trickle-down theories of mind, by thinkers from René Descartes to John Searle (and including, notoriously, Kurt Gödel, whose proof was the inspiration for Turing’s work) that start with human consciousness at its most reflective, and then are unable to unite such magical powers with the mere mechanisms of human bodies and brains.
The Central Processing Unit of a computer doesn’t really know what arithmetic is, or understand what addition is, but it “understands” the “command” to add two numbers and put their sum in a register — in the minimal sense that it reliably adds when called upon to add and puts the sum in the right place. Let’s say it sorta understands addition. A few levels higher, the operating system doesn’t really understand that it is checking for errors of transmission and fixing them but it sorta understands this, and reliably does this work when called upon. A few further levels higher, when the building blocks are stacked up by the billions and trillions, the chess-playing program doesn’t really understand that its queen is in jeopardy, but it sorta understands this, and IBM’s Watson on Jeopardy sorta understands the questions it answers.
This “sorta understand” verbiage is similar to the idea of zombie-consciousness used in the mind-body debates of the 1990s. In today’s article Dennett writes as if the mind-body problem has been solved (or nearly so) by Turing’s proof of computability. The essentialists would certainly argue that something is still missing, but Dennett shrugs them off as a Darwinian does a Creationist. For Dennett:
There is no principled line above which true comprehension is to be found
Thus, there is no principled line between man and p-zombie. He admits to appreciating the “discomfort” but insists that there is no essential mind. We are mere bodies acting unconsciously and interpreting action through a construction of intentionality: bootstrapping creative zombies.
And fictional zombies help perpetuate the myth of creativity by positing the dualist opposition. Further, the idea that schools teach comprehension is a way of excluding alternative cultural comprehension. So we use culturally biased tests and teach to the test, producing cultural tests rather than real comprehension tests. But wait: for Dennett there is no difference.
Still, that doesn’t really explain why so many of us feel like there is. Why we all know what we mean by “book-smart,” and why it seems we are (at least sometimes) in creative control. Are there evolutionary advantages to these perceptions, and to belief in a firm contrast between comprehension and non-comprehension, between zombie and creative? Dennett’s Darwinian-Turing-gradualism is incompatible with the dualist dichotomies in the rhetoric of individuality and individual action – of competence, guilt, responsibility, ownership, etc. And he cannot explain why so many seem to feel that consciousness is something more than unconscious computation.
What does it mean to strive for Justice or Love or Humanity when there is no principled line? For a Darwinian-Turing-Liberal it must mean a never-ending struggle against a Camus-style Plague. A Liberalism that is always dying, and in its “sorta” death, bootstrapping more unprincipled lines for the next iteration. A Hegelian world constantly re-creating itself, as if ex nihilo, by creating artificial dichotomies and, in synthesizing what was always the same, appearing to transcend the sum of its parts.