Will robots and computers become conscious?

I place myself in the same camp as philosophers who insist there is a “hard problem” of consciousness that is frequently skirted in discussions of artificial intelligence, as well as in film and fiction about robots and computers becoming conscious. The issue was most recently brought home to me by Tom Ashbrook’s discussion with Martine Rothblatt. See On Point, September 11, 2014.

Imitation is NOT identity. Suppose technology could eventually build a machine that duplicates me: my talk, my walk, my speech patterns, reflexes, quirks, and emotional expressions. Is that machine then ME? Hardly. It is a very sophisticated animated image. That is all. In principle, it is no more me than a photograph or a video. This can be easily demonstrated: suppose my spouse took up with my duplicate and left me. If that thing were me, I should have no objection.

There is a deeper issue here: the language of science is the third person; that is where things are observed, weighed, measured, and discoursed upon. But the mind is viewed only from the first person. I am the only witness to my experience; scientists are restricted to observations of my brain. Science has been attempting to reduce the WHO (first person) to the terms of a WHAT (third person) and make the former disappear. It assumes that my experience must be a sort of epiphenomenon, an illusion produced by neurons. In effect, the scientist says, “Your neurons are real, but you are not.” (Of course, he must then accept that his own experience of himself is as unreal as my experience of me.)

Perhaps we fail to come to terms with the “Taboo of Subjectivity” (see the book by B. Alan Wallace) because we still stand in the shadow of an atavistic Behaviorism that insists the mind is merely the sum of its observable expressions, simply because that is the only approach our method will sanction. In the meantime, we must endure absurd fantasies about “artificial consciousness” from people like Rothblatt.