Wednesday, May 4, 2011

Thinking


Perhaps more than anything else, our thinking will ultimately be what defines our humanity in opposition to artificial intelligence.  This is far from a new idea; Christian recalls Aristotle’s belief that “thinking is the most human thing we do” (81).  At first that might sound counterintuitive; how could there be artificial intelligence that does not think?  But in order to reach that question, we must first define thinking. 

And as Lanier, Kurzweil, and Christian all point out, human beings do not understand enough about the ways in which our own brains work to create an accurate description of thinking.  The truth of the matter is that we are still guessing at the processes through which thinking occurs.  Even though our ability to think has made us the dominant life form on earth, we still do not know how we do it. 
            We do, however, know exactly how computers “think.”  In my class presentation I discussed the ideas behind the computational theory of mind, and I believe it is useful to examine it a bit more.  A slight oversimplification of the theory is that our brain acts as a central processing unit: “data” (our perception of the world around us) is taken in and computed, and the output of that computation is thought.  The most important aspect of the theory is the process of computation, which requires solving a problem through the use of an algorithm.  That is exactly how we’ve programmed computers to “think.”  Kurzweil points out that the end goal of computation “is to solve a problem, with the solution expressed as a sequence of symbols” (117).  He goes on to cite various ways that scientists are experimenting with computation in areas other than traditional silicon chips, from DNA computers to quantum computing.  What remains, however, is that computers are, for the time being, stuck in algorithm-based thinking.  And this is the sticking point not only for me but for Lanier and Christian as well.
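To make that notion of algorithm-based “thinking” concrete, here is a minimal sketch of my own (not drawn from any of the three books): a toy program that takes a symbolic “perception” in, runs a fixed rule-table algorithm over it, and emits a sequence of symbols as its “thought.”  The rules and inputs are invented purely for illustration.

```python
# A toy illustration of algorithm-based "thinking": a symbolic
# perception comes in, a fixed algorithm runs over it, and a
# sequence of symbols comes out. The rule table is invented.

RULES = {
    "dark clouds": "rain likely",
    "clear sky": "rain unlikely",
}

def compute(percept: str) -> str:
    """Look the perception up in the rule table and emit the result."""
    return RULES.get(percept, "no rule, so no thought")

print(compute("dark clouds"))      # -> rain likely
print(compute("a strange smell"))  # -> no rule, so no thought
```

The fallback line is the point of this whole section: outside of its algorithm, the machine has nothing to say.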
            Lanier does not outright dismiss the computational theory of mind, and he points out that computation is certainly useful in certain areas.  But as Christian astutely observes, computers, at the most basic level, are capable only of math (108).  He also rightly asserts that, as our technologies progress, more and more of the world can be translated in and out of mathematical equations.  But once again, we see computers and artificial intelligence constrained by the processes behind their functions.  Maybe someday in the future, when we’ve explored every possible molecule of the human brain and still do not completely understand where our thinking comes from, we will finally take solace in the fact that our ignorance of ourselves is our most human characteristic: our fallibility.
The building blocks of life...

...or the end of life as we understand it?

Perception


A pattern that both Lanier and Christian recognize in the human-machine relationship is the odd reversal of difficulty in various tasks.  The unconscious acts that essentially make sense of the world as we understand it are exceedingly difficult for computers to replicate; a computer can be programmed to be the greatest chess player or Jeopardy! contestant in the world, but programming a computer to hold a convincingly human conversation has eluded decades of attempts.  Ironically enough, it is some of our most mundane, taken-for-granted abilities that are proving hardest for computers to reproduce.

            I believe that much of this complication stems from our senses’ ability to interpret our reality.  It’s an offshoot of the idea that our experience defines us; perhaps the constant immersion in and interaction with our surroundings, combined with billions of years of evolution, have given us our uniquely human abilities.  Kurzweil sees the advancement of technology as an extension and improvement of biological evolution in a concept he terms the “six epochs.”  The premise is that there have been, and will be, six major steps in the evolution of life.  As with all of Kurzweil’s findings, the six epochs unfold at an exponential rate, with each successive stage taking less time than the last (15).  In order, they are: physics and chemistry, biology, brains, technology, the merger of technology and human intelligence, and the universe waking up.  Right now we are supposedly on the tail end of the fourth epoch, technology (20).
            What disturbs me most about the premise that technological advancement simply continues biological evolution is that it presumes man is machine and machine is man; it is precisely because of this assumption that Lanier and Christian see our humanity slipping away from us.  We do not yet fully understand the mind-body connection, but it will certainly be a factor when we begin to see our physical selves colonized by machines under the guise of mental enhancement.




A Brief Hypothetical

Because so much of our understanding of humanity is theoretical, I’d like to begin this entry with a new twist on a very old hypothetical: if a tree falls in the forest and no one is around to hear it, does it still make a sound?  If, for whatever reason, all human beings ceased to exist tomorrow, would the world wide web still exist?  Automated bots would still be sending hundreds of millions of spam emails to flood our inboxes daily, Farmville crops would continue to wither away under a virtual sun, and Google’s AdSense would still be trying to define you in the most marketable, mathematically predictable ways.  And yet, would any of it matter?  As Lanier says, “a computer isn’t even there unless a person experiences it” (26).  Without us there to interpret and make meaning out of the symbols being exchanged, are they even real?
Even something as simple as black and white together can imply a vast array of meaning depending on our own interpretation, including good and evil, night and day, or east and west, just to name a few.

Experience


What better place to begin than birth?  Specifically, I want to start off by looking at the concept of neoteny.  Neoteny is an evolutionary concept “in which the characteristics of early development are drawn out and sustained into an individual organism’s chronological age” (Lanier 179).  In other words, the more neoteny a species has, the more development is needed to reach maturity.  A human baby is pretty useless for anything other than eating, crying, and crapping.  Yet that baby will soon begin to walk, then talk, then read, and on and on until you are left with a fully-functioning adult human capable of hundreds of thousands, if not millions, of extremely complex tasks that no other animal is capable of.  I was hesitant to include “or machine” in that last sentence, as time will tell if there will be any areas of human life computers are unable to reproduce.  Lanier brings neoteny into the conversation about technology by showing how our neoteny is “expanding” on a cultural and technological level as our lifespan increases.  He also gives a prime example of the importance of neoteny in developing intelligence: the cephalopod.
            However, my interest in neoteny is focused on the emergence of artificial intelligence.  I began to wonder: can truly human levels of intelligence and emotion really be programmed?  As humans, we rely intensely on our upbringing to mold us from a tabula rasa into active, thinking members of society.  Is it such a leap to assume that AI machines, if and when they emerge, might require some form of “robotic childhood”?  When that first fully AI robot is switched on, how will it know who it is?  Is it programmed with false memories, or is it the ultimate tabula rasa?  Both prospects are equally disturbing to me, in that you seem to end up either with an AI that “thinks” itself to be something it is not or with an AI that is totally susceptible to whoever or whatever “raises” it.  And if we gloss over this issue by declaring the creation of an AI “personality” unnecessary, then what we have is not truly human-level intelligence.  Will robots need to grow up?




Introduction


There’s a popular saying that people living in a golden age don’t often know it.  In the following blog entries I hope to examine and explore what it means to be human in the digital age through my own scholarly and artistic perception.  I am inspired by three somewhat contradictory books, but will bring in other communication scholarship to better augment my understanding of the concepts in relation to our class.  The first is Ray Kurzweil’s 2005 book The Singularity is Near.   
The overarching idea behind the book is that, according to Kurzweil, virtually all indicators of technological development show exponential growth over time; we are quickly approaching the “bend” in this trend where the power, and with it the intelligence, of computers will surpass that of all humanity.   
One of many graphs that appears in Kurzweil's book showing the exponential growth of computing power.
What goes up must come down, right?
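To give a feel for the arithmetic driving graphs like that one, here is a back-of-the-envelope sketch.  The numbers are my own rough stand-ins, not Kurzweil’s data: about a billion operations per second for a 2011 desktop, on the order of ten quadrillion for a single human brain (a figure in the neighborhood Kurzweil works with), and a doubling of price-performance every eighteen months.

```python
# A back-of-the-envelope sketch of the exponential trend Kurzweil's
# graphs chart. All figures are illustrative assumptions, not his data.

power = 1e9    # assumed ops/second of a 2011 desktop computer
brain = 1e16   # assumed ops/second of one human brain
year = 2011.0

while power < brain:
    power *= 2    # one doubling of price-performance
    year += 1.5   # assumed doubling period, in years

print(f"Crossover around {year:.0f}")  # ~2047 under these assumptions
```

The striking thing, and the heart of Kurzweil’s argument, is how insensitive the crossover date is to the starting point: begin a thousand times lower and the date slips by only about fifteen years.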
What exactly this singularity will entail (or even when it will occur) is obviously unknowable at this time, but Kurzweil does his best to attempt to answer these questions.  In his view, the singularity represents “a profound and disruptive transformation in human capability,” and will occur in 2045 (136).  Kurzweil does present a large body of pretty strong evidence, but his claim that “we will transcend biology, but not our humanity” rings hollow when examining his book on the whole (136).  To read Kurzweil meticulously dissect the computational power of the human brain into bytes is interesting, but when he does the exact same thing to a rock (yes, a rock) ten pages later, it is hard to argue that he’s anything but dismissive towards any notion of innate human intelligence.  Indeed, time and time again Kurzweil compares the computational power of some of our most basic human parts, such as our genome, to wholly nonhuman entities, like the Microsoft Word program.  I understand the path Kurzweil takes to make these comparisons, but that does not make it any less insulting to see the whole of human diversity compared to the word processor I typed this article on.  Perhaps my willingness to remain skeptical in the face of a mountain of technical evidence is just the last bit of my humanity throwing its arms in the air and shouting “HELL NO!”  Whatever the case, Kurzweil’s vision of the future provides many intriguing, if alarming, ideas about what defines the humanity in each of us.
The second body of thinking I draw from, and the one that originally inspired me to delve deeper into these issues, is Jaron Lanier’s You Are Not a Gadget.  By contemplating how machines and technology use us, Lanier makes a compelling case that the current climate of internet and technology strips us of our humanity and creativity.  Despite being a well-established pioneer of virtual reality and computer programming, Lanier is very dismissive of futurists like Kurzweil who subscribe to a belief in the singularity.  Indeed, Lanier compares the singularity to Christian Evangelical ideas about the Rapture and claims they have one particular thing in common: that “they can never be verified by the living” (26).
 
The final source of my inspiration is Brian Christian’s 2011 book The Most Human Human.  The book follows Christian’s journey as he attempts to win the coveted “Most Human Human” award at the annual Loebner Competition, a challenge pitting computer against human in the famed Turing Test.  As Christian prepares for the showdown, he explores many ideas about what exactly it means to be human, particularly through our communication.  His explorations into humanity are less scientific and more people-based than Kurzweil’s.  Christian, much like Lanier, examines how we are sacrificing bits and pieces of our humanity to act in more computational ways.  For instance, he discusses how picking up women has become based on a type of information commodity rather than actual emotional and physical attraction; some men (not all, and certainly not this one) have taken to essentially “pre-programming” the most successful pick-up lines/stories/acts and simply following the script from there (123-4).  Beyond the “creepster” aspect of it, there’s a real loss of humanity in these kinds of interactions.  Christian clearly points to computers as the source of this transformation, and there’s an element of reclaiming our humanity in the face of technology throughout his book.  If Kurzweil is the prosecution for technology in the philosophical courtroom of our class, then Christian is clearly the defense for humankind.
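The script-following Christian describes has an exact machine counterpart in the chatbots he competed against.  As a purely illustrative sketch of my own (not code from any actual Loebner entrant), here is the skeleton of an ELIZA-style conversation program: a keyword table, canned replies, and a fallback for everything else.  The keywords and replies are invented.

```python
# A minimal ELIZA-style script-follower: keyword in, canned reply out.
# The script below is invented for illustration only.

SCRIPT = [
    ("mother", "Tell me more about your family."),
    ("sad",    "Why do you think you feel sad?"),
    ("you",    "We were discussing you, not me."),
]

def reply(utterance: str) -> str:
    """Scan the utterance for the first scripted keyword and answer it."""
    lowered = utterance.lower()
    for keyword, canned in SCRIPT:
        if keyword in lowered:
            return canned
    return "Please, go on."  # the all-purpose fallback

print(reply("My mother never listened to me."))  # Tell me more about your family.
print(reply("I'm not sure what to say."))        # Please, go on.
```

Whether the script runs on silicon or in a pick-up artist’s head, the conversation is equally canned, which is precisely Christian’s point.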
            With that conceptual blend fresh in your mind, I believe it is important to realize that the images I’ve worked to create alongside the relevant scholarship are purely my own imagining and extension of the readings.  In fact, many of the concepts I’m working with, at least according to Kurzweil, are either unknowable or unable to be represented.  For instance, the computers that Kurzweil claims will be melding with our biological bodies will be microscopic, perhaps no larger than a single blood cell.  My thinking is that perhaps by attempting to understand these concepts in our own terms in our own time, we can better understand the implications for the future.