It’s difficult to work with computers for long without thinking about the brain as a computer and the mind as a program. The very idea of explaining mental phenomena in computational terms is obvious and exciting to some and outrageous and disturbing to others. For those of us working in artificial intelligence, it’s easy to get carried away and extrapolate from current technology to highly capable, intelligent and even emotional robots. These extrapolations may turn out to be fantasies and our robots may never exhibit more than rudimentary intelligence and sham emotions, but I think not.
Scientific knowledge about brains and cognition has expanded dramatically in recent years and scholars interested in the nature of mind are scrambling to make sense of mountains of data. Computer software is surpassing human ability in strategic planning, mathematical reasoning and medical diagnosis. Computer-controlled robots are cleaning offices, conducting guided tours in museums, and delivering meals in hospitals. However, a lot more is at stake here than solving engineering problems or unraveling scientific puzzles.
The various claims concerning the nature of mind are complicated and hotly contested. Many of them date back thousands of years. Some are believed to threaten the moral, religious and social fabric of society. It’s feared by some that if individuals believe certain claims — whether or not they are true — they will no longer be bound by prevailing social and moral contracts. In this chapter, we’ll examine a few of these claims and reflect on their implications for what it means to be human.
This chapter has more bibliographical references than any other in this book. It also has fewer answers. If you want to learn about computer architecture, I can point you to a book that will tell you most of what is known about that subject. If you want to learn about minds and machines, there are many claims and little agreement. In some cases, the exact framing of the question is not even agreed upon, and the meanings of the terms used in the debates are often difficult to pin down.
Here’s some advice for reading the remainder of this chapter. Beware of acceding to a claim (“we are machines”) before understanding what it really means. Scrutinize colloquial terms (“intelligence”) used in technical arguments. Try to disentangle what you believe to be true (perhaps you side with Einstein in believing that the rules governing the universe are deterministic) from what you want to be true (you may choose to act as though you are responsible for your actions even though this outlook is hard to reconcile with your belief in a deterministic universe). Remain skeptical and realize that these are hard questions without universally agreed-upon answers. The philosophical, psychological, ethical and sociological issues raised in this chapter are among the most interesting facing us as humans; if it’s any consolation (or incentive), there is hope that some of them may be resolved in our lifetimes.
Who am I? What am I? What is, could be, or should be my relationship to other beings? And, of course, what constitutes a “being” with which it makes sense for me to have a “meaningful” relationship in the first place? Such big questions. Such weighty issues. I used to think that I would eventually answer these questions and resolve the issues to my satisfaction. Now I’m reconciled to the fact that some of the questions are too imprecisely posed to admit succinct, precise answers and that others are beyond what science can answer and may remain forever so.
Part of the difficulty in pinning down the essential “me” is that I am changing, and so who and what I am are changing. This problem of the ephemeral, transient me is not a trivial one that I can resolve by just sitting down and seriously considering the question of my being for a day or a week or a year. I’m different for having thought about who I am; indeed, I’m different for having recently thought about who I am, and I’ll be different again if for a while I don’t think about who I am. I associate me with a process: the evanescent silvery thread of my thoughts and associations arising from but not satisfactorily explained by a sequence of chemical reactions in my brain.
By “not satisfactorily explained” I don’t mean that additional insights in chemistry or physics will be needed to explain the machinery of the mind, though indeed they may. Rather, I mean that, like so many other complex phenomena arising from the interaction of simpler processes, the results of such interactions are hard to summarize more concisely than what’s needed to describe the entire sequence of events: in the case of my thoughts, a blow-by-blow account of molecules bouncing off one another. The very fact that I can contemplate such a process is remarkable; the fact that I can’t fully comprehend it is unremarkable and fully to be expected.
There are, however, questions I think I can answer. I believe that I’m a machine, a biological robot equipped with a very practical array of sensors and effectors and controlled by a biological computer. In its ability to carry out the calculations governing my external behavior and determining my internal state, my biological computer is no more or less powerful than the computer I’m using to compose this paragraph. Like the computer in my laptop, my biological computer is a universal computer (in Turing’s sense) except for the finiteness of its memory, a shortcoming that is inevitable but not particularly worrisome for many everyday calculations.
One argument that we’re no more than machines has its origins in the atomism of Democritus and other ancient Greek philosophers and runs as follows. We’re composed of atoms and molecules whose behavior is governed by the laws of physics. Nothing more than this mechanical description is required to account for our actions, including any observable manifestations of so-called intelligent behavior. One possible problem with this argument is that the governing laws of physics may rely on complex quantum-mechanical effects that brains somehow exploit. Even if we are machines in the sense of molecular machines, some aspects of our behavior may rely on components that can be realized only by the particular arrangements of molecules comprising our human bodies.
Getting from “we’re molecular machines” to “we can be realized on conventional computing hardware” takes another leap of faith. Some arguments for the latter rely on the claim that a universal computer can simulate any process that can be described mathematically and, since conventional computing hardware can simulate a universal computer, it can simulate the biological processes of the brain. But even if, as many scientists believe, we can eventually describe biological processes mathematically, we don’t know for sure that a universal computer can simulate, say, a quantum-mechanical process in finite time.
To avoid the possible pitfalls of simulating minds at the molecular level, some scientists claim that we’ll soon be able to simulate the information-processing capabilities of individual neurons and map out the configuration of all neurons using brain-imaging techniques. Arguing that quantum-mechanical effects are irrelevant at this level of description, they believe that they’ll be able to simulate the activity of whole brains using their techniques. So far, they’ve only been successful in simulating the neural circuitry of organisms equipped with just a dozen or so neurons. And, of course, even if they’re successful, you may not find a wiring diagram of the brain any more satisfying than a blow-by-blow account at the molecular level.
From a purely computational standpoint, my laptop and my brain are most interestingly distinguished in terms of their software. I would love to have more memory and a faster, more reliable processor, but I don’t lose any sleep over the limitations of my biological computing machinery. On the other hand, I’ve spent many a restless night pondering how we accomplish the simplest things and struggling to get a robot to perform similarly. From a roboticist’s perspective, chess is easy compared with catching a ball or climbing a tree.
I’m not surprised that other biological robots (not just humans) roaming this planet have the machinery required to perform arbitrary computations within the constraints of their physical memory. But I believe that my software sets me apart from most other organisms in enabling me to exploit my computing machinery better. My genes equipped me with firmware that let me perform quite remarkable feats right out of the box. My parents, teachers and colleagues gave me some very useful software titles that run, with some adjustment, on my existing hardware. What is most remarkable about my software, however, is my ability to run software of my own design on my built-in computing machinery. What can I say? “Very cool!”
I’m also quite comfortable with — indeed view as inevitable — the prospect of future organisms, biological or otherwise, possessed of more powerful computing machinery, in the sense of more memory and faster processors, whose abilities to make sense of and manipulate their environment eclipse ours. After all, Intel has to watch out for Advanced Micro Devices, Motorola and IBM. Faster chips, better algorithms, new designs are fueled by joint efforts of humans and machines whose abilities are improving exponentially. Competition, innovation and selection are at work everywhere else we look. Why should the future be any more secure for the present self-proclaimed masters of the universe?
I claim that the exact way in which my internal calculations are carried out is largely irrelevant to our discussion. Certainly my brain can perform computations in parallel in a way that the computer on my desk cannot. But a single-processor computer can compute anything that a parallel computer can. Speed and memory capacity do matter; there are calculations that I could in principle carry out but will not attempt in my head, even with paper and pencil to supplement my internal memory. My brain performs some calculations faster than any currently known algorithm can, but the computer on my desk is faster at other calculations and its speed and memory are increasing at an exponential rate and will continue to do so for the foreseeable future. Soon computational power will be available in sufficient quantity and proximity to eclipse the computational capabilities of the human brain.1
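The claim that a single processor can compute anything a parallel computer can may feel abstract, so here is a minimal sketch in Python (the tasks are invented stand-ins, not anything from this chapter): two “concurrently running” computations expressed as generators, simulated on one serial processor by interleaving their steps round-robin.

```python
# A minimal sketch of serial simulation of parallelism: two "concurrent"
# tasks, each expressed as a generator, run on a single serial loop that
# takes one step of each task in turn.

def counter(name, n):
    for i in range(n):
        yield f"{name}: step {i}"

def run_serially(*tasks):
    """Simulate parallel execution by interleaving one step of each task."""
    live = [iter(t) for t in tasks]
    while live:
        for t in live[:]:
            try:
                print(next(t))
            except StopIteration:
                live.remove(t)

run_serially(counter("A", 3), counter("B", 2))
# Prints A and B steps interleaved, exactly as a two-processor
# machine might produce them -- just slower.
```

The simulation may be slower, but nothing computable on the parallel machine becomes uncomputable on the serial one; only speed and memory are at stake, which is the point of the paragraph above.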
The last paragraph should have set off warning bells in your head. Perhaps the human brain has capabilities beyond those that can be accounted for computationally. But before we look at what possibly can’t be accounted for computationally, let’s consider what might be. It’s clear that if the computer on my desk is ever to achieve a human level of autonomy and ability, it will require sophisticated software, like the firmware etched into my genes. I think it’s only a matter of time before computers exhibit many human characteristics, including the ability to adapt to situations, pursue complex goals, plan for contingencies, and formulate models to predict the consequences of their own and others’ actions.
Some of the necessary software components already exist — at least in a primitive form — and others are currently under development. There exist algorithms for learning, planning, pattern recognition and problem solving that surpass those of humans in certain narrow domains. It will take some time to surpass humans in their natural environment — natural selection has had a long time to tinker with our physical bodies and the neural circuitry for animating them — but our vaunted logic is not likely to be a stumbling block. The first artificial beings to win our admiration in the intellectual arena may not have bodies at all; they may exist as disembodied robots, purely computational entities circulating in the World Wide Web.
However, I expect that the most interesting artificial intelligences will have bodies and sophisticated sensors and effectors to interact with the physical world. I admit we have a way to go in developing embodied intelligences that rival humans, but even so the current generation of robots can walk and roll about, avoid obstacles, recognize humans and perform useful work. But there’s a difference between building a robot to perform a particular task and building a robot that is humanlike in its abilities.
Even if you’re willing to believe that humans are nothing more than computing machinery, you’re likely to protest that the software, broadly construed, governing our behavior is of a sort that can never run on traditional computing hardware. What of the ability to feel not only pain and pleasure but sadness, happiness and the full gamut of human emotional responses? How about the ability to be aware of your surroundings, to be conscious of your role in events, to remember the past, predict and plan for the future, and respond emotionally and practically to the present, the memory of the past and the anticipation of the future? I’m not interested in building an artificial intelligence to replace or even precisely mimic human behavior, but as an engineer I’m willing to concede that some of these characteristics are likely to prove useful in any sort of artificial intelligence.
In particular, I have trouble imagining a reasonably powerful autonomous robot that doesn’t make use of some of the attributes associated with emotions. There are good engineering reasons to build machines that continuously monitor and update their internal parameters summarizing, for instance, whether the machine has recently been subjected to damage, encountered something identified as a threat, suffered a loss or achieved a goal. In addition, any robot that successfully interacts with a complex environment is going to need a model of that environment and, if it interacts with humans or other robots, it will need models, both general and specific, enabling it to predict human or robot behavior. Augmenting a model of the environment to include a model of the robot itself (considered by some a prerequisite for consciousness) constitutes an interesting twist, but it’s an obvious and relatively simple extension from a technical perspective. Using a model to explain the past, determine what to do in the present or predict and plan for the future is conceptually straightforward.
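To make that engineering argument concrete, here is a minimal sketch, with invented names, thresholds and percepts, of a robot that continuously updates a few emotion-like internal registers and whose model of the environment includes a model of itself. It is an illustration of the idea, not a design anyone has fielded.

```python
# A hedged sketch: emotion-like internal registers plus a self-model.
# All names and numbers here are hypothetical, chosen for illustration.

class InnerState:
    def __init__(self):
        self.damage = 0.0     # accumulated wear or injury
        self.threat = 0.0     # running estimate of danger
        self.progress = 0.0   # recent progress toward the active goal

    def update(self, percepts):
        # Decay the old threat signal, then fold in new sensor evidence.
        self.threat = 0.9 * self.threat + percepts.get("threat", 0.0)
        self.damage += percepts.get("damage", 0.0)
        self.progress = percepts.get("goal_progress", self.progress)

class Robot:
    def __init__(self):
        self.state = InnerState()
        self.world_model = {}                   # model of the environment...
        self.world_model["self"] = self.state   # ...that includes the robot itself

    def step(self, percepts):
        self.state.update(percepts)
        # The registers bias behavior the way emotions might: retreat
        # when the threat estimate crosses a threshold.
        if self.state.threat > 0.5:
            return "retreat"
        return "continue_task"

robot = Robot()
print(robot.step({"threat": 0.8}))   # -> "retreat"
```

Including `"self"` in the world model is the “obvious and relatively simple extension” mentioned above: once the robot models its environment at all, pointing part of that model back at its own internal state is a small technical step, whatever its philosophical weight.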
There are aspects of “consciousness” and “feeling” that I’ve heard philosophers and cognitive scientists try to explain. I’ll admit that these are fascinating and that I haven’t a clue how to handle them computationally. I think of most of these aspects, say the mental states that follow upon a particular stimulus like pricking my finger with a pin, as manifestations arising from the complex interplay of relatively simple processes governing memory, sensation and the ability to formulate and use models. As a machine curious about my own functioning, I’m interested in reading about such epiphenomena and will continue to follow the relevant philosophical discussions. As an engineer trying to design a robot, I don’t have any interest in explicitly writing software to ensure that such phenomena manifest themselves. I won’t be surprised if they arise spontaneously out of the interaction of simpler capabilities, and I won’t think it a serious deficit if they don’t.
Let’s pause for a moment and reflect on the claims made so far. Right off the bat, I gave up on a “complete” explanation of cognition — it’s just too complicated to account for what, at least introspectively, happens in my head. I implied that eventually science will provide a blow-by-blow account of what happens at the molecular level, and admitted this might be the best explanation a self-reflecting being could hope for. I then shifted gears and focused on computational capabilities such as planning and modeling, arguing that eventually computers would surpass us in such capabilities. Admitting there are characteristics of human cognition that current computational theories cannot account for, I brushed these characteristics aside as epiphenomenal and irrelevant to engineers building robots.
You may feel that I’m brushing aside all the really interesting parts of cognition. What would satisfy you as an account of cognition? And would some accounts leave you feeling diminished, less special, less interesting? I’ve claimed that we’ll eventually develop software capable of learning, planning, pursuing goals and other activities often associated with intelligent behavior. I’ve reduced emotions such as remorse, sadness and grief to mechanisms whose primary purpose is to encourage machines to anticipate and avoid the causes of such emotional states in the future.
This engineering perspective may seem largely irrelevant to an adequate account of human cognition and the prospect of highly intelligent machines may seem demoralizing. I happen to find the challenge of trying to build such machines and the prospect of succeeding enormously stimulating. The idea that we could even conceive of such an endeavor seems a testament to how special and interesting we are. And the prospect of someone or something smarter than I am seems both inevitable and desirable. Alas, I can’t explain why I feel that way.
One of my main interests in this chapter is to explore how we might, could, even should relate to alien intelligences and therefore, given my broad use of the words “alien” and “intelligence”, to machines and one another. I’m interested in the moral and ethical issues that arise in our treatment of alien intelligences. I don’t intend to be proscriptive or for that matter prescriptive; I simply want you to think seriously about the implications of the view that we are no more or less than machines.
I like order and complexity. I dislike inefficiency and the thoughtless squandering of energy or destruction of useful artifacts, whether made by humans or other natural processes. I appreciate diversity for its role in exploration and change. I love mechanical and social systems whose parts behave in accord with locally consistent and largely self-serving rules of behavior while the whole manifests a global coherence and purpose not evident in the parts. I especially appreciate mechanical and social systems in which the parts can look beyond their immediate needs to arrive at a consensus opinion, a truce, an equilibrium state in which, as long as all or most of the parts adhere to an agreed-upon pact, all of the parts gain and the whole functions more efficiently. These are more aesthetic judgments than ethical principles, but balance, order, symmetry, coherence, spontaneity, diversity and complexity all impact how I relate to other beings.
When it comes to making moral judgments, I’m not particularly interested in what something is made of, its physical appearance, or its history apart from what it does, what it has done and what I expect it to do in the future. I try not to be dismissive of those associations, fondnesses, memories or relationships that I don’t understand but that one way or another are conducive to things I can otherwise appreciate. I may not be able to see why people lavish affection on some inanimate objects but, if this affection provides sustenance for an otherwise productive life, then I respect both their right to bestow their affections where they choose and indeed the object itself as being part of what I appreciate.
I used the word “respect” in the previous paragraph to refer to things that I take into account in making moral judgments. My willingness to treat all machines, biological or otherwise, on an equal footing, all else equal, stems from a computational and informational view of what’s important. For example, I respect human bodies in large part for their potential and for the connections, emotional and informational, that they’re involved in. In this view, a newborn baby demands respect in large part for the investment required to create such a complex and potentially productive entity and for the emotional ties already established with the parents by the time of its birth. By a similar argument, a severely demented and infirm adult, unrecognizable from his or her younger self, deserves respect for the web of connections to surviving friends and family. In each case, arguments concerning the rights and responsibilities of the individuals hinge not on their physical bodies but on the processes in which those bodies are involved.
So the above explains why I respect other people, or at least those who work together cooperatively, engage in webs of interaction that provide sustenance to others in productive endeavors and generally contribute to order, efficiency, diversity and complexity. What do my aesthetic choices and my ideas about what to consider in making moral judgments imply about how I should relate to other biological organisms and computing machinery? I can appreciate an ant colony for its role in a complex ecology that sustains me and because it pleases me aesthetically. I can appreciate an electromechanical robot for its complexity and beauty. But aside from the indirect sustenance that an ant colony provides, these are purely aesthetic reasons: I think ants are fascinating and I love to tinker with robots. Ethical questions arise when I have to justify making hard choices in doling out scarce resources.
What if it’s a matter of destroying an ant colony or putting in a new driveway? Sacrificing the lives of thousands of experimental animals or finding a cure for cancer? Retaining the services of a faithful but outdated family robot or recycling its valuable parts to build a new, more efficient model? How do the ant colony, the experimental animals, the faithful robot measure up against a new driveway, a cure for cancer or a spiffy new robot?
In college I was for a time a believer in utilitarianism,2 the philosophy or ethical doctrine that counsels the greatest good for the greatest number. I never advocated any particular variant of utilitarianism but I was intrigued with the notion of a calculus for determining what constitutes the greatest good. I used to believe that the most difficult part was grounding the calculus, establishing unassailable first principles. I read Kant’s Critique of Pure Reason twice, thinking it held the key, before I concluded that Kant, being much smarter than I, was simply able to construct a compelling argument of sufficient complexity that I couldn’t find its circularity.
At one point I despaired of establishing such unassailable principles and allowed a certain arbitrariness to creep in, which I called, somewhat derisively, the “I’m-wired-that-way” excuse. There are some reactions that I can’t disassemble further but accept as expedients for getting on with life. Generally speaking, I prefer life to death, order to disorder, pleasure to pain. But these are almost cartoonish in their simplemindedness. The most difficult part of constructing a utility calculus or any other basis for dealing with people is maintaining consistency and designing a coherent policy to deal with the difficult cases. How does the doctor or family of a comatose patient determine how much pain is too much? How does the rescue worker in a burning office building choose between saving fourteen adults trapped in a conference room on the fourteenth floor and three children in a basement day-care center?
As I got older I came to believe that my “wired-that-way” excuse was no more than a pretext for avoiding difficult cases. My calculus evolved into a complex decision process combining internal deliberation and external probing and negotiation, my obligation to deliberate and investigate bounded only by time and my own computational limitations. The details of this decision process are complicated, personal and constantly in flux. I can imagine situations in which I would favor the death of a human being over the destruction of a toy if the psychological damage to the people linked to that toy was catastrophic enough. I can imagine cases in which thousands, even millions of people should be inconvenienced to preserve a colony of ants.
In most cases, the benefit to humans remains paramount despite my aesthetic raptures about complexity and order. But I hold cats, dogs, whales, and even ants and their respective webs of nonhuman interactions in high regard. Even from my humancentric perspective, in some cases the rights of animals should win out over the rights of humans. Why? In part because of my aesthetic judgments about order and complexity, but also in part because what I see of animals’ computational capabilities leads me to believe they have a mental life of some complexity and hence deserve my attention and efforts at engagement and negotiation.
And what about the rights of nonbiological machines? There are easy cases in which I can imagine siding with machines whose value accrues as a consequence of their relationship to humans. But philosophers love to pose difficult — twisted some would say — cases that make us feel uncomfortable even though they are merely hypothetical. Here’s an example. Suppose there’s a nice family of robots, little robots and big robots, complicated as you please, all caring for one another in a web of supportive relationships. Indeed, the social organization they’ve created is a model self-sustaining, peaceful, productive and successful society. As individuals, each one is significantly more intelligent than any human who ever lived.
Unfortunately, an asteroid is streaking toward earth and only two options are available. Either the asteroid will land on the family of robots, blowing them to oblivion, or the planetary defense system circling the earth can deploy the only laser capable of bearing on the incoming asteroid to deflect the asteroid so that it lands on a family of human ne’er-do-wells squatting listlessly and counterproductively a few miles away. Which is it going to be? Exemplary robots or no-account humans?
The asteroid example seems easy until you really put yourself in the role of the person in charge of the planetary defense system. What if you were merely doling out government farm subsidies or apportioning an education budget over robot and human school districts? Are these easier or harder to decide given that there are no “lives” at stake? I set up the asteroid example so that there was no benefit for humans in the robots’ continued existence except perhaps an abstract appreciation of their more perfect social system. What if small advantages to robots could eventually lead to larger advantages for robots and ultimately to the relegation of humans to an intellectual and economic ghetto or, worse, extinction? It’s easy to play these academic parlor games with hypothetical cases and nonexistent robots, but some of you may someday sit on a planning board or city council and pass laws that will allow or deny opportunities and rights for machines.
Some computer scientists sidestep the issues raised in these examples by claiming that we humans will evolve right alongside robots: biological and electromechanical prostheses will augment our abilities so that soon it may be hard to distinguish human from machine. Already, people calling themselves cyborgs walk around using powerful computers, tiny head-mounted displays, wireless data links and wide-spectrum cameras to enhance what they see and maintain a constant connection to the web and to one another. But despite the likelihood of some form of co-evolution, I expect the same basic ethical and moral issues will still arise, most likely with strange twists and turns that we can’t anticipate now. The questions of what is right and wrong, what is just and fair and moral will be there waiting for us, whether or not we choose to prepare ourselves. In any case, thinking about how you relate to robots is an excellent exercise for thinking about how you relate to humans. Unless, of course, you think you’re so special.
Being free to choose (“having free will”) seems a prerequisite for taking responsibility for your actions and a requirement for moral obligation. If you can’t choose your actions, then you can’t be held responsible for them. Without freedom, not only are you not accountable for your actions, you can’t control your destiny. Then what’s the point? A lot seems to hinge on your being free. Once again I’m going to sidestep the complex moral and ethical issues involved with responsibility and focus on the simpler issue of agency — who (or what) is in control of me.
It would seem from what the biologists and sociologists tell us that some aspects of how we think and act depend on our genes and others depend on what we picked up from our family, friends and acquaintances as we grew up. If that weren’t enough outside intervention, any perceived shred of control remaining seems an illusion if one agrees with the findings of modern physics. If the universe is governed by physical laws and we’re just scattered bits of the universe, then aren’t my life and my future completely determined by these laws? (This view, closely related to the atomism of the ancient Greeks, is called causal determinism or simply determinism and is often associated with the French physicist and mathematician Pierre-Simon Laplace (1749–1827).) Where is there any wiggle room for me to choose?
In answering this question, it helps to be clear about what you really want. It’s easy to get spooked by the image of physical laws tugging at your puppet strings and making you dance against your wishes. It also helps to be clear about this “me” whom we want to be in control. By “me” I mean the sum total of all my experiences, my flesh sensitized by a million touches and my brain etched by the traces of past thoughts and activated by my current thinking. All I really want is for this “me” to be the sole agent in charge of my actions.
But that’s exactly what the laws of physics ensure. It’s my thoughts, my feelings, my past that determine how I act — no more and no less. That the future is determined — the script already written — doesn’t bother me. I have all the freedom I could possibly want. Indeed, it would upset me if it were otherwise.
If the universe is governed in part by random perturbations, say quantum effects, that is somewhat less desirable from a decision-making standpoint. But I can cope with uncertainty up to a point by identifying statistical patterns and acting so as to hedge my bets. I can deal with an uncooperative world. What I can’t countenance is having my choices, my remaining room to maneuver, influenced by factors aside from who I am and how I came to be that way.
What about the parts of me that are determined by my genes, by my siblings and friends growing up, by where I went to school? These are factors I had little control over. Am I stuck with how they conspired to shape me? To some extent they mark me for life and determine at least in part how I will respond to the future. But they can be overcome to some extent by exploiting the fact that we’re (self-) programmable machines.
I say this without total conviction just now, having recently experienced situations in which, despite extensive preparation on my part, I found myself deviating from my carefully planned behavior and reverting to primitive, deeply wired responses I did not resist: a failed New Year’s resolution to exercise three days a week, a lapse in the plan to cut down on my daily caffeine intake. I suppose there are people who have no wish to change the way they are; however, for many, I expect it’s important to believe that change, positive change, is possible. As a computer scientist aware of the power of programmable machines, it’s natural for me to want to tinker with my own programming.
Why would programming your own responses be any different from programming a robot with a fixed set of sensors and manipulators and an existing software layer providing basic services and low-level drivers for all the robot’s sensors and effectors? Well, aside from the fact that we don’t have access to the source code for our brains, the biggest problem is that the low-level driver code for existing robots is fundamentally different from the subroutines and basic firmware governing behavior. For one thing, our firmware appears to be constantly adapting; every time you think about something, you change the way you think about it; every time you do something, you change how you do it and your motivations and predilections for doing it in the future.
Imagine writing code to get a robot to walk across the room, code that relies on a particular set of low-level driver routines. Suppose you finally get the robot to perform as you wanted, but when you try your code again the next morning, you find that the drivers have perversely rewritten themselves so that your code now makes the robot seek out the battery charger, plug itself in and go into sleep mode. We’re always changing, but we don’t always have complete control over those changes. People talk about being compelled to do things, about being wired to respond one way rather than another, about being unable to overcome their biases and predispositions.
Why would anyone design a system with basic components so resistant to change? Just posing the question suggests the answer: The components are resistant to change because they are too critical to the organism’s health to let them be altered casually. You really don’t want to start playing around with the systems that govern your breathing, heart rate, digestion and reactions to extreme temperatures or rapidly approaching projectiles.
In what sense are we free to act if we are constrained by our low-level hardware? The computer scientist Drew McDermott claims (McDermott01, p. 98) that “a system has free will if and only if it makes decisions based on causal models in which the symbols denoting itself are marked as exempt from causality.” McDermott’s notion of causal model is similar to the sort of environmental model posited above for hypothesizing, evaluating and predicting the consequences of action. Being exempt from causality in this case simply means that the system is running a program using a causal model in which the actions performed by the system are determined by the program. Implicitly, the system believes (or acts as if it believes) that it is in control of itself.
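One way to render McDermott’s idea concrete is the sketch below. Everything in it (the toy world, the two actions, the utility function) is invented for illustration; it is not McDermott’s code. The point is only this: the program uses a causal model to predict what follows from each candidate action, but it never tries to predict its own choice. The choice is the one variable left free, which the program itself fills in.

```python
# A minimal sketch of a causal model in which the symbol for "self" is
# exempt from causality: the model predicts consequences of actions, but
# the agent's own action is chosen, not predicted.

def causal_model(action, world):
    """Predict the world that follows from taking `action` in `world`."""
    outcomes = {
        "press_brake": {**world, "speed": 0},
        "do_nothing":  {**world, "speed": world["speed"]},
    }
    return outcomes[action]

def choose(world, utility):
    # Note what is missing here: no line predicts what "self" will do.
    # We simply enumerate our options and pick the best-looking one.
    return max(["press_brake", "do_nothing"],
               key=lambda a: utility(causal_model(a, world)))

world = {"speed": 60, "obstacle_ahead": True}
utility = lambda w: -w["speed"] if w["obstacle_ahead"] else w["speed"]
print(choose(world, utility))   # -> "press_brake"
```

In McDermott’s terms, such a system acts as if it believes it is in control of itself: causality runs through everything in its model except the decision it is in the middle of making.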
But how can I be in control of myself if I can’t even understand how I think and reason? What about those impulsive low-level drivers that “make me” miss my exercise class or sneak an extra afternoon espresso? The answer is that I can control myself without knowing everything there is to know about psychology and neurophysiology, much as I can control my car without knowing a lot about automotive engineering. Even when you’re driving with the cruise control on, you feel that you’re in control of the car. Your internal model distinguishes between those aspects over which you have absolute control (turning off at the next exit) and those over which you have only indirect or incomplete control (acceleration on the hills).
What possible mechanism might we employ to exert control over our biases and predispositions? We have the ability to process language, use logic to follow and critique arguments, construct models of our environment, hypothesize, evaluate and predict outcomes, absorb information, form plans, and formulate and perform experiments to test hypotheses. I admit that it’s quite extraordinary of evolution to have contrived for us to have these capabilities, but have them we do and some of us exercise them every day.
So, it’s simple, right? We just list a set of possible outcomes, evaluate them to determine the ones we’d prefer, and then formulate and carry out plans to realize those outcomes (a loop sketched below). I have written code to make robots carry out a sequence of steps very much like this, and the fact that I can articulate the steps means that I could probably carry them out myself with one caveat: I may not feel like it! Does this mean I’m stuck, unable to “overcome” my programming and my hardwired biases and predilections? No. It just means that some of my programming is of a sort that requires a very different kind of reprogramming.
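For concreteness, here is a minimal sketch of that deliberation loop. The outcomes, scores and planner are placeholders invented for illustration; this is the shape of the steps just listed, not the robot code mentioned above.

```python
# A sketch of deliberation: list outcomes, evaluate them, pick the best,
# then formulate and carry out a plan to realize it.

def deliberate(possible_outcomes, evaluate, plan):
    # 1. List the outcomes we can think of and 2. evaluate each one.
    scored = [(evaluate(o), o) for o in possible_outcomes]
    best_score, best = max(scored)
    # 3. Formulate a plan to realize the preferred outcome...
    steps = plan(best)
    # 4. ...and carry it out (here we just print the steps).
    for step in steps:
        print(step)
    return best

outcomes = ["stay_home", "attend_class"]
evaluate = lambda o: {"stay_home": 1, "attend_class": 5}[o]
plan = lambda o: [f"prepare for {o}", f"execute {o}"]
deliberate(outcomes, evaluate, plan)   # chooses and plans "attend_class"
```

The caveat in the paragraph above lives outside this loop: the code will happily select “attend_class”, but nothing in it makes me feel like executing the plan, which is where the next idea comes in.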
You’ve probably heard of B. F. Skinner’s theory of behaviorism and his concept of operant conditioning. The idea is that by repeated exposure to a stimulus coupled with an appropriate (positive) reinforcing signal, an organism can be trained to respond to the stimulus even in the absence of the reinforcing signal. Complex responses (behaviors) can be “shaped” by reinforcing their component parts. Skinner has gotten a bad rap since, as often happens when a scientist single-mindedly pursues a theory, he tended to see operant conditioning and behaviors determined by simple stimulus-response associations at work in every aspect of human behavior. But Skinner had one thing right — operant conditioning is an effective tool in the repertoire of a self-programmer.
Suppose I’ve had a bad experience in school and so I’m reluctant to attend class. But logic and a quick look around at people whose lifestyles I find appealing tell me that a good education is a ticket to realizing the outcomes I most want. Still, it’s really tough to overcome my aversion to being in school. So I engineer situations such that the stimulus, being at school, is associated with a positive experience, such as taking a class I really like or working with a supportive teacher, and eventually my reluctance and aversion are extinguished and my interest and attraction become the dominant responses. Believe it or not, you can even do this in your head by simulating the stimuli and reinforcement signals, as it were: you simply imagine being at school and having a good time or learning amazingly useful stuff.
It sounds too good to be true but it works, and most of those “power of positive thinking” seminars you read about in the back of airline magazines are based on this simple idea. Robotics researchers use a similar technique called “reinforcement learning” in which a robot replays its past experiences, both pleasant and unpleasant, over and over in order to modify its behavior. In some cases, the robot rehearses a particularly undesirable experience, like falling down a flight of stairs, as part of an effort to ensure it never actually has that experience.
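Here is a hedged sketch of that replay idea, in the spirit of simple Q-learning with a buffer of remembered experiences. The states, actions and rewards are toy inventions, not any particular robot’s repertoire; the point is that replaying stored experience, over and over, reshapes the learned values that drive behavior.

```python
# A sketch of reinforcement learning with experience replay: stored
# (state, action, reward, next_state) tuples are replayed repeatedly to
# update value estimates, the way the text describes a robot replaying
# its past. All states, actions and rewards here are hypothetical.

import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9
q = defaultdict(float)   # q[(state, action)] -> estimated value

replay_buffer = [
    # The fall down the stairs is heavily penalized once remembered.
    ("hallway", "forward", -10.0, "stairs"),
    ("hallway", "turn",      1.0, "room"),
]

def replay(n_passes=200):
    for _ in range(n_passes):
        s, a, r, s2 = random.choice(replay_buffer)
        best_next = max(q[(s2, a2)] for a2 in ("forward", "turn"))
        # Standard Q-learning update, applied to a remembered experience.
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])

replay()
print(max(("forward", "turn"), key=lambda a: q[("hallway", a)]))  # -> "turn"
```

One unpleasant memory, replayed enough times, is sufficient to extinguish the behavior that produced it, which is about as close to self-inflicted operant conditioning as a machine currently gets.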
So I can use self-inflicted operant conditioning to rewire parts of my firmware if I can recognize what I want to change and devise a conditioning method to instill the target stimulus-response behavior. You need to know what makes you tick and hence there are probably limits on how much you can change your behavior, but it’s possible to make significant alterations. With a little insight, you could probably condition those perverse adaptive motor drivers I mentioned earlier to get your robot to walk across the room reliably.
So, for me, self-programming is the combination of searching for and recognizing desirable outcomes, using conditioning when appropriate to get my desires in line with whatever seems reasonable and then means-ends analysis to figure out how to bring about the desirable outcomes. Sounds so simple, doesn’t it? Well, it isn’t, but then neither is programming a silicon-based machine to do anything halfway interesting.
All this talk about self-programming is leading to a point about self-reliance and freedom. However difficult it is to program ourselves (and it’s clearly nothing like writing code for a web server), we can change our behavior just by thinking about it. You’ll have to wrestle with your sense of right and wrong but you can’t claim to lack control over your actions. If anything, understanding what we are capable of accomplishing as subtle and adaptable computing machines should make us expect more of ourselves. The power inherent in computing machinery should convince us that our limitations are few, our potential is extraordinary and our responsibilities are open-ended.
Think it odd for an engineer, scientist, or specifically a computer scientist to talk about themes usually the province of philosophy or theology? Not so. Scientists, mathematicians and academics of all stripes tend not to be shy of tackling big questions. In the fall of 1999, Don Knuth gave six public lectures at MIT about the interactions of faith and computer science (see Things a Computer Scientist Rarely Talks About for an edited transcription). The final lecture concerned computer programmers as creators of new universes and computational complexity as an approach to thinking about free will.
Over a hundred years earlier, William James (1842–1910) gave an address to Harvard Divinity School students entitled “The Dilemma of Determinism” (included in his Essays on Faith and Morals) that emphasized similar issues. It is clear that we make decisions every day in an attempt to order our lives and thus, according to James, we must believe (or at least act as though we believe) in free will; the alternative is unacceptable.
In addition to being a noted philosopher, James was a scientist and, in particular, a very influential psychologist (see Louis Menand’s The Metaphysical Club: A Story of Ideas in America for an absorbing account of his life and times and those of his contemporaries including Charles Sanders Peirce, whose writings on the foundations of probability, statistics and the scientific method helped shape how we think today). The philosophically inclined will find it very instructive to read Knuth’s account of determinism and free will and then James’s account.
I’m very interested in the questions raised in this chapter and I read whatever I can find on them. For insights into the connections among genetics, evolution, psychology and sociology, I recommend Steven Pinker’s How the Mind Works and Matt Ridley’s Genome: The Autobiography of a Species in 23 Chapters. For predictions about simulating human minds and creating robotic intelligence, check out Hans Moravec’s Mind Children: The Future of Robot and Human Intelligence and Robot: Mere Machine to Transcendent Mind, Ray Kurzweil’s The Age of Intelligent Machines and The Age of Spiritual Machines, Neil Gershenfeld’s When Things Start to Think and Rod Brooks’ Flesh and Machines.
If you’re unsure whether or not you’re conscious or indeed what that might mean, I recommend Daniel Dennett’s Consciousness Explained and Brainchildren: Essays on Designing Minds and Drew McDermott’s Mind and Mechanism. If you found my account of emotions and machine cognition unsatisfying, take a look at David Gelernter’s The Muse in the Machine for a computer scientist’s theories about how emotions influence memory and creative thought and Antonio Damasio’s Looking for Spinoza for a neuroscientist’s speculations on the role of emotions and feelings in cognition.
The issues of how we’re wired and whether and to what extent our attitudes and behavior are determined by our genes are very much in the popular press these days. For interesting accounts of what it means to have free will, whether we are free in any sense of the word, and the implications of various attitudes toward freedom, read Daniel Dennett’s Elbow Room: The Varieties of Free Will Worth Wanting and Freedom Evolves. The issues concerning free will and various forms of determinism are highly controversial, so proceed with caution. You would be well advised to track down dissenting views for the ideas that appeal to you most strongly and not to depend entirely on my suggestions for relevant reading. There are good reasons that many of these questions have remained unresolved for millennia.
I’m somewhat embarrassed to say that I have no particular favorites among modern texts on ethics and morals. In college, I read John Stuart Mill (Utilitarianism) and William James (Pragmatism and Essays on Faith and Morals) along with a host of other moralists and ethicists, from Aristotle to Hume. Recently, I reread James and the work of some of his contemporaries, including Oliver Wendell Holmes, Jr., and I particularly enjoyed James’s “The Moral Philosopher and the Moral Life” (in Essays on Faith and Morals) and various of Holmes’ writings that translate his particular brand of moral philosophy into political and legal policy.
I remain intrigued by Nietzsche’s statement “God is dead” and his prediction that, without God as a basis for guilt, a “total eclipse of all values” would lead in the twentieth century to conflicts of unprecedented brutality and scope. World Wars I and II were certainly conflicts of terrible violence and broad scope, but it’s not clear that these events were caused by any changes in our values. I avoided talking about values and morality here in part because I’m not satisfied with existing biological and computational accounts of morality, for example, in Steven Pinker’s The Blank Slate. You might want to look at some of the earlier work on morality and ethics before sampling the latest offerings. I find it heartening that the writings of Aristotle, James, Holmes, Hume, Mill, Nietzsche, and other long-departed philosophers are still so relevant today.
Of late I’ve been particularly interested in approaches to moral philosophy influenced by evolutionary biology and the theory of strategic games as they apply to cooperation and altruism. Daniel Dennett, a philosopher familiar with such influences, lists (Dennett03, p. 218) the key components of human morality as “an interest in discovering conditions in which cooperation will flourish, sensitivity to punishment and threats, concern for reputation, high-level dispositions of self-manipulation that are designed to improve self control in the face of temptation, and an ability to make commitments that are appreciable by others.” All these components involve interaction of one sort or another in discovering, experimenting with, and negotiating the terms of our shared moral pact. It’s not hard to see computation lurking in each of these key components.
Many of the issues raised in this chapter are currently beyond what science can answer and some of them may remain forever so. Not everything admits or warrants a scientific explanation. Our attitudes concerning who we are and how we relate to one another and to the universe tend to be very personal. At the same time there may very well be moral absolutes whose truth we can convince ourselves of. Grappling with the difficult issues raised in this chapter is an important way to come to terms with ourselves and learn to understand and appreciate the perspectives of others.
As far as I know, there are no definitive answers to the big questions about “life, the universe and everything” (borrowing the title of Douglas Adams’ third volume in Hitchhiker’s Guide to the Galaxy). These questions require (and merit) a lifetime investment, and one of the best ways of seeking the answers is to listen to others, particularly others who don’t hold the same opinions as you. I defer to Don Knuth on a wide range of questions concerning computer science and mathematics, but I disagree with (or perhaps misunderstand) his views on determinism and free will. Still, I get a lot out of listening to what he has to say. In reading William James and Charles Sanders Peirce, I find some of what they had to say embarrassing (James was the president of a society for psychical research from 1894 to 1895 and told his brother, Henry, to watch for evidence of James’s continued presence after his death) but much of their work is as relevant today as it was over one hundred years ago. I suppose that in some very real sense James is with us today as much as he was in his lifetime (as is the American Society for Psychical Research, so who am I to scoff and snicker?).
Some of you may feel that the idea of being “merely” machines somehow diminishes us. The fact is that we are truly remarkable machines and by understanding ourselves better we demonstrate yet another aspect of what makes us so remarkable. By thinking of ourselves as computational machines, we gain greater insight into how we work and what we are capable of achieving. By thinking of others as machines, we learn to appreciate what’s important in our relationships with them. And by acknowledging our self-programmability, we admit to a level of self-determination and responsibility that distinguishes us, at least for the time being, from any other machine. Just as I’m different for having thought about who I am, so we are all different for having thought about what sort of machines we are and imagining what sort of machines we might become.
Alan Kay, who dreamed up the idea of the laptop computer, once said, “The best way to predict the future is to create it.” Kay was one of the inventors of the Smalltalk programming language, a pioneer in object-oriented programming, and the architect of the modern graphical user interface. He, along with a host of other computer scientists, helped invent our future by imagining what computers might do and then realizing their ideas in hardware and software. With advances in computer science, molecular biology and neuroscience, the canvas prepared for the next generation of scientists and engineers is rich in possibilities. We’ve only just begun to tap the power of computing and hence the power to transform ourselves and humanity. It’s a fascinating time to be an intelligent machine.
1 Ray Kurzweil, inventor, scientist, and author of The Age of Intelligent Machines, has predicted that a $1,000 personal computer will match the computing speed and capacity of the human brain by around the year 2020.
2 John Stuart Mill (1806–1873) is perhaps the best known exponent of the ethical theory of utilitarianism, which counsels us always to choose the act among those available that brings about the greatest good for or does the least harm to the greatest number.