Nobody Here but Us Machines

By Tom Dean

Who am I? What am I? What is, could be, or should be my relationship to other beings? And, of course, what constitutes a "being" such that it makes sense for me to have a "meaningful" relationship with it? Such big questions. Such weighty issues. I used to think that I would eventually answer these questions and resolve the weighty issues to my satisfaction. Now I'm reconciled to the fact that the questions are not precisely enough posed to admit of succinct, precise answers, and that even if they were well posed, the answers would provide only fleeting satisfaction. Part of the problem is that I am changing, and so who I am and what I am are changing too.

This problem of the ephemeral, transient me is not a trivial problem that can be resolved by my just sitting down and seriously considering the question of my being for a day or a week or a year. I'm different for having thought about who I am; indeed, I'm different for having recently thought about who I am, and I'll be different again if I don't think about who I am for a span of time. I associate myself with a process: the evanescent silvery thread of my thoughts and associations arising from but not satisfactorily explained by a sequence of chemical reactions in my brain.

By "not satisfactorily explained" I don't mean that additional fundamental insights in chemistry or physics will be required to explain why and what I think, but rather, like so many other complex phenomena arising out of the interaction of simpler processes, the results of such interactions are hard to summarize more concisely than it takes to describe the entire sequence of events, in the case of my thoughts the blow-by-blow account of molecules bouncing off one another. The very fact that I can contemplate such a process is remarkable; the fact that I can't fully comprehend it is unremarkable and fully to be expected.

I believe that I'm a machine, a biological robot equipped with a very practical array of sensors and effectors and controlled by a biological computer which, in terms of its ability to carry out the calculations governing my external behavior and determining my internal state, is no more or less capable than the computer I'm using to compose this paragraph. My biological computer, like the computer on my desk, is a universal Turing machine except for the finiteness of its memory, a shortcoming that is both unavoidable and not particularly worrisome from a practical standpoint. I would love to have more memory and a faster, more reliable processor but these deficits don't cause me any loss of sleep.

I'm not surprised that there are other biological robots (not just humans) roaming this planet that have the necessary machinery required to perform arbitrary computations within the constraints of their physical memory. I believe that my software sets me apart from most other organisms by enabling me to take better advantage of my computing machinery. My genes equipped me with firmware that allowed me to perform quite remarkable feats right out of the box. My parents, teachers and colleagues provided me with some very useful software titles that run, with some adjustment, on my existing hardware. However, what is most remarkable about my software is the ability that I have to write and then run software of my own design on my built-in computing machinery. What can I say? "Very cool!"

The above assessment of my (and, it must be admitted, your) capabilities is based on the fact that I have yet to observe any aspect of human behavior that can't be explained in terms of an algorithm running on a universal Turing machine. I'm also quite comfortable with (indeed I view it as inevitable) the prospect of other organisms, biological or otherwise, possessed of more powerful computing machinery, in the sense of more memory and faster processors, whose abilities to make sense of and manipulate their environment to suit their purposes eclipse ours. Intel has to watch out for Advanced Micro Devices, Motorola and IBM: faster chips, better algorithms, new designs fueled by the joint efforts of humans and machines whose capabilities are improving at an exponential rate. Competition, innovation and selection are at work everywhere else we look. Why should the future be any more secure for the present self-proclaimed masters of the universe?

The exact way in which my internal calculations are carried out is largely irrelevant to my argument. Certainly my brain is capable of performing computations in parallel in a way that the computer on my desk is not. But a single-processor computer can compute anything that a parallel computer can. Speed and memory capacity do matter; there are calculations that I could carry out in principle but will not attempt to do in my head even with the benefit of paper and pencil to supplement my internal memory. My brain performs some calculations faster than any currently known algorithm can perform the same calculations, but the computer on my desk is faster at other calculations, and its speed and memory are increasing at an exponential rate and will continue to do so for the foreseeable future. If there isn't already, there will soon be computational power available in sufficient quantity and proximity to eclipse the capabilities of the human brain.
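
To make the serial-simulates-parallel claim concrete, here is a minimal sketch in Python. The names and the toy processes are my own inventions, not a model of any real machine: a single processor steps several "concurrent" processes round-robin, and the resulting trace contains every step each process would have taken running in parallel.

    # Generators stand in for independent processors; each yields one
    # "instruction" at a time.
    def process(name, steps):
        for i in range(steps):
            yield "%s performs step %d" % (name, i)

    processes = [process("A", 3), process("B", 2), process("C", 3)]

    # Round-robin scheduling: give each live process one step per pass
    # until every process has run to completion.
    while processes:
        still_running = []
        for proc in processes:
            try:
                print(next(proc))
                still_running.append(proc)
            except StopIteration:
                pass
        processes = still_running

Nothing the parallel hardware computes is lost, only time: the serial trace may take longer to produce, but it is step-for-step complete.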

I understand that if the computer on my desk is ever to achieve a human level of autonomy and ability, then it will require sophisticated software, like the firmware etched into my genes. I also understand that this software currently isn't available to run on existing (silicon) computers. But it's only a matter of time. Some of the necessary routines probably do exist and others are being written by scientists and researchers. My guess is that the required software won't be written and released like traditional software (Basic Cognition Firmware Version 1.01) but rather that scientists will make use of some form of accelerated simulated evolution to stitch together a set of basic capabilities into a functioning whole. Indeed some form of simulated evolution is likely to play a fundamental role in the way that the basic firmware operates. The first widely recognized machine intelligence will very likely be rife with adaptive, continuously evolving, self-configuring and self-calibrating components. From the standpoint of a set of interacting computational processes, it will look like a pool of densely packed writhing eels.
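
To give a feel for the kind of simulated evolution I have in mind, here is a toy sketch in Python. Everything in it is a placeholder of my own devising: the "genomes" are bit strings and the fitness function merely counts bits, whereas a real system would evolve controllers or network weights against some behavioral test.

    import random

    GENOME_LENGTH = 32
    POPULATION_SIZE = 50
    GENERATIONS = 100
    MUTATION_RATE = 0.02

    def fitness(genome):
        # Stand-in for evaluating how well a candidate controller performs.
        return sum(genome)

    def mutate(genome):
        # Flip each bit with small probability.
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    def crossover(a, b):
        # Single-point crossover: splice two parents together.
        point = random.randrange(1, GENOME_LENGTH)
        return a[:point] + b[point:]

    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POPULATION_SIZE)]

    for generation in range(GENERATIONS):
        # Rank by fitness and keep the better half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:POPULATION_SIZE // 2]
        # Refill the population with mutated offspring of random parent pairs.
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(POPULATION_SIZE - len(parents))]
        population = parents + offspring

    print("best fitness:", fitness(max(population, key=fitness)))

No one writes the winning genome; it is stitched together by selection, variation and time, which is precisely why I expect the resulting firmware to look evolved rather than engineered.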

This is far too abstract, you may say (or perhaps too disgustingly graphic). Even if you're willing to believe that humans are nothing more than computing machinery, you're likely to protest that the software, broadly construed, that governs our behavior is of a sort that can never be written to run on traditional computing hardware. What of the ability to feel not only pain and pleasure but sadness, happiness and the full range of human emotional responses? How about the ability to be aware of your surroundings, to be conscious of your role in the events that involve you, to remember the past, predict and plan for the future, and respond emotionally and practically to the present, the memory of the past and the anticipation of the future? I'm not interested in building an artificial intelligence to replace or even precisely mimic the behavior of a human, but I agree that such characteristics are likely to prove quite useful in any sort of artificial intelligence.

In particular, I have trouble imagining a reasonably powerful autonomous robot that doesn't have emotions of some stripe. There are good reasons from an engineering standpoint to build machines that continually monitor and update parameters summarizing, for instance, whether the machine has recently been subjected to damage, encountered something identified as a threat, suffered a loss or achieved a goal. In addition, any robot that successfully interacts with a complex environment is going to need a model of that environment and, if it interacts with humans or other robots, it will need models, both general and specific, enabling it to predict the behavior of such entities. Augmenting a model of the environment to include a model of the robot itself constitutes an interesting twist, but it's an obvious extension from a logical perspective. Using a model to explain the past, determine what to do in the present or predict and plan for the future is conceptually straightforward.
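
As an engineering sketch of such monitoring, consider the following Python fragment. The class, its registers and its threshold are hypothetical illustrations of the idea, not a proposal for any particular architecture.

    # A robot summarizes its recent history in a few scalar,
    # emotion-like registers that decay toward neutral over time.
    class AffectState:
        DECAY = 0.95  # registers relax toward zero each control cycle

        def __init__(self):
            self.registers = {"damage": 0.0, "threat": 0.0,
                              "loss": 0.0, "satisfaction": 0.0}

        def observe(self, event, intensity=1.0):
            # Bump the register associated with an incoming event.
            if event in self.registers:
                self.registers[event] += intensity

        def tick(self):
            # Called once per control cycle: old experiences fade.
            for key in self.registers:
                self.registers[key] *= self.DECAY

        def wary(self):
            # One possible summary judgment a planner might consult.
            return self.registers["damage"] + self.registers["threat"] > 1.0

    robot = AffectState()
    robot.observe("threat", 0.8)
    robot.observe("damage", 0.5)
    robot.tick()
    print(robot.wary())  # True: recent threat plus damage exceeds the threshold

The point is not that these registers "are" emotions, only that a handful of decaying summary statistics already gives a planner something usefully emotion-like to consult.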

There are aspects of the words "consciousness" and "feeling" that I've heard philosophers and cognitive scientists try to explain that I'll admit are fascinating and that I haven't a clue how to explain. I think of most of these aspects, e.g., the mental states that follow upon my experience of a particular stimulus like pricking my finger with a pin, as manifestations arising from the complex interplay of relatively simple processes governing memory, sensation and the ability to formulate and use models. As a machine curious about my own functioning, I'm interested in reading about such epiphenomena and will likely continue to follow the relevant philosophical discussions. As an engineer trying to design a robot, I don't have any interest in explicitly writing software to ensure that such phenomena manifest themselves. I won't be surprised if they arise spontaneously out of the interaction of simpler capabilities but neither will I be upset if they don't.

I think it's only a matter of time before there are inorganic beings among us that exhibit many of the characteristics of humans including the ability to adapt to situations, pursue complex goals, plan for contingencies, and formulate and make use of models to predict the consequences of their own actions and those of other beings. The associated capabilities and intelligence needn't be embodied and indeed the first such beings may exist without the need for specialized hardware to realize their memory, logic and sensors.

Nowadays the above ideas are not particularly novel, having been presented with variations in Hans Moravec's "Mind Children: The Future of Robot and Human Intelligence" [Harvard University Press, 1988] and "Robot: Mere Machine to Transcendent Mind" [Oxford University Press, 1998], Daniel Dennett's "Consciousness Explained" [Penguin, 1991] and "Brainchildren: Essays on Designing Minds" [MIT Press, 1998], Ray Kurzweil's "The Age of Intelligent Machines" [MIT Press, 1992] and "The Age of Spiritual Machines: When Computers Exceed Human Intelligence" [Viking Press, 1998], Neil Gershenfeld's "When Things Start to Think" [Henry Holt & Company, Inc., 1999], Drew McDermott's "Mind and Mechanism" [MIT Press, 2001] and Rod Brooks's "Flesh and Machines: How Robots Will Change Us" [Pantheon Books, 2002], to name a few of the texts in the popular press. Perhaps, being a latecomer in putting my views in print, I won't be the target of enraged readers angry that I would even suggest displacing humans as the most intelligent beings on the planet. My agenda here is somewhat different.

Despite the lengthy preamble and philosophical outing1, my primary interest in this entry is to think about how we might, could, even should relate to alien intelligences and therefore, given my broad use of the words "alien" and "intelligence", to machines and to one another. I'm interested in the moral and ethical issues that arise in our treatment of alien intelligences. I don't intend to be proscriptive or, for that matter, prescriptive; my intention is to explain how my programming compels me to relate to other beings.

I like order and complexity. I dislike inefficiency and the thoughtless squandering of energy or destruction of useful artifacts, whether made by humans or by other natural processes. I appreciate diversity for its role in exploration and change. I love mechanical and social systems in which the parts behave in accord with locally consistent and largely self-serving rules of behavior while the whole manifests a global coherence and purpose not evident in the parts. I especially appreciate mechanical and social systems in which the parts are capable of looking beyond their immediate needs to arrive at a consensus opinion, a truce, an equilibrium state in which, as long as all or most of the parts adhere to the agreed-upon pact, all of the parts gain and the whole functions more efficiently. These are more aesthetic judgments than ethical principles, but my appreciation of balance, order, symmetry, coherence, spontaneity, diversity, complexity and the like bears on how I relate to other beings.

I'm not particularly interested in what something is made of, how it physically appears, or what its history is apart from what it does, what it has done and what I expect it to do in the future. It is for this reason that I don't believe in entitlement and I feel that I constantly have to prove myself useful and worthy of the opportunities I'm offered. I'm not blind to the past, nor am I dismissive of those associations, fondnesses, memories or relationships that I can't understand but that one way or another are conducive to things that I can otherwise appreciate. For instance, I may not be able to fully appreciate the value in someone lavishing attention on an inanimate object, but if this target of their affection provides sustenance for an otherwise productive life, then not only do I respect their right to bestow their affections where they choose, but I respect the object itself as part of what it is I appreciate and am willing, under appropriate circumstances, to protect and fight for.

If the above sounds hopelessly lofty or sterile in its abstractions, there are implications that you might find concrete and disturbing. As far as I'm concerned, a human body has no inherent value apart from the difficulty we might have in reproducing it. It accrues value as a consequence of the potential it accumulates from its programming and the web of connections, emotional and informational, that it establishes over time. A newly born baby is valuable for the effort required to create such a complex and potentially productive entity and for the emotional ties already established with the parents by the time of its birth. By a similar argument, a severely demented and infirm adult, unrecognizable from his or her younger self, is valuable for the web of connections to surviving friends and family. In each case, arguments concerning the rights and responsibilities of the respective individuals hinge not on their physical bodies but on the processes in which those bodies are involved.

So the above might explain why I value other people, or at least those who work together cooperatively, engage in webs of interaction that provide sustenance for others to carry on in productive endeavors and generally contribute to order, efficiency, diversity and complexity. How do my aesthetic choices and ethical precepts counsel that I relate to other biological organisms and computing machinery? I can appreciate an ant colony for the role it plays in a complex ecology which sustains me and because it pleases me aesthetically. I can appreciate an electromechanical robot for its complexity and beauty. But aside from the indirect sustenance that an ant colony provides, these are purely aesthetic reasons: I think ants are fascinating and I love to tinker with robots. Ethical questions arise when I have to make hard choices in doling out scarce resources.

What if it's a matter of destroying an ant colony or putting in a new driveway? Sacrificing the lives of thousands of experimental animals or a cure for cancer? Retaining the services of a faithful but outdated family robot or recycling its valuable parts to obtain a new and more efficient model? How do the ant colony, the experimental animals and the faithful robot measure up against a new driveway, a cure for cancer or a spiffy new robot?

When I was in college (my first, aborted foray into higher education) I was for a time a believer in utilitarianism2, the philosophy or ethical doctrine that counsels the greatest good for the greatest number. I never advocated any particular variant of utilitarianism, but I was fascinated with the notion that there might be a calculus for determining what constitutes the greatest good. I used to think that the most difficult part was grounding the calculus, establishing unassailable first principles. I read Immanuel Kant's "Critique of Pure Reason" twice thinking it held the key, before concluding that Kant, being much smarter than I, was able to construct a compelling argument of sufficient complexity that I simply couldn't detect it was circular. I finally despaired of establishing such principles and allowed a certain arbitrariness to creep in, which I cast, somewhat derisively, as the "I'm wired that way" excuse. There are some reactions which I can't disassemble further but accept, at least as an expedient for getting on with life. Generally speaking, I prefer life over death, order over disorder, pleasure over pain. But these are almost cartoonish in their simplicity. The most difficult part of constructing a utility calculus, or any other basis for treating people, is maintaining consistency and designing a coherent policy to deal with the difficult cases. How does the doctor or family of a comatose patient determine how much pain is too much? How does the rescue worker in a burning office building choose between saving fourteen adults trapped in a conference room on the fourteenth floor and rescuing three children in a day-care center in the basement?
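
The mechanics of such a calculus are trivial; the weights are not. The sketch below, in Python with numbers pulled entirely out of thin air, scores each of the rescue worker's options as the probability of success times the value assigned to the outcome. Choosing those values defensibly is exactly the part I despaired of.

    # A toy expected-utility comparison. Every number here is an
    # arbitrary placeholder; assigning them defensibly is the hard part.
    options = {
        "conference room (14 adults)": {"p_success": 0.5, "value": 14.0},
        "day-care center (3 children)": {"p_success": 0.9, "value": 3.0},
    }

    def expected_utility(option):
        # Probability of a successful rescue times the value of the outcome.
        return option["p_success"] * option["value"]

    for name, option in options.items():
        print(f"{name}: expected utility = {expected_utility(option):.1f}")

    best = max(options, key=lambda name: expected_utility(options[name]))
    print("calculus recommends:", best)

Nudge either probability or either value and the recommendation flips, which is the whole difficulty in miniature.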

The details of my personal calculus are complicated, personal and constantly in flux. I can imagine situations in which I would counsel the death of a human being over the destruction of a toy if the psychic damage wrought upon the people linked to that toy was catastrophic enough. I can imagine cases in which thousands or even millions of people might be inconvenienced to preserve a colony of ants. In my calculus, the benefit to humans remains paramount despite my aesthetic raptures about complexity and order. But I hold cats, dogs, whales, and even ants and their respective webs of non-human interactions in high regard. Even in my human-centric calculus, there are cases in which the rights of animals would win out over the rights of humans. Why? In part because of my aesthetic judgments about order and complexity, but also in part because I feel some kinship with animals, and with mammals in particular. So, what about the rights of non-biological machines?

There are easy cases in which I can imagine siding with machines whose value accrues as a consequence of their relationship to humans. But let's cut right to one of those stomach-twisting, involuntary-swallowing, eye-averting cases that makes us all feel a little uncomfortable even though it's merely hypothetical. There's a nice family of robots, little robots and big robots, complicated as you please, all caring for one another in a web of supportive relationships. Indeed, the social organization that they've created is a model for a self-sustaining, peaceful, productive and successful society. As individuals, each one is significantly more intelligent than any human that ever lived. Unfortunately, there is an asteroid streaking toward earth and only two options are available. Either the asteroid will land on the family of robots, blowing them to oblivion, or the planetary defense system circling the planet can deploy the only laser capable of bearing on the incoming asteroid and thereby deflect it so that it lands on a family of human ne'er-do-wells squatting listlessly and counterproductively on the other side of the valley from the robots. Which is it going to be? Exemplary robots or no-account humans?

The asteroid example was supposed to be an easy one. Was it easy for you? What if you were merely doling out government farm subsidies or apportioning an education budget over robot and human school districts? Are these easier or harder to decide given that there are no "lives" at stake? I set the asteroid example up so that there was no benefit for humans in the robots continuing to exist, except perhaps an abstract appreciation of their more-perfect social system. What if small advantages to robots could eventually lead to larger economic advantages for robots and ultimately to humans being relegated to an intellectual and economic ghetto or, worse, to extinction? It's easy to play these academic parlor games with hypothetical cases and nonexistent robots, but it's possible that some of you will one day sit on a planning board or city council and pass laws or set budgets that will allow or deny opportunities and rights for machines.

Some computer scientists sidestep the issues raised in the above examples by claiming that we humans will evolve right alongside robots. Biological and electromechanical prostheses will augment our bodies and our minds. Soon enough it will be hard to distinguish human from machine. Despite the likely prospect of some form of co-evolution, I expect the same basic ethical and moral issues will still arise, most likely with strange twists and turns that we can't anticipate now. The questions of what is right and wrong, what is just and fair and moral, will be there waiting for us whether or not we choose to prepare ourselves. Besides, thinking about how you relate to robots is an excellent exercise for thinking about how you relate to humans. Unless, of course, you think you're so special.

I'm somewhat embarrassed to say that I don't have any particular favorites when it comes to modern texts on ethics and morals. In college, I read John Stuart Mill ("Utilitarianism") and William James ("Essays on Faith and Morals") along with a host of other moralists and ethicists, from Aristotle to Hume. Recently, inspired by Louis Menand's "The Metaphysical Club: A Story of Ideas in America" [Menand, 2001], I re-read James and the work of some of his contemporaries, including Oliver Wendell Holmes Jr. (I particularly enjoyed James's "The Moral Philosopher and the Moral Life", an address to the Yale Philosophical Club, published in the International Journal of Ethics, April 1891, and reprinted in [James, 1962].) I believe that the most important part of my college education as it concerned the study of ethics came from participating in several related seminar classes, examining cases, expounding (somewhat shyly) on my own opinions and listening to the judgments of others. I suppose you could develop an ethical theory in a vacuum, but I contend there is something crucially important in talking with friends about difficult moral questions. Sound too touchy-feely, too unsettling and discomfiting? So it is, and so are many of the situations in which we find ourselves in life.

The issue of how we're wired, and whether and to what extent our attitudes and behavior are determined by our genes, has been very much in the popular press in recent years. For fascinating insights into the connections involving genetics, evolution, psychology and sociology, I recommend several books by Matt Ridley, including "The Red Queen: Sex and the Evolution of Human Nature" [Prentice Hall, 1994] ("Evolution is a treadmill, not a ladder"), "The Origins of Virtue: Human Instincts and the Evolution of Cooperation" [Viking, 1996], and "Genome: The Autobiography of a Species in 23 Chapters" [Fourth Estate, 1999]. Controversial as it is, I consider Richard Dawkins's "The Selfish Gene" [Second Edition, Oxford University Press, 1989] essential reading for anyone interested in understanding human nature. In a similar vein, I recommend Daniel Dennett's "Darwin's Dangerous Idea: Evolution and the Meanings of Life" [Simon & Schuster, 1995].


1. I was curious if "outing" as in "coming out of the closet" had entered the lexicon as a generic term for disclosing a secret (or previously unadvertised personal fact) that is likely to bring unwanted attention. And so it has (or perhaps had). The secret in this case is my preference for and alliance with all things mechanical.

Main Entry: out
Date: before 12th century
transitive senses
1 : eject, oust
2 : to identify publicly as being such secretly; especially : to identify as being a closet homosexual
intransitive senses : to become publicly known

Main Entry: out·ing
Function: noun
Date: 1821
1 : a brief usually outdoor pleasure trip
2 : an athletic competition or race; also : an appearance therein
3 : a usually public presentation or appearance (as in a particular role)
4 : the public disclosure of the covert homosexuality of a prominent person by homosexual activists

- Merriam-Webster's Collegiate Dictionary

2. John Stuart Mill (1806-1873) is perhaps the best-known exponent of the ethical theory of utilitarianism, which counsels that we should always choose the act among those available that brings about the greatest good or does the least harm.

Main Entry: util·i·tar·i·an
Function: noun
Date: circa 1780
: an advocate or adherent of utilitarianism

Main Entry: util·i·tar·i·an·ism
Function: noun
Date: 1827
1 : a doctrine that the useful is the good and that the determining consideration of right conduct should be the usefulness of its consequences; specifically : a theory that the aim of action should be the largest possible balance of pleasure over pain or the greatest happiness of the greatest number
2 : utilitarian character, spirit, or quality

- Merriam-Webster's Collegiate Dictionary