Pet Rocks, Robotic Pets and Things that Think

An exercise proposed by Tom Dean and revised by Roger Blumberg

In CS148 students build "robopets". A "robopet" is a mobile robot that exhibits certain behaviors and is controlled by a relatively simple logic unit. The logic unit is often implemented as a finite-state machine whose internal state consists of a set of parameters, and because these parameters control the behavior of the robopet they are often given descriptive names by the programmer, such as "happiness", "hunger", "curiosity", "loneliness", "fear", etc. A typical robopet is equipped with sensors that allow it to observe properties of the external world, such as light levels, temperature, and whether and where on its body it is being touched. Other sensors allow a robopet to monitor its internal physical state and its own behavior (e.g. its battery level or the direction and rate of rotation of its drive wheels).

Robopets have a set of procedures that can be invoked to allow the robot to interact with the world. One procedure might enable a robot to seek out a particular stimulus, such as food or light or petting. Another procedure might enable a robot to avoid a particular stimulus, say darkness or loud sounds. These procedures can be simple or complex, they can be used in isolation or combination, and they can be unresponsive to external stimuli or wonderfully responsive, i.e., adaptive.

Finally, a robopet has a set of rules which, when applied to the current state, result in the robot altering the current set of procedures (adding some and removing others) governing its behavior. These rules also stipulate how to update the robot's "emotional" parameters on the basis of the robot's current state and sensory stimuli. The rules implement the state-transition function of the finite-state machine. (Imagine a finite-state machine as a dim-witted but conscientious homunculus armed with a chalkboard listing the current values of the emotional parameters, and a rule book that it consults at each tick of the clock to enable or disable the set of procedures governing the robot's behavior.)
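
To make this architecture concrete, here is a minimal sketch, in Python, of how such a finite-state machine might be organized. The parameter names, thresholds, sensors and procedures below are illustrative assumptions, not the actual CS148 code:

    class Robopet:
        def __init__(self):
            # "Emotional" parameters: the internal state of the finite-state machine.
            self.state = {"happiness": 0.5, "hunger": 0.2, "fear": 0.0}
            # Procedures currently governing behavior.
            self.active_procedures = {"wander"}

        def sense(self):
            # Stand-in for reading real sensors (light, touch, battery, wheel rotation).
            return {"light": 0.8, "touched": False, "battery": 0.9}

        def apply_rules(self, sensors):
            # The state-transition function: update the emotional parameters and
            # enable/disable procedures based on the current state and stimuli.
            if sensors["touched"]:
                self.state["happiness"] = min(1.0, self.state["happiness"] + 0.1)
            if sensors["battery"] < 0.3:
                self.state["hunger"] = min(1.0, self.state["hunger"] + 0.2)
            if self.state["hunger"] > 0.7:
                self.active_procedures = {"seek_charger"}
            elif self.state["happiness"] < 0.3:
                self.active_procedures = {"seek_petting"}
            else:
                self.active_procedures = {"wander"}

        def tick(self):
            # One tick of the clock: the "homunculus" consults its rule book.
            self.apply_rules(self.sense())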

In fact, a robopet's behavior arises out of the interplay of multiple active procedures (which in and of themselves can produce complex, difficult-to-predict and often surprising behavior) and the robopet's interaction with its environment. Up to a point, a robopet might seek out petting even though it is hungry. It might also seek pain of one sort to avoid pain of another sort. It may seem that if you wrote the program you'd naturally understand how the robot would behave in all circumstances, but you can also imagine that the number and complexity of the rules could be such that the programmer could never foresee all the possible scenarios, interactions and resulting behaviors of even a relatively simple robopet.

Students who build robopets often anthropomorphize their robots, and this is revealed when they talk about their behavior. Fellow students find it easy to play with robopets at some length even though they've been told how the robots are programmed. Imagine, for example, that the set of rules causes the robopet to exhibit behavior that people recognize as "altruistic", "affectionate", "loyal", "single-minded", "curious", "lazy", "sly", "aggressive", "playful". Similarly, Rod Brooks, in Flesh and Machines, reports that the students who programmed the Cog and Kismet robots, relatively sophisticated budding cyberneticians, often attributed cognitive and affective characteristics to their creations and behaved with the robots as if the robots "deserve" some degree of attention and empathy; they clearly do "relate" to these automata.

That humans "relate" in this way to "inanimate" objects is not exactly new. Years ago, Joseph Weizenbaum wrote a program called "Eliza" that attempted to simulate the experience of talking to a psychotherapist, and he was flabbergasted at how many people became emotionally attached to his simple text-only program. More recently, and perhaps even more surprising, the short-lived "pet rock" craze in the United States showed that people could develop emotional attachments and attribute behavioral predicates to quintessentially inanimate objects!

But whereas most people would agree that pet rocks exhibit nothing we should properly call "behavior" (or exhibit only one such behavior!), robots like robopets clearly exhibit a set of behaviors. Indeed, some reductionists believe that real pets, for example domestic cats and dogs, are no more complicated than robopets. And some would go so far as to say that humans are governed by more complex variants of the same basic computational principles governing robopets, i.e., some might hold the view that only the set of rules (decision logic, control software) is different in each "organism".

Before we continue to investigate this idea, however, let's go back to pet rocks. Why is it that we hesitate to attribute behavior to rocks, but have little trouble attributing behavior to robopets, animals and humans? What about plants? Certainly we can make sense of someone who says not just that their jade plant is growing, but that it is "happy"; but, do we really believe plants can be happy or are we just interpreting such a statement as a figure of speech? To see where you stand on these issues, think about the similarities and differences between the situations in which you might utter each of the following sentences:

  1. My spider plant is really happy today.
  2. My cat is mad at me today.
  3. My robopet wants to be petted today.
  4. My two-month-old sister doesn't like the swing.

Now back to computational principles, robopets and people. Most of us will admit that certain computer programs are more sophisticated, useful, or complicated than other programs; so what might it mean for humans and household pets to be differentiated only on the basis of their "programs"? Suppose that all biological systems could be described in terms of automata with increasingly complicated and sophisticated rule sets, procedures and internal state vectors. Suppose further that all such systems can be laid out on a single continuum from the simplest to the most complex (it might be infinite in one direction). One question that immediately presents itself is: How should we measure "complexity" in order to arrange the systems along the continuum?

"Complexity" is one of those concepts that seems transparent until you think about it a while. While it's obvious that we should distinguish complexity from simple "size" (e.g. the smallest insect is clearly more complex than a salt lick the size of the Empire State Building), it's less obvious how to describe complexity so that it matches our intuitions and insights about the differences between organisms and/or machines. For example, how would you explain why an adult whale is or isn't complex than a human infant? How would you explain why a robopet is or isn't more complex than a spider plant?

As an exercise consider the following "systems":

  1. an ant
  2. an elm tree
  3. a two-month-old infant
  4. a robopet rabbit
  5. a whale
  6. you

Kolmogorov complexity gives us one way to measure complexity and construct a scale, but it is not the only measure we might use. Define two different "scales of complexity" so that, for each one, a different system would be obviously "most complex". Share your scales with others and see if there is general agreement about which system is "really" most complex. (Can the group come up with six different scales of complexity so that each of the different systems is considered "most complex" according to one of the scales, while the meaning of the term "complexity" hasn't been changed beyond recognition?)
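
To see how such scales might be written down, here is a toy sketch, in Python, of two different measures applied to the six systems above. The "descriptions" and behavior counts are invented placeholders rather than real measurements; the only point is that different measures can rank the same systems differently:

    import zlib

    # Invented stand-ins for the six systems (not real data).
    systems = {
        "ant":      {"description": "six legs antennae colony forager " * 3,   "behaviors": 40},
        "elm tree": {"description": "trunk branches leaves roots growth " * 5, "behaviors": 5},
        "infant":   {"description": "reflexes crying feeding learning " * 8,   "behaviors": 60},
        "robopet":  {"description": "wheels sensors rules parameters " * 2,    "behaviors": 12},
        "whale":    {"description": "song migration social pods diving " * 10, "behaviors": 80},
        "you":      {"description": "language culture planning memory " * 12,  "behaviors": 500},
    }

    def scale_description_length(s):
        # Crude stand-in for Kolmogorov complexity: compressed length of a description.
        return len(zlib.compress(s["description"].encode()))

    def scale_behavioral_repertoire(s):
        # A different scale: size of the (invented) behavioral repertoire.
        return s["behaviors"]

    for scale in (scale_description_length, scale_behavioral_repertoire):
        ranking = sorted(systems, key=lambda name: scale(systems[name]), reverse=True)
        print(scale.__name__, "->", ranking)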

One reason for disagreement in these matters is that different people use the term "complex" to describe very different kinds of things. If you look up the word "complex" in the Oxford English Dictionary (available on-line at Brown) you'll find that it has historically been used to describe things as different as "ideas", "sentences" and "societies". In comparing the scales people have come up with in our exercise, you'll see that sometimes people are talking about physical properties of the systems, sometimes behavioral properties, sometimes social properties, and sometimes properties that are some combination of these. Finally, depending on how we define complexity, our scale will be either ordinal or interval, and may or may not be continuous.

For the sake of discussion, however, let's suppose we have a generally agreed-upon measure of complexity, and suppose too that we can agree where to place insects, cats, monkeys, plants, robopets and humans on that scale.

Do you imagine this scale is continuous (i.e. do you imagine there is some possible system corresponding to every point on the scale)? Perhaps there are discontinuities introduced by particular cognitive characteristics that bring about an abrupt increase in capability. Might these divisions or discontinuities provide a basis for different treatment by nearby entities along the continuum? Is it possible that all such divisions are arbitrary? Are there bracketed intervals of the continuum that constitute classes of entities that are qualitatively the same? Perhaps the entities in these classes should have some sort of affinity for one another? What about entities in different classes?

Try to make sense of the following question: How should we relate to other entities along this scale? When I say "relate", I mean things like whether you would be willing to: help them if you saw someone tormenting them; end their "life" (i.e. to kill them, or turn them off in the case of a machine); accord them the right to vote; allow them to receive unemployment compensation from the state; yield to them when you are passing in a hall and they are encumbered while you are not; perform experiments on them to improve the lot of other entities in your own position on the scale; require that they be subject to and protected by the laws of the land?

When I say "should", I mean that you personally feel compelled. You'll probably have to examine what you mean by "should" and "relate" multiple times in trying to make sense of and then answer the above question. For starters, choose a point along the scale where you understand what would be meant by "relate", e.g., one of the roaches hiding behind the refrigerator in your kitchen, the humanoid robot in the movie "Terminator", or David, the artificial intelligence in the movie "A.I.". Now ask yourself whether you think systems at that point on the scale should be able to vote for or run as presidential candidates in democratic elections, and see how you respond without thinking long or carefully. Then go back and examine the question and your answer more carefully.

Clearly the differences of opinion about the right answers to these questions reflect differences in values, both personal and social. Some might say they will inevitably reflect a (human) species bias as well. That is, if there were a robot endowed with the capacity to think about and answer these questions, do you imagine its answers would be the same as ours? (Like the term "complexity", this question may seem less clear as one considers it for a while).

Here are two questions that reveal interesting things about how our values enter into our judgments about humans and machines. Answer each one quickly, without thinking too much about it, and then more carefully upon reflection. Share your answers with others and see whether any consensus is possible in the group.

  1. Would you think differently about a person who was upset because her cat ran away (or preferred to sit in the lap of someone else) and another person who was upset because her robocat ran away (or preferred to sit in the lap of someone else)? How would you explain your answer to someone who claimed not to agree with you?
  2. Suppose in the future that schools are populated by students who are: 1) entirely human; 2) entirely robotic; or 3) combinations of the two (i.e. cyborgs). Suppose too that some classes are limited in size, and you are the teacher who must make decisions about who is and is not allowed to enroll in your limited-enrollment course. Under what circumstances might you use the human/robot/cyborg distinction to make such decisions?

As a last exercise, suppose that a particular society decides that rights, obligations, and privileges ought to be distributed based on the increasing complexity of the members of that society (i.e. the most complex members of society are accorded the most and most important rights, are subject to the greatest and most important obligations, and receive the greatest privileges). Revisit the measures/scales of complexity you used in the 6-system exercise above, and construct a measure/scale that you think would make for a society that was especially "just". Would your choice have been the same if the goal were to create a society that was "humane" or "efficient" or "free" or "wealthy" or "fun to live in"?

