CS009: Computers and Human Values
Department of Computer Science, Brown University
Notes, September 16th -- Roger B. Blumberg
Moravec III:
Complexity, Simulation, Explanation, Justification, Robopets, Respect, and Otherness (!).
Introduction
As we wrap up our discussions of Moravec and the Robopet exercise,
it's worthwhile to call attention to several concepts whose
clarity might be questioned. Among these are:
- Complexity: What is complexity and why is it important
to Moravec's discussion? If there are so many ways to characterize
and measure complexity, might we conclude that the concept itself
will eventually be replaced by several others?
- Explanation: On Wednesday, someone insisted that we
could never evaluate a robot's "intelligence" until we had a
better understanding of what our own intelligence amounts to.
What would an explanation of intelligence look like? What
are your criteria for a "good" explanation, either in science
or in everyday life?
Aside: In Sunday's NYTM, the cognitive scientist Steven Pinker is asked
whether all of human nature, "..even culture itself,
[is] reducible to evolutionary biology?" He begins his reply:
"I prefer the word unification to reduction." What is/are the
explanatory goal(s) and distinction(s) he has in mind here?
- Simulation: According to a number of contemporary
cultural theorists, one of the
things that characterizes our "post-modern" age is the rise
of simulation, the blurring of distinctions between the
simulated and the "real", and the appearance of imitations
that try to improve upon the real or simply lack an
"original" altogether (e.g.
Celebration, FL).
But for our purposes, and in the concrete case(s) of AI,
we might ask whether our ability to accurately simulate a
behavior presumes a "good" explanation of that behavior,
and whether the successful simulation of a human quality
or behavior in a machine (e.g. one that passes the Turing
Test) is enough to justify the claim that that machine
ability/behavior is "real".
- Justification: The Robopet exercise was meant to
inspire, among other things, some thinking about ethical
judgements. The difference between our discussion of these
judgements in class and the way we might talk
about them in normal conversation is that we'll be as
interested in the beliefs underlying these judgements as
the judgements themselves. The biggest issue in Moral
Philosophy, after all, isn't the question of right and wrong
but the question of how we can/should justify
our moral beliefs and claims.
- Respect: The Robopet exercise also raises
explicitly the question of respect, which is perhaps a
more complicated term than it first appears to be. Although
the roots of the word imply deference and hierarchy, a more recent view
argues that symmetry and reciprocity in relationships are at the
core of respect.
- Otherness: Questions about ourselves, our responsibilities,
and the limits of our freedom, originate most often in our
encounters with others. There is a large
literature on "Otherness" and the thoughts and behaviors encounters
with "The Other" can and should inspire (e.g. is my encounter with
another person inherently a struggle to avoid becoming an "object"
for her/him, or is the encounter inherently ethical, inciting
responsibility first and thought second -- "Before it is gaze,
the other is a face," writes Finkielkraut in The Wisdom of
Love). Whatever their arguments, most analyses of
"otherness" assume the basic humanity of the Other (or at least
that the Other is a living being, in the case of animals), but the
Robopet exercise asks us to consider our encounter with the Other
when the Other is a machine. What, if anything, does that change
in our sense of what is involved in encountering the Other?
Moravec's Mind Children, chapters 3-6
We'll finally get to the chapter summaries of Moravec, and ask
the presenters to incorporate their Robopet remarks into their
presentations.
Tom Dean has been kind enough to join us again today, and
his own response to the
Robopet exercise is extremely interesting.
A(nother) Brief Introduction to Arendt's
The Human Condition
Arendt's book is one of the most difficult we will read
this semester, and perhaps we can share strategies for reading
the text successfully on Wednesday. In the meantime, here are
some questions that might help you identify and make sense
of her arguments:
- In the Prologue (pp. 4-5), Arendt discusses the
idea/myth that automation has freed (or can free) humans
from labor's "toil and trouble", and ends with the
remark that ".. we are confronted with ..
the prospect of a society of laborers without labor .."
In light of her (later) distinction between work and labor,
how would you explain this remark to someone who hadn't
had the pleasure (or whatever) of reading pp. 1-50?
- In Chapter 1, Arendt distinguishes between "human nature" and
"the human condition". What is the importance of the distinction
and what is the purpose of thinking about the latter?
- In Chapter 2 (p. 38), Arendt writes "We no longer think
primarily of deprivation when we use the word 'privacy,' and
this is partly due to the enormous enrichment of the private
sphere through modern individualism." How do the classical
(e.g. Ancient Greek) notions of the "private" and the
"public" differ from those Arendt thinks characterize the
modern age? How does your own sense of the relationship
between the public, the private, the social and the political
compare with Arendt's and/or with the Ancient Greeks?
For Next Time: 1) Send your written Robopet remarks to
the CHV-L list, and make sure to read all the responses carefully
(you might use this as a first opportunity to think about topics
for your first essay); and 2) Read the first 50 pages of
The Human Condition if you've not already done
so, and prepare at least one question about the reading.