In class, we talked about other number systems besides binary (base two) and decimal (base ten). I passed around a replica of an ancient clay tablet on which was transcribed the nine times table. The tablet is an example of a Babylonian mathematical cuneiform18 text. The table was particularly interesting because it was written in sexagesimal (base sixty) notation. Use the web to learn why the sexagesimal numbering system made sense for the ancient Assyrian astronomers and mathematicians. You might also want to learn about the work of Otto Neugebauer, who helped unravel the puzzle of these clay tablets.
Consider the robot shown on the right in the following graphic:
[Figure: two Braitenberg-style vehicles; the photophobic robot on the left, the phototropic robot on the right]
This robot, based on Braitenberg's Vehicles [Braitenberg1986], is designed to seek light; it's said to be phototropic (the one on the left is photophobic). It should be easy to figure out how the light sensors drive the motors to steer the robot toward a light source in this simple design. Programmers and roboticists try to map designs (in the form of programs, hardware and electronic circuits) onto desired behaviors. But sometimes it's difficult to predict the behavior of even simple designs.19
Suppose we add another pair of sensors to our robot; these sensors detect sound and we'll assume that they're added to create a robot with an aversion to sound (a person with an irrational fear of sounds is said to be acousticophobic):
[Figure: the robot with a second pair of sensors that detect sound]
What do you think this robot will do when placed in an environment with variously placed sources of sound and light? What would it do with a light behind it and a noise ahead of it? How about light and noise emanating from the same point? Figuring out exactly how such a robot will behave requires that you understand how the various signals from the sensors combine to drive the motors. This can be difficult when you have continuous inputs and continuous outputs.
Continuous measurements like light and sound can be tamed somewhat by thresholding. Instead of thinking about a continuum of measurements, suppose that we have a device that simply registers a 1 if the light (sound) is greater than a fixed level (the threshold) and 0 otherwise. Now we only have to worry about two cases: light and dark (quiet and noisy). This simple abstraction can make it easier to design complex systems involving lots of inputs.
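To make the thresholding idea concrete, here's a minimal Scheme sketch; the sensor scale (0.0 to 1.0) and the particular threshold value are assumptions for illustration, not part of any real sensor's interface:

(define (threshold reading level)
  ;; map a continuous sensor reading onto a binary value:
  ;; 1 if the reading exceeds the threshold level, 0 otherwise
  (if (> reading level) 1 0))

> (threshold 0.72 0.5)
1
> (threshold 0.31 0.5)
0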
We still have to specify what to do when different combinations of inputs occur and that's where logic gates and Boolean logic come in. Here are the basic logic gates described in Chapter 9:
[Figure: the basic logic gates described in Chapter 9]
In the design for our light-seeking, noise-avoiding robot, there are four inputs, RIGHT_LIGHT, LEFT_LIGHT, RIGHT_SOUND, and LEFT_SOUND, and two outputs, RIGHT_MOTOR and LEFT_MOTOR. We'll assume that all of the inputs and outputs take on binary values: 0 and 1. To completely determine the behavior of our robot, we could fill in all sixteen of the rows in the following truth table (this is a multiple-input, multiple-output truth table):
[Table: a truth table with sixteen rows, one for each combination of the four binary inputs, and output columns for RIGHT_MOTOR and LEFT_MOTOR]
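Where do the sixteen rows come from? With four binary inputs there are 2^4 = 16 combinations. Here's a small Scheme sketch (not part of the controller itself) that generates all the input combinations for a given number of inputs:

(define (all-input-rows n)
  ;; returns a list of all 2^n lists of n binary digits
  (if (= n 0)
      '(())
      (let ((rest (all-input-rows (- n 1))))
        (append (map (lambda (row) (cons 0 row)) rest)
                (map (lambda (row) (cons 1 row)) rest)))))

> (length (all-input-rows 4))
16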
But this'll get tedious (and prone to error) if we have to handle many more inputs and outputs. So instead, we'll create a hierarchical design for our controller, starting with the following high-level description:
[Figure: high-level block diagram of the robot controller]
This design is hierarchical because it's realized in multiple levels so that some components on a given level are treated as black (they're actually white in this case) boxes or modules whose details are hidden. The details of these modules are revealed at lower levels in the hierarchical design. In the above diagram, we introduced two additional values, SOUND and NO_SOUND, to represent some of the intermediate results of our controller. We can drill down a little deeper into the module responsible for handling the input from the light sensors:
[Figure: the module that handles the input from the light sensors]
There's still a module at this level lacking detail, but it should be clear what this module is supposed to do. In the following table, we assign names to the intermediate outputs of this module:
[Table: names assigned to the intermediate outputs of the light-sensor module]
This module is pretty easy to implement in logic gates (you don't actually need any logic gates). The design for the corresponding module responsible for handling the input from the sound sensors is a little trickier. Here's one possible truth table for the inputs and outputs:
[Table: one possible truth table for the sound-sensor module's inputs and outputs]
And here's one way that you might implement this truth table in logic gates:
[Figure: one implementation of the sound-sensor truth table in logic gates]
Now, in order to control a real robot, we could actually wire a bunch of logic gates together to implement our controller. This is time consuming, though, and will likely require that you rewire your controller several times as you debug your design. One alternative is to simulate your logical design on a digital computer. It should be clear to you that you can implement any logical circuit with a program. Here we implement NAND and XOR gates in Scheme:
> (define (nand input_1 input_2) (not (and input_1 input_2)))
> (nand #t #t)
#f
> (define (xor input_1 input_2)
    (or (and input_1 (not input_2)) (and (not input_1) input_2)))
> (xor #f #f)
#f
> (xor #f #t)
#t
> (xor #t #f)
#t
It may seem like overkill to use a computer (often built from millions of primitive logic gates) to implement a simple circuit, but it's definitely convenient and nowadays you can buy a complete computer on a single chip for only a few cents in large quantities.
It turns out that you can compute any reasonable mathematical function with a large enough circuit, and this implies that you can compute most anything you'd really want to compute. You should convince yourself that it's possible to add numbers using circuits. In addition to the presentation in the book, you might want to check out some of the cool web sites that describe full adder circuitry and let you experiment using a Java applet.
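As a quick sanity check that addition really is within reach of logic gates, here's a sketch of a one-bit full adder written in terms of the Boolean operations; it uses the xor procedure defined above and represents bits as #t and #f to match the gate definitions:

(define (full-adder a b carry-in)
  ;; sum bit: a xor b xor carry-in
  ;; carry out: set when at least two of the three inputs are set
  (list (xor (xor a b) carry-in)
        (or (and a b) (and carry-in (xor a b)))))

> (full-adder #t #t #f)
(#f #t)

Adding 1 + 1 with no carry in yields a sum bit of 0 and a carry out of 1, which is just binary 10; chaining such adders together, each one feeding its carry out to the next one's carry in, gives a ripple-carry adder for multi-bit numbers.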
However, even though it's possible to build a circuit to implement any mathematical function, it's often more convenient and more efficient to compute complex functions in stages, storing intermediate values along the way and then using them later in the computation when needed. That's where memory and methods for organizing memory come into play, registers and the register-machine model in particular.
There is a famous story about the mathematician Carl Friedrich Gauss (1777-1855) in which he amazed his teachers in elementary school when, asked to sum the numbers from 1 to 100, he immediately produced the answer by realizing that the sum consists of 50 pairs of numbers, each pair summing to 101:20
(1 + 2 + ... + 100) = (1 + 2 + ... + 50) + (51 + 52 + ... + 100)
                    = (1 + 100) + (2 + 99) + ... + (50 + 51)
                    = 50 * 101 = 5050
Basically, Gauss recognized that a particular way of organizing a sum represented as a string of symbols gave rise to a more efficient method of computing that sum (at least it was more efficient given his particular abilities in adding, multiplying and dividing numbers).21 Arithmetic as implemented in modern computers involves a number of similar clever tricks. Clearly, people were summing lists of numbers long before computers, but when engineers and mathematicians began designing computers they found that the methods they were taught in school for performing computations were not always the best methods for computers.
For one thing, it turned out to be convenient to represent numbers in binary (base two) notation rather than decimal (base ten) notation. Decimal notation may seem more concise because a given number requires fewer digits than the corresponding binary number, but those digits are more complex, requiring more symbols (0, 1, 2, ..., 9) than binary (0, 1). Although Arabic text is written right-to-left, Arabic numerals (which have been adopted by most of modern mathematics) are written left-to-right, that is, with the most significant digits appearing first (leftmost) and the least significant appearing last (rightmost). This convention need not be followed by engineers in representing binary numbers in computers; indeed, there are computers in which numbers are stored with the most significant bits first and others in which they are stored with the least significant bits first.22
There are other representational conventions that figure even more prominently. How might you represent negative numbers in binary? We could introduce another symbol, -, in addition to 0 and 1, but this complicates the design of computer memories, in which the notions of off and on map nicely onto the two binary digits. We could reserve one binary digit to indicate the sign of the number (0 for + and 1 for -), and this is what is done in certain circumstances. However, integers are typically represented using what's called two's complement arithmetic: a representational trick not unlike Gauss's method for summing the integers 1 to 100. We obtain the two's complement of an integer by inverting the bits in the binary representation of the integer and adding one. I wrote a little Scheme code to help illustrate:
> (load "numbers.scm")
> (define seven (integer->bits 7))
> seven
(0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1)
> (invert-bits seven)
(1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0)
> (increment-bits (invert-bits seven))
(1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1)
> (define negative-seven (increment-bits (invert-bits seven)))
> negative-seven
(1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1)
> (increment-bits (invert-bits negative-seven))
(0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1)
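The code in numbers.scm isn't listed in this section, but to demystify it, here's one plausible way two of its helpers might be written (a sketch, not the actual file), assuming bits are stored as a list of 0s and 1s with the most significant bit first:

(define (invert-bits bits)
  ;; flip every bit: 0 becomes 1 and 1 becomes 0
  (map (lambda (b) (- 1 b)) bits))

(define (increment-bits bits)
  ;; add one to a fixed-width bit list, working from the least
  ;; significant end; a carry off the leftmost bit is dropped
  (reverse
   (let loop ((rest (reverse bits)))
     (cond ((null? rest) '())
           ((= (car rest) 0) (cons 1 (cdr rest)))
           (else (cons 0 (loop (cdr rest))))))))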
Notice that the two's complement of the two's complement of a number is just the original number; in this way, two's complement functions just like negation: (- (- n)) = n. Now why would we go to all this trouble to represent negative numbers? Arithmetic is important in computing, and adding numbers is pretty basic in arithmetic. In Chapter 9 we described how other arithmetic operations are carried out using addition as a subroutine. There are circuits dedicated to adding integers, and it's convenient if those same circuits can be used to subtract (add negative numbers) as well as add. By representing negative numbers in two's complement notation, the same machinery can both add and subtract. Here's an example illustrating how this works:
> (define first-addend (integer->bits 9))
> first-addend
(0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1)
> (define second-addend (integer->bits -7))
> second-addend
(1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1)
> (integer->bits 7)
(0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1)
> (invert-bits (integer->bits 7))
(1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0)
> (increment-bits (invert-bits (integer->bits 7)))
(1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1)
> (define sum (add-bits first-addend second-addend))
> sum
(0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0)
If the sum is a negative number, then it automatically ends up in two's complement notation, and we can convert this representation back to the standard binary representation used for nonnegative integers by inverting the bits and adding one:
> (define first-addend (integer->bits -9))
> first-addend
(1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1)
> (define second-addend (integer->bits 7))
> second-addend
(0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1)
> (define sum (add-bits first-addend second-addend))
> sum
(1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0)
> (bits->integer sum)
-2
> (increment-bits (invert-bits sum))
(0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0)
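The add-bits procedure also comes from numbers.scm; a plausible version (again a sketch under the same assumptions) is a ripple-carry adder over the bit lists, the software analogue of the full-adder circuits discussed earlier, with the carry out of the leftmost position discarded, which is exactly what two's complement arithmetic requires:

(define (add-bits bits-1 bits-2)
  ;; add two equal-length bit lists, rippling the carry from the
  ;; least significant end; the final carry out is discarded
  (reverse
   (let loop ((a (reverse bits-1)) (b (reverse bits-2)) (carry 0))
     (if (null? a)
         '()
         (let ((total (+ (car a) (car b) carry)))
           (cons (remainder total 2)
                 (loop (cdr a) (cdr b) (quotient total 2))))))))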
This may not seem like a huge thing, but when you do something a few billion times a second the savings can add up. For a more detailed explanation and additional examples, see Thomas Finley's web page on two's complement arithmetic.
There are lots of other interesting representations used by computers. Eight-bit binary numbers called bytes are often treated specially. The ASCII code (ASCII stands for ``American Standard Code for Information Interchange'') is one of several standards for encoding characters. ASCII proper maps the binary numbers 0-127 to 128 characters; eight-bit extensions assign characters to all 256 byte values:
> (bits->integer '(0 1 0 0 0 0 0 1))
65
> (integer->char (bits->integer '(0 1 0 0 0 0 0 1)))
#\A
> (bits->integer '(0 1 1 0 0 0 0 1))
97
> (integer->char (bits->integer '(0 1 1 0 0 0 0 1)))
#\a
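Going the other direction, from a character to its bits, can be sketched using the standard char->integer procedure and a little helper (the helper is mine for illustration, not part of numbers.scm):

(define (char->bits ch width)
  ;; peel off bits from least to most significant, consing each
  ;; more significant bit onto the front of the list
  (let loop ((n (char->integer ch)) (k width) (bits '()))
    (if (= k 0)
        bits
        (loop (quotient n 2) (- k 1) (cons (remainder n 2) bits)))))

> (char->bits #\A 8)
(0 1 0 0 0 0 0 1)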
There are also standards for representing floating-point numbers. Typically, numbers are stored in a fixed-format representation involving a fixed number of bits, most commonly 16, 32 or 64. Only so many integers can be encoded in a fixed number of bits. Floating-point numbers allow us to approximately represent numbers that are far larger (or smaller) than any integer we could encode directly in the same number of bits. But just as in the case of integers, and negative integers in particular, what you see when you work with floating-point numbers on a computer doesn't necessarily reflect what's going on internally.
> 2.75e+7
27500000.0
> (* 2.75 (expt 10 7))
27500000.0
The numbers are displayed with the fractional part (2.75) in decimal notation and the exponent (7) also in decimal notation, with the exponent understood as a power of ten. Internally, things are somewhat different. In the IEEE Standard 754 single-precision representation for real numbers, one bit is reserved for the sign of the number (0 for a positive number and 1 for a negative number), the exponent is stored in the next eight bits, and the mantissa occupies the remaining twenty-three bits.23 I wrote a little read-eval-print loop to illustrate some of the different representations for numbers:24
> (numbers)
On this machine the native encoding of numbers is big endian and this
procedure displays binary numbers with the most significant bits
appearing on the left.
Enter a single character or an integer or a real number: 97
Character: a
Decimal: 97
Binary: 1100001
Octal: 141
Hexadecimal: 61
Integer byte string:        00000000000000000000000001100001
Floating point byte string: 01000010110000100000000000000000
Enter a single character or an integer or a real number: 27500000.0
Character: NA
Decimal: 27500000
Binary: 1101000111001110111100000
Octal: 150716740
Hexadecimal: 1a39de0
Integer byte string:        00000001101000111001110111100000
Floating point byte string: 01001011110100011100111011110000
Enter a single character or an integer or a real number: -1.2
Floating point byte string: 10111111100110011001100110011010
Enter a single character or an integer or a real number: 1/8
Floating point byte string: 00111110000000000000000000000000
Notice that in addition to binary and decimal representations we include octal (base eight) and hexadecimal (base sixteen). Why do you think the latter are popular with computer scientists? Also notice that the numbers read-eval-print loop doesn't attempt to display fractional numbers (e.g., 1/2 or -1.2) as characters or in integer formats.
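To connect the floating point byte strings above with the layout described in footnote 23, here's a sketch that splits a 32-bit list (most significant bit first) into its sign, exponent and mantissa fields; take and drop are small list helpers defined here since they aren't built into every Scheme:

(define (take lst n)
  (if (= n 0) '() (cons (car lst) (take (cdr lst) (- n 1)))))

(define (drop lst n)
  (if (= n 0) lst (drop (cdr lst) (- n 1))))

(define (decode-float-fields bits)
  ;; 1 sign bit, then 8 exponent bits, then 23 mantissa bits
  (list (list 'sign (car bits))
        (list 'exponent (take (drop bits 1) 8))
        (list 'mantissa (drop bits 9))))

Applied to the byte string for -1.2 above, this yields a sign of 1 (negative), an exponent field of (0 1 1 1 1 1 1 1), that is 127, which after subtracting the bias of 127 gives an exponent of 0, and a mantissa holding the fractional bits of 1.2.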
Here's a review of the basic machine instructions (or rather their assembly-code symbolic names) for the simple register machine described in Chapter 9. I've thrown in a couple of new instructions that will be useful if you want to write some assembly code of your own. In particular, I've added the HALT instruction, which stops the computer, thus preventing it from executing any subsequent instructions:
ADD I J K   -> K = J + I
BRANCH N L  -> if N <= 0 then jump to memory location L
DECR I      -> I = I - 1
HALT        -> the computer halts
INCR I      -> I = I + 1
JUMP L      -> jump to memory location L
LOAD I L    -> load the contents of location L into register I
MULT I J K  -> K = I * J
SUB I J K   -> K = J - I
STORE I L   -> store the contents of register I in location L
Here's a graphic illustrating the various components of the register machine model (the so-called von Neumann architecture) which I'll explain more carefully in class:
[Figure: the components of the register machine model (the so-called von Neumann architecture)]
Here's a simple assembly program that computes the sum of two numbers:
        LOAD 1 FIRST
        LOAD 2 SECOND
        ADD 1 2 3
        STORE 3 RESULT
        HALT
FIRST:  17
SECOND: 31
RESULT: 0
Write down what memory looks like before the code is executed and then simulate the program showing how memory changes as the instructions are executed. Now do the same for the following program. What does this program compute?
         LOAD 1 INIT
         STORE 1 COUNTER
LOOP:    LOAD 1 COUNTER
         LOAD 2 RESULT
         ADD 1 2 3
         STORE 3 RESULT
         LOAD 1 COUNTER
         DECR 1
         STORE 1 COUNTER
         BRANCH 1 STOP
         JUMP LOOP
STOP:    HALT
COUNTER: 0
RESULT:  0
INIT:    10
Here's a more detailed explanation of how the robot assembly program in Chapter 9 works.
The annotated assembly code listing:
START: JUMP LOOP    # jump past sensor and constant locations
RSV:   0000         # read right sensor value here
LSV:   0000         # read left sensor value here
RMP:   0000         # write right motor power level here
LMP:   0000         # write left motor power level here
OFF:   0000         # store motor-off constant here
ON:    0100         # store motor-on constant here
LOOP:  LOAD 1 RSV   # load right sensor value into register 1
       LOAD 2 LSV   # load left sensor value into register 2
       SUB 1 2 3    # register 3 = register 2 - register 1

After the SUB instruction, register 3 contains register 2 minus register 1, or LSV - RSV in this case. Register 3 will be greater than zero just in case the left sensor value is greater than the right sensor value.

       LOAD 1 OFF   # load motor-off constant into register 1
       LOAD 2 ON    # load motor-on constant into register 2
       BRANCH 3 RGT # if the left sensor is greater than the right

If the left sensor value is greater than the right sensor value, that means the light source is off to the left and we want the robot to turn to the left. We can drive the robot to the left by making the right motor spin faster than the left motor. If register 3 is greater than zero, then according to our definition of the BRANCH instruction, the program counter will be incremented normally and the instructions at LFT and LFT + 1 will be executed, setting the right motor to ON and the left motor to OFF. If register 3 is less than or equal to zero, then the program counter will be set to RGT and the instructions at RGT and RGT + 1 will be executed, setting the left motor to ON and the right motor to OFF.

LFT:   STORE 2 RMP  # then turn the right motor on
       STORE 1 LMP  # and turn the left motor off
       JUMP LOOP    # and then jump to beginning of the loop
RGT:   STORE 2 LMP  # else turn the left motor on
       STORE 1 RMP  # and turn the right motor off
       JUMP LOOP    # and then jump to beginning of the loop
I've provided Scheme code to emulate simple assembly code programs of the sort described in Chapter 9. Look at the examples at the end of the program listing to figure out how to write and execute your own programs.
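That emulator isn't reproduced here, but to give you the flavor, here's a stripped-down sketch of how such an emulator might work; the program format is my own assumption, not the provided code's interface. Instructions and data share a single vector, so labels have already been resolved to numeric addresses by hand:

(define (emulate memory)
  ;; memory: a vector mixing instructions (lists) and data (numbers)
  (let ((registers (make-vector 8 0)))
    (define (reg i) (vector-ref registers i))
    (define (set-reg! i v) (vector-set! registers i v))
    (let loop ((pc 0))
      (let ((inst (vector-ref memory pc)))
        (case (car inst)
          ((HALT)   registers)
          ((LOAD)   (set-reg! (cadr inst) (vector-ref memory (caddr inst)))
                    (loop (+ pc 1)))
          ((STORE)  (vector-set! memory (caddr inst) (reg (cadr inst)))
                    (loop (+ pc 1)))
          ((ADD)    (set-reg! (cadddr inst)
                              (+ (reg (caddr inst)) (reg (cadr inst))))
                    (loop (+ pc 1)))
          ((SUB)    (set-reg! (cadddr inst)
                              (- (reg (caddr inst)) (reg (cadr inst))))
                    (loop (+ pc 1)))
          ((MULT)   (set-reg! (cadddr inst)
                              (* (reg (cadr inst)) (reg (caddr inst))))
                    (loop (+ pc 1)))
          ((INCR)   (set-reg! (cadr inst) (+ (reg (cadr inst)) 1))
                    (loop (+ pc 1)))
          ((DECR)   (set-reg! (cadr inst) (- (reg (cadr inst)) 1))
                    (loop (+ pc 1)))
          ((JUMP)   (loop (cadr inst)))
          ((BRANCH) (if (<= (reg (cadr inst)) 0)
                        (loop (caddr inst))
                        (loop (+ pc 1)))))))))

For example, the two-number sum program above can be encoded with FIRST, SECOND and RESULT resolved to addresses 5, 6 and 7:

(define program
  (vector '(LOAD 1 5) '(LOAD 2 6) '(ADD 1 2 3)
          '(STORE 3 7) '(HALT) 17 31 0))

> (emulate program)
#(0 17 31 48 0 0 0 0)
> (vector-ref program 7)
48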
Download all the code fragments in this chapter as a file in zip format.
18 Cuneiform refers to the wedge-shaped script used in ancient Mesopotamian and Persian inscriptions.
19 If you're interested in experimenting with Braitenberg vehicles, you might search the web for programs that simulate such vehicles, e.g., check out those by Wiseman or Gerken.
20 For another example of out-of-the-box thinking about numbers that yielded powerful insights into mathematics, investigate Georg Cantor's diagonal method for showing that the natural numbers cannot be put in a one-to-one correspondence with the real numbers. Cantor used this method to show that there are infinite sets that are larger than other infinite sets. The diagonal method is also relevant to theoretical computer science, having been used to establish important results by Alan Turing and Kurt Gödel.
21 In a homework exercise, you were asked to write the Scheme function summorial to compute the sum of the numbers 1 through n. Using Gauss's trick, you can now write a more efficient summorial function:
(define (slow-summorial n)
  (if (= n 0)
      0
      (+ n (slow-summorial (- n 1)))))

(define (quick-summorial n)
  (* n (+ n 1) 1/2))
Why do you think the latter function is characterized as being more efficient than the former? Could you quantify how much more efficient it is?
22 Search the web to learn more about ``big endian'' and ``little endian'' conventions for storing multi-byte numbers in computers and see if you can understand the arguments put forward by the proponents of each convention.
23 The exponent is a signed integer in the range from -126 to 127, stored as an offset from a bias chosen so that all exponents in the desired range map to nonnegative integers. In IEEE single-precision numbers, the bias is 127, so an exponent of 4 is stored as 131, or 10000011 in binary. Depending on the format, exponents can represent powers of 2, 10 or 16; the IEEE 754 binary formats use powers of 2.
24 Here's a simple example of a read-eval-print loop in case you're interested in writing your own. Read-eval-print loops can be used to write interpreters for your own languages or implement your own shell variants:
(define (read-eval-print-loop)
  (printf "Input: ")
  (let ((input (read)))              ; read one expression from the user
    (if (equal? input 'quit)
        'bye                         ; typing quit ends the loop
        (begin
          (printf "Output: ~A~%" (eval input))
          (read-eval-print-loop))))) ; loop for the next expression