Written Assignment 1 FAQ

Problem 1

We have discussed how the definition of substitution results in an inefficient operator: in the worst case, it can take time at least quadratic in the size of the program (where we can define the program size as the number of nodes in the abstract syntax tree). We talked about delaying substitution using a cache. However, as we discussed in class, implementing the cache using a stack doesn't seem much more efficient.

Answer the following two questions.

  1. Provide a schema for a program (similar in style to the one we saw in class for the non-linearity of substitution) that illustrates the non-linearity of the stack-based cache implementation. Explain briefly why its execution time is non-linear in its size.

  2. Describe a data structure for a substitution cache that a FWAE interpreter can use to improve its complexity, and show how the interpreter should use it (if the interpreter must change to accommodate your data structure, describe these changes by providing a pseudocode version of the new interpreter). State the new complexity of the interpreter, and prove it (informally but rigorously). You don't need to restrict yourself to the subset of Scheme we are using in this course; you may employ all your knowledge of Java, say. However, the responsibility for providing a clear enough description lies with you; remember that simple code is often the clearest form of description.
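For flavor only, here is a minimal Java sketch of one classic shape for such a cache: a hash table mapping each identifier to a stack of its bindings, so the innermost binding is always on top. The class and method names are hypothetical, and this sketch is not the required answer; whether (and why) a design like this meets the complexity bound is exactly what question 2 asks you to argue.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a substitution cache: each identifier maps to
// a stack of values. Entering a binding's scope pushes onto that
// identifier's stack; leaving it pops, so the innermost binding is on top.
class SubstCache {
    private final Map<String, Deque<Integer>> bindings = new HashMap<>();

    // Bind id to value on entering its scope.
    void push(String id, int value) {
        bindings.computeIfAbsent(id, k -> new ArrayDeque<>()).push(value);
    }

    // Remove the innermost binding for id on leaving its scope.
    void pop(String id) {
        Deque<Integer> stack = bindings.get(id);
        stack.pop();
        if (stack.isEmpty()) bindings.remove(id);
    }

    // The innermost (most recent) binding for id.
    int lookup(String id) {
        return bindings.get(id).peek();
    }

    public static void main(String[] args) {
        SubstCache cache = new SubstCache();
        cache.push("x", 4);                     // outer binding of x
        cache.push("x", 5);                     // inner binding shadows it
        System.out.println(cache.lookup("x"));  // 5: innermost binding
        cache.pop("x");
        System.out.println(cache.lookup("x"));  // 4: outer binding again
    }
}
```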

Problem 2

The program

{let {x 4}
  {let {f {fun {y}
    {+ x y}}}
    {let {x 5}
      {f 10}}}}

should evaluate to 14 by static scoping. Evaluating x in the environment at the point of invoking f, however, yields a value of 15 for the program. Ben Bitdiddle, a sharp if eccentric student, points out that we can still use the dynamic environment, so long as we take the oldest value of x in the environment rather than the newest; and for this example, he's right!

Is Ben right in general? If so, justify. If not, provide a counterexample program and explain why Ben's evaluation strategy would produce the wrong answer. (A bonus point for explaining why Ben's way of thinking about environments is conceptually wrong, irrespective of whether or not it produces the right answer.)
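To make the two lookup rules concrete, here is a small, hypothetical Java model (not part of the assignment) of the dynamic environment for x at the moment {f 10} is invoked in the program above, with the newest binding at the front:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical model: the dynamic environment's bindings for x at the
// point where f is called, newest binding first.
class BensLookup {
    static Deque<Integer> xBindings() {
        Deque<Integer> x = new ArrayDeque<>();
        x.addFirst(4);   // outer {let {x 4} ...}
        x.addFirst(5);   // inner {let {x 5} ...} shadows it
        return x;
    }

    // Usual dynamic scoping: take the newest binding.
    static int newestLookup(Deque<Integer> env) { return env.peekFirst(); }

    // Ben's rule: take the oldest binding.
    static int oldestLookup(Deque<Integer> env) { return env.peekLast(); }

    public static void main(String[] args) {
        Deque<Integer> x = xBindings();
        System.out.println(newestLookup(x) + 10);  // 15: dynamic scoping
        System.out.println(oldestLookup(x) + 10);  // 14: Ben's rule agrees with static scoping here
    }
}
```

The question, of course, is whether this agreement is an accident of this particular program.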

Problem 3

Is Java eager or lazy? Write a Java program to determine the answer to this question. The same program, run under the two different regimes, should produce different results. You may use any Java features you want, but keep your program relatively short; we will penalize you for programs we consider excessively long or obfuscatory. (Tip: It's possible to solve this problem with a program no more than a few dozen lines long.)

You must turn in both the source code to your program (in printed or written form) as well as an answer to the question of whether Java is eager or lazy, and an explanation of how your program determines this. That is, you should provide a brief and unambiguous answer (e.g., "Java is lazy") followed by a description of what result would obtain under each regime, along with a brief explanation of why that regime would generate that result.

In general, it would be a good idea to discuss your plan of attack with the course staff. This will help you avoid falling into a trap of measuring the wrong entity, and will improve your understanding of eagerness and laziness.
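By way of analogy only, here is a schematic illustration of the general technique of observing evaluation through side effects; the names are hypothetical, and this is deliberately not a submission-ready answer to the question above. The idea is that a function which never uses its parameter gives the two regimes a chance to disagree about whether the argument expression runs at all:

```java
// Schematic probe: ignoreArg never touches its parameter. Under an
// eager regime the argument expression is evaluated at the call site;
// under a lazy regime it would never need to be evaluated. A side
// effect inside the argument makes the difference observable.
class Probe {
    static boolean evaluated = false;

    static int sideEffect() {
        evaluated = true;   // observable effect of evaluating the argument
        return 42;
    }

    static int ignoreArg(int unused) {
        return 0;           // never uses its argument
    }

    public static void main(String[] args) {
        ignoreArg(sideEffect());
        System.out.println(evaluated
            ? "argument was evaluated"
            : "argument was not evaluated");
    }
}
```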

Problem 4

In our lazy interpreter, we identified three points in the language where we need to force evaluation of expression closures (by invoking strict): the function position of an application, the test expression of a conditional, and arithmetic primitives. Doug Oord, a fairly sedentary student, is rather taken with the idea of laziness. He suggests that we can reduce the amount of code we need to write by replacing all invocations of strict with just one. In the interpreter from 2003-09-24, he removes all instances of strict and replaces

 [id (v) (lookup v env)] 

with

 [id (v) (strict (lookup v env))] 

Doug's reasoning is that the only time the interpreter returns an expression closure is on looking up an identifier in the environment. If we force its evaluation right away, we can be sure no other part of the interpreter will get an expression closure, so removing those other invocations of strict will do no harm. Being lazy himself, however, Doug fails to reason in the other direction, namely whether this will result in an overly eager interpreter.

Write a program that will produce different results under the original interpreter and Doug's. Write down the result under each interpreter, and clearly identify which interpreter will produce each result. You may assume that the interpreted language features arithmetic, first-class functions, with, if0 and rec (even though these aren't in our in-class lazy interpreter). Hint: Be sure to compare this behavior against that of the lazy interpreter of the sort we've written in class, not the behavior of Haskell!

Problem 5

No lazy language in history has also had state operations (such as mutating the values in boxes, or assigning values to variables). Why not?

The best answer to this question would include two things: a short program (which we assume will evaluate in a lazy regime) that uses state, and a brief explanation of what problem the execution of this program illustrates. Please be sure to use the non-caching (i.e., original) notion of laziness. If you present a sufficiently illustrative example (which needn't be very long!), your explanation can be quite short.

Problem 6

CPS the following Scheme functions. You don't need to CPS primitives such as empty?, first, rest, cons, cons? and <. You may also assume that the function argument to both of the functions is in CPS.

;; filter: (x -> bool) (listof x) -> (listof x)
(define (filter f l)
  (cond
    [(empty? l) empty]
    [else (cond
            [(f (first l)) (cons (first l)
                                 (filter f (rest l)))]
            [else (filter f (rest l))])]))

;; fold: (a b -> b) (listof a) b -> b
(define (fold f l acc)
  (cond
    [(empty? l) acc]
    [(cons? l) (fold f (rest l) (f (first l) acc))]))

Now change the following function application expressions to use their corresponding CPSed versions.

(filter (lambda (x) (< x 3))
        (cons 1 (cons 4 empty))) ;; this evaluates to (list 1)

(fold + (list 1 2 3 4) 0) ;; this evaluates to 1 + 2 + 3 + 4 = 10


  1. In problem 6 we're asking you to rewrite the Scheme expressions in continuation-passing style (in much the same way that map is rewritten in lecture 2003-10-1). You don't need to show each step in the transformation (we'll only be looking at the final result), and you shouldn't do anything formal (i.e., the Fischer transformation).
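For reference, the flavor of rewriting we have in mind looks like this when expressed in Java, using a hypothetical CPSed sum (deliberately neither filter nor fold): instead of returning its result, the function hands the result to an explicit continuation.

```java
import java.util.List;
import java.util.function.Function;

// Hypothetical illustration of continuation-passing style (analogous to
// the CPSed map from lecture, not a solution to the exercise): sumK
// never "returns upward"; it passes its answer to the continuation k.
class Cps {
    static <R> R sumK(List<Integer> l, Function<Integer, R> k) {
        if (l.isEmpty()) return k.apply(0);
        // The recursive call's continuation adds the current first
        // element, then hands the total to the outer continuation k.
        return sumK(l.subList(1, l.size()), rest -> k.apply(l.get(0) + rest));
    }

    public static void main(String[] args) {
        // Kick off with the identity continuation.
        System.out.println(Cps.<Integer>sumK(List.of(1, 2, 3, 4), x -> x)); // 10
    }
}
```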