You can download the lectures here. We will try to upload them prior to their corresponding classes. You can also directly access all the recordings here.

  • Introduction
    tl;dr: What are we doing here? A Lean demo
    [demo file] [recording]

    We’ll talk about what Lean is and see what it can do, and also go over some organizational points about the course.

    Takeaways: verified programming is fun and powerful, and this course is very experimental!

  • The basics of Lean syntax
    tl;dr: We'll see in a more principled way how to interact with the Lean system.
    [demo file] [recording]

    In this lecture we’ll learn the basics of the Lean programming and specification language: types and terms, type inhabitation, and writing and evaluating very simple functional programs. No proving yet!

    We covered Sections 1.1 and 1.2 of the Hitchhiker’s Guide to Logical Verification (HHG).
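
    For a first taste, here is the kind of “very simple functional program” we have in mind (an illustrative definition, not taken from the demo file):

    ```lean
    -- Define a function on the natural numbers and evaluate it.
    def double (n : ℕ) : ℕ := n + n

    #check double      -- double : ℕ → ℕ
    #eval double 21    -- 42
    ```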

  • The basics of Lean syntax, continued
    tl;dr: We'll see in a more principled way how to interact with the Lean system, and start on tactic proofs
    [ch 1 demo file] [ch 2 demo file] [recording]

    We’ll finish Chapter 1 of the HHG, and get a head start on Chapter 2, where we’ll actually start proving some theorems. Today’s topics: inductive types (continued), function definition and evaluation, specifications, and basic tactics. We talked about intro and apply, and how tactic mode is like a proving minigame.

    More on Chapter 2 next time!
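
    Here’s roughly what intro and apply look like in action (a minimal sketch, not the demo file’s example):

    ```lean
    example (p q : Prop) : p → (p → q) → q :=
    begin
      intro hp,    -- move the hypothesis `p` into the context
      intro hpq,   -- move the hypothesis `p → q` into the context
      apply hpq,   -- reduce the goal `q` to its premise `p`
      exact hp
    end
    ```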

  • Backward (tactic) proofs
    tl;dr: We'll learn all about Lean's tactic mode and how to smell our way through logical arguments
    [ch 2 demo file] [recording]

    We’ll dive into the meat of the HHG Ch. 2: what are some of the moves available to us in the tactic proving minigame, beyond intro and apply? How do we deal with logical connectives: and, or, not, and so on?
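
    For a flavor of how connectives are handled, here is a small illustration (not from the demo file):

    ```lean
    example (p q : Prop) : p ∧ q → q ∨ p :=
    begin
      intro h,
      cases h with hp hq,   -- take the conjunction apart
      right,                -- commit to proving the right disjunct, `p`
      exact hp
    end
    ```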

  • Backward (tactic) proofs, continued
    tl;dr: We'll learn all about Lean's tactic mode and how to smell our way through logical arguments
    [ch 2 demo file] [recording]

    We’ll continue talking about tactic proofs. How do we deal with equality? What about the natural numbers?

    We’ll also talk about classical vs constructive logic.
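
    For example, equality and induction on the naturals might look like this (a minimal sketch):

    ```lean
    example (n : ℕ) : 0 + n = n :=
    begin
      induction n with n ih,
      { refl },                   -- base case: 0 + 0 = 0
      { rw [nat.add_succ, ih] }   -- step: rewrite, then use the induction hypothesis
    end
    ```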

  • Forward proofs
    tl;dr: We'll learn about forward and structured proofs, another input method, and see the connection to Lean's logical foundations.
    [ch 3 demo file] [recording]

    We’ll see another way to write proofs in Lean, incorporating forward reasoning.

    Structured (“proof-term”) proofs are a little closer to the underlying logic. Surprise: proofs in Lean are, literally, just terms in the type theory.
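
    To make that concrete, here is one little lemma written first as a structured forward proof and then as a bare proof term (illustrative, not from the demo file):

    ```lean
    lemma and_swap (p q : Prop) : p ∧ q → q ∧ p :=
    assume h : p ∧ q,
    have hp : p := h.left,
    have hq : q := h.right,
    show q ∧ p, from ⟨hq, hp⟩

    -- The "surprise" made literal: the proof is just a term of the stated type.
    lemma and_swap' (p q : Prop) : p ∧ q → q ∧ p :=
    λ h, ⟨h.right, h.left⟩
    ```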

  • Dependent types
    tl;dr: It's time for foundations: beyond simple types to DTT and PAT
    [ch 3 demo file] [recording]

    The type theory that Lean is based on, the Calculus of Inductive Constructions, is an instance of dependent type theory. In DTT, we follow the PAT principle: propositions as types, proofs as terms. (Buzzword: the Curry-Howard correspondence!) We’ll look deeper today into these foundations.
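
    A tiny illustration of PAT (with a made-up name): a proof of a ∀-statement is literally a dependent function.

    ```lean
    -- The proposition below is a dependent function type, and the proof
    -- inhabiting it is a λ-term.
    def proof_k : ∀ p q : Prop, p → q → p :=
    λ p q hp hq, hp

    #check proof_k   -- proof_k : ∀ (p q : Prop), p → q → p
    ```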

  • Functional programming: data structures
    tl;dr: We'll see more about the inductive command and some variants, and how this interacts with proving
    [ch 4 demo file] [recording]

    Chapter 4 of the Hitchhiker’s Guide introduces some paradigms – inductive types, structures, recursive definitions, type classes – that might be familiar from other functional programming languages. The interesting thing for us is how these paradigms interact with writing proofs. For instance, how do we mix properties into data structures?
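
    One answer, sketched: bundle the data with a proof about it in a structure (illustrative names, not the HHG’s examples).

    ```lean
    -- A natural number packaged together with a proof that it is positive.
    structure pos_nat :=
    (val : ℕ)
    (pos : 0 < val)

    -- Building a value means supplying both the data and the proof.
    def two : pos_nat := ⟨2, dec_trivial⟩
    ```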

  • Functional programming: type classes, lists, trees
    tl;dr: We'll see more about functional programming in Lean, in particular using type classes for polymorphism
    [ch 4 demo file] [recording]

    Type classes are a language feature inspired by Haskell with equivalents in Scala, ML, and other languages. They allow us a kind of ad hoc polymorphism: we can define functions on types that implement certain interfaces, and can declare that certain types implement these interfaces, without bundling the interfaces into the data type itself.

    We’ll see how this interacts with some of the data structures we like to use, as we implement and specify functions on these types.
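
    A minimal sketch of the mechanism (hypothetical class and instance, not from the demo file):

    ```lean
    -- An interface: types whose values have a "size".
    class has_size (α : Type) :=
    (size : α → ℕ)

    -- Declare that lists implement the interface...
    instance list_has_size {α : Type} : has_size (list α) :=
    ⟨list.length⟩

    -- ...and write functions that work for any type implementing it.
    def twice_size {α : Type} [has_size α] (a : α) : ℕ :=
    2 * has_size.size a

    #eval twice_size [1, 2, 3]   -- 6
    ```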

  • Inductive predicates
    tl;dr: How do we define new propositions? Where do our logical connectives come from?
    [ch 5 demo file] [recording]

    We’ll cover ch. 5 of the Hitchhiker’s Guide today, on inductive predicates. This will complete what we need to know about foundations for now: inductive predicates give us a way to introduce new propositions and prove things about them.

    Inductive predicates are also the source of most of the propositional symbols we’ve used so far – and, or, exists, eq, ….
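
    The canonical first example, as a sketch: evenness as an inductive predicate.

    ```lean
    -- `even n` holds exactly when it can be derived from these two rules.
    inductive even : ℕ → Prop
    | zero    : even 0
    | add_two : ∀ n, even n → even (n + 2)

    -- A proof is a derivation built from the constructors.
    example : even 2 := even.add_two 0 even.zero
    ```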

  • Big-step operational semantics
    tl;dr: We'll define a toy programming language and start proving things about its behavior.
    [ch 8 demo file] [recording]

    We’re jumping ahead to Chapter 8 today! Time to start putting what we’ve learned into practice. We’ll define the syntax of a toy programming language inside of Lean, discussing the difference between shallow and deep embeddings. Using inductive predicates, we’ll define a transition system and use this to prove things about the execution of programs in this toy language.
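
    A sketch of the shape this takes, for a WHILE-like fragment with only skip, assignment, and sequencing (illustrative names, not the HHG’s exact definitions):

    ```lean
    -- States map variable names to values.
    def state := string → ℕ

    -- A deep embedding of the syntax.
    inductive stmt : Type
    | skip   : stmt
    | assign : string → (state → ℕ) → stmt
    | seq    : stmt → stmt → stmt

    -- Big-step semantics: `big_step (S, s) t` means "running S in state s
    -- terminates in state t".
    inductive big_step : stmt × state → state → Prop
    | skip : ∀ s : state, big_step (stmt.skip, s) s
    | assign : ∀ (x : string) (a : state → ℕ) (s : state),
        big_step (stmt.assign x a, s) (λ y, if y = x then a s else s y)
    | seq : ∀ (S T : stmt) (s t u : state),
        big_step (S, s) t → big_step (T, t) u → big_step (stmt.seq S T, s) u
    ```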

  • Small-step operational semantics
    tl;dr: We'll see another way to specify the behavior of a toy program.
    [ch 8 demo file] [recording]

    The big-step semantics we saw on Monday aren’t fine-grained. We can’t reason about intermediate states. An alternative is using a small-step semantics, where our program execution path is broken down much further. This comes with upsides and downsides.
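
    For the same toy fragment (reusing the stmt and state from the previous entry’s sketch), a small-step relation might look like this; each step relates one configuration to the next:

    ```lean
    -- One step of execution: from (S, s) to (S', s').
    inductive small_step : stmt × state → stmt × state → Prop
    | assign : ∀ (x : string) (a : state → ℕ) (s : state),
        small_step (stmt.assign x a, s)
                   (stmt.skip, λ y, if y = x then a s else s y)
    | seq_step : ∀ (S S' T : stmt) (s s' : state),
        small_step (S, s) (S', s') →
        small_step (stmt.seq S T, s) (stmt.seq S' T, s')
    | seq_skip : ∀ (T : stmt) (s : state),
        small_step (stmt.seq stmt.skip T, s) (T, s)
    ```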

  • Denotational semantics
    tl;dr: An alternative way of describing the behavior of programs interprets them as mathematical objects.
    [ch 10 demo file] [recording]

    Operational semantics define the meaning of a program by the process it follows to evaluate. In contrast, denotational semantics define the meaning of a program as a mathematical object, a relation between possible inputs and outputs. Today we’ll cover all of Ch 10 of the HHG.
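
    As a sketch (again for the loop-free fragment from the earlier entries; while loops need a least fixed point, which is the real content of the chapter): each statement denotes a set of (input state, output state) pairs.

    ```lean
    -- The denotation of a statement: which (initial, final) state pairs it relates.
    def denote : stmt → set (state × state)
    | stmt.skip         := {p | p.2 = p.1}
    | (stmt.assign x a) := {p | p.2 = (λ y, if y = x then a p.1 else p.1 y)}
    | (stmt.seq S T)    := {p | ∃ t, (p.1, t) ∈ denote S ∧ (t, p.2) ∈ denote T}
    ```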

  • Algebraic structures
    tl;dr: We'll start on some pure mathematics: groups, rings, defining instances of such...
    [ch 12 demo file] [recording]

    We’ll jump ahead again to chapter 12, where we’ll start talking about algebraic structures. But we’ll also improvise a bit here. After we see some basic structures, we’ll define some mathematical types of our own.
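
    A sketch of the kind of thing we mean (hypothetical, stripped-down names; mathlib’s real group class is richer and sits in a larger hierarchy):

    ```lean
    import data.int.basic

    -- A bare-bones group structure, as a type class.
    class my_group (G : Type) :=
    (mul : G → G → G)
    (one : G)
    (inv : G → G)
    (mul_assoc    : ∀ a b c, mul (mul a b) c = mul a (mul b c))
    (one_mul      : ∀ a, mul one a = a)
    (mul_left_inv : ∀ a, mul (inv a) a = one)

    -- The integers form a group under addition: the proof fields are
    -- facts mathlib already knows about ℤ.
    instance : my_group ℤ :=
    { mul          := (+),
      one          := 0,
      inv          := has_neg.neg,
      mul_assoc    := add_assoc,
      one_mul      := zero_add,
      mul_left_inv := neg_add_self }
    ```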

  • Numbers and sets
    tl;dr: We'll talk a bit more about type classes, and dealing with numbers and sets
    [ch 12 demo file] [recording]

    We’ll continue the Ch 12 material we started last week, including a little more with the complex number playground. We’ll also talk about embeddings between different numerical structures, and some different kinds of “set-like” objects.
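
    Two tiny illustrations (not the lecture’s examples): the coercion ↑ embeds ℕ into ℤ, and set ℕ is one kind of “set-like” object.

    ```lean
    -- Embedding naturals into the integers; `int.coe_nat_add` records how
    -- the embedding interacts with addition.
    example (m n : ℕ) : ((m + n : ℕ) : ℤ) = (m : ℤ) + (n : ℤ) :=
    int.coe_nat_add m n

    -- A set of naturals, given by a predicate.
    def small_nums : set ℕ := {n | n < 10}

    example : (3 : ℕ) ∈ small_nums :=
    show 3 < 10, from dec_trivial
    ```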

  • Logical foundations
    tl;dr: We'll look a little deeper into some of the logical features of Lean, in particular having to do with the type universe `Prop`.
    [ch 11 demo file] [recording]

    As this course has progressed, we’ve gotten some insight into the foundations of Lean and its type theory. But some features have remained mysterious. In the next few lectures we’ll poke some more at this foundational theory. Today we’ll focus in particular on the type universe Prop and what we’re allowed and disallowed to do there compared to the other universes.
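
    Two distinctive features of Prop, in miniature (a small sketch):

    ```lean
    -- Impredicativity: a proposition may quantify over all propositions
    -- and still live in `Prop`.
    #check ∀ p : Prop, p → p          -- ∀ (p : Prop), p → p : Prop

    -- Proof irrelevance: any two proofs of the same proposition are equal.
    example (p : Prop) (h₁ h₂ : p) : h₁ = h₂ :=
    proof_irrel h₁ h₂
    ```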

  • Logical foundations, continued
    tl;dr: We'll talk about some other foundational constructs, including quotient types and classical axioms.
    [ch 11 demo file] [recording]

    We’ll continue with chapter 11 today, talking about more foundational constructs. As we discussed last class, there’s a grab bag of features that we can take or leave: proof irrelevance, impredicative Prop, the axiom of choice, and others. Why should we be convinced that the collection we choose is consistent? We’ll introduce the notion of a model of the type theory to answer questions like this.
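
    For instance, excluded middle is available but rests on axioms, and Lean can report which axioms a proof depends on (a small illustration):

    ```lean
    open classical

    -- Excluded middle, via the classical axioms.
    example (p : Prop) : p ∨ ¬ p := em p

    -- Ask Lean which axioms this depends on: it reports `classical.choice`
    -- (along with `propext` and `quot.sound`).
    #print axioms classical.em
    ```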

  • Quotients, rationals, and reals
    tl;dr: More foundations in action: we'll see how quotient types can be used to implement rationals and (exact) real numbers.
    [ch 13 demo file] [recording]

    The last bit of Ch. 11, on quotient types, is very relevant to what we want to do next! We’ll wrap up that discussion (including talking a bit about the computability properties of quotients) and then immediately use quotient types to define some familiar things. Rational and real numbers are interesting mathematically, and for programming purposes, they can be a very convenient tool for writing specifications. Even if we don’t compute with real numbers, they’re useful to have around.
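
    The pattern in miniature (illustrative names; the classic warm-up is the integers as a quotient of pairs of naturals, and the rationals follow the same recipe with numerator–denominator pairs):

    ```lean
    -- The pair (a, b) stands for the difference a - b; two pairs represent
    -- the same integer iff a + d = c + b.
    def int_rel (x y : ℕ × ℕ) : Prop := x.1 + y.2 = y.1 + x.2

    -- "Integers" as the quotient of ℕ × ℕ by that relation.
    def myint := quot int_rel

    def myint.mk (a b : ℕ) : myint := quot.mk int_rel (a, b)

    -- `quot.sound` turns relatedness into equality of quotient elements.
    example : myint.mk 1 0 = myint.mk 2 1 :=
    quot.sound (show 1 + 1 = 2 + 0, from rfl)
    ```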

  • Real numbers
    tl;dr: Yet another quotient, now that we have the rationals: we'll construct the reals and talk about some related computability concerns.
    [ch 13 demo file] [recording]

    We finished last week with the rational numbers. Now we need to complete them to get the reals. This will take yet another quotient. The reals bring to light some computability issues that we’ve touched on briefly before: what does it mean to compute with real numbers? How do we do it in normal languages? If time permits, we’ll look at mathlib’s implementation of the reals and see some generalizations.
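
    In outline (a sketch, not mathlib’s actual definitions; it assumes mathlib’s ℚ): a real number is an equivalence class of Cauchy sequences of rationals.

    ```lean
    import data.rat.order

    -- A Cauchy sequence of rationals.
    def is_cauchy (f : ℕ → ℚ) : Prop :=
    ∀ ε : ℚ, 0 < ε → ∃ N : ℕ, ∀ m n, N ≤ m → N ≤ n → abs (f m - f n) < ε

    -- Two sequences are identified when their difference tends to 0.
    def equiv_seq (f g : ℕ → ℚ) : Prop :=
    ∀ ε : ℚ, 0 < ε → ∃ N : ℕ, ∀ n, N ≤ n → abs (f n - g n) < ε

    -- The reals, in outline: Cauchy sequences up to `equiv_seq`.
    def myreal :=
    quot (λ f g : {h : ℕ → ℚ // is_cauchy h}, equiv_seq f.1 g.1)
    ```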

  • Monads and tactics
    tl;dr: A complete change of pace: we'll start talking about metaprogramming and the monadic interface to writing tactics.
    [ch 7 demo file] [recording]

    Lean has a very powerful framework for writing custom tactics. These tactics are written in Lean itself, with a number of catches to make this possible. Today we’ll see the fundamentals of this approach. We’ll learn the (very) basics about monads, a technique used in some functional languages to simulate programming with side effects. (But this isn’t an FP class and we’re not going to dwell on monads, beyond what we need to know.)

    Chapter 6 of the HHG is a more detailed introduction to monads. We’ll cover a bit of this, but mainly take an alternate approach to Chapter 7.
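
    A first taste of what a metaprogram looks like (a minimal sketch; the operations used are standard tactic-monad primitives, and the tactic name is made up):

    ```lean
    namespace tactic.interactive

    -- A tactic written in Lean itself: print the current goal, then try to
    -- close it with a hypothesis. The `do` notation is the monadic interface.
    meta def trace_goal_then_assumption : tactic unit :=
    do goal ← tactic.target,
       tactic.trace goal,
       tactic.assumption

    end tactic.interactive

    example (p : Prop) (hp : p) : p :=
    by trace_goal_then_assumption   -- prints `p`, then closes the goal
    ```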

  • Monads and tactics, continued
    tl;dr: We'll continue with metaprogramming, in particular how to manipulate expressions.
    [ch 7 demo file] [recording]

    More from chapters 6 and 7 of the HHG: we’ll look at the expr type, which reflects Lean expressions as a Lean datatype. There’s a big API around creating, modifying, and using expressions – unsurprisingly, since this is what’s meta about metaprogramming!
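
    For instance (a small sketch): quotation produces expr values, and tactic.infer_type asks Lean for their types.

    ```lean
    open tactic

    -- The quotation `(1 + 1) is a value of type `expr`: a reflected Lean
    -- expression we can manipulate in metaprograms.
    meta def my_expr : expr := `(1 + 1)

    -- Inspect it from inside the tactic monad.
    run_cmd do
      t ← infer_type my_expr,
      trace my_expr,   -- 1 + 1
      trace t          -- ℕ
    ```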

  • Monads and tactics, continued
    tl;dr: We'll cover some more advanced topics in tactic writing, including parsing in begin/end blocks
    [ch 7 demo file] [recording]

    There are lots of subtleties to writing metaprograms that we’ve skimmed over so far. In particular, there’s a disconnect between the syntax we use when writing tactics and the syntax we use within begin...end blocks. We’ll touch on these subtleties today. Time permitting, we’ll talk about design strategies for metaprograms, including certification and proof by reflection.
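
    One concrete instance of that disconnect, sketched with a made-up tactic name: to accept arguments written in begin/end syntax, an interactive tactic declares parsers for them and lives in the tactic.interactive namespace.

    ```lean
    open interactive (parse)
    open lean.parser (ident)

    namespace tactic.interactive

    -- `trace_hyp h` prints the type of the hypothesis named `h`.
    meta def trace_hyp (h : parse ident) : tactic unit :=
    do e ← tactic.get_local h,
       t ← tactic.infer_type e,
       tactic.trace t

    end tactic.interactive

    example (p : Prop) (hp : p) : p :=
    begin
      trace_hyp hp,   -- prints `p`
      exact hp
    end
    ```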

  • Linear arithmetic
    tl;dr: To wrap up our coverage of metaprogramming, we'll take an in depth look at `linarith`
    [ch 7 demo file] [recording]

    The tactic linarith solves linear programs over ordered rings. It’s a great example of a “large” metaprogram that shows off a number of interesting design principles. We’ll talk both about the algorithm it implements and the strategy used in implementing the tactic itself.
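
    From the user’s side the contract is simple (a small example, assuming mathlib): close a goal that follows linearly from the hypotheses.

    ```lean
    import data.rat.order
    import tactic.linarith

    -- x ≤ 2 * y ≤ 6, so linarith finds a linear combination of the
    -- hypotheses that proves the goal.
    example (x y : ℚ) (h₁ : x ≤ 2 * y) (h₂ : y ≤ 3) : x ≤ 6 :=
    by linarith
    ```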

  • Guest lecture
    tl;dr: Guest lecture: Jeremy Avigad on verifying blockchain computations
    [zoom link] [recording]

    Our final class will have a guest lecturer: Jeremy Avigad will tell us about a project he and others have been working on to verify smart contract executions. The work he’ll talk about is also described in this paper (code). No prior experience with blockchains will be assumed: I know nothing about them myself!

    Jeremy will join us via Zoom, which I’ll project to the class. Join in person or remotely, whichever you prefer. After Jeremy’s guest talk, we’ll have some time to chat about your final projects.