Lecture 1: Course Overview and Computer Organization

🎥 Lecture video (Brown ID required)
💻 Lecture code
❓ Post-Lecture Quiz (due 11:59pm, Monday, January 30).

Course overview

Welcome to CS 300 / CSCI 0300: Fundamentals of Computer Systems!

This is an intermediate systems course in the CS department, brought to you by these folks.

You can find all details on policies in the missive, and the plan for the semester, including assignment due dates, on the schedule page. The first lab (Lab 0) and Project 1 (Snake) are out today.

We use post-lecture quizzes instead of clicker questions to help you review the lecture material. These quizzes are due at 11:59pm Eastern time the day before the next lecture.

We will look at topics arranged in four main blocks during the course, building up your understanding across the whole systems stack:

  1. Computer Systems Basics: programming in systems programming languages (C and C++); memory; how code executes; and why machine and memory organization mean that some code runs much faster than other code that implements the same algorithm.
    Project: you will learn programming in C and data representation in memory by building a game (Snake), and you will develop your own debugging memory allocator (DMalloc), which helps you understand common memory bugs in systems programming languages.

  2. Fundamentals of Operating Systems: how your computer runs even buggy or incorrect programs without crashing; how it keeps your secrets in one program safe from others; and how an operating system like Windows, Mac OS X, or Linux works.
    Project: you will implement memory isolation via virtual memory in your own toy operating system (WeensyOS)!

  3. Concurrency and Parallel Programming: how your computer can run multiple programs at the same time; how programs can share memory even when running on multiple processors; why programming with concurrency is very difficult; and how concurrency is behind almost all major web services today.
    Project: you will implement the server-side component of a key-value store, which is a core piece of infrastructure at large internet companies (e.g., Instagram, Facebook, and Airbnb all make heavy use of key-value stores). To make your store handle requests from many users at the same time, you will support multithreading, which lets you scale to the resources offered by modern servers.

  4. Distributed Systems: what happens when you use more than one computer and have computers communicate over networks like the Internet; how hundreds of computers can work together to solve problems far too big for one computer; and how distributed programs can survive the failure of participating computers.
    Project: you will implement a sharded key-value storage service that can (in principle) scale over hundreds or thousands of machines.

The first block, Computer Systems Basics, begins today. It's about how we represent data and code in terms the computer can understand. But today's lecture is also a teaser lecture, so we'll see the material raise questions that you cannot yet answer (but you will, hopefully, at the end of the course!).

Machine organization

Why are we covering this?
In real-world computing, processor time and memory cost money. Companies like Facebook, Google, or Airbnb spend billions of dollars a year on their computing infrastructure, and constantly buy more servers. Making good use of them demands more than just fast algorithms – a good organization of data in memory can make an algorithm run much faster than a poor one, and save substantial money! To understand why certain data structures are better suited to a task than others, we need to look into how the computer and, in particular, its memory is organized.

Your computer, in terms of physics, is just some materials: metal, plastic, glass, and silicon. These materials on their own are incredibly dumb, and it's up to us to orchestrate them to achieve a task. Usually, that task is to run a program that implements some abstract idea – for example, the idea of a game, or of an algorithm.

There is an incredible amount of systems infrastructure in our computers to make sure that when you write program code to realize your idea, the right pixels appear on the screen, the right noises come from the speakers, and the right calculations are done. Part of the theme of this course is to better understand how that "magic" infrastructure works.

Why try to understand it, you might wonder? Good question. One answer is that the best programmers are able to operate across all levels of "the stack", seamlessly diving into the details under the hood to understand why a program fails, why it performs as it does, or why a bug occurs. Another reason is that once we pull back the curtain on how systems work, it turns out that even though the details of specific systems (e.g., Windows, OS X, Linux) vary, a surprisingly small number of fundamental concepts explain why computers are as powerful and useful as they are today.

Let's start with the key components of a computer: the processor (CPU), which executes instructions; the memory (RAM), which holds data and code while programs run; persistent storage (disks and SSDs); and input/output devices such as the network, keyboard, and screen.

These components need to work together to achieve things, and we need to understand them all in order to understand why code and systems behave the way they do.

Today, we will particularly focus on memory. A computer's memory is like a vast set of mailboxes of the kind you might find at a post office. Each box can hold one of 256 numbers: 0 through 255. Such a number is called a byte (a byte is a number between 0 and 255, and corresponds to 8 bits, each of which is a 0 or a 1).

A "post-office box" in computer memory is identified by an address. On a computer with M bytes of memory, there are M such boxes, each having as its address a number between 0 and M−1. My laptop has 8 GB (gibibytes) of memory, so M = 8×230 = 233 = 8,589,934,592 boxes (and possible memory addresses)!

       0     1     2     3     4                         2^33 - 1    <- addresses
    +-----+-----+-----+-----+-----+--     --+-----+-----+-----+
    |     |     |     |     |     |   ...   |     |     |     |      <- values
    +-----+-----+-----+-----+-----+--     --+-----+-----+-----+
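To make this concrete, here is a minimal C sketch (we'll properly start programming in C next lecture; the variable name `box` is made up for illustration). It stores a value in a one-byte box, shows that the box wraps around past 255, and asks for the box's address with the & operator:

    #include <stdio.h>

    int main(void) {
        unsigned char box = 255;   // one byte: can hold 0 through 255
        box = box + 1;             // only 8 bits, so this wraps around to 0
        printf("box now holds %d\n", box);                // prints 0
        printf("box lives at address %p\n", (void*) &box); // the box's address
        return 0;
    }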

Powers of ten and powers of two. Computers are organized all around the number two and powers of two. The electronics of digital computers are based on the bit, the smallest unit of storage, which is a base-two digit: either 0 or 1. More complicated objects are represented by collections of bits: for example, a byte is 8 bits. Using binary bits has many advantages: for example, error correction is much easier if the presence or absence of electric current just represents "on" or "off". This choice influences many layers of hardware and system design. Memory chips, for example, have capacities based on large powers of two, such as 2^30 bytes. Since 2^10 = 1,024 is pretty close to 1,000, 2^20 = 1,048,576 is pretty close to a million, and 2^30 = 1,073,741,824 is pretty close to a billion, it's common to refer to 2^30 bytes of memory as "a gigabyte," even though that term actually means 10^9 = 1,000,000,000 bytes (SI units are base 10). But when trying to be precise, it's better to use terms that explicitly signal the use of powers of two, such as gibibyte: the "-bi-" component means "binary."
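If you want to play with these numbers yourself, here is a small sketch (nothing course-specific) that computes powers of two with C's left-shift operator – shifting 1 left by n bits doubles it n times, i.e., computes 2^n – and compares a binary gibibyte to an SI gigabyte:

    #include <stdio.h>

    int main(void) {
        unsigned long kibi = 1UL << 10;    // 2^10 = 1,024 ("kibibyte")
        unsigned long mebi = 1UL << 20;    // 2^20 = 1,048,576 ("mebibyte")
        unsigned long gibi = 1UL << 30;    // 2^30 = 1,073,741,824 ("gibibyte")
        unsigned long giga = 1000000000UL; // 10^9, an SI "gigabyte"
        printf("kibi = %lu, mebi = %lu\n", kibi, mebi);
        printf("GiB = %lu, GB = %lu, difference = %lu\n",
               gibi, giga, gibi - giga);   // a GiB is ~7% larger than a GB
        return 0;
    }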

All variables and data structures we use in our programs, and indeed all code that runs on the computer, need to be stored in these byte-sized memory boxes. How we lay them out can have major consequences for safety and performance!

You might observe that our boxes can only hold numbers up to 255, but we need to store numbers larger than that. To achieve this, the computer interprets multiple adjacent boxes as a single number, using the concept of positional notation, albeit in the binary system. Two boxes together consist of a "low" byte (worth its value, 0 to 2^8 − 1) and a "high" byte (worth its value times 2^8), so together they can represent numbers between 0 and (2^8 × 2^8) − 1 = 2^16 − 1 = 65,535. The data type of two such boxes is called a short; 4 adjacent bytes are an int and represent an integer between 0 and 2^32 − 1 (about 4 billion).
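Here is a small sketch of this positional-notation idea in C: it combines two one-byte boxes into a single 16-bit number by hand, and then goes the other way, peeking at the individual bytes of a short through its address. (On the x86 machines most of us use, which are "little-endian", the low byte is stored in the first box; we'll return to byte order later in the course.)

    #include <stdio.h>

    int main(void) {
        unsigned char low = 57, high = 48;  // two adjacent one-byte boxes

        // Positional notation in base 2^8: the high byte counts 256s.
        unsigned short combined = low + high * 256;  // 57 + 48*256 = 12,345
        printf("combined = %d\n", combined);

        // The other way around: inspect the bytes that make up a short.
        unsigned short s = 12345;
        unsigned char* bytes = (unsigned char*) &s;  // address of s's first box
        printf("byte 0 = %d, byte 1 = %d\n", bytes[0], bytes[1]);  // 57, 48
        return 0;
    }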

So, we can represent integers up to about 4 billion using 4 bytes each. But what if we needed to store the address of another post-office box (i.e., memory location) – for example, because we're constructing a data structure like a linked list? On my laptop, there are about 8 billion boxes, so we must be able to represent and store addresses up to 8,589,934,592 in boxes adjacent to the four storing the number. Since 2^32 − 1 is less than 8,589,934,592, 4 bytes are not enough: we need more than 4 bytes to store an address.
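As a teaser for next time, here is a minimal sketch (the struct is a hypothetical example, not course code) showing how much space these types occupy on a typical 64-bit machine, where an address takes 8 bytes:

    #include <stdio.h>

    // A hypothetical linked-list node: a number plus the address of
    // the next node's boxes in memory.
    struct node {
        int value;          // 4 bytes: an integer up to about 4 billion
        struct node* next;  // the address of another node
    };

    int main(void) {
        printf("sizeof(short) = %zu\n", sizeof(short));  // 2 bytes
        printf("sizeof(int) = %zu\n", sizeof(int));      // 4 bytes
        printf("sizeof(struct node*) = %zu\n",
               sizeof(struct node*));                    // 8 bytes on 64-bit
        return 0;
    }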

Next time, we will look into how to store addresses in memory and how to build data structures from them, and get started programming in C.