Lecture 21: Deadlock, Networking
» Lecture video (Brown ID required)
» Lecture code
» Post-Lecture Quiz (due 11:59pm Monday, April 27)
We saw last time how programs that lock multiple resources can encounter a situation called "deadlock", where multiple threads each hold some locks, but no thread holds all the locks required to make progress.
Deadlock is an insidious problem, and the only way to avoid deadlocks in your programs is to have a strict locking order that the entire program abides by.
To illustrate this, let's look at a problem somewhat similar to our pass-the-ball
problem, and then come back to the ballgame.
A classic computer science problem that illustrates deadlock is the Dining Philosophers Problem.
In this problem, some number of philosophers sit in a circle, with a fork between each pair of them. In other words, for N philosophers, there are N forks. Philosophers either think or eat. To eat, a philosopher needs to acquire two forks – the one on their left and the one on their right. Only one philosopher at a time can hold a fork.
The forks here symbolize resources that are protected by locks, and the philosophers symbolize threads. Eating stands for compute work in a critical section protected by two locks, and thinking represents other compute work outside critical sections.
Now, imagine each philosopher follows an algorithm where they grab the fork to their right first, and then the fork to their left.
This can lead to a situation where each philosopher holds one fork, but none can actually get a second one, because each philosopher's left-side fork is already held by another philosopher, whose right-side fork it is. A deadlock!
One solution that imposes a strict lock order is to number the forks, and impose a rule that each philosopher always grabs the fork with the higher number first, independent of whether it is their left-side or their right-side fork.
With this locking protocol, each philosopher will grab their right-side fork first, except for the first philosopher, PA, who sits between the highest-numbered and the lowest-numbered forks. PA will grab the highest-numbered fork, on their left, first. If PF tries to grab that fork as their first fork, but PA has already taken it, then PF blocks until F5 becomes available, but crucially does so while not holding any fork (lock) themselves. Therefore, even if every other philosopher manages to grab one fork, deadlock cannot occur, as one philosopher (in the example below, PB) is able to grab both forks and eat. Consequently, PA eventually gets to eat, releasing the fork that PF is waiting for, and likewise for every other philosopher.
This approach can be formalized in terms of a lock order graph. If the graph is free of cycles, there cannot be a deadlock.
Back to the ballgame
In the ballgame, we need to apply similar logic to break the deadlock. The problematic situation occurs when a player has already locked one mutex (in this case, their own ball state mutex) and then seeks to lock another mutex (the destination player's ball state mutex).
Unfortunately for us, it is not trivial to use a lock ordering where we always lock the lower-numbered player's state first, since each player needs to lock their own ball state to check if it holds a ball before even considering the destination player's state.
There are several ways we could solve this problem:
- always lock the destination player's mutex first, even when not holding the ball;
- read our own ball state, unlock it, then acquire the lower-numbered lock, then the other lock, and re-check the ball state;
- call try_lock() on the destination's mutex, and give up the lock on our own ball state if it fails.
In passtheball-fixed.cc, we use the third approach (try_lock()).
While this is the smallest change from the prior code, it does result in some inefficiency:
we sometimes lock a mutex (using an expensive atomic instruction) only to then fail to lock
the other mutex, give up, and try again. But observe that the first approach also
unnecessarily takes a lock in the (arguably more common) case where the player doesn't actually
have a ball! The second approach has no such inefficiency, but is more difficult to implement.
We are now moving on to the last block of the course, which covers distributed systems. The largest distributed system in existence is one we all use every day: the internet, a global network connecting many computer systems.
Networking is a technique for computers to communicate with one another, and to make it work, we rely on a set of OS abstractions, as well as plenty of kernel code that interacts with the actual hardware that sends and receives data across wires and WiFi.
It's difficult to think about computer networks without thinking about the underlying infrastructure powering them. In the old days of telephony networks, engineers and telephone operators relied on circuit switching to manage connections for telephone calls, meaning that each telephone connection occupied a physical, dedicated phone line. Circuit switching remained in wide use for a long time, even during the early days of modern digital computing. It significantly underutilized resources, in that an idle connection (e.g., periods in a phone conversation when neither party was actively saying anything) still kept a phone line occupied. Extremely complex circuit switching systems were built as telephone networks expanded, but circuit switching itself is inherently not scalable, because it requires dedicated lines (the "circuits") between endpoints.
Modern computer networks use packet switching, which allows sharing wires and infrastructure between many communicating parties. This means that computers do not need to rely on dedicated direct connections to communicate. The physical connections between computers are instead shared, and the network carries individual packets (small, size-limited units of data), instead of full connections.
The concept of a connection now becomes an abstraction, implemented by layers of software protocols responsible for transmitting and processing packets, and presented to the application software as a stream connection by the operating system.
Thanks to packet switching and the extensive sharing of the physical infrastructure it enables, the internet has become cheap and stable.
A packet is a unit of data sent or received over the network. Computers communicate with one another over the network by sending and receiving packets. Packets have a maximum size, so if a computer wants to send data that does not fit in a single packet, it will have to split the data across multiple packets and send them separately. Each packet contains:
- Addresses (source and destination)
- Checksums, to detect data corruption during transmission
- Ports (source and destination), to distinguish logical connections to the same machines
- Actual payload data
Ports are numbers in the range 1-65,535 that help the OS tell apart different connections, even if they are with the same remote computer. The tuple of (source address, source port, destination address, destination port) is guaranteed to be unique on both ends for any given connection.
Networking system calls
A networking program uses a set of system calls to send and receive
information over the network. The first and foremost system call is called
socket(). It creates a "network socket", which is the key
endpoint abstraction for network connections in today's operating systems.
socket(): Analogous to
pipe(), it creates a networking socket and returns a file descriptor to the socket.
The returned file descriptor is non-connected -- it has just been initialized,
but it is neither connected to the network nor backed by any file or
pipe. You can think of
socket() as merely reserving kernel state for a
future network connection.
Recall how we connect two processes using pipes. There is a parent process
which is responsible for setting everything up (calling pipe(), fork(),
close(), etc.) before the child process gets to run a new program with
execvp(). This approach clearly doesn't work here, because there
is no equivalent of such a "parent process" when completely
different computers try to communicate with one another. Therefore, a
connection must be set up using a different procedure, with different system calls.
In network connections, we introduce another pair of abstractions: a client and a server.
The client is the active endpoint of a connection: It actively creates a connection to a specified server. Example: When you visit
google.com, your browser functions as a client. It knows which server to connect to!
The server is the passive endpoint of a connection: It waits to accept connections from what are usually unspecified clients. Example:
google.com servers serve all clients visiting Google.
Client- and server-sides use different networking system calls.
Client-side system call --
connect(fd, addr, len) -> int: Establishes a connection.
- fd: socket file descriptor returned by socket()
- addr: C struct containing server address information (including the port)
- len: length of the addr struct
- Returns 0 on success and a negative value on failure.
Server-side system calls
On the server side things get a bit more complicated. There are 3 system calls:
bind(fd, ...) -> int: Associates a port (and local address) with the socket fd
listen(fd) -> int: Sets the state of socket
fd to indicate that it can accept incoming connections.
accept(fd) -> cfd: Waits for a client connection, and returns a new socket file descriptor
cfd after establishing a new incoming connection from the client.
cfd corresponds to the active connection with that client.
The server is not ready to accept incoming connections until after calling
listen(). This means that before the server calls
listen(), all incoming
connection requests from clients will fail.
Among all the system calls mentioned above, only connect() and accept()
involve actual communication over the network; all the other calls simply
manipulate local state. So only the
connect() and accept() system calls can block.
Differences between sockets and pipes
One interesting distinction between pipes and sockets is that pipes are one-way, but sockets are two-way: one can only read from the read end of a pipe and write to its write end, but one is free to both read from and write to a socket. Unlike with regular file descriptors for files opened in read/write mode, writing to a socket sends data over the network, and reading from the same socket receives data from the network. Sockets hence represent a two-way connection between the client and the server; they only need to establish one connection to communicate back and forth.
A connection is an abstraction built on top of raw network packets. It presents an illusion of a reliable data stream between two endpoints of the connection. Connections are set up in phases, again by sending packets.
Does all networking use a connection abstraction?
No, it does not. Here we are describing the Transmission Control Protocol (TCP), which is the most widely used network protocol on the internet. TCP is connection-based. There are other networking protocols that do not use the notion of a connection and deal with packets directly. Google "User Datagram Protocol" or simply "UDP" for more information.
A connection is established by what is known as a three-way handshake process. The client initiates the connection request using a network packet, and then the server and the client exchange one round of acknowledgment packets to establish the connection. This process is illustrated below.
Once the connection is established, the client and the server can exchange data over it. The connection provides the abstraction of a reliable data stream, but at a lower level data are still sent in packets. The networking protocol also performs congestion control: the client sends some data, waits for an acknowledgment from the server, then sends more data, and waits for another acknowledgment. The acknowledgment packets serve the protocol as indicators of the condition of the network. If the network suffers from a high packet loss rate or high delay due to heavy traffic, the protocol lowers the rate at which data are sent to alleviate congestion. The following diagram shows an example of the packet exchange between the client and the server using HTTP over an established connection.