CS195Y Lecture 22

4/11/16


Nondeterminism in Spin

int x = 5;
active proctype test () {
    if
        :: x == 1 -> printf("hello!\n");
        :: x == 2 -> printf("goodbye!\n");
    fi;
}

What do you think happens when you run this program?
You would expect (hope) that the program would fall through the if statement and complete, because none of the conditions are true.
What actually happens is a timeout: the if statement blocks until one of its conditions becomes true.
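
Promela also has an else guard, which is enabled exactly when none of the other guards are; adding one gives the fall-through behavior you might have expected. A minimal sketch (the proctype name here is made up):

int x = 5;
active proctype test_else() {
    if
        :: x == 1 -> printf("hello!\n");
        :: x == 2 -> printf("goodbye!\n");
        :: else -> printf("no condition matched\n");   /* enabled only when no other guard is true */
    fi;
}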

Q: Suppose you have multiple active proctypes. Which one will get instantiated first?
A: Tim is reasonably sure that it will randomly choose which happens first, but he’s not sure.

Q: Why might blocking be a good thing?
A: In a concurrent model, blocking lets a process wait until some other process makes the condition true, which is exactly what we need for synchronization. (The locking algorithm below relies on this.)
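
A minimal sketch of blocking used this way (the process names are made up): waiter blocks on a bare expression statement until setter makes it true.

int y = 0;

active proctype waiter() {
    y == 1;    /* a bare expression statement blocks until it evaluates to true */
    printf("y is now 1, so waiter can continue\n");
}

active proctype setter() {
    y = 1;     /* making the condition true unblocks waiter */
}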

If we add a case: :: x == 5 -> printf("good morning!\n");, we get “good morning!” as expected.
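
Putting that together, the program is now:

int x = 5;
active proctype test () {
    if
        :: x == 1 -> printf("hello!\n");
        :: x == 2 -> printf("goodbye!\n");
        :: x == 5 -> printf("good morning!\n");
    fi;
}
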
What if we have two cases that are both true?

int x = 5;
active proctype test () {
    if
        :: x == 5 -> printf("good morning!\n");
        :: x == 5 -> printf("oh no!\n");
    fi;
}

You might expect it to execute the first one, as many other programming languages would. But instead, Spin chooses one of the true cases at random.

x = 0;
if
    :: x++;
    :: skip;
fi;

The above code block will set x to 0 or 1 randomly.

What if we want a random number between 0 and 10?
We could just write an if with one case for each possible value, setting x to that number (sketched below).
But what if we want a random number between 0 and 100, or 1000, or even worse, some unknown n? Clearly, in these cases, enumerating all of the possibilities is not feasible.
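
For just four values, the enumeration approach would look something like this:

if
    :: x = 0;    /* every branch is always enabled, so Spin picks one at random */
    :: x = 1;
    :: x = 2;
    :: x = 3;
fi;

The do loop below gets around having to write one case per value: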

x = 0;
do
    :: x >= 100 -> break;
    :: x++;
od

Q: Won’t this just increment to 100 and then stop?
A: Yes! The break case isn’t enabled until x reaches 100, so we never end up with a number below 100. How do we fix it?

x = 0;
do
    :: break;
    :: x < 100 -> x++;
od

Q: Isn’t this probability distribution really skewed?
A: Also yes! Assuming each case is chosen with probability .5, the probability compounds each time: to end up with the value k, the loop has to choose x++ k times and then break. So the probability of 0 is .5, of 1 is .25, of 2 is .125, etc.

Locking

There is a tradeoff between safety and liveness. The safest locking algorithm is to never let anyone access the shared resource, but that doesn’t make it a good locking algorithm.
Last time, we modeled locking in two ways (politeness and flags), but we discovered that both algorithms were fundamentally broken. However, we will show that combining the two gives a locking algorithm that actually works.

byte victim = 255;      /* 255 means neither process is the victim yet */
bool flags[2];          /* flags[i]: process i wants to enter */
bool in_crit[2];        /* in_crit[i]: process i is in the critical section */
active [2] proctype a_process() {
    tryagain:
    do
        :: goto tryagain;                               /* no-op case: don't ask for the lock right now */
        :: flags[_pid] = true;                          /* raise my flag */
           victim = _pid;                               /* politely offer to go second */
           flags[1-_pid] == false || victim != _pid;    /* block until the other isn't interested or I'm not the victim */
           in_crit[_pid] = true;                        /* enter the critical section */
           assert(in_crit[1-_pid] == false);            /* mutual exclusion check */
           in_crit[_pid] = false;                       /* leave the critical section */
           flags[_pid] = false;                         /* lower my flag */
    od
}

Okay, so we have mutual exclusion. But do we have liveness?

ltl no_starvation {
    always (flags[0] -> eventually in_crit[0])
}

We get an error. Here’s the trace:
I raise my flag. I enter the critical section. I leave the critical section. At this point, my flag is still raised but I have left the critical section. Then I lower my flag, and I never try to enter the critical section again.
This violates our liveness property even though it seems to us that this should be a valid trace.

ltl no_starvation2 {
    always (flags[0] -> eventually !flags[0])
}

This also has an error. Here, we see an adversarial scheduler: the process could go into the critical section, but it never gets a turn from the scheduler, so it never actually does.
Luckily, Spin’s verifier has a weak-fairness flag that promises a (somewhat) fair scheduler. With this setting, there are no errors!

Q: Why do we need the no-op case (the bare goto tryagain)?
A: Without it, both processes would constantly be contending for the critical section. We could model the system that way, but it is slightly different from the scenario we set out to model, where a process might not want the lock for a while.

Takeaways:
This locking algorithm actually works!
There are many other (working) locking algorithms that have different properties. For example, this locking algorithm does not guarantee that the critical section is first-come, first-served. There are other algorithms that do have this property.