# Lecture 31: Optimistic Linked Lists

## Announcements

1. Next leaderboard submission is Friday? Monday?
2. Take-home quiz: released Wednesday (Gradescope), due Friday

## Today

1. Coarse locking
2. Fine-grained locking
3. Optimistic locking

Store, access, & modify collection of distinct elements:

The Set ADT:

• add an element
  • no effect if element already there
• remove an element
  • no effect if not present
• check if set contains an element

## Java SimpleSet Interface

```java
public interface SimpleSet<T> {
    // Add an element to the SimpleSet. Returns true if the element
    // was not already in the set.
    boolean add(T x);

    // Remove an element from the SimpleSet. Returns true if the
    // element was previously in the set.
    boolean remove(T x);

    // Test if a given element is contained in the set.
    boolean contains(T x);
}
```


## Linked List SimpleSets

• Each Node stores:
  • reference to the stored object
  • reference to the next Node
  • a numerical key associated with the object
• The list stores:
  • reference to head node
  • a tail node
• head and tail have min and max key values
• nodes have strictly increasing keys
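As a sketch, the layout above might look like the following in Java. The class and field names here are illustrative assumptions (deriving keys from `hashCode()` is also an assumption), not necessarily the course's exact code:

```java
// Sketch of the node/list layout described above; names are illustrative,
// not necessarily those used in the course code.
public class ListSketch<T> {
    private static class Node<T> {
        final T item;   // reference to the stored object
        final int key;  // numerical key (e.g., derived from item.hashCode())
        Node<T> next;   // reference to the next Node

        Node(T item, int key) {
            this.item = item;
            this.key = key;
        }
    }

    private final Node<T> head;  // head sentinel; following next reaches tail

    public ListSketch() {
        // Sentinels: head holds the minimum key, tail the maximum, so every
        // real node's key falls strictly between them.
        head = new Node<>(null, Integer.MIN_VALUE);
        head.next = new Node<>(null, Integer.MAX_VALUE);
    }

    public boolean isEmpty() {
        return head.next.key == Integer.MAX_VALUE;  // only sentinels remain
    }
}
```

The sentinel trick means every search can assume a predecessor and successor exist, so no operation needs a special case for an empty list or the ends of the list.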

## Why Keys?

Question. Why is it helpful to store keys in increasing order?

## Our Goals

1. Correctness, safety, liveness
   • starvation-freedom?
   • nonblocking??
   • linearizability???
2. Performance
   • parallelism?

## Synchronization Philosophies

1. Coarse-Grained (CoarseList.java)
   • lock whole data structure for every operation
2. Fine-Grained (FineList.java)
   • only lock what is needed to avoid disaster
3. Optimistic (OptimisticList.java)
   • don’t lock anything to read, only lock to modify
4. Lazy (LazyList.java)
   • use “logical” removal, only lock occasionally
5. Nonblocking (NonblockingList.java)
   • use atomics, not locks!

## Coarse-grained Locking

One lock for whole data structure

For any operation:

1. Lock entire list
2. Perform operation
3. Unlock list

See CoarseList.java
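A minimal sketch of the coarse-grained approach follows, assuming keys come from `hashCode()` and ignoring key collisions between distinct items for simplicity; it is not necessarily the course's CoarseList.java:

```java
import java.util.concurrent.locks.ReentrantLock;

// Coarse-grained sketch: one lock guards every operation on the list.
public class CoarseListSketch<T> {
    private static final class Node<T> {
        final T item;
        final int key;
        Node<T> next;
        Node(T item, int key) { this.item = item; this.key = key; }
    }

    private final Node<T> head;
    private final ReentrantLock lock = new ReentrantLock();

    public CoarseListSketch() {
        head = new Node<>(null, Integer.MIN_VALUE);   // head sentinel
        head.next = new Node<>(null, Integer.MAX_VALUE); // tail sentinel
    }

    public boolean add(T x) {
        int key = x.hashCode();
        lock.lock();                                 // 1. lock entire list
        try {
            Node<T> pred = head, curr = head.next;
            while (curr.key < key) { pred = curr; curr = curr.next; }
            if (curr.key == key) return false;       // already present
            Node<T> node = new Node<>(x, key);       // 2. perform operation
            node.next = curr;
            pred.next = node;
            return true;
        } finally {
            lock.unlock();                           // 3. unlock list
        }
    }

    public boolean remove(T x) {
        int key = x.hashCode();
        lock.lock();
        try {
            Node<T> pred = head, curr = head.next;
            while (curr.key < key) { pred = curr; curr = curr.next; }
            if (curr.key != key) return false;       // not present
            pred.next = curr.next;                   // unlink curr
            return true;
        } finally {
            lock.unlock();
        }
    }

    public boolean contains(T x) {
        int key = x.hashCode();
        lock.lock();
        try {
            Node<T> curr = head.next;
            while (curr.key < key) curr = curr.next;
            return curr.key == key;
        } finally {
            lock.unlock();
        }
    }
}
```

Because every operation excludes every other, correctness is nearly immediate, but at most one thread makes progress at a time.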

## Coarse-grained Appraisal

• Easy to implement

• No parallelism
• All operations are blocking

## Fine-grained Locking

One lock per node

For any operation:

1. Lock head and its next
2. Hand-over-hand locking while searching
   • always hold at least one lock
3. Perform operation
4. Release locks

See FineList.java
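The hand-over-hand discipline can be sketched as below, showing only `add`; as before, per-node locks and keys from `hashCode()` are assumptions, not necessarily the course's FineList.java:

```java
import java.util.concurrent.locks.ReentrantLock;

// Fine-grained sketch: each node carries its own lock, and a traversal
// always holds at least one lock while moving down the list.
public class FineListSketch<T> {
    private static final class Node<T> {
        final T item;
        final int key;
        Node<T> next;
        final ReentrantLock lock = new ReentrantLock();
        Node(T item, int key) { this.item = item; this.key = key; }
    }

    private final Node<T> head;

    public FineListSketch() {
        head = new Node<>(null, Integer.MIN_VALUE);
        head.next = new Node<>(null, Integer.MAX_VALUE);
    }

    public boolean add(T x) {
        int key = x.hashCode();
        head.lock.lock();                       // 1. lock head...
        Node<T> pred = head;
        try {
            Node<T> curr = pred.next;
            curr.lock.lock();                   // ...and its next
            try {
                while (curr.key < key) {        // 2. hand-over-hand search:
                    pred.lock.unlock();         //    drop the trailing lock only
                    pred = curr;                //    after the next one is held
                    curr = curr.next;
                    curr.lock.lock();
                }
                if (curr.key == key) return false;
                Node<T> node = new Node<>(x, key);  // 3. perform operation
                node.next = curr;
                pred.next = node;
                return true;
            } finally {
                curr.lock.unlock();             // 4. release locks
            }
        } finally {
            pred.lock.unlock();
        }
    }
}
```

Because locks are always acquired in increasing key order, no deadlock is possible; but every traversal pays two lock acquisitions per node, which is why this can lose to coarse-grained locking in practice.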

## Fine-grained Appraisal

• Parallel access
• Reasonably simple implementation

• Can be much slower than coarse-grained
• All operations are blocking

## Optimistic Synchronization

Fine-grained locking wastes resources:

• Nodes are locked when traversed
• Locked even if not modified!

A better procedure?

1. Traverse without locking
2. Lock relevant nodes
3. Perform operation
4. Unlock nodes

## An Issue!

Between traversing and locking

• Another thread modifies the list
• Now locked nodes aren’t the right nodes!

## Optimistic Synchronization, Validated

1. Traverse without locking
2. Lock relevant nodes
3. Validate list
   • if validation fails, go back to Step 1
4. Perform operation
5. Unlock nodes

See OptimisticList.java
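The validated optimistic scheme can be sketched as follows, again showing only `add`; the retry loop, `volatile` next pointers, and keys from `hashCode()` are assumptions of this sketch, not necessarily the course's OptimisticList.java:

```java
import java.util.concurrent.locks.ReentrantLock;

// Optimistic sketch: traverse with no locks, lock pred/curr, then validate;
// if validation fails, release the locks and start the traversal over.
public class OptimisticListSketch<T> {
    private static final class Node<T> {
        final T item;
        final int key;
        volatile Node<T> next;  // volatile so unlocked readers see updates
        final ReentrantLock lock = new ReentrantLock();
        Node(T item, int key) { this.item = item; this.key = key; }
    }

    private final Node<T> head;

    public OptimisticListSketch() {
        head = new Node<>(null, Integer.MIN_VALUE);
        head.next = new Node<>(null, Integer.MAX_VALUE);
    }

    // Check that pred is still reachable from head and curr is its successor.
    private boolean validate(Node<T> pred, Node<T> curr) {
        Node<T> node = head;
        while (node.key <= pred.key) {
            if (node == pred) {
                return pred.next == curr;
            }
            node = node.next;
        }
        return false;
    }

    public boolean add(T x) {
        int key = x.hashCode();
        while (true) {                         // retry until validation succeeds
            Node<T> pred = head;               // 1. traverse without locking
            Node<T> curr = pred.next;
            while (curr.key < key) {
                pred = curr;
                curr = curr.next;
            }
            pred.lock.lock();                  // 2. lock relevant nodes
            curr.lock.lock();
            try {
                if (validate(pred, curr)) {    // 3. validate list
                    if (curr.key == key) return false;
                    Node<T> node = new Node<>(x, key);  // 4. perform operation
                    node.next = curr;
                    pred.next = node;
                    return true;
                }
            } finally {
                pred.lock.unlock();            // 5. unlock nodes
                curr.lock.unlock();
            }
            // validation failed: fall through and go back to Step 1
        }
    }
}
```

Note that the failure path holds no locks while retrying, so a slow validating thread never blocks others from making progress in the meantime.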

## How do we Validate?

After locking, ensure that:

1. pred is reachable from head
2. curr is pred’s successor

If these conditions aren’t met:

• Start over!

## Implementing Validation

```java
private boolean validate(Node pred, Node curr) {
    Node node = head;                 // start from the head sentinel
    while (node.key <= pred.key) {    // keys increase, so stop once past pred
        if (node == pred) {
            return pred.next == curr; // pred reachable; is curr its successor?
        }
        node = node.next;
    }
    return false;                     // pred is no longer reachable from head
}
```


## Optimistic Appraisal

• Less locking than fine-grained
• More opportunities for parallelism than coarse-grained

• Validation could fail
• Not starvation-free
  • even if locks are starvation-free

## Performance Tests

On HPC Cluster:

• Compare running times of performing 1M operations
  • add/remove/contains sequence chosen at random
  • elements chosen from 1 to N at random
    • N is universe size
• Parameters
  • universe size $\approx$ set size

See SetTester.java
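A tiny single-threaded analogue of this experiment is sketched below; `java.util.HashSet` stands in for the list implementations, and all method names here are assumptions, since SetTester.java's details aren't shown:

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Sketch of the benchmark shape described above: a random sequence of
// add/remove/contains calls on elements drawn uniformly from 1 to N.
public class TesterSketch {
    static int run(int universeSize, int numOps, long seed) {
        Set<Integer> set = new HashSet<>();  // stand-in for the lists
        Random rng = new Random(seed);
        for (int i = 0; i < numOps; i++) {
            int x = 1 + rng.nextInt(universeSize);  // element from 1 to N
            switch (rng.nextInt(3)) {               // operation at random
                case 0: set.add(x); break;
                case 1: set.remove(x); break;
                default: set.contains(x);
            }
        }
        return set.size();
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        int size = run(8_192, 1_000_000, 42);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("final size: " + size + ", elapsed ms: " + elapsedMs);
    }
}
```

With adds and removes equally likely, the steady-state set size tends to hover around half the universe size, which is one way the "universe size ≈ set size" parameter choice can be arranged.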

## Performance Predictions?

Under what conditions do you expect coarse/fine/optimistic strategies to be performant?

• Number of threads (1 to 128 on HPC)
• Set universe size (8 to 8,192)