Boaz Patt-Shamir

Tel Aviv University

**Will Rosenbaum**

Max Planck Institute for Informatics

* For this talk, trees = paths!

- Model and Problem Definition
- Previous Approaches and Results
- New Results
- Future Work

- Network is a path* of buffers
- All packets have same destination (right endpoint)
- System is synchronous
- Each round, each buffer can forward up to $c\, (= 1)$ packets

* more generally, a tree

- Each round, packets arrive in the network
- In any $t$ consecutive rounds, at most $\rho \cdot t + \sigma$ packets arrive
- $\rho \leq c$ is the asymptotic **rate**
- $\sigma \geq 0$ is the **burstiness**
- Such an injection pattern is **$(\rho, \sigma)$-bounded**

Note: $(1,0)$-bounded $\implies$ at most one packet injected per round
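The $(\rho, \sigma)$-boundedness condition can be checked directly by scanning all windows of consecutive rounds. A minimal sketch (the function name `is_bounded` and the list-of-arrival-counts encoding are my own):

```python
def is_bounded(arrivals, rho, sigma):
    """Check whether an arrival sequence is (rho, sigma)-bounded.

    arrivals[t] = total number of packets injected in round t.
    Condition: every window of t consecutive rounds sees at most
    rho * t + sigma arrivals.
    """
    n = len(arrivals)
    for start in range(n):
        total = 0
        for end in range(start, n):
            total += arrivals[end]
            window = end - start + 1
            if total > rho * window + sigma:
                return False
    return True
```

For example, `[2, 0]` is not $(1, 0)$-bounded (two packets in one round) but is $(1, 1)$-bounded.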

What is the minimum buffer space required per node in order to deliver *all* packets for any $(\rho, \sigma)$-bounded injection pattern?

Every nonempty buffer forwards a packet to the right.

Worst case buffer size is \(\Theta(n)\).
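One synchronous round of this greedy protocol can be simulated as follows. This is a simplified sketch: the convention that injections land before forwarding decisions, and the `greedy_round` name, are my own assumptions.

```python
def greedy_round(load, injections):
    """One synchronous round of the greedy protocol on a path of n buffers.

    load[i] is the number of packets in buffer i; the destination sits
    to the right of buffer n-1.  injections[i] packets arrive at buffer
    i this round (assumed to land before forwarding).  Every nonempty
    buffer forwards exactly one packet to the right, simultaneously.
    """
    n = len(load)
    load = [load[i] + injections[i] for i in range(n)]
    forwards = [1 if load[i] > 0 else 0 for i in range(n)]
    # The packet forwarded by buffer n-1 is delivered to the destination.
    return [load[i] - forwards[i] + (forwards[i - 1] if i > 0 else 0)
            for i in range(n)]
```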

On an injection at buffer $i$, all nonempty buffers $j \geq i$ forward

Worst case buffer size is $O(1)$. (Miller & Patt-Shamir, 2016)

Buffer $i$ forwards if $L(i) > L(i+1)$, or $L(i) = L(i+1)$ and $L(i)$ is *odd* (where $L(i)$ denotes the current load of buffer $i$)

Worst case buffer size is $O(\log n)$. (PR & DLNO, 2017)
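One round of this odd-even rule can be simulated analogously to the greedy protocol; a sketch, assuming the destination is treated as a buffer with permanent load $0$ and injections land before forwarding (function name mine):

```python
def oed_round(load, injections):
    """One synchronous round of odd-even forwarding on a path.

    Buffer i forwards one packet iff L(i) > L(i+1), or
    L(i) == L(i+1) and L(i) is odd.  The destination (right of the
    last buffer) is treated as having load 0.
    """
    n = len(load)
    load = [load[i] + injections[i] for i in range(n)]

    def next_load(i):
        return load[i + 1] if i + 1 < n else 0

    forwards = [
        1 if (load[i] > next_load(i)
              or (load[i] == next_load(i) and load[i] % 2 == 1))
        else 0
        for i in range(n)
    ]
    return [load[i] - forwards[i] + (forwards[i - 1] if i > 0 else 0)
            for i in range(n)]
```

Note that an empty buffer never forwards: $0$ is even and never exceeds a neighbor's load.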

**Theorem** [PR, DLNO 2017]. Any protocol where the action of each node $v$ depends only on the current state of nodes up to distance $d$ from $v$ requires buffers of size
$$
\Omega\left(\frac{\log n}{d}\right)
$$
in the worst case.

**Question.** Is this lower bound optimal?

The lower bound of the previous theorem is tight!

**Theorem.** For every $d \in O(\log n)$ there exists a $d$-local protocol $A$ such that, for any $(\rho, \sigma)$-bounded injection pattern, $A$ delivers all packets and requires buffers of size
$$
O\left(\frac{\log n}{d} + \sigma \right).
$$

- A protocol is **$d$-local** if the actions of each node $v$ depend only on the current state of nodes up to distance $d$ from $v$.
- Taking $d = \log n$, this algorithm matches the performance of the best *centralized* algorithm.

- Partition network into **clusters** of size $O(d)$
- Define **virtual load** of a cluster to be the number of packets "owned" by the cluster
  - initial ownership assigned at time of injection
- Perform **virtual forwarding** according to the OED rule
  - virtual forwarding only changes ownership of packets
- Physically forward packets
  - only packets owned by next cluster are forwarded to next cluster
  - balance loads within clusters
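The bookkeeping behind the first two steps — cluster membership and virtual loads derived from packet ownership — can be sketched as follows (the names `cluster_of`, `virtual_loads`, and the owner-array encoding are mine):

```python
def cluster_of(node, d):
    """Cluster index of a node when the path is cut into blocks of size d."""
    return node // d

def virtual_loads(owner, num_clusters):
    """Virtual load of each cluster = number of packets it owns.

    owner[p] is the cluster currently owning packet p; ownership is
    assigned at injection time and changes only via virtual forwarding
    (the packet itself need not reside in the owning cluster).
    """
    loads = [0] * num_clusters
    for c in owner:
        loads[c] += 1
    return loads
```

Virtual forwarding then applies the OED rule to these cluster-level loads, transferring ownership of one packet per step rather than moving a packet physically.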

[Figure: buffer loads along the path; dashed line marks the threshold $\sim (\log n) / d$]

- Define a load threshold $\theta \sim \frac{\log n}{d}$
- Call a packet **bad** if its position in a buffer is $\geq \theta$
- The algorithm prioritizes forwarding bad packets to "good" positions
- After forwarding, at most $\sigma$ bad packets remain in the network
- So: max load is $\leq \theta + \sigma$

Currently: too much bookkeeping!

- Can a simpler algorithm work?
- Can the algorithm be stateless?
- no virtual loads
- no packet ownership

Can algorithm be generalized to broader family of topologies?

Impediment: worst-case lower bounds!

The following topologies have $\Omega(n)$ worst-case loads for all protocols:

- path with arbitrary source/destination
- single source, single destination DAGs
- $2 \times n$ grid with single destination

Non-greedy protocols may give good performance in practice even on more general topologies

Preliminary empirical results for "diamond" topology