Efficient Decrease-And-Conquer Linearizability Monitoring
Linearizability has become the de facto standard for specifying correctness of implementations of concurrent data structures. While formally verifying such implementations remains challenging, \emph{linearizability monitoring} has emerged as a promising first step to rule out early problems in the development of custom implementations, and serves as a key component in approaches that stress test such implementations. In this work, we undertake an algorithmic investigation of the linearizability monitoring problem, which asks to check if an execution history obtained from a concurrent data structure implementation is linearizable.
While this problem is largely understood to be intractable in general, a systematic understanding of when it becomes tractable has remained elusive. We revisit this problem and first present a unified `decrease-and-conquer' algorithmic framework for designing linearizability monitoring algorithms. At its heart, this framework asks to identify special \emph{linearizability-preserving} values in a given history: values whose removal yields an equilinearizable sub-history, and whose absence indicates non-linearizability. More importantly, we prove that a polynomial time algorithm for identifying linearizability-preserving values immediately yields a polynomial time algorithm for linearizability monitoring, while conversely, intractability of the former implies intractability of the latter.
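The decrease-and-conquer loop implied by this framework can be sketched as follows. This is a minimal, hypothetical sketch: the function \texttt{find\_preserving\_value} stands in for a data-type-specific procedure (e.g., for sets or stacks) that the paper's instantiations would supply, and the representation of a history as a map from values to their operations is an assumption for illustration.

```python
def is_linearizable(history, find_preserving_value):
    """Monitor a history by repeatedly shrinking it.

    `history` maps each value to the operations on that value (assumed
    representation). `find_preserving_value(history)` is a hypothetical
    data-type-specific procedure returning a linearizability-preserving
    value, or None if no such value exists.
    """
    while history:
        v = find_preserving_value(history)
        if v is None:
            # Absence of a preserving value witnesses non-linearizability.
            return False
        # Removing a preserving value yields an equilinearizable
        # sub-history, so we recurse on the smaller instance.
        del history[v]
    # The empty history is trivially linearizable.
    return True
```

Under the framework's guarantee, the loop runs at most once per value, so a polynomial time \texttt{find\_preserving\_value} gives a polynomial time monitor overall.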
We demonstrate our framework's effectiveness by instantiating it for several popular concurrent data types, namely sets, stacks, queues, and priority queues, deriving polynomial time algorithms for each under the \emph{unambiguity} restriction, where each insertion into the underlying data structure adds a distinct value. We further optimize these algorithms to achieve near-optimal log-linear time complexity by amortizing the cost of solving induced sub-problems through efficient data structures. Our implementation and evaluation on publicly available implementations of concurrent data structures show that our approach scales to very large histories and significantly outperforms existing state-of-the-art tools.