Consistency analysis in Bloom: a CALM and collected approach

This work from CIDR'11 aims to provide theoretical foundations for reasoning about correctness under eventual consistency, and identifies "order independence" (program output being insensitive to temporal nondeterminism, i.e., the order in which inputs arrive) as a sufficient condition for eventual consistency.

The CALM (Consistency And Logical Monotonicity) principle states that monotonic programs guarantee order independence, and hence eventual consistency. A monotonic program is one where any true statement remains true as new axioms arrive; in a non-monotonic program, by contrast, new input can cause earlier output to be revoked. Monotonic programs are therefore eventually consistent without coordination, whereas non-monotonic programs require coordination logic to be made consistent.
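
To make the distinction concrete, here is a minimal plain-Ruby sketch (the function names and data are mine, just for illustration): a selection over a growing input never retracts an earlier answer, while an aggregate can.

```ruby
# Monotonic: a selection over a growing input. Once an element passes
# the filter it stays in the output, no matter what arrives later.
def select_even(seen)
  seen.select(&:even?)
end

# Non-monotonic: an aggregate over the same input. A new arrival can
# revoke the previously reported answer.
def current_min(seen)
  seen.min
end

seen = [4, 7]
p select_even(seen)  # => [4]
p current_min(seen)  # => 4

seen << 2
p select_even(seen)  # => [4, 2] -- the earlier output still holds
p current_min(seen)  # => 2     -- the earlier answer (4) is revoked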

To simplify the task of identifying monotonic and non-monotonic program segments, this work proposes using program analysis in a declarative language, Bloom. In Bloom, the monotonicity of a program can be examined via simple syntactic checks: selection, join, and projection operators are monotonic, whereas aggregation and negation operators are non-monotonic and are flagged as "points of order" in the program. Coordination logic needs to be introduced at these points of order to achieve consistency.
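
Here is a sketch of what such a syntactic analysis sees, written in the style of Bud, Bloom's Ruby prototype (described below). The program and relation names are mine, and the aggregate syntax may differ slightly across Bud versions; treat this as a sketch, not the paper's code.

```ruby
require 'bud'

class PathProgram
  include Bud

  state do
    table :link, [:from, :to]
    table :path, [:from, :to]
    scratch :path_count, [:cnt]
  end

  bloom do
    # Monotonic rules: projection and join only ever add tuples to path.
    path <= link
    path <= (link * path).pairs(:to => :from) { |l, p| [l.from, p.to] }

    # Non-monotonic rule: an aggregate over path. A late-arriving link
    # can change the count, so the analysis flags this rule as a
    # "point of order".
    path_count <= path.group([], count)
  end
end
```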

Bloom is implemented as a domain-specific language embedded in Ruby, called Bud. The paper presents two case studies, a key-value store and a shopping cart implementation, to demonstrate these concepts. Bloom's alpha release came out a couple of days ago. Congratulations to the Bloom team! I am sure they will receive useful feedback as the language gets used for building things.
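
For flavor, here is a toy single-node key-value store in Bud-style syntax. This is my own sketch modeled loosely on the paper's case study, not the paper's actual code. Note that the overwrite semantics of put requires deletion, a non-monotonic operation, which hints at why a mutable store needs care under eventual consistency.

```ruby
require 'bud'

class ToyKvs
  include Bud

  state do
    table :kvstate, [:key] => [:value]
    scratch :put_req, [:key] => [:value]
    scratch :get_req, [:key]
    scratch :get_resp, [:key] => [:value]
  end

  bloom do
    # put with overwrite semantics: insert the new pair at the next
    # timestep and delete any old pair for the same key (non-monotonic).
    kvstate <+ put_req
    kvstate <- (kvstate * put_req).lefts(:key => :key)

    # get: a monotonic join of lookup requests against the store.
    get_resp <= (get_req * kvstate).pairs(:key => :key) { |g, s| [g.key, s.value] }
  end
end
```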

Our related work on the CALM principle
I tried to think of an analogue to the monotonicity property in the context of guarded-command languages. I think the monotonicity property corresponds to the guards being "stable" (closed under program actions) in a guarded-command program: if all the guards of a program are stable, then the program is monotonic. For a guard that refers to the state of other processes, we would normally need synchronization and atomicity to evaluate the guard and execute the statement in the same step. But for actions with stable guards we don't need that: we can evaluate the guard at one step and execute the statement at a later step, with several actions from other processes executing in between, without any need for synchronization.
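
A minimal Ruby sketch of this idea (a hypothetical example, not from our paper): with a counter that only grows, the guard "x >= 5" is stable, so checking it early and acting later is safe; the guard "x == 5" is not stable, and an interleaved action can falsify it between the check and the action.

```ruby
x = 4  # a counter that every action only increases

stable_seen   = x >= 5   # false at this step
unstable_seen = x == 5   # false at this step

x += 1                    # an action fires (possibly at another process)
stable_seen   ||= x >= 5  # becomes true, and stays true forever after
unstable_seen ||= x == 5  # true at this instant...

x += 1                    # ...but another action interleaves before we act
p x >= 5  # => true  : the stable guard still holds, so acting now is safe
p x == 5  # => false : the unstable guard was falsified in the meantime
```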

We had in fact noticed this nice property a year ago and published a paper with some generalizations of the stability property: Slow is Fast for Wireless Sensor Networks in the presence of Message Losses

This work on guarded-command languages can provide an imperative alternative to declarative languages for realizing the CALM principle. Declarative programs are hard to master for many developers (count me in that group) and may be difficult to test and debug. Imperative programs have an advantage in this regard.

Concluding remarks
I think this is a promising direction to pursue. As Barbara Liskov mentioned in her ACM Turing Award Lecture (SOSP'09), "The Power of Abstraction", the next breakthrough for distributed computing will most likely be led by novel programming languages/abstractions.

I guess the next interesting question for this work is: What are the rules of thumb for writing programs with fewer synchronization points?

Exercise questions
Are the map and reduce primitives in Bloom monotonic? What does this imply for chained map-reduce programs?

Can you reconstruct the monotonicity analysis for the programs given in the paper?
