Sonification for monitoring and debugging distributed systems

Humans are top-grade pattern recognition machines. In his latest book, Ray Kurzweil, the Transcendent Man, theorizes that the human brain can be modeled as a hierarchy of pattern recognizers. Kurzweil says the neocortex contains 300 million very general pattern recognition circuits and argues that they are responsible for most aspects of human thought. (As a side note, here is an amusing little post I wrote about a peculiar pattern recognition bug in our brains.)

I don't have theories or hard data to offer on human pattern recognition skills. All I can offer here are some anecdotes.

Some time ago, I read a story about rangers decrypting a secure radio transmission just by getting accustomed to it. (It was a fascinating story, but I lost track of where I read it. If you have a pointer, let me know in the comments.) The secure radio transmission system had some peculiarities when encrypting; maybe the encryption took a tad longer for certain vowels. The rangers, with not much else to do, were listening to that channel continuously, and their ears started to pick up on these peculiarities. Soon they were able to decrypt the secure transmissions in real time. (Here is a more modern and advanced version of that attack, with software doing essentially the same thing.)

Auditory pattern recognition can go a long way at the extremes. Did you know that humans can learn to echolocate? Freaking Daredevil stuff here.

My point with these examples is that humans are skilled at performing advanced pattern recognition feats. These skills are hardwired in our brains, and we rely on them as toddlers to pick up on spoken language, a distinctive human feature.

Using audio to debug

My examples were about audio pattern recognition, which is where I want to direct your attention. Sound is a multidimensional phenomenon, consisting of pitch, loudness, and timbre. Sound is also an ambient technology. These properties make sound a viable alternative and complement to visual sensing modalities.

Since sound is so fundamental and important for human perception, several professions have employed it as a means of debugging and identifying problems. For many decades, doctors and mechanics have listened for abnormal noises to troubleshoot illnesses and mechanical problems. Last week, I found myself intrigued by the question of whether we can use sound for software debugging. In other words, can we design a stethoscope-like tool for software debugging?

In order to do that, we should first figure out a way to transform data into sound. This process would be an analog of data visualization. I didn't know the correct term for this, so my initial Google searches were unproductive. Then I came across the correct term when I searched for "sound analog of visualization". Aha, sonification!

Armed with the correct terminology, all sorts of literature on this process opened up. A primitive and popular example of sonification is the Geiger counter. Radar is another basic example. More recently, with the SETI project, sonification has also become an interesting avenue for exploring space. This TED talk shows how Wanda Diaz, a blind astronomer, listens to the stars using sonification.

In the digital systems domain, there are also good examples of sonification. You must have seen this sonification of sorting algorithms. It is simple but brilliant. There have also been more advanced attempts at sonification. This work employs sonification for overviewing git conflicts. These works employ sonification for business process monitoring. And these research papers explore how to employ sonification for understanding and debugging software ([1], [2], [3], [4]).
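
As a concrete illustration of the mechanics, here is a minimal sketch in the spirit of the sorting-algorithm sonifications (my own toy reconstruction, not the code behind those videos). It uses only Python's standard library to render bubble sort as a WAV file, with the pitch of each blip tracking the value currently being compared:

```python
# Toy sketch: sonify bubble sort by emitting a short sine blip per
# comparison, pitch proportional to the element value being inspected.
# Standard library only; writes the result to bubble_sort.wav.
import math
import random
import struct
import wave

SAMPLE_RATE = 44100
TONE_SECONDS = 0.03  # every comparison becomes a 30 ms blip

def tone(freq, seconds=TONE_SECONDS, volume=0.5):
    """Render one sine tone as a list of 16-bit PCM samples."""
    n = int(SAMPLE_RATE * seconds)
    return [int(volume * 32767 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
            for i in range(n)]

def sonify_bubble_sort(values, lo_hz=200, hi_hz=1200):
    """Bubble-sort `values`, emitting one tone per comparison."""
    a = list(values)
    vmin, span = min(a), (max(a) - min(a)) or 1
    samples = []
    for i in range(len(a)):
        for j in range(len(a) - i - 1):
            # Pitch tracks the value under comparison, so you literally
            # hear the array ordering itself from low notes to high.
            freq = lo_hz + (a[j] - vmin) * (hi_hz - lo_hz) / span
            samples.extend(tone(freq))
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return samples

if __name__ == "__main__":
    samples = sonify_bubble_sort(random.sample(range(100), 40))
    with wave.open("bubble_sort.wav", "w") as f:
        f.setnchannels(1)            # mono
        f.setsampwidth(2)            # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

Because pitch follows the compared values, the early passes sound chaotic and the later passes sweep smoothly upward as the array approaches sorted order.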

Sonification for distributed systems

What I have in mind is using sonification for monitoring and debugging distributed systems. Distributed systems are notoriously difficult to monitor and debug. Can we get any additional help, or even a slight advantage, through the sonification approach? Each type of message transmission can be assigned a certain piano tone. Loudness may be used for sonifying the size of the message. The duration of computation or message transmission would naturally show up as the rhythm of the system. (Of course, going at the millisecond scale won't work for human perception. The sonification software should do some slicing/sampling and slow things down to provide the listener with what a human can cope with.) For datacenter monitoring, you may give each software system/service a different timbre by assigning each one a different musical instrument. Then you may listen to your datacenter as if listening to an orchestra performing a symphony. Maybe your large MapReduce deployment would go "da da da dum!"
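
To make the mapping concrete, here is a minimal sketch, again standard-library Python. The trace format, the note table, and the time-stretch factor are my own assumptions for illustration, not an existing tool's API: each message type gets a fixed pitch, message size sets the loudness, and millisecond timestamps are stretched so the rhythm becomes audible:

```python
# Hypothetical sketch: render a distributed-system message trace as audio.
# Trace format and note table are assumptions made up for this example.
import math
import struct
import wave

SAMPLE_RATE = 44100
TIME_STRETCH = 200     # 1 ms of real time -> 200 ms of audio, for human ears
BLIP_SECONDS = 0.05    # each message becomes a 50 ms blip

# Assumed mapping: one piano-like pitch per message type (C4, E4, G4, C5).
NOTE_HZ = {"prepare": 262, "promise": 330, "accept": 392, "ack": 523}

def render_trace(trace, out_path="trace.wav"):
    """trace: list of (t_ms, msg_type, size_bytes) tuples, sorted by t_ms."""
    max_size = max(size for _, _, size in trace)
    total_s = (trace[-1][0] * TIME_STRETCH) / 1000 + BLIP_SECONDS
    mix = [0.0] * (int(SAMPLE_RATE * total_s) + 1)
    for t_ms, msg_type, size in trace:
        start = int(SAMPLE_RATE * t_ms * TIME_STRETCH / 1000)
        freq = NOTE_HZ[msg_type]
        volume = 0.2 + 0.8 * size / max_size   # louder blip = bigger message
        for i in range(int(SAMPLE_RATE * BLIP_SECONDS)):
            mix[start + i] += volume * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
    # Clip overlapping blips and quantize to 16-bit PCM.
    pcm = [int(max(-1.0, min(1.0, s)) * 32767) for s in mix]
    with wave.open(out_path, "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(struct.pack("<%dh" % len(pcm), *pcm))

# Example: a toy Paxos-like round, timestamps in milliseconds.
render_trace([(0, "prepare", 64), (2, "promise", 128),
              (4, "accept", 512), (6, "ack", 64)])
```

Swapping the plain sine wave for a different waveform per service would add the timbre dimension and move this toward the orchestra metaphor.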

Last night, my PhD students Aleksey and Ailidani surprised me with sonifications of the projects they are working on. I had mentioned this idea to them on Thursday. When we met on Skype to catch up on their projects, the first thing they demonstrated to me was how their projects sounded. It was exciting to listen to the sonification of distributed systems. The Voldemort server operation certainly has a peculiar music/rhythm. And I bet it is different from how Cassandra would sound --I haven't listened to that yet. The classic Paxos algorithm had a bluegrass-like rhythm, at least in Ailidani's sonification. I bet other Paxos flavors, such as ePaxos, Raft, and Mencius, will sound different from classic Paxos. And certainly faults and failures would sound abnormal, noticeable as off-tune notes and disruptions in the rhythmic sonification of these systems.

Sonification for monitoring and debugging distributed systems may become a thing in a decade or so. What if you couple sonification with a virtual reality (VR) headset for visualization? Sonification could be a good complement to the VR headset. VR gives you total immersion. Sonification provides an ambient way for you to notice an anomaly (something slightly off-tune) in the first place. With VR you can go deep and investigate. With a good VR interface, maybe something like a Guitar Hero or Dance Dance Revolution interface, it would be possible to watch the different nodes executing in parallel, crossing paths and diverging again. This blog post does a good job of reviewing how VR equipment can help in debugging.

Comments

alphazero said…
Up to and including the various storage engines and algorithms used in distributed systems, the sonograms in question are single-node observations.

I think the interesting aspect of sonification is that it affords new system-level monitors without impeding or affecting the observed system. For example, it is possibly a great way to do benchmarks.

If the sonification is that of a partial (but distributed) subset of the total nodes in the system, then we're really back where we started. Whether encoded as sound or time series, the original problem of having a coherent and consistent view based on transmitted information (local observation) remains.

Unknown said…
As a quick follow up, this is how Voldemort sounds: http://charap.co/sound-of-voldemort/
Unknown said…
What Paxos sounds like: http://ailidani.blogspot.com/2016/09/what-paxos-sounds-like.html
Lukasz Guminski said…
I like your article. It reminds me of my thinking on wave effects in distributed systems: http://container-solutions.com/monitoring-performance-microservice-architectures/ Thinking about this now, those effects surely could be translated into sound.
