InContext: Simple Parallelism for Distributed Applications (HPDC 2011)

The event-driven model is a popular framework for building complex distributed applications while keeping the simplicity of event-driven systems, where the lower-level details can be hidden.

However, introducing parallelism into an event-driven system has never been easy, because one must solve the concurrency problems that arise when multiple events share state variables.

Assume that we have two events, event_a() and event_b(), as follows:

    state_variable v;

    event_a() {
        read(v);
        write(v, 1);
    }

    event_b() {
        read(v);
        // do something
        read(v);
    }

What happens when we run these two events in parallel? The write(v, 1) statement in event_a() can execute between the two read(v) statements in event_b(), so the second read may return a different value than the first. This concurrent access violates the programmer's expectations.

The simplest way to overcome this issue is to disallow parallelism altogether. This is called the atomic event model, which Mace has used since its creation because of its simplicity.
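As a minimal sketch (not Mace's actual dispatcher), the atomic event model amounts to running every handler under one service-wide lock:

    #include <mutex>

    std::mutex event_lock;   // one lock for the entire service

    // In the atomic event model, every handler runs under the same lock,
    // so events are fully serialized and can never interleave.
    void run_event(void (*handler)()) {
        std::lock_guard<std::mutex> guard(event_lock);
        handler();   // the handler reads and writes state in isolation
    }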

However, what if we have a service with a recurring event that must be handled in a timely manner? One good example is a heartbeat (ping) message used to check the liveness of each node. If some event takes a long time to complete, the heartbeat message may not be processed in time while that long-running event is being handled.

The ideal solution (as suggested in the InContext model) is to allow parallelism within a service, so that events can be handled separately in different threads. This increases the responsiveness of a system that runs background tasks such as ping messages for liveness checking.

The problem is how to deal with concurrent access to shared variables. How can we safely run multiple events in parallel without breaking the programmer's consistency expectations? Of course, we could take another approach, such as SEDA, where stages communicate by queuing events to be processed. However, one cannot simply parallelize existing events with SEDA; it may require substantial restructuring of an existing system. Is there a simpler approach?

InContext provides a solution for those who seek parallelism in an event-driven system while keeping its simplicity.

Let us start by categorizing events by how they access state variables. In the InContext event model, there are three types of events:

  • Events that read from and write to shared variables.
  • Events that only read from shared variables.
  • Events that neither read from nor write to shared variables.

We call these global, anon, and null events, respectively. This expected behavior is supplied by the developer, who simply annotates each transition (i.e., event) with its expected behavior: global, anon, or null.
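As a hedged illustration of this classification (hypothetical names only; Mace/InContext expresses this through source-level annotations on transitions, not a runtime enum):

    enum class EventClass {
        Global,  // reads and writes shared state variables
        Anon,    // only reads shared state variables
        Null     // touches no shared state at all
    };

    struct Event {
        EventClass cls;      // the developer-supplied annotation
        void (*handler)();   // the event body itself
    };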

By categorizing the events this way, you may already see some opportunities for parallelism. For example, anon events (read-only) and null events (no-read, no-write) can safely run alongside each other without corrupting shared variables.

The events that must run exclusively, one at a time, are the global events (read/write), since they modify the state variables.

So what the InContext execution model does is enforce this expected behavior at all times.

If we have a global event, it must run alone, and all other events are deferred until the global event ends. A lock guards this write phase to make it work.

After the global event finishes, all anon events (read-only events) can run in parallel; a lock guards this read phase as well. If a global event is queued, it is deferred until the last anon event finishes.

So we end up with alternating read and write phases: a single global event runs in the write phase, while multiple anon events run concurrently in the read phase. This is our execution model, implemented with a read/write lock.

This gives us a simple execution model in which the developer does not need to worry much about concurrency issues.
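A minimal sketch of this phased execution using C++'s std::shared_mutex (an assumption for illustration; the paper's implementation lives inside the Mace runtime):

    #include <shared_mutex>
    #include <thread>
    #include <vector>

    std::shared_mutex state_lock;   // guards all shared state variables
    int v = 0;                      // a shared state variable

    // Global event: takes the exclusive (write) lock, so it runs alone.
    void global_event() {
        std::unique_lock<std::shared_mutex> write_phase(state_lock);
        v = v + 1;                  // read/write access is safe here
    }

    // Anon event: takes the shared (read) lock, so many run in parallel.
    void anon_event() {
        std::shared_lock<std::shared_mutex> read_phase(state_lock);
        int observed = v;           // read-only access
        (void)observed;
    }

    // Null event: touches no shared state, so it needs no lock at all.
    void null_event() {
        // e.g., send a heartbeat built purely from local data
    }

    int main() {
        std::vector<std::thread> ts;
        ts.emplace_back(global_event);
        for (int i = 0; i < 4; ++i) ts.emplace_back(anon_event);
        ts.emplace_back(null_event);
        for (auto& t : ts) t.join();
    }

With a fair reader/writer lock, this naturally produces the alternating read and write phases described above.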

However, this model is still not optimal: during a read phase we may have idle CPU cores while a queued global event waits. Can we provide a mechanism for an early start of that pending write event? If so, we can eliminate the wait until the last anon event finishes.

This leads us to an optimized execution model that checkpoints the shared variables for anon events.

The key intuition here is that the state variables an anon event reads do not change until the next global event begins. So why not have the first anon event after the latest global event take a snapshot of the global state, and have the following read-only events read from that snapshot rather than from the live state variables? This is exactly what our snapshot implementation does.

So the first anon event after the latest global event reads the global state and stores it in a separate snapshot; every following read-only event reads from the snapshot instead of the live global state. The trick is to inject a few lines of code at the front of each anon event that declare local variables with the same names as any global state variables the event refers to, initialized from the most recent snapshot. With this technique, one can safely run multiple events in parallel while allowing the next write event to start early.
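A simplified sketch of the snapshot idea (hypothetical names; it assumes the scheduler lets the next global event start writing only after the snapshot has been taken):

    #include <mutex>

    struct State { int v; };        // the shared state, as a struct

    State global_state;             // live state, owned by global events
    State snapshot;                 // checkpoint served to anon events
    long  global_version   = 0;     // bumped after every global event
    long  snapshot_version = -1;    // version the snapshot was taken at
    std::mutex snap_mutex;

    // Injected prologue for anon events: the first anon event after a
    // global event copies the live state; later anon events reuse it.
    const State& anon_prologue() {
        std::lock_guard<std::mutex> guard(snap_mutex);
        if (snapshot_version != global_version) {
            snapshot = global_state;            // take the checkpoint
            snapshot_version = global_version;
        }
        return snapshot;
    }

    // The original anon event body reads the global v; after injection
    // it shadows v with the snapshot copy, so the body is unchanged:
    void query_event() {
        int v = anon_prologue().v;   // injected line: shadow global v
        // ... original body reads v as before, now from the snapshot
        (void)v;
    }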

The details are described in the full paper. We encourage you to read it and share your ideas. We are working on merging this code into the Mace trunk, to be released soon. Thank you.

“InContext: Simple Parallelism for Distributed Applications.” Sunghwan Yoo, Hyojeong Lee, Charles Killian, and Milind Kulkarni. In Proceedings of the 20th International ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC 2011), San Jose, CA, 8-11 June 2011. [pdf]
