Why the Cool Kids Use Event Loops

When I was working in software development back in the 1990s, nearly all the software libraries I worked on made use of event loops, because at the time most hardware had just a single CPU. I remember the excitement when threads were introduced into our development framework. It was revolutionary that we could now run two things at once — or rather, appear to run two things at once, since much of the hardware at that time still had only a single core, and hence our threaded code was never truly concurrent.

Over the years I’ve had mixed feelings about threads. Some of the most challenging systems I have maintained suffered from the overuse, or misunderstood impact, of concurrency. Even today I have discussions about whether a piece of code is truly thread-safe, and although libraries (for example, the Java Concurrency Library, java.util.concurrent) have made massive improvements in reducing the burden of developing with threads, it is still a challenge to ensure that we are not calling code that is not thread-safe when we have assumed it is. This is something that is generally not picked up by either static analysis or compilers.

I have been contributing to the open source project Chronicle Threads, and we have gone retro all the way back to the 1990s and embraced event loops: if it was good enough for the old-timers, maybe it’s good enough for us today.

Key Points

Below are some of the key points to consider when choosing to use event loops:

Lock Free

By removing the need for multiple threads, we can reduce the overhead of concurrency locking. Lock-free code often runs faster, and single-threaded code is usually simpler to write and test.
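As a minimal sketch of the idea (illustrative only — the class and method names here are not the Chronicle Threads API): tasks are submitted to a lock-free queue and executed one at a time on the loop’s own thread, so the handler state needs no locks at all.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// A minimal single-threaded event loop: tasks are handed over via a
// lock-free queue and run one at a time on the loop's thread, so the
// state they touch needs no synchronization.
public class MinimalEventLoop {
    private final Queue<Runnable> tasks = new ConcurrentLinkedQueue<>();
    private volatile boolean running = true;

    long counter; // plain mutable state, only ever touched by the loop thread

    public void submit(Runnable task) { tasks.add(task); }

    public void stop() { running = false; }

    public void run() {
        while (running || !tasks.isEmpty()) {
            Runnable task = tasks.poll();
            if (task != null)
                task.run();          // tasks never overlap, so no locks needed
            else
                Thread.onSpinWait(); // busy-spin hint; a real loop might back off
        }
    }

    public static long demo() {
        MinimalEventLoop loop = new MinimalEventLoop();
        Thread t = new Thread(loop::run);
        t.start();
        for (int i = 0; i < 1000; i++)
            loop.submit(() -> loop.counter++); // no lock around counter
        loop.submit(loop::stop);               // processed after all increments
        try {
            t.join(); // join gives us a happens-before edge to read counter
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return loop.counter;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 1000
    }
}
```

Note that the only concurrent structure is the hand-over queue; everything the tasks themselves touch stays plain and unsynchronized.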

Testing and Evolving Requirements

Much higher confidence can be gained from single-threaded test cases, which can lead to fewer bugs and more stable code. In addition, as your requirements evolve, it is easier to maintain and extend the business logic.
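Because a handler only ever runs on the event-loop thread, it can be unit-tested by simply calling it on the test thread — no latches, sleeps, or synchronization. A hypothetical handler (OrderHandler is an illustration, not part of any library):

```java
import java.util.ArrayList;
import java.util.List;

// A handler that only ever runs on one thread can use plain collections
// and be tested deterministically by driving events into it directly.
public class OrderHandler {
    private final List<String> fills = new ArrayList<>();

    public void onOrder(String symbol, long qty) {
        if (qty > 0)                       // reject non-positive quantities
            fills.add(symbol + ":" + qty);
    }

    public List<String> fills() { return fills; }

    public static void main(String[] args) {
        OrderHandler h = new OrderHandler();
        h.onOrder("VOD.L", 100); // drive events synchronously, as a test would
        h.onOrder("VOD.L", -5);  // rejected
        System.out.println(h.fills()); // prints [VOD.L:100]
    }
}
```

A test like this is fully deterministic: the same inputs always produce the same state, which is what gives the higher confidence described above.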

Shared Mutable State

If we are using a single-threaded event loop, it is very easy to access and modify mutable state between requests. A common approach to reducing multi-threaded complexity is to use immutable objects; however, in some cases, creating immutable objects can hurt the performance of your application. On the flip side, multi-threaded solutions often have to signal/wait or exchange state, which reduces real-world scaling as the number of threads grows.
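A sketch of what this looks like in practice (PositionKeeper is a hypothetical example, not a library class): state shared between requests can be a plain mutable map, with no locks and no fresh immutable copy per update, because only the event-loop thread ever touches it.

```java
import java.util.HashMap;
import java.util.Map;

// State shared between requests on a single-threaded event loop can be
// plain and mutable: no locks, no defensive immutable copies per update.
public class PositionKeeper {
    private final Map<String, Long> positions = new HashMap<>();

    // Called only from the event-loop thread, so plain mutation is safe.
    public void onTrade(String symbol, long qty) {
        positions.merge(symbol, qty, Long::sum);
    }

    public long position(String symbol) {
        return positions.getOrDefault(symbol, 0L);
    }

    public static void main(String[] args) {
        PositionKeeper pk = new PositionKeeper();
        pk.onTrade("BT.L", 500);
        pk.onTrade("BT.L", -200);
        System.out.println(pk.position("BT.L")); // prints 300
    }
}
```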

CPU Isolation and Thread Affinity

Event loops do have a slight overhead, as the loop itself has to be managed, but this can be balanced against the advantage of running code on fewer cores. Running on fewer cores means the thread scheduler does not have to context-switch between threads as often. Each context switch requires the stack frame and registers to be stored, and this state has to be reloaded before the thread continues. Adopting an event-loop design reduces thread context switching, but it will not prevent context switching entirely, as other processes can still be scheduled onto the same core. To eliminate the context switching, we can pin our thread to a core with a thread-affinity library and then apply CPU isolation to ensure nothing else runs on that core. Pinning a thread can also reduce cache contention, which occurs when two threads running on the same core spend time writing data into the L1 and L2 caches, only for the other thread to overwrite it.
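As a sketch of what this setup can look like on Linux (the core number and jar name below are placeholders, not a recommendation): the `isolcpus` kernel boot parameter keeps the general scheduler off a chosen core, and `taskset` pins the process to it. Libraries such as OpenHFT’s Java-Thread-Affinity can additionally pin an individual thread from inside the JVM.

```shell
# Illustrative Linux configuration; core 3 and the jar name are placeholders.
# 1. Kernel boot parameter (e.g. appended to GRUB_CMDLINE_LINUX) to keep the
#    scheduler from placing other work on core 3:
#      isolcpus=3
# 2. Pin the JVM (and hence its event-loop thread) to the isolated core:
taskset -c 3 java -jar event-loop-app.jar
```

With both in place, the event-loop thread has the core to itself: no context switches from other processes, and no cache contention from a co-scheduled thread.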

Event-Driven Architecture

If you are using an event loop as part of an event-driven architecture, the event loop can be used to read messages and dispatch them to event handlers. According to Wikipedia, “Building systems around an event-driven architecture (EDA) simplifies horizontal scalability in distributed computing models and makes them more resilient to failure.”
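The read-and-dispatch step can be sketched as follows (an illustration only, not the Chronicle Threads API): the loop drains a message queue and hands each message to the handler registered for its topic.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.function.Consumer;

// One event loop reading messages and dispatching them to handlers by topic.
public class Dispatcher {
    private static final class Message {
        final String topic;
        final String payload;
        Message(String topic, String payload) { this.topic = topic; this.payload = payload; }
    }

    private final Map<String, Consumer<String>> handlers = new HashMap<>();
    // Single-threaded loop: a plain ArrayDeque is enough, no concurrent queue needed.
    private final Queue<Message> inbox = new ArrayDeque<>();

    public void register(String topic, Consumer<String> handler) {
        handlers.put(topic, handler);
    }

    public void publish(String topic, String payload) {
        inbox.add(new Message(topic, payload));
    }

    // One pass of the event loop: drain the inbox, dispatching each message
    // to the handler registered for its topic; returns how many were dispatched.
    public int drain() {
        int dispatched = 0;
        for (Message m; (m = inbox.poll()) != null; ) {
            Consumer<String> handler = handlers.get(m.topic);
            if (handler != null) {
                handler.accept(m.payload);
                dispatched++;
            }
        }
        return dispatched;
    }

    public static void main(String[] args) {
        Dispatcher d = new Dispatcher();
        d.register("trade", p -> System.out.println("trade handled: " + p));
        d.register("quote", p -> System.out.println("quote handled: " + p));
        d.publish("trade", "100@1.25");
        d.publish("quote", "1.24/1.26");
        d.drain();
    }
}
```

Because registration, publishing, and dispatch all happen on one thread here, the handlers themselves stay free of synchronization, as in the earlier key points.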

Resource Utilization

Resource utilization is likely to be higher when using a single-threaded event loop. For example, in an event-driven architecture, while there are still events to process on the event loop, the core remains busy: there is no context switching, signaling, or waiting for state from another core.

In summary, single-threaded event loops can still be scaled by striping: each event handler runs in its own event loop, which in turn is bound to its own core, with each stripe running independently. This approach can be applied to a wide range of use cases. In my case, I used it when we developed a trading solution that required high performance and needed to scale by running any number of independent engines.
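The striping idea can be sketched as below (illustrative names, and the stripes run inline here for clarity — in a real system each stripe would be a single-threaded event loop pinned to its own core): events are routed by key so that each stripe owns a disjoint slice of the state and the stripes never share anything.

```java
import java.util.HashMap;
import java.util.Map;

// Striping: route events by key so each stripe owns a disjoint slice of
// the state; stripes never need to coordinate with one another.
public class StripedExample {
    static final class Stripe {
        final Map<String, Long> state = new HashMap<>(); // owned by one stripe only
        void onEvent(String key, long value) { state.merge(key, value, Long::sum); }
    }

    private final Stripe[] stripes;

    public StripedExample(int n) {
        stripes = new Stripe[n];
        for (int i = 0; i < n; i++)
            stripes[i] = new Stripe();
    }

    // Consistent routing: the same key always lands on the same stripe,
    // so no stripe ever needs to see another stripe's state.
    Stripe stripeFor(String key) {
        return stripes[Math.floorMod(key.hashCode(), stripes.length)];
    }

    public void onEvent(String key, long value) {
        stripeFor(key).onEvent(key, value);
    }

    public long total(String key) {
        return stripeFor(key).state.getOrDefault(key, 0L);
    }

    public static void main(String[] args) {
        StripedExample engine = new StripedExample(4);
        engine.onEvent("AAPL", 10);
        engine.onEvent("MSFT", 20);
        engine.onEvent("AAPL", 5);
        System.out.println(engine.total("AAPL")); // prints 15
    }
}
```

Because the routing is deterministic, each stripe can be run as its own independent engine, which is what allows this design to scale out.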

Code Example

To illustrate how you can use event loops in your code, there is a code example on GitHub: SingleAndMultiThreadedExample.

Conclusion

Favoring event loops over threads, and adopting a single-threaded, event-driven microservices architecture, has been successful in reducing the burden of concurrency, with single or multiple microservices each striped onto a single-threaded event loop.

