Effective Concurrency: Prefer Futures to Baked-In “Async APIs”

This month’s Effective Concurrency column, Prefer Futures to Baked-In “Async APIs”, is now live on DDJ’s website.

From the article:

When designing concurrent APIs, separate “what” from “how”

Let’s say you have an existing synchronous API function [called DoSomething]… Because DoSomething could take a long time to execute (whether it keeps a CPU core busy or not), and might be independent of other work the caller is doing, naturally the caller might want to execute DoSomething asynchronously. …

The question is, how should we enable that? There is a simple and correct answer, but because many interfaces have opted for a more complex answer, let’s consider that one first.
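The distinction is easy to see in code. Here is a minimal sketch of the two answers, assuming C++11’s std::async and a hypothetical long-running DoSomething (an illustration, not the article’s own example):

    #include <future>

    int DoSomething() { return 42; }   // hypothetical long-running synchronous call

    // The complex answer: bake asynchrony into the interface itself, in the
    // style of a paired BeginDoSomething/EndDoSomething. Every such function
    // needs its own extra plumbing.
    //
    // The simple answer: leave DoSomething alone ("what"), and let the caller
    // choose to run it asynchronously ("how"):
    int main() {
        std::future<int> result = std::async(std::launch::async, DoSomething);

        // ... do other independent work while DoSomething runs ...

        return result.get();   // blocks only if the result isn't ready yet
    }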

I hope you enjoy it. Finally, here are links to previous Effective Concurrency columns:

The Pillars of Concurrency (Aug 2007)

How Much Scalability Do You Have or Need? (Sep 2007)

Use Critical Sections (Preferably Locks) to Eliminate Races (Oct 2007)

Apply Critical Sections Consistently (Nov 2007)

Avoid Calling Unknown Code While Inside a Critical Section (Dec 2007)

Use Lock Hierarchies to Avoid Deadlock (Jan 2008)

Break Amdahl’s Law! (Feb 2008)

Going Superlinear (Mar 2008)

Super Linearity and the Bigger Machine (Apr 2008)

Interrupt Politely (May 2008)

Maximize Locality, Minimize Contention (Jun 2008)

Choose Concurrency-Friendly Data Structures (Jul 2008)

The Many Faces of Deadlock (Aug 2008)

Lock-Free Code: A False Sense of Security (Sep 2008)

Writing Lock-Free Code: A Corrected Queue (Oct 2008)

Writing a Generalized Concurrent Queue (Nov 2008)

Understanding Parallel Performance (Dec 2008)

Measuring Parallel Performance: Optimizing a Concurrent Queue (Jan 2009)

volatile vs. volatile (Feb 2009)

Sharing Is the Root of All Contention (Mar 2009)

Use Threads Correctly = Isolation + Asynchronous Messages (Apr 2009)

Use Thread Pools Correctly: Keep Tasks Short and Nonblocking (Apr 2009)

Eliminate False Sharing (May 2009)

Break Up and Interleave Work to Keep Threads Responsive (Jun 2009)

The Power of “In Progress” (Jul 2009)

Design for Manycore Systems (Aug 2009)

Avoid Exposing Concurrency – Hide It Inside Synchronous Methods (Oct 2009)

Prefer structured lifetimes – local, nested, bounded, deterministic (Nov 2009)

Prefer Futures to Baked-In “Async APIs” (Jan 2010)

11 thoughts on “Effective Concurrency: Prefer Futures to Baked-In “Async APIs””

  1. I like the idea of separating the ‘What’ from the ‘How’, but I think that the Begin/End pattern still has its place if the ‘What’ and the ‘How’ are inextricably linked.

    For example, here at Leica we have a lot of hardware control code for our microscopes, which is, at the lowest level, inherently asynchronous. Generally, that is how we want the API to be used. If a user wants synchronous hardware control, he can implement his own wrapper on top of our API – I prefer this approach because synchronous execution would be the exception, rather than the rule.

    I’d be interested in your thoughts on this.

    Regards,
    Andrew.
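    A minimal sketch of the layering Andrew describes, with a hypothetical callback-based MoveStageAsync standing in for the lowest-level hardware API (stubbed here with a thread):

        #include <functional>
        #include <future>
        #include <memory>
        #include <thread>

        // Hypothetical lowest-level hardware API: inherently asynchronous,
        // reporting completion through a callback.
        void MoveStageAsync(double position, std::function<void(bool)> onDone) {
            std::thread([position, onDone] {
                (void)position;   // would be sent to the hardware here
                // ... command the hardware and wait for it to settle ...
                onDone(true);
            }).detach();
        }

        // The synchronous wrapper a user could layer on top: a promise/future
        // pair turns the completion callback into a blocking call.
        bool MoveStage(double position) {
            auto done = std::make_shared<std::promise<bool>>();
            std::future<bool> result = done->get_future();
            MoveStageAsync(position, [done](bool ok) { done->set_value(ok); });
            return result.get();   // blocks until the hardware reports completion
        }

        int main() { return MoveStage(1.5) ? 0 : 1; }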

  2. I was reading the article ‘Prefer Futures to Baked-In “Async APIs”’. I could see one advantage of using the BeginXXX/EndXXX pattern as in .NET. For example, if I have a class where only a few of its methods can be async, then I can explicitly expose them with BeginXXX/EndXXX.

  3. Great column.

    Am I the only one who cannot see the code of examples 4 and 5 correctly? Unless it was done on purpose, the local variable declarations are missing, and I think this is the key part of the code, where the future is used :)

  4. An architecture based on Futures is equally capable of hooking into IOCP, as long as you factor things correctly. The Win32 API for IOCP is actually closer in structure to Futures than it is to Begin/End – you perform an IO and pass in an OVERLAPPED that either specifies an event to signal or a callback to invoke on completion. You’re never actually required to call any sort of ‘End’ function, and the OVERLAPPED always represents a single IO at any given time. It’s basically a reusable Future.
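    A sketch of that shape in Win32 terms (assuming a handle opened with FILE_FLAG_OVERLAPPED; error handling trimmed):

        #include <windows.h>

        void ReadChunk(HANDLE file, char* buf, DWORD size) {
            OVERLAPPED ov = {};                              // tracks one in-flight I/O
            ov.hEvent = CreateEvent(nullptr, TRUE, FALSE, nullptr);

            if (!ReadFile(file, buf, size, nullptr, &ov) &&
                GetLastError() != ERROR_IO_PENDING) {
                CloseHandle(ov.hEvent);
                return;                                      // failed to start the I/O
            }
            // ... do other work while the read is in flight ...

            DWORD bytes = 0;
            GetOverlappedResult(file, &ov, &bytes, TRUE);    // block and harvest, much
            CloseHandle(ov.hEvent);                          // like future::get(); no "End" call
        }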

  5. One (slightly pedantic) correction: you don’t need to call ar.AsyncWaitHandle.WaitOne() before calling EndZZZ(…) for a correctly implemented Begin/End pair. The pattern states that EndZZZ will block if the operation has not completed.

    This does not really alter the point, and mechanisms using something like futures are certainly easier, in general, for both the implementer and the consumer of an API. (The exceptions are where something like Begin/End can hook into lower-level abstractions like I/O completion ports.)

  6. I’ll make my comment here, as it doesn’t require me to create an account.

    Async programming is useful for minimizing the number of threads while simultaneously having many concurrent IO operations in flight. Going with a task parallelism library for the same operation means you’re going to sacrifice one of those goals, typically the many concurrent IOs.

    The annoying thing is that the Begin/End async pattern in .NET is programmable (and, with the AsyncCallback, more or less designed to be programmed) in continuation-passing style. But CPS is a mechanical translation of normal, straight-line imperative code.

    So my position is rather different. Compiler/language tooling should be improved in this area, so that straight-line code, in the right context (e.g. an F#-style async workflow), can be mechanically transformed into a monadic nesting of continuations. That way you get to have your cake and eat it too.
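    For instance, rendered in C++ with hypothetical readLine/parse helpers (the same translation an F#-style async workflow would automate), the mechanical rewrite looks like this:

        #include <functional>
        #include <string>

        int parse(const std::string& s) { return (int)s.size(); }       // stand-in
        std::string readLine() { return "hello"; }                      // stand-in
        void readLineAsync(std::function<void(std::string)> k) {        // stand-in
            k(readLine());
        }

        // Straight-line imperative code:
        int readAndParse() {
            std::string line = readLine();
            return parse(line);
        }

        // The same logic after the mechanical CPS transformation: each step
        // passes its result to a continuation instead of returning it.
        void readAndParseCps(std::function<void(int)> k) {
            readLineAsync([k](std::string line) {
                k(parse(line));
            });
        }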

  7. One nice enhancement to the Future model is being able to add callbacks directly to a future, like in Twisted. It makes it straightforward to attach work to the end of a deferred computation without having to modify the call site. Doing that also means that you can attach a continuation to the Future instead of suspending your thread by blocking on an event.
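    A rough sketch of attaching a continuation, assuming C++11 futures and a hypothetical then() helper (later designs, such as the C++ Concurrency TS’s future::then, provide this directly; this version spends a task to wait):

        #include <future>
        #include <utility>

        // Hypothetical then(): attach a continuation to a future, in the
        // spirit of Twisted's Deferred callbacks.
        template <typename T, typename F>
        auto then(std::future<T> f, F cont)
            -> std::future<decltype(cont(std::declval<T>()))>
        {
            return std::async(std::launch::async,
                [](std::future<T> fut, F c) { return c(fut.get()); },
                std::move(f), std::move(cont));
        }

        int main() {
            std::future<int> f = std::async([] { return 21; });
            // The continuation runs once f's value is ready; this thread
            // never blocks on an event in order to attach it.
            std::future<int> g = then(std::move(f), [](int v) { return v * 2; });
            return g.get() == 42 ? 0 : 1;
        }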
