Threads! Just Say No!

Edward A. Lee (of Ptolemy fame) is the Robert S. Pepper Distinguished Professor at U.C. Berkeley, and knows a thing or two about designing and building concurrent systems.  He also has a hate-hate relationship with Threads, as evidenced by his paper The Problem with Threads (Jan 2006).  For example:
  • Threads [...] are wildly non-deterministic
  • non-trivial multi-threaded programs are incomprehensible to humans
  • we […] require that programmers of multi-threaded systems be insane. Were they sane, they could not understand their programs
  • If we expect concurrent programming to be mainstream, and if we demand reliability and predictability from programs, then we must discard threads as a programming model 
And there is more, oh so much more.  Just read the whole thing - it's wickedly entertaining, and makes its points with no small amount of wit and humor.  Herewith, a quick look at the points made in the paper.

Note: All of the below applies to Thread-based programming models, not to Threads themselves.  Of course, this assumes that at the OS level, the threads don't kill each other :-)



Concurrency
is the focus of this paper (quelle surprise!).  The problem is that while we, as humans, are good at concurrency in the physical world when we're actually doing things, we're just not good at the crazy abstractions we have chosen in our programming models - these just don't resemble the physical world.
Why?
In the real world, stuff is happening all the bloody time, and each of these happenings is almost invariably its own distinct thing, which may, or may not, be interacting with other things.  In short, the real world is a community of interacting entities, while we, in our petite programming pond, pretend that everything is a sequence of discrete steps.


The Deeper Problem
is that threads dominate most of our programming models.  Threads are great for shared-nothing architectures, where we use them more like processes than threads.  This is actually one of the reasons that, at the OS level, threads tend to work.
However, when we use threads in our programs, we run into the determinism issue.  Deep down in places we don't talk about at parties, we want our programs to be deterministic, we need our programs to be deterministic.  

Threads, unfortunately, are wildly non-deterministic.  Truly understanding what is going on in a highly threaded environment is, well, basically impossible.  There are all sorts of tools that can be used to impose some level of determinism on this whole mess (semaphores, monitors, etc.), but this is basically approaching the problem backwards - "pruning a wild mass of brambles rarely yields a satisfactory hedge".  Or, to let Prof. Lee state it himself:
Having threads with shared anything leads to deep and intricate issues.  The reason that a lot of these actually (seem to) work is that by and large we have very modest parallelism, and with the cost of context switching being as high as it is right now, you don't see most bugs.  I conjecture that most multi-threaded general-purpose applications are, in fact, so full of concurrency bugs that as multi-core architectures become commonplace, these bugs will begin to show up as system failures.
The problem, however, is that programmers are guided by syntax rather than semantics.  People understand Threads in Java because they understand the syntax!  The fact that they don't actually understand what's going on is what leads to the chaos ("Hey!  This Suzuki Hayabusa is exactly like my 250cc Triumph.  Splat…")
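To make the non-determinism concrete, here's a minimal Java sketch (mine, not from the paper) of the classic lost-update race: two threads bumping a shared counter with no locks.  It "should" print 2000000; in practice it prints a different (and different-each-run) number, because the read-modify-write sequences interleave.

    public class RacyCounter {
        // Shared mutable state, no synchronization -- this is the bug.
        static int counter = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable bump = () -> {
                for (int i = 0; i < 1_000_000; i++) {
                    counter++; // read-modify-write: not atomic
                }
            };
            Thread t1 = new Thread(bump);
            Thread t2 = new Thread(bump);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // Rarely 2000000, and rarely the same number twice.
            System.out.println(counter);
        }
    }

And note that on a single core with coarse time-slicing you might get the "right" answer for weeks, which is exactly Lee's conjecture about latent bugs surfacing on multi-core hardware.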




Solutions (in a Threaded world)
There are a couple of tried (if not proven) approaches that, well, don't really work, but make people feel like they are accomplishing something.
  • Better processes:  Exactly.  And you get bonus points if you winced when you read this.  "Work smarter, not harder" rarely ever works in practice.  And "deep" bugs are guaranteed to bite you in the ass.  Oh, they'll bite anyone's ass, but given that us humans have a limited amount of time/energy/effort available, do we want to spend it all on 'process', or on actually Getting Stuff Done?  (Note: 'process' as in "way of doing things", not as in "kernel process")
  • Better Patterns:  Sounds better, but the problem is that no matter how well you teach 'em, patterns are still tricky, and there is no real way to know (automagically) that they are/were implemented correctly (see the sketch just after this list).
  • Language extensions:  These basically force the language to do things a certain way (Guava for Java, Split-C for C, etc.).  These are probably the most "successful" ones of the lot, but they tend to shuffle the complexity around (point coming right up...)
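As a concrete example of that point about patterns (my sketch, not the paper's): the infamous double-checked locking idiom in Java looks obviously correct, and was subtly broken for years.  Without volatile, the memory model allows a second thread to see instance as non-null before the constructor's writes are visible - i.e., a half-built object.

    public class LazySingleton {
        // BROKEN as written: 'instance' needs to be volatile, or another
        // thread may observe a non-null reference to a half-constructed object.
        private static LazySingleton instance;

        public static LazySingleton getInstance() {
            if (instance == null) {                 // first check, no lock
                synchronized (LazySingleton.class) {
                    if (instance == null) {         // second check, under lock
                        instance = new LazySingleton();
                    }
                }
            }
            return instance;
        }
    }

The fix is one keyword (declare instance volatile), which is rather the point: if a pattern's correctness hinges on a keyword most programmers can't explain, "teach better patterns" isn't much of a strategy.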

The problem with all of these is that, as I've said elsewhere, Complexity never goes away, it just moves up the food chain.  More to the point, the inherent 'Thready'-ness of the system implies that you are substituting one set of issues for another - e.g., 'Better Processes' limit the amount of 'actual' work you do, 'Better Patterns' are difficult to prove correct, etc.

Solutions (in a non-Threaded world)
So, where do we go from here?  The basics, as has already been pointed out, are that we need to start with a fundamentally deterministic interaction mechanism, and add non-determinism only where necessary.  Actor-oriented architectures (where data flows through components, as opposed to data controlling components), languages that make message-passing concurrency core to their existence (hello Erlang! I knew there was a reason I've been using you all these years!), functional languages with their highly deterministic natures - all of these can be part (or all) of the solution.  Or, at the very least, the solution is easier to get to with these as a starting point.  You can still absolutely shoot yourself in the foot in these environments; it's just that you have to work at it, as compared to it being fairly straightforward in a typical Threaded Programming environment.
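To make the "data flows through components" idea concrete, here's a minimal actor sketch in Java (mine, for illustration - Erlang gives you this for free): the actor owns its state, and the only way to touch that state is to send it a message, which it processes one at a time.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class CounterActor {
        // State is private to the actor; no other thread can touch it.
        private int count = 0;
        private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();

        public void send(String msg) { mailbox.add(msg); }

        public void run() throws InterruptedException {
            // One thread, one message at a time: deterministic per-actor.
            while (true) {
                String msg = mailbox.take();
                if (msg.equals("increment")) count++;
                else if (msg.equals("print")) System.out.println(count);
                else if (msg.equals("stop")) return;
            }
        }

        public static void main(String[] args) throws Exception {
            CounterActor actor = new CounterActor();
            Thread t = new Thread(() -> {
                try { actor.run(); } catch (InterruptedException ignored) {}
            });
            t.start();
            // No locks in sight: message ordering is the only non-determinism.
            actor.send("increment");
            actor.send("increment");
            actor.send("print");   // prints 2
            actor.send("stop");
            t.join();
        }
    }

Compare this with the RacyCounter above: same counter, same concurrency, but the non-determinism has been squeezed into one well-defined place (the order messages arrive in the mailbox), which is exactly the "deterministic by default, non-deterministic where necessary" inversion the paper argues for.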
