An interesting question showed up in my mailbox today. So interesting that I think it’s worth a public answer and discussion:
In chapter 7 of The Art of Unix Programming, you classified threads under the section “Problems and Methods to Avoid”. You also wrote that with the increased emphasis on thread-local storage, threads are looking more like a controlled use of shared memory. This trend has certainly continued; recent programming languages like D, Scala, and Go encourage the use of threads as mostly isolated lightweight processes with message passing. Observing this trend, I have often wondered, why not go all the way and use multiple OS processes? I can think of two reasons to use threads in this newer, controlled way rather than using full processes:
1. Portability to Windows, which doesn’t have an equivalent of fork(2)
2. Performance, particularly because message passing between real processes requires serialization and deserialization, whereas message passing within a process can be done with shared memory and (maybe) locks
So what do you think? Are threads still a menace to be avoided in favor of full OS processes? Or has the situation improved since 2003?
I think it has, and I think you’ve very nearly answered your own question as to why. Bare threads were dangerously prone to deadlocks, livelocks, context-thrashing, and various other sorts of synchronization screwups – so language designers set out to encapsulate them in ways that gave better invariants and locality guarantees without sacrificing their performance advantages. I think Scala’s transactional memory stands out as a particularly elegant stab at the problem.
I don’t develop for Windows or communicate much with people who do, so I’m not equipped to judge how important Windows portability is in motivating these features. But the performance issue you called out is real and quite alive on Unix systems.
UPDATE: Matt Campbell, who has materialized in the comments here, sent the original question and has given me permission to cite him. Thanks for a good question!
Threads are a huge topic of discussion in the Windows development community. Honestly, I don’t hear much about problems; it is far more about the opportunity to take advantage of multicore. Each version of .NET and C# has had a focus on a particular theme: 2.0 was about Generics, 3.0 was about LINQ, and so forth. 5.0 is entirely focused on multithreading. For example, they have introduced a whole new version of LINQ called PLINQ that is all about automatically threading the declarative syntax of LINQ and leveraging threading to run in parallel. So, for example, if you wanted to perform some expensive calculation Compute on a list of items Items, you could use this syntax:
var result = from item in Items.AsParallel() select Compute(item);
This would partition Items and use all the cores to perform Compute in parallel. FWIW, it is also deferred execution, what I would call lazy, but I’ve been yelled at here for using that term too loosely.
Of course there are lots of tools for inter-thread locking too.
C# also has a process isolation mechanism called AppDomains that provides separation of state (and a lot of other things) within the same OS process.
I agree, but I put threading vs. forking in similar tool bins as compiled vs. interpreted. Unless the performance gains of threading are really, really, really important, it’s almost always easier, quicker, and more flexible to fork and use the various available means of message passing.
jsk’s choice of analogy is ironic, because interpreter- or VM-based environments tend to favor threads, coroutines, or a single-threaded event-based model (as in Node.js) over forking. I can think of three reasons for this:
1. Reference counting and some garbage collectors tend to defeat the copy-on-write method of memory sharing used by forked processes. The Dalvik VM specifically avoids this problem.
2. In environments with JIT compilers, JIT compilation also doesn’t work well with copy-on-write memory sharing.
3. Portability to Windows again.
Maybe another reason is that in higher-level languages, it’s easier to build abstractions such as actors, so threads are more or less isolated and communicate via message passing.
The one thing that’s most dangerous in my work is the synchronisation on the UI thread. I’ve still not worked out why the UI can only be manipulated by a single thread in these days of C# and managed code, it may have been an engineering necessity back in the days of yore, but I seriously don’t care about the message pump any more.
In general, there are mechanisms and patterns available that reduce the risk of threading. You tend to know when you’re approaching danger territory.
However, I like the node.js type solution – run multiple processes, all accepting from the same work queue. Make everything async.
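For the curious, here is a rough C sketch of that pattern: several forked worker processes all accept() on the same listening socket, so the kernel itself acts as the shared work queue and no locks are needed between workers. The port number and worker count are placeholders, and error checking is omitted for brevity.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);               /* placeholder port */
        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        listen(fd, 16);

        for (int i = 0; i < 4; i++) {              /* four workers, arbitrary */
            if (fork() == 0) {
                for (;;) {                         /* each worker pulls jobs forever */
                    int conn = accept(fd, NULL, NULL);
                    if (conn < 0)
                        continue;
                    const char reply[] = "handled by one worker process\n";
                    write(conn, reply, sizeof reply - 1);
                    close(conn);
                }
            }
        }
        for (;;)
            pause();                               /* parent just keeps the workers alive */
    }

The workers share nothing but the listening descriptor, which is what makes this style so hard to get wrong.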
C++ on Windows is a large part of my day job.
Windows doesn’t have fork(), but it does have memory-mapped files (the equivalent of mmap()), which are probably the cheapest way to move a lot of data between processes. We use memory mapping in an isolation context, where crappy third-party components can live in their own process, and not take down the entire system.
When done correctly, threads are good. When done incorrectly, threads are terrible. I haven’t read TAOUP, but it sounds like the argument against threading is that it is prone to abuse and misuse… that makes sense, in 2003. Threading can be hard to get right. Since then, tools and abstractions such as thread pools, worker queues and OpenMP have made things much easier, and help build ‘best practices’ models for those who need parallelism.
@jsk:
Hmm. At the risk of sounding like I’m just saying the same thing as Matt Campbell and esr, I’ll point out that while threading is more difficult in C and even somewhat more involved in Python, recent languages like Go, D and Scala are designed around multiprocessing and using multiple threads and so forth, so it’s way easier than it used to be.
That said, there is a time and a place for multithreading. Most multithreading seems to be implemented using POSIX threads, which are expensive, so you have to know when using multithreading vs. separate processes buys you performance. It is not always the case that using multithreading is worth it, even if the language does make it easier. There is still something to be said for the simplicity of a fork.
Oh, and BTW: while there is no fork on Windows, you can simulate one with CreateJobObject and AssignProcessToJobObject on Windows 2000 and later. It is, however, much hairier.
@Morgan
> It is not always the case that using multithreading is worth it, even if the language does make it easier.
> There is still something to be said for the simplicity of a fork.
That’s what I was driving at, yeah. I personally weigh the modularity and flexibility of forking (and I use ‘forking’ here loosely, as I also include the myriad messaging methods with it) a little more heavily than the performance of threading.
Even with OpenMP you have to watch out for “false sharing” and bank conflicts…
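To make the false-sharing point concrete, here is a small C/OpenMP sketch. The 64-byte cache line size and the thread count are assumptions, not universal constants.

    #include <omp.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define ITERS 10000000L

    /* Each thread owns one slot.  Adjacent longs share a cache line, so every
     * increment invalidates the line in the other cores' caches ("false
     * sharing").  Padding each slot out to a full (assumed 64-byte) cache
     * line removes the contention. */
    struct slot { long value; char pad[64 - sizeof(long)]; };

    int main(void) {
        long naive[NTHREADS] = {0};          /* adjacent: false sharing */
        struct slot padded[NTHREADS] = {{0}};/* one cache line per thread */

        #pragma omp parallel num_threads(NTHREADS)
        {
            int id = omp_get_thread_num();
            for (long i = 0; i < ITERS; i++)
                naive[id]++;                 /* contended cache line */
            for (long i = 0; i < ITERS; i++)
                padded[id].value++;          /* private cache line */
        }
        printf("%ld %ld\n", naive[0], padded[0].value);
        return 0;
    }

Both loops compute the same thing; timing them separately shows the padded version scaling and the naive one fighting over the cache line.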
Building on the single-tasking nature of the UI:
Windows applications still live in the world of modal dialogs, which absolutely drive me up the wall. In a multitasking environment, why oh why must you force me (as a user) to close a dialog box before engaging in another task? This is continually messing up my desired workflow.
Non-modal dialogs have been available in Windows for a long time, but few applications developers seem to use them routinely.
Threads are not so much a menace as a powerful but specific tool. I go out of my way to avoid threads, but in the cases where I have to use them, I triple-check the semaphores, memory, and other mechanisms associated with them, and try to isolate them to the simplest subset, so that I know exactly what they are doing in all possible permutations.
Some UI environments like Java have them, but it is still a matter of taking care. If you isolate where the threads are, it can be nearly as good as having separate processes: normally a UI and a background processor that share only a small-footprint interface.
Burying them in layers of protection either limits or negates their usefulness, causes bloat and leakage, or encourages their use where refactoring would easily yield something better. Sort of like C++.
http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html goes into how too many programmers today can’t handle pointers and recursion. Threading means you have to think in higher dimensions still, at least to get things right.
@esr:
Threads were never rocket science, but they do require care. Current tools reduce the programming burden, but even with stock C or assembly language, some discipline and a well-thumbed copy of Hoare was all that ever was really required.
@Jessica Boxer:
This. The primary difference between yesterday and today is that threads (done properly) buy you a lot more today than they did a decade ago. This means both that more programmers are learning to use them correctly and that there is (as esr points out) extra evolutionary pressure on the tools to make this work.
As a side-note, the underlying mechanics of threads are arguably more difficult now for most systems than they were a couple of decades ago. Extra complexity in desktop systems manifests itself as multicore, which is harder to get right: in the simple case, a uniprocessor system could disable interrupts to ensure atomic operation, and that just doesn’t work any more. Extra complexity in embedded systems often manifests itself as cache, but not every vendor spends the transistors and cycles to snoop the bus and make sure that DMA I/O is cache-coherent…
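A minimal C11 sketch of the multicore-safe replacement for that simple case: instead of disabling interrupts around a counter increment, you use an atomic operation. The thread and iteration counts are arbitrary.

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static atomic_long counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            atomic_fetch_add(&counter, 1);   /* safe across cores, no lock needed */
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        printf("%ld\n", atomic_load(&counter));  /* always 4000000 */
        return 0;
    }

With a plain long and counter++ instead, the result comes out short on a multicore machine, which is exactly the failure interrupt-disabling used to paper over on uniprocessors.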
“Lazy Evaluation” is not done until the results are required (which may be never). So is “deferred execution” done at some later point, but guaranteed to be done? Or what is the difference that you got yelled at for?
@Cathy:
Modal vs. non-modal dialogs is completely orthogonal to multithreading. In fact, like many programming problems, it’s probably easier to make non-modal dialogs operate correctly in a non-threaded environment.
@Jim T, in response to your question about why GUI toolkits are primarily single-threaded, I have this link, which is a pretty good summary of the issues involved, written from the perspective of people involved in the creation of the Swing toolkit for Java: http://weblogs.java.net/blog/kgh/archive/2004/10/multithreaded_t.html
Before anyone comes in and says that just because Swing failed to do it, maybe that is more a reflection on Swing than on the idea, I present this thesis on the creation of an actual fully threaded UI toolkit: http://www.inf.uos.de/elmar/projects/java-gtk/thread-ui.pdf
So does that second link contradict the first? No; if you read the thesis I think you’ll see that the approach one would need to use to create an application with the threaded library is quite different and not without its own set of drawbacks. There appears to be a fundamental mismatch between the evented model of UIs and threads, and I don’t think there is an easy way around it. So it appears that we will likely have to be careful about doing work in the UI thread for quite a while yet.
Multithreaded GUIs are not as useful as people think.
What you end up with is an exponential increase in complexity (everything needs to be synchronized, including the derived implementations so popular in the GUI world), and for what?
Just to be able to flip a control from another thread ?
Instead of synchronizing on every permutation of a widget, it is much easier to synchronize on the required parts of your data domain. You almost never want to draw from separate threads to the same window; what you end up doing most often is presenting multiple views of the same data, in which case you synchronize (double-buffer, perhaps) on your data.
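A rough sketch of that with POSIX threads, synchronizing on the data rather than the widgets. frame_t, the buffer size, and the draw callback are placeholders, not any real toolkit’s API.

    #include <pthread.h>
    #include <string.h>

    #define NPOINTS 1024                     /* placeholder size */

    typedef struct { double points[NPOINTS]; } frame_t;

    static frame_t buf_a, buf_b;
    static frame_t *front_buf = &buf_a;      /* read by the UI thread */
    static frame_t *back_buf  = &buf_b;      /* written by the worker */
    static pthread_mutex_t swap_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Worker thread: build the next frame in the back buffer, then swap. */
    void publish_frame(const double *fresh) {
        memcpy(back_buf->points, fresh, sizeof back_buf->points);
        pthread_mutex_lock(&swap_lock);
        frame_t *tmp = front_buf;
        front_buf = back_buf;
        back_buf  = tmp;
        pthread_mutex_unlock(&swap_lock);
    }

    /* UI thread: take a private copy of the current frame under the lock,
     * then draw it with no lock held and no widget-level locking at all. */
    void draw_current(void (*draw)(const frame_t *)) {
        frame_t local;
        pthread_mutex_lock(&swap_lock);
        local = *front_buf;
        pthread_mutex_unlock(&swap_lock);
        draw(&local);
    }

The only shared state is the pair of buffers and one mutex around the swap and the copy; the widgets themselves never need to be thread-safe.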
Multithreading is mostly intended to keep anything from halting the UI thread while it is thinking. IBM’s guidelines for OS/2 said that if something takes more than 10ms to process, it belongs in a separate thread.
And with the way that the Windows NT kernel schedules threads across processors/cores/HT, it is always a net plus for performance.
(I don’t know how Linux implements threads, never used them. So this is not a slam or flamebait.)
@brian:
I think you may be misconstruing what warmi says: there is a considerable difference between multithreading UI vs. computation, and multithreading multiple UI tasks against each other. The former, which you are discussing, is often a huge win for not much effort. The latter, which warmi was discussing, is only a win if the UI activities don’t then immediately get bottlenecked in the graphics hardware, and it can be some serious effort to try to parallelize some graphics operations without interference between the threads.
btw, warmi said “multithreaded gui,” which means exactly that — the gui itself is multithreaded, not that the gui operates on a single thread which could, in fact, communicate with other threads…
Patrick:
That’s correct.
I was talking about having GUI libraries be threadsafe, not about writing GUI-based, multithreaded apps in general.
Since when can’t full processes make use of shared memory? You allocate a block of shared memory, you allocate data structures within that block, you share access to those data structures. The correct answer to “how should I share some memory between these execution contexts?” is not always “make it easy for any context to write to nearly every single byte of memory in any other context”.
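For example, here is a minimal POSIX sketch of exactly that: an anonymous MAP_SHARED block created before fork() is visible to both processes, and only that block is shared, not the rest of either address space. Error checks are omitted for brevity, and MAP_ANONYMOUS is the Linux/BSD spelling (shm_open() is the portable alternative).

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* One shared long; everything else in the two processes stays private. */
        long *shared = mmap(NULL, sizeof(long), PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        *shared = 0;

        if (fork() == 0) {            /* child writes into the shared block */
            *shared = 42;
            _exit(0);
        }
        wait(NULL);                   /* ordering comes from wait() here */
        printf("child left %ld in the shared block\n", *shared);
        munmap(shared, sizeof(long));
        return 0;
    }

In a real program with concurrent access you would still need atomics or a lock inside the shared block, but the point stands: sharing memory does not require sharing the whole address space.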
With that being said, I do write threaded code; sticking to abstractions like Threading Building Blocks makes it relatively easy to take advantage of threads in fairly concise and safe ways.
@roystgnr:
This brings up an interesting point.
In a simple single-process embedded system, polling isn’t usually that much more inefficient than thread switching, and if you’re concerned about power management you can always halt the core until you get an interrupt.
So, in general, the best use-case for threads exists in systems with multiple processes, but unless you’re severely memory resource-constrained, the very ability to have multiple processes often obviates the need for threads.
Yes. Threads are most useful when you have multiple cores, and highly parallelizable yet tightly coupled tasks to perform. Unfortunately, “tightly coupled” and “highly parallelizable” often don’t go together, and even when they do, “tightly coupled” is the very feature that makes it difficult for mere mortals to write code that actually manages to usefully exploit parallelism.
Any thoughts on a 2nd edition of TAOUP?
Your position on “figuring out how to automatically do the right thing” vs mini-language config files seems to have evolved since its release as well.
>Any thoughts on a 2nd edition of TAOUP?
I’ve run the idea past Addison-Wesley. They don’t seem very interested, which puzzles me because I was given to understand the book did pretty well.
>Your position on “figuring out how to automatically do the right thing” vs mini-language config files seems to have evolved since its release as well.
Yes, that’s true. I started to push autoconfiguration in my own projects pretty hard in 2004 after thinking about what the Mac got right.
Patrick Maupin Says:
> “Lazy Evaluation” is not done until the results are required (which may be never). So is “deferred execution” done at some later point, but guaranteed to be done? Or what is the difference that you got yelled at for?
It is only done when the value is actually to be used; if it is never used, it is never evaluated. You could, for example, define an infinitely long list. I can’t remember what the Functional Programming purists yelled at me about, but my experience is that they can be rather territorial about this sort of terminology.
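A toy sketch in C of that “only if used, and at most once” behaviour, using a hand-rolled thunk; as I understand it, LINQ-style deferred execution differs in that the query is re-run each time it is enumerated rather than cached.

    #include <stdio.h>

    typedef struct {
        int  evaluated;          /* has the value been forced yet? */
        long value;              /* cached result */
        long (*compute)(void);   /* how to produce it */
    } lazy_long;

    static long force(lazy_long *l) {
        if (!l->evaluated) {             /* computed only on first use */
            l->value = l->compute();
            l->evaluated = 1;
        }
        return l->value;
    }

    static long expensive(void) {
        puts("computing...");            /* prints once, or never */
        return 6 * 7;
    }

    int main(void) {
        lazy_long answer = {0, 0, expensive};
        /* Nothing has been computed yet; if force() is never called, it never is. */
        printf("%ld\n", force(&answer));
        printf("%ld\n", force(&answer)); /* cached, no recomputation */
        return 0;
    }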
I like the way OpenBSD is doing this; OpenBSD makes their new servers “automatically do the right thing”, but they provide the mini-language config files. In fact, the syntax of all their newer config files is pretty similar. I think they have some yacc files in common or something. The pf configuration language has been reused at the very least, in OpenSMTPD and OpenNTPD. It was a huge relief when they came out with OpenNTPD. Just enable it, and it does the most common use case, of synchronizing your clock with the set of time servers. But the config file lets you do any of the other stuff you might have wanted to do with it before. OpenSMTPD is pretty good for config too.
@Patrick Maupin:
You mean PC and mobile device programmers. Servers and Unix workstations have had multiprocessor systems for decades. But I’m sure you’ve encountered more than a few of those in your travails.
Just browsed over some notes on AMD’s Fusion Development Summary, going over their new graphics card architecture. Apparently, they’re focusing more on general-purpose computation over pure graphical stuff (it’ll still do that, too). Just when threading is emerging from adolescence, we’re going to have to dive into massively parallel problems.
I’m already pondering some applications on my end (I do process-control applications, using C# and (Iron)Python on very slow Atom/VIA CPUs) that could benefit, perhaps heavily, from offloading setpoint monitoring to the GPU. (A setpoint is a future trigger, normally fired when a continuously monitored value from some device reaches a threshold, that requires immediate action.)
Threading itself is just the tip of the iceberg. We’re in for interesting times.
I believe the inflection point for the safety and usability of threads in mainstream programming was the development of the Java memory model. Before it, multithreaded programs lived in a limbo between a completely synchronous model and the asynchronous but moderately well-insulated land of independent processes. Java in particular specified semantics about memory ordering that had previously been highly platform-idiosyncratic (much like the width of the various C integer and pointer types, the latter of which (IIRC) don’t have to be the same size when pointing to different types). Since ordering issues are the core of what makes concurrent programming hard, having consistent rules makes dealing with concurrency much safer. The original JMM had some nasty edge cases, but it was a major advance in making imperative concurrency tractable.
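Here is the flavour of guarantee involved, sketched with C11 atomics rather than Java: a writer publishes data and then sets a flag with release semantics, and a reader that observes the flag with acquire semantics is guaranteed to also see the data. Without the explicit ordering, the compiler or CPU is free to reorder the stores and the reader can observe a half-built result.

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static int payload;                    /* plain data being published */
    static atomic_int ready = 0;           /* the publication flag */

    static void *writer(void *arg) {
        (void)arg;
        payload = 42;                                            /* 1: build the data */
        atomic_store_explicit(&ready, 1, memory_order_release);  /* 2: publish */
        return NULL;
    }

    static void *reader(void *arg) {
        (void)arg;
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;                                                    /* spin until published */
        printf("%d\n", payload);                                 /* guaranteed to print 42 */
        return NULL;
    }

    int main(void) {
        pthread_t w, r;
        pthread_create(&r, NULL, reader, NULL);
        pthread_create(&w, NULL, writer, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
    }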
>> Any thoughts on a 2nd edition of TAOUP?
> I’ve run the idea past Addison-Wesley. They don’t seem very interested, which puzzles me
> because I was given to understand the book did pretty well.
I speak only for myself, but I’m willing to bet I can get some “AMEN”s from regular readers of this blog –
I would be happy to pre-subscribe to a copy of the 2nd edition of TAOUP, i.e., pay our noble host cash in advance for the right to get a (preferably nicely printed) copy of the text. (I would pay somewhat less for an e-book, but YMMV.)
Any other subscribers / patrons ???
P.S. – “pebcak” – Eric, you may delete my nonsense comment #354730
>I would be happy to pre-subscribe to a copy of the 2nd edition of TAOUP
Thank you. If you want this to happen, the most effective thing you could do is scare up a way for me to compile DocBook to an open ebook format like EPUB. I have substantial new material I’d like to publish, including a very nice optimization case study for chapter 13.
@esr
> the most effective thing you could do is scare up a way for me to compile DocBook to an open ebook
> format like EPUB.
Unless I am mistaken, it looks like recent versions of DocBook support ePub exporting.
The DocBook XSL stylesheets support epub output. They have for a while, actually.
(Curses! Beaten by 2 minutes. :-D)
Fork(2): Thread or Menace?
>I’ve run the idea past Addison-Wesley. They don’t seem very interested, which puzzles me because I was given to understand the book did pretty well.
That’s a shame. I’ve recently started reading it. It’s a very good book, too good to leave outdated. Count me as a subscriber :)
Linked at http://yacof.blogspot.com/2011/12/another-no-content-link-post.html (along with a link to the 2002 post by moshez)
Processes can use shared memory too. With some caveats, mmap()ing files can work, as can SysV shared memory (if you’re a masochist).
In any case, serialization is a fair bit easier when you know the other processes are running on the same CPU, with the same compiler, etc. You can just directly write the contents of everything except for pointers.
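A minimal sketch of that shortcut: a pointer-free struct written byte-for-byte through a pipe between a parent and its child. The struct job type and its fields are made up for illustration, and the short-read handling a real program would need is omitted.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* A pointer-free record: safe to ship as raw bytes between processes
     * built from the same source on the same machine. */
    struct job {
        int    id;
        double threshold;
        char   name[32];
    };

    int main(void) {
        int fd[2];
        pipe(fd);

        if (fork() == 0) {                     /* child: receive */
            struct job j;
            read(fd[0], &j, sizeof j);         /* raw bytes in, no decoding step */
            printf("got job %d (%s, %.2f)\n", j.id, j.name, j.threshold);
            _exit(0);
        }

        struct job j = { .id = 7, .threshold = 0.5 };
        strcpy(j.name, "resample");            /* made-up job name */
        write(fd[1], &j, sizeof j);            /* raw bytes out, no encoding step */
        wait(NULL);
        return 0;
    }

The moment the two ends can be different architectures, compilers, or even build flags, this stops being safe and you are back to a real serialization format.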
You should definitely do a second edition of TAOUP yourself if you can legally do so a la Louis CK (https://buy.louisck.net/). PDF or epub. You wouldn’t make a million bucks in 2 weeks, but I’d buy a reasonably priced second edition in a heartbeat, especially if I could bypass the publishing industry.
@Patrick –
My mistake. I misinterpreted.
@Warmi –
I can’t speak to Windows’ implementation, but OS/2’s WPS (OS/2 3.0, circa 1994) was completely thread-safe.
I suspect a large part of the problem in doing threads “right” is this very thing. If the underlying OS and WM are properly thread-safe, then you only have to make sure your own code is safe.
I have done multithreaded GUI stuff in MFC – having separate dialogs with their own UI threads, and further threads doing the I/O and pushing the processed data into shared resources.
As far as Unix servers having been multiprocessor for decades: server programming is a very different beast than workstation programming. GUI response times are very important, as is the impression that actual work is being done.
@Aaron – this is true, but threads run in the same memory context as the parent, whereas forked processes don’t.
Every time I read the title of this post, I keep trying to make a Programmers of Pern joke. Thankfully (for the rest of you), I’ve never managed to get one to compile, Threads or no.
Mr. Burnside:
I believe if you type C-x M-c M-Dragon in emacs you get a mode that will let you program threads more safely.
In vim you’d use :set dragons on, but that’s only if you have the plugin installed.
@esr
>Thank you. If you want this to happen, the most effective thing you could do is scare up a way for me to compile DocBook to an open ebook format like EPUB. I have substantial new material I’d like to publish, including a very nice optimization case study for chapter 13.
Eric:
I too want the updated version of TAOUP, and would be willing to pay money or effort.
If someone else hasn’t made this offer redundant, I would undertake to convert TAOUP from docbook to epub.
I would need a copy of the source or a pointer to a copy or a pointer to a document of similar structure and complexity.
And permission, with at least a smidgen of enthusiasm, from you.
I own a paper copy of the original, so I should have a pretty good notion of when I have it right.
My initial plan is to use the python docbook2epub, and the deliverable should be a program that will create an epub doc if given a pointer to the docbook source.
Use my login email for reply if appropriate.
Jim Hurlburt
Yakima, WA
Do you control the rights to the book?
I read the first half to three-quarters of it for free online before breaking down and purchasing a hard copy.
Maybe a finished copy of it online would give your publisher a push in the right direction.