TJ leaving node.js

I just saw the news. TJ Holowaychuk, one of node’s most important and respected contributors, is leaving node.js for Go. Of course, this is not good news, especially for people like me who have invested a lot into node.js and have bet an industrial project on it.

Why is TJ leaving? Part of it has to do with the intrinsic attractiveness of Go. But a large part is related to deficiencies on the node side. Usability and the lack of robust error handling come first:

Error-handling in Go is superior in my opinion. Node is great in the sense that you have to think about every error, and decide what to do. Node fails however because:

  • you may get duplicate callbacks
  • you may not get a callback at all (lost in limbo)
  • you may get out-of-band errors
  • emitters may get multiple “error” events
  • missing “error” events sends everything to hell
  • often unsure what requires “error” handlers
  • “error” handlers are very verbose
  • callbacks suck
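These failure modes are easy to reproduce in plain callback code. Here is a minimal sketch (hypothetical code, not from TJ’s post) of the first pitfall, the duplicate callback:

```javascript
// The "duplicate callbacks" pitfall: the error branch is missing a
// `return`, so the callback fires twice.
function fetchValue(key, cb) {
  if (key == null) cb(new Error('bad key')); // BUG: should be `return cb(...)`
  cb(null, 'value:' + key);                  // ...so this fires too
}

var calls = [];
fetchValue(null, function (err, val) {
  calls.push(err ? 'err' : val);
});
// calls is now ['err', 'value:null']: the caller saw an error AND a result
```

The symmetric bug (an early `return` with no callback at all) produces the “lost in limbo” case: the caller simply never hears back.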

TJ also complains about APIs, tooling, lack of conventions:

Streams are broken, callbacks are not great to work with, errors are vague, tooling is not great, community convention is sort of there, but lacking compared to Go. That being said there are certain tasks which I would probably still use Node for, building web sites, maybe the odd API or prototype. If Node can fix some of its fundamental problems then it has good chance at remaining relevant, but the performance over usability argument doesn’t fly when another solution is both more performant and more user-friendly.

I have been supervising a large node.js project at Sage. We started 4 years ago and we have faced the issues that TJ mentions very early in our project. After 6 months of experimentation in 2010, I was seriously questioning the viability of node.js for our project and I was contemplating a backtrack. The reasons were precisely the ones that TJ gives today: usability, maintainability, robustness.

Yet, we went ahead with node.js; we put more and more people on the project and we successfully released a new version of our product last month, with a new web stack based on node.js. Our developers are very productive and generally happy to work with node.js.


Why? Simply because the problems that TJ is mentioning don’t apply to us, or to others who have chosen the same approach:

  • Error handling and robustness are not issues for us. We are writing all our code with streamline.js. This lets us use good old structured exception handling. IMO this is even better than Go because you don’t have to check error codes after every call.
  • We never get duplicate callbacks; callbacks don’t get lost; errors are always reported in context, … All these problems are simply gone!
  • Debugging works and exceptions have understandable stacktraces.
  • We use an alternate streams library, based on callbacks rather than events, which keeps our code simple, robust and easy to understand.

So let us not throw the baby out with the bathwater. The problems that TJ puts forward are very real but they are not insurmountable. You can write robust, elegant and maintainable node.js code today!

Maybe it is time to reconsider a few things:

  • Stop being stubborn about callbacks and push one of the alternatives: generators, fibers, preprocessors (a la streamline) (*). Probably not for the core code itself because of the performance overhead, but as an option for userland code.
  • Investigate alternatives for the streams API. Libraries like Tim Caswell’s min-stream or my own ez-streams should be considered. My experience with ez-streams is that a simple callback-based API makes a huge difference on usability and robustness (**).

(*) I left promises out of the list. IMO, they don’t cut it on usability.
(**) ez-streams is partly implemented with streamline.js, which will probably be a showstopper for some but its API is a pure callback API and it could easily be re-implemented in pure callbacks.

As I said in my intro above, I have a very strong investment in node.js and I really want the platform to continue to grow and succeed. Three years ago, node.js was the coolest platform because of the unique blend of JavaScript and asynchronous I/O. But there are alternatives today and people are asking for more, especially on usability and robustness.

The problems raised by TJ cannot be ignored.

Posted in Uncategorized | Tagged | 5 Comments

Easy node.js streams

JavaScript is a great playground for experimentation. After ES6 generators and Galaxy I went back to one of my pet topics: streams. The simple streams API that we have been using in our product works really well but I was getting a bit frustrated with it: too low level! Working with this simple read/write API felt a bit like working with arrays without the ES5 functional goodies (forEach, filter, map, reduce, etc.). You get the job done with loops but you lack the elegance of functional chaining. So, I decided to fix it and a new project was born: ez-streams.

I had been keeping an eye on streams2, the streaming API that was introduced in node 0.10.0, but I was not convinced: too complex, people seem to be struggling with it, exception handling has problems, etc. So I went with a different design. Compatibility with streams2 was crucial but it was just too hard to get where I wanted to go by building directly on top of it.

The ez-streams project is now starting to take shape and I’ve just published a first version to NPM. The README gives an overview of the API and I don’t want to repeat it here. You should probably glance through it before reading this post to get a feel for the API. Here I want to focus on API design issues and explain why I took this route.

This project is a natural continuation of my earlier work on streamline.js. So I will be using the streamline syntax for the examples in this post. But the ez-streams API is just a regular callback based API and you don’t have to write your code with streamline to use it. You can call it from regular JavaScript code. I have included pure callback versions of some of the examples, to show how the API plays with vanilla JavaScript.

Minimal essential API

The first idea in this project was to keep the essential API as small and simple as possible. The essential ez-streams API consists of two function signatures:

  • an asynchronous read(_) function which characterizes reader streams.
  • an asynchronous write(_, val) function which characterizes writer streams.

The complete reader API is much more sophisticated but all the other calls are implemented directly or indirectly around the read(_) call. This makes it very easy to implement readers: all you have to do is pass a read function to a helper that will decorate it with the rest of the API.

For example, here is how you can expose a mongodb cursor as an EZ stream:

var reader = function(cursor) {
    return ez.devices.generic.reader(function(_) {
        var obj = cursor.nextObject(_);
        return obj == null ? undefined : obj;
    });
};
Also, there was no reason to limit this API to string and buffer types: read could very well return integers, Booleans, objects. Even null or undefined. I decided to reserve undefined as an end-of-stream marker because I wanted streams to be able to transport all the types that are serializable in JSON. Symmetrically, I used undefined as end-of-stream marker for write. So there was no need for a separate end method, writer.write(_) would do the job.
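In raw callback style, the same idea is just as small. The sketch below (makeArrayReader is a hypothetical helper, not part of ez-streams) shows a reader over arbitrary JavaScript values, with undefined as the end-of-stream marker:

```javascript
// A reader is just an object with an asynchronous read(cb) function.
// undefined marks the end of stream; any JSON-serializable value can flow.
function makeArrayReader(items) {
  var i = 0;
  return {
    read: function (cb) {
      process.nextTick(function () {
        cb(null, i < items.length ? items[i++] : undefined);
      });
    }
  };
}

// drain the reader: booleans, null and objects are all valid values
var reader = makeArrayReader([1, true, null, { answer: 42 }]);
var received = [];
(function pump() {
  reader.read(function (err, val) {
    if (err) throw err;
    if (val === undefined) return; // end of stream
    received.push(val);
    pump();
  });
})();
```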

As a consequence the API is not tainted by datatype specific issues. For example there is nothing in the reader and writer API about string encoding. This issue is handled in the devices that you put at both ends of your data processing chains. Better keep things orthogonal!

I may sound like an extremist in API minimalism here but I think that this is a very important point. A simple API is easier to wrap around existing APIs, it lends itself naturally to algebraic (monadic) designs, etc. This is probably the main reason why I did not go with node’s stream APIs (version 1 or 2).

Function application rather than pipes

The EZ streams design is directly influenced by ES5’s functional array API. It actually started as an attempt to mimic the ES5 design completely, and the rest of the API followed naturally. It is also more remotely influenced by jQuery.

There is a pipe function in the EZ streams API but it plays a less prominent role than in node’s standard stream API. The pipe calls do not appear between processing steps. Instead, pipe only appears at the end of the chains, to transfer the data to a writer. The typical structure of an EZ streams chain is:

reader . op1(fn1) . op2(fn2)  .... opN(fnN) . pipe(_, writer)

All operations produce a reader, except the last one which is a reducer. pipe is a reducer but it is not the only one: forEach, some, every and of course reduce are all reducers and you can end your chains with any of them.

Most operations take a callback function as parameter (fn1, fn2, etc. above). The callback depends on the operation. It can be a filter, a mapper, a complex transform, etc. These callbacks allow you to inject your own logic into the chain.
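This algebra is small enough to demonstrate with a toy model (this is NOT the real ez-streams code, just an illustration of the shape): every operation wraps read() and returns a new reader, and reducers end the chain with a continuation-first signature, like pipe(_, writer):

```javascript
function wrap(read) {
  return {
    read: read,
    filter: function (fn) {
      return wrap(function next(cb) {
        read(function (err, v) {
          if (err || v === undefined) return cb(err, v);
          if (fn(v)) return cb(null, v);
          next(cb); // value rejected: read the next one
        });
      });
    },
    map: function (fn) {
      return wrap(function (cb) {
        read(function (err, v) {
          if (err || v === undefined) return cb(err, v);
          cb(null, fn(v));
        });
      });
    },
    toArray: function (cb) { // a reducer, like pipe/forEach/reduce
      var out = [];
      (function pump() {
        read(function (err, v) {
          if (err) return cb(err);
          if (v === undefined) return cb(null, out);
          out.push(v);
          pump();
        });
      })();
    }
  };
}

function fromArray(items) {
  var i = 0;
  // callbacks are invoked synchronously here to keep the sketch short
  return wrap(function (cb) {
    cb(null, i < items.length ? items[i++] : undefined);
  });
}

// reader . op1(fn1) . op2(fn2) . reducer
var chainResult;
fromArray([1, 2, 3, 4])
  .filter(function (v) { return v % 2 === 0; })
  .map(function (v) { return v * 10; })
  .toArray(function (err, result) {
    chainResult = result; // [20, 40]
  });
```

Note how nothing here is a “stream object” in node’s sense: the whole chain is just function composition over one read signature.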

The classical node pattern is different. It is directly inspired from UNIX’s command piping:

source | op1 | op2 | ... | opN

which becomes:

source . pipe(stream1) . pipe(stream2) ... .pipe(streamN)

The node design forces you to package your logic as streams, usually duplex streams which receive data from one pipe, transform it and send their results to another pipe.

I find the EZ stream design more natural and easier to use: it does not force you to package your code as streams and handle low-level stream events. Instead, you just have to provide a set of callbacks that are specific to the operations that you apply. Moreover, the most basic operations, like filter and map are aligned on the ES5 array API. So you are using familiar patterns.

Mixing functional and imperative styles

The general structure of an ez-streams processing chain is very functional. The basic operations (filter, map, reduce) are directly modelled after the ES5 functional array API. They are applied to a reader, and they produce another reader on which other operations can be chained.

But there is one important operation that somehow violates this rule: transform. The transform call itself is functional and is chained exactly like the other operations. But its callback receives 3 parameters: a continuation callback, a reader and a writer. You write the body of your transformation as a function that reads its input from the reader parameter and writes its output to the writer parameter.

Let us look at the CSV parser that I used as example in the README:

var csvParser = function(_, reader, writer) {
	// get a lines parser from our transforms library
	var linesParser = ez.transforms.lines.parser();
	// transform the raw text reader into a lines reader
	reader = reader.transform(linesParser);
	// read the first line and split it to get the keys
	var keys = reader.read(_).split(',');
	// read the other lines
	reader.forEach(_, function(_, line) {
		// ignore empty line (we get one at the end if file is terminated by newline)
		if (line.length === 0) return;
		// split the line to get the values
		var values = line.split(',');
		// convert it to an object with the keys that we got before
		var obj = {};
		keys.forEach(function(key, i) {
			obj[key] = values[i];
		});
		// send the object downwards.
		writer.write(_, obj);
	});
};

This is clearly imperative style, full of calls like read, write or forEach that smell of side effects.

This could be seen as a weakness of the design. Why introduce imperative style in this wonderful functional world?

The reason is simple: because it is usually easier to write transforms in imperative style!

If instead you try to write your transforms directly as chainable functions, you have to write functions that transform a reader into another reader. These functions usually take the form of a state automaton. You have to write state machines!

Some folks find it natural and fun to write state machines. I don’t! I find it more difficult and more error prone than writing mundane loops with read and write calls. State machines are great but I’d rather let a program generate them than write them myself (I love regular expressions).

So the role of the transform function is simply to put the developers back into their imperative shoes (*).

(*) Fortunately I noticed my horrible mistake and rephrased this in gender-neutral form before publishing. I don’t want to be the next victim!

When it comes to programming styles, my religion is that you should just be pragmatic and use the style that best fits your problem, instead of trying to fit everything into one style which has been arbitrarily designated as superior. Functional, imperative and object-oriented styles all have a role to play in modern programming. A good developer is someone who uses the right style at the right moment, not someone who tries to force everything into a single style. I’d have a lot more to say but I’ll keep it for another post.

Exception handling

Exception handling works rather naturally with EZ streams: all processing chains are terminated by a reducer, and this reducer, unlike the previous chain elements, takes a continuation callback as first parameter. Exceptions that occur inside the chain are funneled through this continuation callback.

So, if you write your code with streamline.js, you can trap the exceptions with a try/catch around the whole chain. For example:

try {
    ez.devices.file.text.reader('users.csv') // the source reader
        .transform(csvParser) // the CSV parser transform from above
        .filter(function(_, item) {
            return item.gender === 'F';
        })
        .transform(ez.transforms.json.formatter({ space: '\t' }))
        .pipe(_, ez.devices.file.text.writer('females.json'));
} catch (ex) {
    logger.write(_, ex);
}
If you use EZ streams with raw callbacks, you just need to test the first parameter of your continuation callback. The previous example becomes:

ez.devices.file.text.reader('users.csv')
    .transform(csvParser)
    .filter(function(cb, item) {
        cb(null, item.gender === 'F');
    })
    .transform(ez.transforms.json.formatter({ space: '\t' }))
    .pipe(function(err) {
        if (err) logger.write(function(e) {}, err);
    }, ez.devices.file.text.writer('females.json'));

Of course, you can also trap exceptions in all the processing callbacks that you install in the chain (the callbacks that you pass to filter, map, transform, etc.). If you trap an exception in such callbacks and return a normal result instead, processing will continue as usual and your reducer callback will not receive the exception.

So you do not have to use domains or other advanced error handling techniques; the EZ streams API is just a regular API with continuation callbacks and exceptions are always propagated through these callbacks. If they are not, this is a bug.

Backpressure and buffering

Developers who implement node streams keep talking about backpressure. From what I understand, they have to write special code to inform their inputs when outputs are not processing data fast enough, so that the inputs get paused. Then, once the outputs get drained a bit, the inputs can be resumed.

Frankly, I do not understand any of this. We have been writing a lot of code with the low-level read/write API (the essential API) and we have never run into situations where we would need to worry about backpressure and write special code to pause inputs.

This is because EZ streams handle read and write operations in a decoupled way at the low level. When wrapping a node readable stream, our read function buffers a bit of input with low and high water marks. We pause the stream when the high mark is reached and we resume it when the buffer goes below the low mark. On the output side, our write wrapper handles the little drain event dance so that we don’t overflow the output buffers. There is no explicit coordination between inputs and outputs, it works almost magically, thanks to the event loop:

  • If the input is too fast when we pipe data, the input stream gets paused when its buffer hits the high mark. Then the output gets a chance to drain its output buffers and process the data which has been buffered upstream. When the input buffers fall below the low mark, the input stream will be resumed and it will likely fill its input buffers again before the output gets drained. So input will be paused again, and so on.
  • If, on the other hand, the output is faster, the input stream will have empty buffers at all times and the pipe will be waiting for input most of the time.
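The pause/resume bookkeeping described above boils down to very little code. Here is a toy version of the watermark logic (illustrative only; the names and marks are made up, not ez-streams internals):

```javascript
// push() returns false once the high mark is reached (producer should pause);
// shift() flips back to resumed once the buffer drains below the low mark.
function watermarkBuffer(low, high) {
  var queue = [];
  var paused = false;
  return {
    push: function (chunk) {
      queue.push(chunk);
      if (queue.length >= high) paused = true;
      return !paused;
    },
    shift: function () {
      var chunk = queue.shift();
      if (paused && queue.length <= low) paused = false;
      return chunk;
    },
    isPaused: function () { return paused; }
  };
}
```

A wrapper around a node readable stream would call stream.pause() when push returns false and stream.resume() when the buffer drains below the low mark; that is the entire “coordination”.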

So backpressure is a non-issue with EZ streams. You don’t need to worry about it!

What you should worry about instead is buffering, because it will impact the throughput of your stream chains. If you do not buffer at all, your pipeline will likely be inefficient because data will remain buffered at the source of the chain whenever some other operation is waiting for I/O further down the line. To keep the data flowing you need to inject buffers into the chain. These buffers will keep the upstream chain busy (until they fill, of course) while the downstream operations are waiting for I/O. Then, when a downstream operation is ready to accept new input, it will get it from the buffer instead of having to pull it from the beginning of the chain.

The EZ streams API includes a buffer operation that you can inject in your processing chains to add buffering. The typical pattern is:

reader.transform(T1).buffer(N).transform(T2).pipe(_, writer);

This will buffer N items between transforms T1 and T2.

Note that buffering can become really tricky when you start to have diamond-shaped topologies (a fork followed by a join). If you are unlucky and one of the branches is systematically dequeued faster than the other, you will need unlimited buffering in the fork node to keep things flowing. I hit this in one of my unit tests but fortunately this was a very academic example and it seems unlikely that real data processing chains would hit this problem. But who knows?

Streams and pipes revisited

I have been rather critical of node’s streaming/piping philosophy in the past. I just did not buy the idea that code would be packaged as streams and that you would assemble a whole application by just piping streams into each other. I think that my reluctance came primarily from the complexity of the API. Implementing a node.js stream is a real endeavor, and I just could not imagine our team using such a complex API as a standard code packaging pattern.

I’ve been playing with ez-streams for a few weeks now, and I’m starting to really like the idea of exposing a lot of data through the reader API, and of packaging a lot of operations as reusable and configurable filters, mappers, transforms, etc. So I feel that I’m getting more into the streams and pipes vision. But I only buy it if the API is simple and algebraic.

One word of caution to close this post: the ez-streams implementation is still immature. I have written basic unit tests, and they pass but I’m very far from having tested all possible edge conditions. So don’t expect everything to be perfectly oiled. On the other hand, I’m rather pleased with the current shape of the API and I don’t expect to make fundamental changes, except maybe in advanced operations like fork and join.

Posted in Uncategorized | 6 Comments

Bringing async/await to life in JavaScript

My dream has come true this week. I can now write clean asynchronous code in JavaScript: no callbacks, no intrusive control flow library, no ugly preprocessor. Just plain JavaScript!

This is made possible by a new feature of JavaScript called generator functions, which has been introduced by EcmaScript 6 and is now available in node.js (unstable version 0.11.2). I already blogged about generators a few times so I won’t get into the basics again here. The important thing is that ES6 introduces two small extensions to the language syntax:

  • function*: the functions that you declare with a little twinkling star are generator functions. They execute in an unusual way and return generators.
  • yield: this keyword lets you transfer control from a generator to the function that controls it.

And, even though these two language constructs were not originally designed to have the async/await semantics found in other languages, it is possible to give them these semantics:

  • The * in function* is your async keyword.
  • yield is your await keyword.

Knowing this, you can write asynchronous code as if JavaScript had async/await keywords. Here is an example:

function* countLines(path) {
    var names = yield fs.readdir(path);
    var total = 0;
    for (var i = 0; i < names.length; i++) {
        var fullname = path + '/' + names[i];
        var count = (yield fs.readFile(fullname, 'utf8')).split('\n').length;
        console.log(fullname + ': ' + count);
        total += count;
    }
    return total;
}

function* projectLineCounts() {
    var total = 0;
    total += yield countLines(__dirname + '/../examples');
    total += yield countLines(__dirname + '/../lib');
    total += yield countLines(__dirname + '/../test');
    console.log('TOTAL: ' + total);
    return total;
}

Here, we have two asynchronous functions (countLines and projectLineCounts) that call each other and call node.js APIs (fs.readdir, fs.readFile). If you look carefully you’ll notice that these functions don’t call any special async helper API. Everything is done with our two markers: the little * marks declarations of asynchronous functions and yield marks calls to asynchronous functions. Just like async and await in other languages.

And it will work!


The magic comes from galaxy, a small library that I derived from my earlier work on streamline.js and generators.

Part of the magic is that the fs variable is not the usual node.js file system module; it is a small wrapper around that module:

var galaxy = require('galaxy');
var fs = galaxy.star(require('fs'));

The galaxy.star function converts the usual callback-based node.js functions into generator functions that play well with the generator functions that we have written above.

The other part of the magic comes from the galaxy.unstar function which converts in the other direction, from generator functions to callback-based node.js functions. This unstar function allows us to transform projectLineCounts into a callback-based function that we can call as a regular node.js function:

var projectLineCountsCb = galaxy.unstar(projectLineCounts);

projectLineCountsCb(function(err, result) {
    if (err) throw err;
    console.log('CALLBACK RESULT: ' + result);
});

The complete example is available here.

The whole idea behind this API design is that galaxy lets you write code in two different spaces:

  • The old callback space in which today’s node.js APIs live. In this space, you program with regular unstarred functions in continuation passing style (callbacks).
  • The new generator space. In this space, you program in synchronous style with starred functions.

The star and unstar functions allow you to expose the APIs of one space into the other space. And that’s all you need to bring async/await to life in node.js.


I assembled galaxy quickly from pieces that I had developed for streamline.js. So it needs a bit of polishing and the API may move a bit. Generator support in V8 and node.js is also brand new. So all of this is not yet ready for prime time but you can already play with it if you are curious.

I have introduced a galaxy.spin function to parallelize function calls. I’ll probably carry over some other goodies from the streamline project (funnel semaphore, asynchronous array functions, streams module, …).

I find it exciting that modules written in async/await style with galaxy don’t have any direct dependencies on the node.js callback convention. So, for example, it would be easy to write a browser variant of the star/unstar functions which would be aligned on the jQuery callback conventions, with separate callback and errback.

Also, another module was announced on the node.js mailing list this week: suspend. It takes the problem from a slightly different angle, by wrapping every generator function with a suspend call. It lets you consume node.js APIs directly and write functions that follow node’s callback pattern. This is an attractive option for library developers who want to stay close to node’s callback model. Take a look at the source; it’s really clever: only 16 locs! Galaxy is different in that it moves you to a different space where you can program in sync style with no additional API, just language keywords. Probably a more attractive option if you are writing applications because you’ll get leaner code if most of your calls are to your own APIs rather than to node’s APIs.

Happy */yield coding!

Posted in Asynchronous JavaScript, Uncategorized | 43 Comments

Harmony Generators in streamline.js

Harmony generators have landed in a node.js fork this week. I couldn’t resist, I had to give them a try.

Getting started

If you want to try them, that’s easy. First, build and install node from Andy Wingo’s fork:

$ git clone node-generators
$ cd node-generators
$ git checkout v8-3.19
$ ./configure
$ make
# get a coffee ...
$ make install # you may need sudo in this one

Now, create a fibo.js file with the following code:

function* genFibos() {
  var f1 = 1, f2 = 1;
  while (true) {
    yield f1;
    var t = f1;
    f1 = f2;
    f2 += t;
  }
}

function printFibos() {
    var g = genFibos();
    for (var i = 0; i < 10; i++) {
      var num =;
      console.log('fibo(' + i + ') = ' + num);
    }
}

printFibos();

And run it:

$ node --harmony fibo
fibo(0) = 1
fibo(1) = 1
fibo(2) = 2
fibo(3) = 3
fibo(4) = 5
fibo(5) = 8
fibo(6) = 13
fibo(7) = 21
fibo(8) = 34
fibo(9) = 55

Note that generators are not activated by default. You have to pass the --harmony flag to activate them.

Using generators with streamline.js

I implemented generator support in streamline.js one year ago and blogged about it, but I could only test it in Firefox at the time, with a pre-harmony version of generators. I had to make a few changes to bring it up to par with harmony and I published it to npm yesterday (version 0.4.11).

To try it, install or update streamline:

$ npm install -g streamline@latest # you may need sudo

Then you can run the streamline examples:

$ cp -r /usr/local/lib/node_modules/streamline/examples .
$ cd examples
$ _node_harmony --generators diskUsage/diskUsage
./diskUsage: 4501
./loader: 1710
./misc: 7311
./streamlineMe: 13919
./streams: 1528
.: 28969
completed in 7 ms

You have to use _node_harmony instead of _node to activate the --harmony mode in V8. You also have to pass the --generators option to tell streamline to use generators. If you do not pass this flag, the example will still work but in callback mode, and you won’t see much difference.

To see what the transformed code looks like, you can just pass the -c option to streamline:

$ _node_harmony --generators -c diskUsage/diskUsage._js

This command generates a diskUsage/diskUsage.js source file containing:

/*** Generated by streamline 0.4.11 (generators) - DO NOT EDIT ***/
var fstreamline__ = require("streamline/lib/generators/runtime");
(fstreamline__.create(function*(_) {
var du_ = fstreamline__.create(du, 0);
/*
 * Usage: _node diskUsage [path]
 * Recursively computes the size of directories.
 * Demonstrates how standard asynchronous node.js functions
 * like fs.stat, fs.readdir, fs.readFile can be called from 'streamlined'
 * Javascript code.
 */
"use strict";

var fs = require('fs');

function* du(_, path) {
  var total = 0;
  var stat = (yield fstreamline__.invoke(fs, "stat", [path, _], 1));
  if (stat.isFile()) {
    total += (yield fstreamline__.invoke(fs, "readFile", [path, _], 1)).length;
  } else if (stat.isDirectory()) {
    var files = (yield fstreamline__.invoke(fs, "readdir", [path, _], 1));
    for (var i = 0; i < files.length; i++) {
      total += (yield du(_, path + "/" + files[i]));
    }
    console.log(path + ": " + total);
  } else {
    console.log(path + ": odd file");
  }
  yield (total);
}
try {
  var p = process.argv.length > 2 ? process.argv[2] : ".";

  var t0 =;
  (yield du(_, p));
  console.log("completed in " + ( - t0) + " ms");
} catch (ex) {
  console.error(ex.stack);
}
}, 0).call(this, function(err) {
  if (err) throw err;
});
As you can see, it looks very similar to the original diskUsage/diskUsage._js source. The main differences are:

  • Asynchronous functions are declared with function* instead of function.
  • Asynchronous functions are called with a yield, and with an indirection through fstreamline__.invoke if they are not directly in scope.

But otherwise, the code layout and the comments are preserved, like in --fibers mode.

You can execute this transformed file directly with:

npm link streamline # make streamline runtime available locally - may need sudo
node --harmony diskUsage/diskUsage


Of course, the next step was to try to compare performance between the 3 streamline modes: callbacks, fibers and generators. This is a bit unfair because generators are really experimental and haven’t been optimized like the rest of V8 yet but I wrote a little benchmark that compares the 3 streamline modes as well as a raw callbacks implementation. Here is a summary of my early findings:

  • In tight benches with lots of calls to setImmediate, raw callbacks outperform the others by a factor of 2 to 3.
  • Fibers always outperform streamline callbacks and generators modes.
  • Fibers beat everyone else, including raw callbacks, when the sync logic dominates the async calls. For example, fibers mode is 4 times faster than raw callbacks in the n=25, loop=1, modulo=1000, fn=setImmediate case.
  • Streamline callbacks and generators always come up very close, with a slight advantage to callbacks.
  • The results get much closer when real I/O calls start to dominate. For example, all results are in the [243, 258] ms range with the simple loop of readMe calls.
  • The raw callbacks bench is more fragile than the others. It stack overflows when the modulo parameter gets close to 5000. The others don’t.
  • The generators bench crashed when setting the modulo parameter to values < 2.

My interpretation of these results:

  • The difference between streamline callbacks and raw callbacks is likely due to the fact that streamline provides some comfort features: long stack traces, automatic trampolining (avoids the stack overflow that we get with raw callbacks), TLS-like context, robust exception handling, etc. This isn’t free.
  • I expected very good performance from fibers when the sync/async code ratio increases. This is because the sync-style logic that sits on top of async calls undergoes very little transformation in fibers mode. So there is almost no overhead in the higher level sync-style code, not even the overhead of a callback. On the other hand fibers has more overhead than callbacks when the frequency of async calls is very high because it has to go through the fibers layer every time.
  • Generators are a bit disappointing but this is not completely surprising. First, they just landed in V8 and they probably aren’t optimized. But this is also likely due to the single frame continuation constraint: when you have to traverse several layers of calls before reaching the async calls, every layer has to create a generator and you need a run function that interacts with all these generators to make them move forward (see lib/generators/runtime.js). This is a bit like callbacks, where the callbacks impact all the layers that sit on top of async APIs, but not at all like fibers, where the higher layers don’t need to be transformed.
  • The fibers and generators benches are based on code which has been transformed by streamline, not on hand-written code. There may be room for improvement with manual code, although I don’t expect the gap to be in any way comparable to the one between raw callbacks and streamline callbacks. The fibers transformation/runtime is actually quite smart (Marcel wrote it). I wrote the generators transform and I think it is pretty efficient, but it would be interesting to bench it against other solutions, for example against libraries that combine promises and generators (I think that those will be slower because they need to create more closures/objects, but this is just a guess at this point).
  • The crashes in generators mode aren’t really anything to worry about. I was benching with bleeding edge software and I’m confident that the V8 generators gurus will fix them.

So yes, generators are coming… but they may find it hard to compete head to head with raw callbacks and fibers on pure performance.

Posted in Asynchronous JavaScript, Uncategorized | Leave a comment

Node’s social pariahs

I learned a new expression on the node mailing list this week: social pariahs. The node.js police is after them, and it looks like I’m on the black list. I should probably take it easy and just “roll in my grave”, like Marcel did :-).

But Mikeal followed up with a blog article and I’d like to respond. Unfortunately, comments are turned off on his blog so I’m responding here (BTW, I wonder how comments work with horizontal scrolling).


I’ll be quick on this one. Yes, compatibility is very important and you need some rules if you want to build a lively ecosystem. The module system and the calling conventions are key. I learned this 25 years ago, when designing APIs on VAX/VMS. VMS had this great concept of common calling conventions which made it possible to link together modules written in different languages. Nothing new under the sun here.

Promise libraries are problematic in this respect because they promote a different API style for asynchronous functions. The standard callback(err, result) pattern is replaced by a pair of callback and errback, plus an optional progress callback, with different signatures. So you need wrappers to convert between the two API styles. Not a problem today, as the vast majority of node.js libraries stick to node’s callback style, but it could cause fragmentation if promises were to gain momentum.

Streamline.js is a good node.js citizen

Mikeal is quite vocal against streamline.js but I doubt that he has even read the README file. He is missing some very important points:

Streamline is not a library, it is a language tool, a precompiler.

Streamline is fully aligned on node’s callback convention.

Streamline is not trying to disrupt the ecosystem, it is trying to help people consume compliant code, and also produce compliant code.

To illustrate this, let me go back to the example that revived the debate this week on the mailing list. As I wrote in my post, streamline lets you chain the 3 asynchronous calls in a single line of code:

function computeAsyncExpression(_) {
  return (Object1.retrieveNum1(_) + Object2.retrieveNum2(_)) / Object3.retrieveNum3(_);
}

The streamline preprocessor transforms this into (*):

function computeAsyncExpression(cb) {
  if (cb == null) return __future(computeAsyncExpression, 0);
  Object1.retrieveNum1(function(err, v1) {
    if (err) return cb(err);
    Object2.retrieveNum2(function(err, v2) {
      if (err) return cb(err);
      Object3.retrieveNum3(function(err, v3) {
        if (err) return cb(err);
        cb(null, (v1 + v2) / v3);
      });
    });
  });
}

(*) actual code is a bit different but the differences are irrelevant here.

So the computeAsyncExpression function generated by streamline is more or less what the OP posted on the mailing list. It is a regular node.js function with a callback. You can call it like any other node.js API that you would have implemented directly in JavaScript with callbacks.

Streamline.js does not try to enforce a new API style; it just helps you write functions that conform to node’s callback conventions. And for lazy people like me, writing one line instead of 10 is a big win.

I did not talk about the first line in the generated function:

  if (cb == null) return __future(computeAsyncExpression, 0);

This is not a standard node.js pattern! What does it do?

If you pass a null or undefined callback to a standard node API, you usually get an exception. This is considered to be a bug and you have to fix your code and pass a valid callback.

Streamline handles this case differently, by returning a future instead of throwing an exception. The returned future works very much like a promise but it does not come with a new API pattern. Instead, a streamline future is a function that takes a regular node.js callback as parameter. You typically use it as:

  var future = computeAsyncExpression(null);
  // code that executes in parallel with computeAsyncExpression
  // now, get a result from the future
  future(function(err, result) {
    // async computation is over, handle the result
  });

Streamline is not introducing a disruptive API pattern here. It is leveraging the existing callback pattern.
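To make this concrete, here is a stripped-down sketch of how such a future can be implemented on top of plain callbacks. This is my own simplification for illustration; streamline’s actual __future is more elaborate:

```javascript
// Simplified future: start fn immediately, memoize its outcome,
// and return a function that takes a regular node.js callback.
function makeFuture(fn, args) {
  var done = false, error = null, result = null, waiter = null;
  fn.apply(null, args.concat(function(err, res) {
    done = true; error = err; result = res;
    if (waiter) waiter(error, result);
  }));
  return function(cb) {
    if (done) cb(error, result); // already completed: deliver right away
    else waiter = cb;            // still pending: remember the callback
  };
}
```

The key point: the value you get back is just a function with the familiar function(err, result) callback signature, not a new API surface.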

So far so good but streamline also supports a fibers mode, and experimental support for generators. Is this still aligned on node’s callback convention?

The answer may seem surprising but it is a YES. If you precompile the computeAsyncExpression(_) function with the --fibers option, what you get is still a regular asynchronous node.js function that you can call with a regular callback. This function uses fibers under the hood but it retains the standard callback signature. I won’t explain the technical details here because this would drive us too far but this is how it is.

And when generators land in V8 and node, it will be the same: streamline will give you the option to use them under the hood but still produce and consume standard callback APIs!

Libraries and Applications

The second point I wanted to discuss in this response is the distinction between libraries and applications. The node.js ecosystem is not just about people who publish modules to NPM. There are also lots of people who are building applications and services with node.js. Maybe they do not directly contribute to the ecosystem because they do not share their code but they contribute to the success and visibility of the platform.

I did not write streamline.js because I wanted to flood NPM with idiosyncratic modules. I wrote it because I was developing an application with a team of developers and I wanted application code that is robust, easy to write and easy to maintain. I wrote it because we had started to develop our application in callback style and we had produced code that was too convoluted and too fragile. Also we had reached the end of our prototyping phase and were about to move to a more industrial phase, and the learning curve of callbacks was just too high.

If I were in the business of writing NPM modules, I would probably think twice before writing them with streamline: there is a slight overhead because of some of the comfort features that streamline gives you (robust exception handling, TLS-like context, long stack trace); it is a different language, like CoffeeScript, which may put off people who want to fork, etc. I would probably use it to write drivers for complex legacy protocols (we are doing this internally in our project) but I would probably stick to raw callbacks for creative, very lightweight modules.

But I’m not writing NPM modules; I’m writing applications and services. And if I post a link to streamline to the mailing list it is because I think that this tool may help other people who are trying to write applications and who are running into the problems that we ran into more than 2 years ago: robustness, maintainability, learning curve, etc. To plagiarize Mikeal:

I feel really bad for people that ask incredibly simple questions on this list and get these incredibly complex answers!

I may have been too vocal on the mailing list at some point but I’m trying to be much more discreet these days. The streamline ecosystem is not very large but the feedback that I get is very positive. People like the tool and it solves their problem. So I don’t feel bad posting a link when the async topic comes back to the mailing list. Maybe a tool like streamline can help the OP solve his problem. And even if it does not, it won’t hurt the OP to take a look and discover that there is not just one way of dealing with async code. He’ll learn along the way and make up his own mind.

Posted in Uncategorized | 5 Comments

Node.js stream API: events or callbacks?

Last year, I wrote a blog post about events and node streams. In this post, I proposed an alternative API for streams: callback-oriented rather than event-oriented.

For readable streams, the proposal was to have a simple read(cb) call, where cb is a callback with a function(err, data) signature. A null data value signals the end of stream.

I did not discuss writable streams in this early post but shortly afterwards I implemented wrappers for both readable and writable streams in streamline.js’ streams module and I used a very similar design for the writable stream API: a simple write(data, cb) function (similarly, a null data ends the stream).

Note: the parameters are swapped in the streamline API (write(cb, data)) because it makes it easier to deal with optional parameters. In this post I will stick to the standard node.js convention of passing the callback as last parameter.

I have been using this callback-based API for more than a year and I have found it very pleasant to work with: it is simple and robust (no risk of losing data events); it handles flow control naturally and it blends really well with streamline.js. For example, I could easily re-implement the pump/pipe functionality with a simple loop:

function pump(inStream, outStream, _) {
  var data;
  do {
    data =;
    outStream.write(data, _);
  } while (data != null);
}

State of affairs

I find the current node.js stream API quite hairy in comparison. On the read side we have three events (data, end, error) and two calls (pause and resume). On the write side we have two events (drain, error) and two calls (write and end).

The event-oriented API is also more fragile because you run the risk of losing events if you do not attach your event handlers early enough (unless you pause the stream immediately after creating it).

And from the exchanges that I see on the node mailing list, I have the impression that this API is not completely sorted out yet. There is talk of upcoming changes in 0.9.

I have tried to inject the idea of a callback based API into the debate but I’ve been unsuccessful so far. Discussions quickly turned sour. I got challenged on the fact that flow control would not work with such an API but I didn’t get any response when I asked for a scenario that would demonstrate where the potential problem would be.


So I’m writing this post to try to shed some light on the issue. What I’ll try to do in this post is prove that the two APIs are equivalent, the corollary being that we should then be free to choose whatever API style we want.

To prove the equivalence, I am going to create wrappers:

  • A first set of wrappers that transform streams with event-oriented APIs into streams with callback-oriented APIs.
  • A second set of wrappers that transform streams with callback-oriented APIs into streams with event-oriented APIs.

There will be three wrappers in each set: a Read wrapper for readable streams, a Write wrapper for writable streams, and a wrapper that handles both read and write.

After introducing these wrappers, I will demonstrate on a small example that we get an equivalent stream when we wrap a stream twice, first in callback style and then in event style.

In this presentation I will deliberately ignore peripheral issues like encoding, close events, etc. So I won’t deal with all the subtleties of the actual node.js APIs.

The callback read wrapper

The callback read wrapper implements the asynchronous read(cb) API on top of a standard node.js readable stream.

exports.CallbackReadWrapper = function(stream) {
  var _chunks = [];
  var _error;
  var _done = false;

  stream.on('error', function(err) {
    _onData(err);
  });
  stream.on('data', function(data) {
    _onData(null, data);
  });
  stream.on('end', function() {
    _onData(null, null);
  });

  function memoize(err, chunk) {
    if (err) _error = err;
    else if (chunk) {
      _chunks.push(chunk);
      stream.pause();
    } else _done = true;
  }

  var _onData = memoize; = function(cb) {
    if (_chunks.length > 0) {
      var chunk = _chunks.splice(0, 1)[0];
      if (_chunks.length === 0) {
        stream.resume();
      }
      return cb(null, chunk);
    } else if (_done) {
      return cb(null, null);
    } else if (_error) {
      return cb(_error);
    } else _onData = function(err, chunk) {
      if (!err && !chunk) _done = true;
      _onData = memoize;
      cb(err, chunk);
    };
  };
};
This implementation does not make the assumption that data events will never be delivered after a pause() call, as this assumption was not valid in earlier versions of node. This is why it uses an array of chunks to memoize. The code could be simplified if we made this assumption.

The callback write wrapper

The callback write wrapper implements the asynchronous write(data, cb) API on top of a standard node.js writable stream.

exports.CallbackWriteWrapper = function(stream) {
  var _error;
  var _onDrain;

  stream.on('error', function(err) {
    if (_onDrain) _onDrain(err);
    else _error = err;
  });
  stream.on('drain', function() {
    _onDrain && _onDrain();
  });

  this.write = function(data, cb) {
    if (_error) return cb(_error);
    if (data != null) {
      if (!stream.write(data)) {
        _onDrain = function(err) {
          _onDrain = null;
          cb(err);
        };
      } else {
        process.nextTick(cb);
      }
    } else {
      stream.end();
      process.nextTick(cb);
    }
  };
};

The process.nextTick call guarantees that we won’t blow the stack if stream.write always returns true.
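To see why this matters, consider a pump-like loop where each write invokes its callback synchronously: each chunk then adds a stack frame. A small illustrative sketch (the helper names are mine):

```javascript
// writeMany chains n writes, each continuing from the previous callback.
function writeMany(n, write, cb) {
  if (n === 0) return cb();
  write(function() { writeMany(n - 1, write, cb); });
}

// A synchronous write nests one stack frame per chunk:
var depth = 0, maxDepth = 0;
function syncWrite(cb) {
  depth++;
  maxDepth = Math.max(maxDepth, depth);
  cb();
  depth--;
}
writeMany(50, syncWrite, function() {});
// maxDepth is now 50: the stack grows with the number of chunks,
// and a long enough stream would overflow it. Deferring the callback
// with process.nextTick lets the stack unwind between chunks.
```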

The event read wrapper

The event read wrapper is the dual of the callback read wrapper. It implements the node.js readable stream API on top of an asynchronous read(cb) function.

exports.EventReadWrapper = function(stream) {
  var self = this;
  var q = [],
    paused = false;

  function doRead(err, data) {
    if (err) self.emit('error', err);
    else if (data != null) {
      if (paused) {
        q.push(data);
      } else {
        self.emit('data', data);;
      }
    } else {
      if (paused) {
        q.push(null);
      } else {
        self.emit('end');
      }
    }
  }
  self.pause = function() {
    paused = true;
  };
  self.resume = function() {
    var data;
    while ((data = q.shift()) !== undefined) {
      if (data != null) self.emit('data', data);
      else self.emit('end');
    }
    paused = false;;
  };;
};

exports.EventReadWrapper.prototype = new EventEmitter();

The event write wrapper

The event write wrapper is the dual of the callback write wrapper. It implements the node.js writable stream API on top of an asynchronous write(data, cb) function.

exports.EventWriteWrapper = function(stream) {
  var self = this;
  var chunks = [];

  function written(err) {
    if (err) self.emit('error', err);
    else {
      chunks.splice(0, 1);
      if (chunks.length === 0) self.emit('drain');
      else stream.write(chunks[0], written);
    }
  }
  this.write = function(data) {
    chunks.push(data);
    if (chunks.length === 1) stream.write(data, written);
    return chunks.length === 0;
  };
  this.end = function(data) {
    if (data != null) self.write(data);
    self.write(null);
  };
};

exports.EventWriteWrapper.prototype = new EventEmitter();

The combined wrappers

The combined wrappers implement both APIs (read and write). Their implementation is straightforward:

exports.CallbackWrapper = function(stream) {, stream);, stream);
};

exports.EventWrapper = function(stream) {, stream);, stream);
};

exports.EventWrapper.prototype = new EventEmitter();

Equivalence demo

The demo is based on the following program:

"use strict";
var http = require('http');
var zlib = require('zlib');
var util = require('util');
var fs = require('fs');

http.createServer(function(request, response) {
  response.writeHead(200, {
    'Content-Type': 'text/plain; charset=utf8',
    'Content-Encoding': 'deflate'
  });
  var source = fs.createReadStream(__dirname + '/wrappers.js');
  var deflate = zlib.createDeflate();
  util.pump(source, deflate);
  util.pump(deflate, response);
}).listen(1337);
console.log('Server running at');

This is a simple program that serves a static file in compressed form. It uses two util.pump calls. The first one pumps the source stream into the deflate stream and the second one pumps the deflate stream into the response stream.

Then we modify this program to wrap the three streams twice before passing them to util.pump:

"use strict";
var wrappers = require('./wrappers');
var http = require('http');
var zlib = require('zlib');
var util = require('util');
var fs = require('fs');

http.createServer(function(request, response) {
  response.writeHead(200, {
    'Content-Type': 'text/plain; charset=utf8',
    'Content-Encoding': 'deflate'
  });
  var source = fs.createReadStream(__dirname + '/wrappers.js');
  var deflate = zlib.createDeflate();
  source = new wrappers.EventReadWrapper(new wrappers.CallbackReadWrapper(source));
  response = new wrappers.EventWriteWrapper(new wrappers.CallbackWriteWrapper(response));
  deflate = new wrappers.EventWrapper(new wrappers.CallbackWrapper(deflate));
  util.pump(source, deflate);
  util.pump(deflate, response);
}).listen(1337);
console.log('Server running at');

This program works like the previous one (maybe just a little bit slower), which shows that the doubly wrapped streams behave like the original unwrapped streams:

EventWrapper(CallbackWrapper(stream)) <=> stream

Note that this program won’t exercise the full pause/resume/drain API with a small input file like wrappers.js. You have to try it with a large file to exercise all events.

The next demo is a streamline.js variant that transforms the three streams into callback-oriented streams and uses the pump loop that I gave in the introduction:

"use strict";
var wrappers = require('./wrappers');
var http = require('http');
var zlib = require('zlib');
var util = require('util');
var fs = require('fs');

http.createServer(function(request, response) {
  response.writeHead(200, {
    'Content-Type': 'text/plain; charset=utf8',
    'Content-Encoding': 'deflate'
  });
  var source = fs.createReadStream(__dirname + '/wrappers.js');
  var deflate = zlib.createDeflate();
  source = new wrappers.CallbackReadWrapper(source);
  response = new wrappers.CallbackWriteWrapper(response);
  deflate = new wrappers.CallbackWrapper(deflate);
  pump(source, deflate);
  pump(deflate, response);
}).listen(1337);
console.log('Server running at');

function pump(inStream, outStream, _) {
  var data;
  do {
    data =;
    outStream.write(data, _);
  } while (data != null);
}

This program too behaves like the original one.


This experiment demonstrates that event-based and callback-based streams are equivalent. My preference goes to the callback version, as you may have guessed. I’m submitting this as I think that it should be given some consideration when discussing evolutions of the stream API.


  • The APIs are not completely equivalent though. One difference is that the event-driven API supports multiple observers. But in most pumping/piping scenarios the stream has a single observer. And callback APIs can also be tweaked to support multiple observers (streamline’s futures support that).
  • It is also important to verify that the flow control patterns are similar and that, for example, the callback version does not do excessive buffering. This is the case as the queues don’t hold more than two elements in pumping/piping scenarios.

Source code is available as a gist.

Posted in Asynchronous JavaScript, Uncategorized | 6 Comments

node.js for the rest of us

Simple things should be simple. Complex things should be possible.
Alan Kay

I published streamline.js 18 months ago but did not write a tutorial. I just took the time to do it.

The tutorial implements a simple search aggregator application. Here is a short spec for this application:

  • One page with a search field and a submit button.
  • The search is forwarded to Google and the results are displayed in the page.
  • A second search is run on the local tree of files. Matching files and lines are displayed.
  • A third search is run against a collection of movies in a MongoDB database. Matching movie titles and director names are displayed.
  • The 3 search operations are performed in parallel.
  • The file search is parallelized but limited to 100 simultaneous open files, to avoid running out of file descriptors on large trees.
  • The movies collection in MongoDB is automatically initialized with 4 entries the first time the application is run.

The implementation takes 126 lines (it looks nicer on GitHub):

"use strict";
var streams = require('streamline/lib/streams/server/streams');
var url = require('url');
var qs = require('querystring');

var begPage = '<html><head><title>My Search</title></head><body>' + //
'<form action="/">Search: ' + //
'<input name="q" value="{q}"/>' + //
'<input type="submit"/>' + //
'</form>';
var endPage = '<hr/>generated in {ms}ms</body></html>';

streams.createHttpServer(function(request, response, _) {
  var query = qs.parse(url.parse(request.url).query),
    t0 = new Date();
  response.writeHead(200, {
    'Content-Type': 'text/html; charset=utf8'
  });
  response.write(_, begPage.replace('{q}', query.q || ''));
  response.write(_, search(_, query.q));
  response.write(_, endPage.replace('{ms}', new Date() - t0));
}).listen(_, 1337);
console.log('Server running at');

function search(_, q) {
  if (!q || /^\s*$/.test(q)) return "Please enter a text to search";
  try {
    // start the 3 futures
    var googleFuture = googleSearch(null, q);
    var fileFuture = fileSearch(null, q);
    var mongoFuture = mongoSearch(null, q);
    // join the results
    return '<h2>Web</h2>' + googleFuture(_) //
    + '<hr/><h2>Files</h2>' + fileFuture(_) //
    + '<hr/><h2>Mongo</h2>' + mongoFuture(_);
  } catch (ex) {
    return 'an error occurred. Retry or contact the site admin: ' + ex.stack;
  }
}

function googleSearch(_, q) {
  var t0 = new Date();
  var json = streams.httpRequest({
    url: '' + q,
    proxy: process.env.http_proxy
  }).end().response(_).checkStatus(200).readAll(_);
  // parse JSON response
  var parsed = JSON.parse(json);
  // Google may refuse our request. Return the message then.
  if (!parsed.responseData) return "GOOGLE ERROR: " + parsed.responseDetails;
  // format result in HTML
  return '<ul>' + {
    return '<li><a href="' + entry.url + '">' + entry.titleNoFormatting + '</a></li>';
  }).join('') + '</ul>' + '<br/>completed in ' + (new Date() - t0) + ' ms';
}

var fs = require('fs'),
  flows = require('streamline/lib/util/flows');
// allocate a funnel for 100 concurrent open files
var filesFunnel = flows.funnel(100);

function fileSearch(_, q) {
  var t0 = new Date();
  var results = '';

  function doDir(_, dir) {
    fs.readdir(dir, _).forEach_(_, -1, function(_, file) {
      var f = dir + '/' + file;
      var stat = fs.stat(f, _);
      if (stat.isFile()) {
        // use the funnel to limit the number of open files 
        filesFunnel(_, function(_) {
          fs.readFile(f, 'utf8', _).split('\n').forEach(function(line, i) {
            if (line.indexOf(q) >= 0) results += '<br/>' + f + ':' + i + ':' + line;
          });
        });
      } else if (stat.isDirectory()) {
        doDir(_, f);
      }
    });
  }
  doDir(_, __dirname);
  return results + '<br/>completed in ' + (new Date() - t0) + ' ms';
}

var mongodb = require('mongodb'),
  mongoFunnel = flows.funnel(1);

function mongoSearch(_, q) {
  var t0 = new Date();
  var db = new mongodb.Db('tutorial', new mongodb.Server("", 27017, {}));;
  try {
    var coln = db.collection('movies', _);
    mongoFunnel(_, function(_) {
      if (coln.count(_) === 0) coln.insert(MOVIES, _);
    });
    var re = new RegExp(".*" + q + ".*");
    return coln.find({
      $or: [{
        title: re
      }, {
        director: re
      }]
    }, _).toArray(_).map(function(movie) {
      return movie.title + ': ' + movie.director;
    }).join('<br/>') + '<br/>completed in ' + (new Date() - t0) + ' ms';
  } finally {
    db.close();
  }
}

var MOVIES = [{
  title: 'To be or not to be',
  director: 'Ernst Lubitsch'
}, {
  title: 'La Strada',
  director: 'Federico Fellini'
}, {
  title: 'Metropolis',
  director: 'Fritz Lang'
}, {
  title: 'Barry Lyndon',
  director: 'Stanley Kubrick'
}];

I organized the tutorial in 7 steps but I did not have much to say at each step because it just all felt like normal JavaScript code around cool APIs, with the little _ to mark the spots where execution yields.

I’m blogging about it because I think that there is a real opportunity for node.js to attract mainstream programmers. And I feel that this is the kind of code that mainstream programmers would feel comfortable with.

Posted in Asynchronous JavaScript, Uncategorized | 3 Comments

Asynchronous JavaScript with Generators – An Experiment

I have recently added a third blade to my little asynchronous programming swiss army knife: the generator blade.

Looks sharp! Here are some details.


Generators are JavaScript’s flavor of coroutines. They solve one of computer science’s most important problems: generating the mysterious fibonacci numbers!

function genFibos() {
  var f1 = 0, f2 = 1;
  while (true) {
    yield f1;
    var t = f1;
    f1 = f2;
    f2 += t;
  }
}

function printFibos() {
  var g = genFibos();
  for (var i = 0; i < 10; i++) {
    var num =;
    print(num);
  }
}


Generators are characterized by the presence of a yield keyword. Any function that contains a yield, like the genFibos function above, returns a generator.

Generators have a somewhat unusual execution pattern. I’ll quickly describe it using the example above.

The first oddity is that the genFibos() call inside printFibos does not execute the body of the genFibos function. Instead, it just returns a generator which is assigned to the g variable.

The first call to in the printFibos loop starts the generator: the body of the genFibos function executes until it reaches the yield keyword for the first time. At this point, control is transferred from genFibos to the point where was called and the num variable receives the value of f1 (0).

Then, execution continues in printFibos‘s for loop; i is incremented and is called a second time. This call transfers execution to the genFibos function again, at the point where we left it before, i.e. just after the first yield. The genFibos function loops and reaches yield a second time. As previously, control is transferred back to printFibos, and returns the value of f1 (1), which is assigned to num.

The for loop continues, hits another which transfers control to the yield point inside genFibos, which causes genFibos to loop again to the next yield, and so on, and so on.

What we have here is a little dance between printFibos and genFibos, with and yield f1 to jump from one to the other. Coroutines: functions that cooperate by yielding control to each other.

A bit disconcerting at first but it works, and it can help us deal with asynchronous code, as we’ll see shortly.

Generators in JavaScript

Generators are in the works for EcmaScript Harmony but they haven’t landed in V8 yet. So we cannot use them in node.js today.

On the other hand, they have been available for more than 5 years in Firefox (Firefox 2 and up). They are disabled by default but you can enable them by adding a version attribute to the <script> tag:

<script type="application/javascript;version=1.7">
function genFibos() { ... }
function printFibos() { ... }
</script>

And they are also available in luvmonkey, Tim Caswell’s port of libuv to spidermonkey. I’ve used both of these environments in my experiments.


Streamline.js is my little language tool for asynchronous JavaScript programming. It does not help much with fibonacci numbers but it does help a lot with code that sits on top of asynchronous APIs. No need to twist your brain any more; you can just write the code as if the APIs were synchronous, as long as you follow one simple rule:

Pass the _ marker everywhere a callback is expected.

Here is an example:

// waits one second (async/non blocking) and returns result
function delay(_, result) {
  setTimeout(_, 1000);
  return result;
}

function helloWorld(_) {
  print("starting ...");
  print(delay(_, 'hello ...'));
  print(delay(_, '... world'));
}

helloWorld(function(err) {
  if (err) throw err;
  else print("continuing with callback");
});

It executes as follows:

_node helloWorld._js
starting ...
(wait one second)hello ...
(wait one second)... world
continuing with callback

Streamline works by transforming the source, either to callback based code, or to code that uses Marcel Laverdet’s node-fibers library. You can see the callback transformation in action by cutting and pasting the example above in the online demo.

Yielding Hello World!

Supposedly, generators make it easier to write asynchronous code. And they do! Here is the generators version of my little hello world example:

function delay(_, result) {
  yield AsyncGen.invoke(this, setTimeout, [_, 1000], 0);
  yield result;
}

function helloWorld(_) {
  print("starting ...");
  print(yield delay(_, 'hello ...'));
  print(yield delay(_, '... world'));
  yield;
}

function startDemo() {, [function(err) {
    if (err) alert(err);
    else print("continuing with callback");
  }], 0);
}

Quite nice! The general flow is similar to the synchronous flow. Instead of having to rewire everything with callbacks you just need to apply some rather simple rules:

  • First, keep the _ parameter (*) and add a yield keyword in front of all the calls that have this parameter.
  • Replace all return keywords by a yield.
  • Don’t forget to add a yield at the end of functions that don’t end with a return. This is because these functions do return at the end; it is just JavaScript that lets you omit the return when you don’t have a value to return.
  • Use the special AsyncGen.invoke function to call asynchronous APIs that expect a callback, like setTimeout.
  • Use the special function to call any of your generator functions as a node.js callback style function.

(*) You can rename the _ parameter though, as long as you do it consistently.

The other side of the mirror

Looks good. But how will it work? We now have generator functions that call other generator functions, and yield keywords all over the place in our asynchronous code. If you remember the little fibonacci dance that we saw earlier, you can imagine that we’ll need a dance partner to interact with these generator functions. And this time, the dance is going to be wild. So we’ll need a very strong partner on the other side to keep it going!

This strong partner is the function, which gets a little help from AsyncGen.invoke. Here they come:

  var PENDING = {};

  window.AsyncGen = {
    run: function(fn, args, idx) {
a)    var cb = args[idx],
        g;

      function resume(err, val) {
        while (g) {
          try {
b)          val = err ? g.throw(err) : g.send(val);
            err = null;
            // if we get PENDING, the current call completed with a pending I/O
            // resume will be called again when the I/O completes. So just return here.
c)          if (val === PENDING) return;
            // if we get [PENDING, e, r], the current call invoked its callback synchronously
            // we just loop to send/throw what the callback gave us.
d)          if (val && val[0] === PENDING) {
              err = val[1];
              val = val[2];
            }
            // else, if g yielded a value which is not a generator, g is done.
            // so we unwind it and send val to the parent generator (or through cb if we are at the top)
e)          else if (!isGenerator(val)) {
              g = g.prev;
            }
            // else, we got a new generator which means that g called another generator function
            // the new generator becomes current and we loop with g.send(undefined) (equiv to
            else {
f)            val.prev = g;
              g = val;
              val = undefined;
            }
          } catch (ex) {
            // the send/throw call failed.
            // we unwind the current generator and we rethrow into the parent generator (or through cb if at the top)
g)          g.close();
            g = g.prev;
            err = ex;
            val = undefined;
          }
        }
        // we have exhausted the stack of generators.
        // return the result or error through the callback.
h)      cb(err, val);
      }
      // set resume as the new callback
i)    args[idx] = resume;
      // call fn to get the initial generator
j)    g = fn.apply(this, args);
      // start the resume loop
k)    resume();
    },

    invoke: function(that, fn, args, idx) {
      // Set things up so that call returns:
      // * PENDING if it completes with a pending I/O (and cb will be called later)
      // * [PENDING, e, r] if the callback is called synchronously.
      var result = PENDING,
        sync = true;
l)    var cb = args[idx];
      args[idx] = function(e, r) {
m)      if (sync) {
n)        result = [PENDING, e, r];
        } else {
o)        cb(e, r);
        }
      };
p)    fn.apply(that, args);
q)    sync = false;
      return result;
    }
  };

To demonstrate how it works, I’ll use an annotated version of the hello world program:

    function delay(_, result) {
1)    yield AsyncGen.invoke(this, setTimeout, [_, 1000], 0);
2)    yield result;
    }

    function helloWorld(_) {
3)    print("starting ...");
4)    print(yield delay(_, 'hello ...'));
5)    print(yield delay(_, '... world'));
      print("done!");
6)    yield;
    }

    function startDemo() {
7)    AsyncGen.run(helloWorld, [function(err) {
8)      if (err) alert(err);
        else print("continuing with callback");
      }], 0);
9)  }

Here is how execution unfolds (buckle up and be ready for a little rodeo!):

  • 7) startDemo calls AsyncGen.run.
  • a) and i) run stores the callback in cb and replaces it by resume.
  • j) run calls helloWorld with resume as callback. Nothing gets executed in helloWorld but a helloWorld generator is returned and assigned to g.
  • k) run calls resume().
  • b) resume calls g.send(undefined), which is the same as calling Execution jumps to 3) inside helloWorld.
  • 3) helloWorld prints "starting...".
  • 4) helloWorld calls delay. Nothing gets executed in delay but a delay generator is returned and yielded by helloWorld. This yield jumps back to b), where g.send returns the delay generator.
  • b) The delay generator is assigned to val and err is reset.
  • c) d) and e) The tests are false so these if branches are skipped.
  • f) The delay generator is chained with the helloWorld generator and is assigned to g. val is set to undefined.
  • b) resume calls g.send(undefined). Execution jumps to 1) inside delay.
  • 1) delay calls AsyncGen.invoke to invoke setTimeout with resume as callback.
  • l) invoke remembers the callback in cb and replaces it by its own callback.
  • p) invoke calls setTimeout.
  • q) sync is set to false and invoke returns PENDING.
  • 1) We are back inside delay, and PENDING is yielded to the run loop. Execution jumps to b) with PENDING as return value.
  • b) PENDING is assigned to val and err is reset.
  • c) The test is true and resume returns.
  • k) run returns.
  • 9) startDemo returns.
  • Sleeping one second…
  • m) Awaken in setTimeout‘s callback. sync is false.
  • o) resume is called with both parameters undefined.
  • b) resume calls g.send(undefined). Execution jumps to 1) inside delay.
  • 2) delay yields result, which is "hello ...". Execution jumps to b) with "hello ..." as returned value.
  • b) "hello ..." is assigned to val and err is reset.
  • c) and d) The tests are false.
  • e) The test is true. The delay generator is closed and popped from the chain. Now, g points to the helloWorld generator which is where we left it, at 4).
  • b) resume calls g.send("hello ..."). Execution jumps to 4) inside helloWorld.
  • 4) helloWorld prints "hello ...".
  • 5) The same dance happens again with the second delay call, up to c) where the current resume call returns and o), the end of the setTimeout callback.
  • Sleeping one second…
  • m) Awaken in setTimeout‘s callback. Same dance as before until "... world" is returned at 5).
  • 5) helloWorld prints "... world" and then "done!".
  • 6) helloWorld yields. Execution jumps to b) with undefined as returned value.
  • b) undefined is assigned to val and err is reset.
  • c) and d) The tests are false.
  • e) The test is true. The helloWorld generator is closed and popped from the chain. Now, g is undefined.
  • h) The run loop is over. run calls its callback, which was passed by startDemo.
  • 8) startDemo‘s callback prints "continuing with callback".

This was probably tedious and painful to follow but I think that it is worth going through it at least once, step by step, to understand how execution unfolds when mixing generators and asynchronous calls. I find it not very obvious, to say the least, and I had to take a few aspirins to get the code into a simple form that works.

Also, the step by step execution that I just described did not explore all the branches because helloWorld is a well behaved dancer that does not throw exceptions. But the run function is robust enough to cope with less well behaved dancers.

The nice part is that, with these two functions, we can now write async code with normal, sync-like control flow. For example, we can call the delay function in a loop as:

function printNumbers(_, min, max) {
  for (var i = min; i <= max; i++) print(yield delay(_, i));
}

AsyncGen.run(printNumbers, [function(err) {
  if (err) alert(err);
  else print("continuing with callback");
}, 1, 100], 0);

The two functions, run and invoke, allow us to cross the mirror in both directions between the callbacks world and the generators world:

callbacks => run => generators => invoke => callbacks
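The AsyncGen code above is written against the old SpiderMonkey generator API (send, close). As a point of comparison, here is a minimal sketch of the same run/invoke dance against the standard ES6 generator API (next/throw), where a return statement replaces the final yield. This is my own adaptation, not code from the original experiment:

```javascript
var PENDING = {};

function isGenerator(val) {
  return val && typeof val.next === "function" && typeof val.throw === "function";
}

var AsyncGen = {
  run: function(fn, args, idx) {
    var cb = args[idx], g;

    function resume(err, val) {
      while (g) {
        try {
          var r = err ? g.throw(err) : g.next(val);
          err = null;
          val = r.value;
          if (r.done) { g = g.prev; continue; }   // g returned: unwind to its parent
          if (val === PENDING) return;            // pending I/O: resume will be called back
          if (val && val[0] === PENDING) {        // callback fired synchronously: loop
            err = val[1]; val = val[2];
          } else if (isGenerator(val)) {          // g called another generator function: push it
            val.prev = g; g = val; val = undefined;
          } else {                                // plain value yielded: unwind
            g = g.prev;
          }
        } catch (ex) {                            // next/throw failed: rethrow into the parent
          g = g.prev; err = ex; val = undefined;
        }
      }
      cb(err, val);                               // stack exhausted: report through the callback
    }

    args[idx] = resume;
    g = fn.apply(this, args);
    resume();
  },
  invoke: function(that, fn, args, idx) {
    var result = PENDING, sync = true, cb = args[idx];
    args[idx] = function(e, r) {
      if (sync) result = [PENDING, e, r];
      else cb(e, r);
    };
    fn.apply(that, args);
    sync = false;
    return result;
  }
};

// The hello world example then reads:
function* delay(_, result) {
  yield AsyncGen.invoke(null, setTimeout, [_, 10], 0);
  return result;                                  // return instead of a final yield
}

function* greet(_) {
  var a = yield delay(_, "hello");
  var b = yield delay(_, "world");
  return a + " " + b;
}
```

Running AsyncGen.run(greet, [cb], 0) delivers "hello world" to cb after two timer hops, following exactly the dance described above.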

The little trouble with finally

The run function is rather powerful and can handle all JavaScript constructs except one: finally clauses.

The problem is that all return statements are converted to yield statements and run is assuming that the generator is done when it yields a value (e) rather than another generator (f). This works, except in the case where the return was inside a try block with a finally clause. In this case, run must enter the generator again, to execute the finally clause after the return. And, unfortunately, g.send(val) cannot do the job because it would resume just after the return which has been turned into a yield, instead of resuming in the finally clause.

There is nevertheless a workaround, which imposes a small amount of code rewriting. The idea is to rewrite:

function f(_) {
  try {
      return x;
  } finally {
      // cleanup ...
  }
}

into:

function f(_) {
  var result__ = null;
  finally__: do {
    try {
        { result__ = [x]; break finally__; }
    } finally {
      // cleanup ...
    }
  } while (false);
  if (result__) return result__[0];
}

The finally__: do { ... } while (false); loop looks silly but it is labeled and it acts as a goto which lets us move the return outside of the try/finally construct.
Once all the returns have been moved outside of the try/finally, they can be converted to yield and the run function is back on track.
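To check that the trick actually preserves the finally semantics, here is a runnable instance of the rewritten pattern; the doubling and the log array are my own placeholders for the elided bodies:

```javascript
var log = [];

// Rewritten form of:
//   function f(x) { try { return x * 2; } finally { log.push("cleanup"); } }
function f(x) {
  var result__ = null;
  finally__: do {
    try {
      { result__ = [x * 2]; break finally__; }   // the return, moved out of try/finally
    } finally {
      log.push("cleanup");                        // still runs before the break completes
    }
  } while (false);
  if (result__) return result__[0];
}
```

f(21) returns 42 and log receives "cleanup", exactly as the original try/finally would behave; once in this shape, the return at the bottom can safely be turned into a yield for the run loop.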

Looking for the full stack trace, desperately

The second problem that I hit in this experiment was the difficulty to reconstruct the stack trace when catching an exception.

The exception carries the stack trace of the current resume call but I did not find a good way to reconstruct the stack trace that corresponds to the sequence of yield calls. All the information is in the generators that are in the g --> g.prev --> g.prev.prev ... chain but I could not find any API to access this information.

The situation could be improved by making every generator yield upon entry to pass stack trace information to the run function, but this is inefficient and insufficient (it can give the function names but not the line numbers). Things would be much better if generators had an API that run could call to get stack information.

Wrapping up

As I mentioned in the introduction, I have added generators as a third blade to streamline.js. I was quite confident that streamline could be adjusted to generate code that takes advantage of generators but as usual, the proof of the pudding is in the eating. Good news! It works!

You can play with it here:

I’ll probably continue to use streamline, though, even when generators get widely available: the preprocessor handles all the ugly details and hides all the extra yield, invoke and run under the carpet. All that remains is the underscore in every spot where the code yields. Also, streamline provides other features, like futures, that would need to be hand-coded otherwise.

Having three blades makes it easy to compare callbacks, fibers and generators:

First, the generators and fibers transforms produce code that is much leaner and simpler than what the callback transform produces. This does not necessarily mean that they will outperform callbacks because they have their own runtime overhead (coroutines). I did not have time to do serious benchmarking but my experience with real code shows that callbacks win when there is a thin layer of logic on top of I/O calls but fibers can take the advantage when the logic becomes thicker and when caching comes into play.

I cannot benchmark fibers against generators today because they are supported by different engines (V8 and spidermonkey). All I can do is compare the generated code patterns. And here, the fibers transform wins because it does not need to introduce yield at all levels, only at the lowest level. So, fibers have an advantage with thick layers of code, which will need to be balanced with the optimizations that the JS engine may be able to apply to generators.

And I won’t close this post without thanking Marcel one more time. His implementation of the fibers transform gave me a big head start in this experiment.



Node.js: Awesome Runtime and New Age JavaScript Gospel

I am a big fan of node.js but I have a big problem with the core team. No need to hide it and pretend everything is OK. The fibers vs. callback war erupted again this week on the mailing list, with new ballistic benchmark missiles fired from both sides and showers of sarcasm.

This is ridiculous. Time to sit down and blog about it!

Awesome Runtime

Node is an awesome runtime. It is simple; it is fast; it is 100% asynchronous; it is JavaScript. These are the things that make it so attractive for people like me. I’m sold on the “end to end JavaScript” story and node is an important piece of that story. I’ve also done my share of classical thread programming (Java, .NET) and I’m now deeply convinced that async is the way to go.

Node also comes with a good package system that manages dependencies in a very decoupled way and a simple tool to publish, install and update packages. I also like the fact that most components are published under the very liberal MIT license and that Windows, which is important for us, is supported as a first class platform.

Definitely, node.js is an awesome runtime. I have not regretted for a single day having chosen it for one of our projects.

New Age JavaScript Gospel

So the core team does a great job on the runtime, but it also seems invested with another mission: evangelizing a new way of programming, based on callbacks, streams and pipes.

To develop on node, you have to learn callbacks because this is how JavaScript deals with asynchronous code. You may find it hard but you are wrong:

It is not hard, you just need to learn callbacks.

Also, some people may tell you that they have found ways to ease your pain, but you should not listen to them; they are just heretics who are trying to divert you from the true and only way to node.

You *have* to learn callbacks!

So you will learn callbacks! And you’ll probably find out that they are not so hard after all. It is mostly a question of knowing a few patterns and applying them. It is more painful than hard. But it is also more error prone; it forces you to write more code; and the code is more difficult to read and modify because of all the “pattern noise”.

You’ll probably think that all this might be fine for a personal project but that it doesn’t scale well to large projects. Costs will rise at all levels: training costs to get people up to speed with callbacks, development costs because of extra code to write and of more difficult code reviews, quality costs because of more fragile code, maintenance costs, etc.

So, you may come back to the mailing list with a question like:

Fine, now I understand callbacks but I still have problems. Isn’t there a better way?

And you’ll get the same answer:

No. Callbacks are perfectly fine! You just need to refactor your logic and you should try to reformulate your problem with streams and pipes.

And don’t listen to the sirens who say that that they have solutions for you and that you shouldn’t bother with callbacks.

You *have* to learn callbacks and streams!

Someone might add:

Your application is probably not a good fit for node. You should have chosen PHP or Ruby instead.

But you want to use node because you like JavaScript and node’s asynchronous model.

I don’t know what you’ll do at this point. One possibility is that you’ll follow the party line: you will write a stream. It might not solve the problem you were trying to solve in the first place but you’ll be able to post your success story on the mailing list and you’ll get plenty of kudos from the core team.

The worst is that this is hardly a caricature. Node is not just a runtime; it comes with a gospel:

You have to program with callbacks and you have to rethink your application as streams and pipes.

The gospel is wrong!

Asynchronous !== Callbacks

Part of the problem comes from an unfortunate identification between asynchronous programming and callbacks. Asynchronous programming and callbacks are seen by many as one and the same thing. For them, programming asynchronously means programming with callbacks.

This is just plain wrong: asynchronism is the essence of node while callbacks are just an accident of node. They belong to different levels.

Asynchronism is a behavior. It is essential in node because node is all about performance and asynchronous I/O. Synchronous (blocking) I/O is a disaster for node.

Callbacks are just an artifact that JavaScript gives us to deal with asynchronous behaviors. Today, JavaScript only gives us this artifact, but tomorrow it will give us other artifacts: generators and yield. Callbacks are just an accident.

And, BTW, callbacks are probably not the best artifact to express asynchronism because they are *not* essentially asynchronous. Just consider the callbacks passed to Array.prototype.forEach: those are invoked synchronously.
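This is easy to verify: the forEach callback has already run when the call returns, while a setTimeout callback has not:

```javascript
var seen = [];
[1, 2, 3].forEach(function (n) { seen.push(n * 10); });
// seen is already [10, 20, 30] here: forEach invoked its callback synchronously

var later = [];
setTimeout(function () { later.push("tick"); }, 0);
// later is still [] here: the setTimeout callback only runs on a future tick
```

Same artifact, two completely different behaviors: nothing in the callback itself tells you which one you get.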

IMO, the artifact that I have introduced in streamline.js (the _ marker), and that has been the target of so many sarcastic comments on the mailing list, is a better artifact because it captures the essence of asynchronism in code (the points where execution yields). So, in a sense, streamline.js provides a cleaner model for asynchronous programming than callbacks.

Every piece of software is not a proxy

The nirvana of node’s gospel is a program built entirely with streams and pipes. This comes from a vision that every piece of software can be thought of as being some kind of proxy and that systems can be built by connecting streams with pipes. This is a very appealing vision and it would just be so great if it would apply to every domain.

It does not!

The streams and pipes vision will likely work well in domains that are naturally event driven. This is the case of most front-end systems. But what about back-end systems? Most of the time these are more goal driven than event driven: they don’t push much data to other layers; instead, they respond to requests/questions by pulling data from other systems. I don’t see the streams and pipes model fitting too well in this context (but maybe it is just my imagination which is limited).

Does this mean that node.js should be ruled out for back-end systems and that I am just plain wrong because I’m trying to use it in areas where it does not fit? I don’t think so. Back-end systems pull data. This means I/O. And the best way to do I/O is the asynchronous way. So why not node.js?

Moving Forwards

I wrote all this because I was getting really fed up with the childish attitude on the mailing list and the difficulty of getting into interesting discussions.

As I said in the introduction, I’m a big fan of node.js. It is a great runtime and I’ll continue to use it.

I don’t agree with the gospel and I think that it is totally counterproductive.

The simple fact that the “callback hell” question comes back so regularly on the mailing list should raise a red flag. There is a reality behind this question and this reality cannot be ignored forever.

The gospel is counterproductive because it slows down the adoption of node. The current technical entry ticket is just too high. The core team is young and is probably used to working with high profile software developers. I am working with a mix of high profile and normal developers (people who have very valuable domain knowledge but are less savvy technically) and for them “JavaScript with callbacks” is just a no go.

There is a big opportunity for node.js to compete with PHP and the like, but it won’t succeed if it keeps the bar so high.

It is also counterproductive because people don’t use the tools that would allow them to develop 100% asynchronous modules without pain. So, what they end up doing instead is bury a few Sync calls in their modules, thinking that “well, this is just a little bit of file I/O; it won’t hurt” (I hit two cases like this recently and I had to fork and streamline them to eliminate the Sync calls).

And it is counterproductive because it pollutes the discussions, blurs the message and upsets everyone.


Fibers and Threads in node.js – what for?

I like node.js, and I’m not the only one, obviously! I like it primarily for two things: it is simple and it is very fast. I already said it many times but one more won’t hurt.

Before working with node, I had spent many years working with threaded application servers. This was fun sometimes but it was also often frustrating: so many APIs to learn, so much code to write, so many risks with critical sections and deadlocks, such a waste of costly system resources (stacks, locks), etc. Node came as a breath of fresh air: a simple event loop and callbacks. You can do a lot with so little, and it really flies!

But it does not look like we managed to eradicate threads. They keep coming back. At the beginning of last year Marcel Laverdet opened Pandora’s box by releasing node-fibers: his threads are a little greener than our old ones but they have some similarities with them. And this week the box got wide open as Jorge Chamorro Bieling released threads_a_gogo, an implementation of real threads for node.js.

Isn’t that awful? We were perfectly happy with the event loop and callbacks, and now we have to deal with threads and all their complexities again. Why on earth? Can’t we stop the thread cancer before it kills us!

Well. First, things aren’t so bad because fibers and threads did not make it into node’s core. The core is still relying only on the event loop and callbacks. And it is probably better this way.

And then maybe we need to overcome our natural aversion for threads and their complexities. Maybe these new threads aren’t so complex after all. And maybe they solve real problems. This is what I’m going to explore in this post.

Threads and Fibers

The main difference between fibers and real threads is on the scheduling side: threads use implicit, preemptive scheduling while fibers use explicit, non-preemptive scheduling. This means that threaded code may be interrupted at any point, even in the middle of evaluating an expression, to give CPU cycles to code running in another thread. With fibers, these interruptions and context switches don’t happen randomly; they are in the hands of the programmer, who decides where his code is going to yield and give CPU cycles to other fibers.

The big advantage of fiber’s explicit yielding is that the programmer does not need to protect critical code sections as long as they don’t yield. Any piece of code that does not yield cannot be interrupted by other fibers. This means a lot less synchronization overhead.

But there is a flip side to the coin: threads are fair; fibers are not. If a fiber runs a long computation without yielding, it prevents other fibers from getting CPU cycles. This phenomenon, known as starvation, is not new in node.js: it is inherent to node’s event loop model; if a callback starts a long computation, it blocks the event loop and prevents other events from getting their chance to run.

Also, threads take advantage of multiple cores. If four threads compete for CPU on a quad-core processor, each thread gets 100% (or close) of a core. With fibers there is no real parallelism; at one point in time, there is only one fiber that runs on one of the cores and the other fibers only get a chance to run at the next yielding point.

Fibers – What for?

So, it looks like fibers don’t bring much to the plate. They don’t allow node modules to take advantage of multiple cores and they have the same starvation/fairness issues as the basic event loop. What’s the deal then?

Fibers were introduced and are getting some love primarily because they solve one of node’s big programming pain points: the so-called callback pyramid of doom. The problem is best demonstrated by an example:

function archiveOrders(date, cb) {
  db.connect(function(err, conn) {
    if (err) return cb(err);
    conn.query("select * from orders where date < ?",  
               [date], function(err, orders) {
      if (err) return cb(err);
      helper.each(orders, function(order, next) {
        conn.execute("insert into archivedOrders ...", 
                     [, ...], function(err) {
          if (err) return cb(err);
          conn.execute("delete from orders where id=?", 
                       [], function(err) {
            if (err) return cb(err);
            next();
          });
        });
      }, function() {
        console.log("orders have been archived");
        cb();
      });
    });
  });
}
This is a very simple piece of business logic but we already see the pyramid forming. Also, the code is polluted by lots of callback noise. And things get worse as the business logic gets more complex, with more tests and loops.

Fibers, with Marcel’s futures library, let you rewrite this code as:

var archiveOrders = (function(date) {
  var conn = db.connect().wait();
  conn.query("select * from orders where date < ?",  
             [date]).wait().forEach(function(order) {
    conn.execute("insert into archivedOrders ...", 
                 [, ...]).wait();
    conn.execute("delete from orders where id=?", 
                 []).wait();
  });
  console.log("orders have been archived");
}).future();

The callback pyramid is gone; the signal to noise ratio is higher, asynchronous calls can be chained (for example query(...).wait().forEach(...)), etc. And things don’t get worse when the business logic gets more complex. You just write normal code with the usual control flow keywords (if, while, etc.) and built-in functions (forEach). You can even use classical try/catch exception handling and you get complete and meaningful stack traces.

Less code. Easier to read. Easier to modify. Easier to debug. Fibers clearly give the programmer a better comfort zone.

Fibers make this possible because they solve a tricky topological problem with callbacks. I’ll try to explain this problem on a very simple example:

db.connect(function(err, conn) {
  if (err) return cb(err);
  // conn is available in this block
});
// Would be nice to be able to assign conn to a variable 
// in this scope so that we could resume execution here 
// rather than in the block above.
// But, unfortunately, this is impossible, at least if we 
// stick to vanilla JS (without fibers).

The topological issue is that the conn value is only accessible in the callback scope. If we could transfer it to the outer scope, we could continue execution at the top level and avoid the pyramid of doom. Naively we would like to do the following:

var c;
db.connect(function(err, conn) {
  if (err) return cb(err);
  c = conn;
});
// conn is now in c (???)
doSomething(c);

But it does not work because the callback is invoked asynchronously. So c is still undefined when execution reaches doSomething(c). The c variable gets assigned much later, when the asynchronous connect completes.
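A small runnable check makes this concrete; connect below is a hypothetical stand-in for db.connect that, like any node I/O call, completes on a later tick:

```javascript
// hypothetical stand-in for db.connect: delivers its result asynchronously
function connect(cb) {
  setTimeout(function () { cb(null, "the connection"); }, 0);
}

var c;
connect(function (err, conn) {
  c = conn;
});
var seenAtCallTime = c;   // still undefined: the callback has not run yet
```

Only on a later tick, after the callback has fired, does c finally hold the connection; by then the code that needed it has already run.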

Fibers make this possible, though, because they provide a yield function that allows the code to wait for an answer from the callback. The code becomes:

var fiber = Fiber.current;
db.connect(function(err, conn) {
  if (err) return fiber.throwInto(err);
  fiber.run(conn);
});
// Next line will yield until fiber.throwInto 
// or are called
var c = Fiber.yield();
// If fiber.throwInto was called we don't reach this point 
// because the previous line throws.
// So we only get here if was called and then 
// c receives the conn value.
// Problem solved! 

Things are slightly more complex in real life because you also need to create a Fiber to make it work.

But the key point is that the yield/run/throwInto combination makes it possible to transfer the conn value from the inner scope to the outer scope, which was impossible before.

Here, I dived into the low level fiber primitives. I don’t want this to be taken as an encouragement to write code with these primitives because this can be very error prone. On the other hand, Marcel’s futures library provides the right level of abstraction and safety.

And, to be complete, it would be unfair to say that fibers solve just this problem. They also enable powerful programming abstractions like generators. But my sense is that the main reason why they get so much attention in node.js is because they provide a very elegant and efficient solution to the pyramid of doom problem.

Sponsored Ad

The pyramid of doom problem can be solved in a different way, by applying a CPS transformation to the code. This is what my own tool, streamline.js, does. It leads to code which is very similar to what you’d write with fiber’s futures library:

function archiveOrders(date, _) {
  var conn = db.connect(_);
  flows.each(_, conn.query("select * from orders where date < ?",  
                           [date], _), function(_, order) {
    conn.execute("insert into archivedOrders ...", 
                 [, ...], _);
    conn.execute("delete from orders where id=?", 
                 [], _);
  });
  console.log("orders have been archived");
}

The signal to noise ratio is even slightly better as the wait() and future() calls have been eliminated.

And streamline gives you the choice between transforming the code into pure callback code, or into code that takes advantage of the node-fibers library. If you choose the second option, the transformation is much simpler and preserves line numbers. And the best part is that I did not even have to write the fibers transformation, Marcel offered it on a silver plate.

Wrapping up on fibers

In summary, fibers don’t really change the execution model of node.js. Execution is still single-threaded and the scheduling of fibers is non-preemptive, just like the scheduling of events and callbacks in node’s event loop. Fibers don’t really bring much help with fairness/starvation issues caused by CPU intensive tasks either.

But, on the other hand, fibers solve the callback pyramid of doom problem and can provide a great relief to developers, especially those who have thick layers of logic to write.

Threads – What for?

As I said in the intro, threads landed in node this week, with Jorge’s threads_a_gogo implementation (and I had a head start on them because Jorge asked me to help with beta testing and packaging). What do they bring to the plate? And this time we are talking about real threads, not the green kind. Shouldn’t we be concerned that these threads will drag us into the classical threading issues that we had avoided so far?

Well, the answer is loud and clear: there is nothing to be worried about! These threads aren’t disruptive in any way. They won’t create havoc in what we have. But they will fill an important gap, as they will allow us to handle CPU intensive operations very cleanly and efficiently in node. In short, all we get here is bonus!

Sounds too good to be true! Why would these threads be so good when we had so many issues with threads before? The answer is simple: because we had the wrong culprit! The problems that we had were not due to the threads themselves, they were due to the fact that we had SHARED MUTABLE STATE!

When you are programming with threads in Java or .NET or other similar environments, any object which is directly or indirectly accessible from a global variable, or from a reference that you pass from one thread to another, is shared by several threads. If this object is immutable, there is no real problem because no thread can alter it. But if the object is mutable, you have to introduce synchronization to ensure that the object’s state is changed and read in a disciplined way. If you don’t, some thread may access the object in an inconsistent state because another thread was interrupted in the middle of a modification on the object. And then things usually get really bad: incorrect values, crashes because data structures are corrupted, etc.

If you have shared mutable state, you need synchronization. And synchronization is a difficult and risky art. If your locks are too coarse you get very low throughput because your threads spend most of their time waiting on locks. If they are too granular, you run the risk of missing some edge cases in your locking strategy or of letting deadlocks creep in. And, even if you get your synchronization right, you pay a price for it because locks are not free and don’t scale well.

But threads a gogo (I’ll call them TAGG from now on) don’t share mutable state. Each thread runs in its own isolate, which means that it has its own copy of the Javascript code, its own global variables, its own heap and stack. Also, the API does not let you pass a reference to a mutable Javascript object from one thread to another. You can only pass strings (which are immutable in Javascript) (*). So you are on the safe side, you don’t run the risk of having one thread modify something that another thread is accessing at the same time. And you don’t need synchronization, at least not the kind you needed around shared mutable objects.

(*) it would be nice to be able to share frozen objects across threads. This is not available in the first version of TAGG but this may become possible in the future. TAGG may also support passing buffers across thread boundaries at some point (note that this may introduce a limited, but acceptable, form of shared state).

I hope that I have reassured the skeptics at this point. As Jorge puts it, these threads aren’t evil. And actually, they solve an important problem which was dramatized in a blog post a few months ago: node breaks on CPU intensive tasks. The blog post that I’m referring to was really trashy and derogatory and it was making a huge fuss about a problem that most node applications won’t have. But it cannot be dismissed completely: some applications need to make expensive computations, and, without threads, node does not handle this well, to say the least, because any long running computation blocks the event loop. This is where TAGG comes to the rescue.

If you have a function that uses a lot of CPU, TAGG lets you create a worker thread and load your function into it. The API is straightforward:

var TAGG = require('threads_a_gogo');

// our CPU intensive function
function fibo(n) { 
  return n > 1 ? fibo(n - 1) + fibo(n - 2) : 1;
}

// create a worker thread
var t = TAGG.create();
// load our function into the worker thread
t.eval(fibo);
Once you have loaded your function, you can call it. Here also the API is simple:

t.eval("fibo(30)", function(err, result) {
  console.log("fibo(30)=" + result);
});
The function is executed in a separate thread, running in its own isolate. It runs in parallel with the main thread. So, if you have more than one core the computation will run at full speed in a spare core, without any impact on the main thread, which will continue to dispatch and process events at full speed.

When the function completes, its result is transferred to the main thread and dispatched to the callback of the t.eval call. So, from the main thread, the fibo computation behaves like an ordinary asynchronous operation: it is initiated by the t.eval call and the result comes back through a callback.

Often you’ll have several requests that need expensive computations. So TAGG comes with a simple pool API that lets you allocate several threads and dispatch requests to the first available one. For example:

var pool = TAGG.createPool(16);
// load the function into all 16 threads
pool.all.eval(fibo);
// dispatch the request to the first available thread
pool.any.eval("fibo(30)", function(err, result) {
  console.log("fibo(30)=" + result);
});

TAGG also provides support for events. You can exchange events in both directions between the main thread and worker threads. And, as you probably guessed at this point, the API is naturally aligned on node’s EventEmitter API. I won’t give more details here but the TAGG module contains several examples.

A slight word of caution though: this is a first release so TAGG may lack a few usability features. The one that comes first to mind is a module system to make it easy to load complex functions with code split in several source files. And there are still a lot of topics to explore, like passing frozen objects or buffers. But the implementation is very clean, very simple and performance is awesome.

Wrapping up on threads

Of course, I’m a bit biased because Jorge involved me in the TAGG project before the release. But I find TAGG really exciting. It removes one of node’s main limitations, its inability to deal with intensive computations. And it does it with a very simple API which is completely aligned on node’s fundamentals.

Actually, threads are not completely new to node and you could already write addons that delegate complex functions to threads, but you had to do it in C/C++. Now, you can do it in Javascript. A very different proposition for people like me who have invested a lot in Javascript recently, and not much in C/C++.

The problem could also be solved by delegating long computations to child processes, but this is costlier and slower.

From a more academic standpoint, TAGG brings a first bit of Erlang’s concurrency model, based on share-nothing threads and message passing, into node. An excellent move.

Putting it all together

I thought that I was going to write a short post, for once, but it looks like I went overboard, as usual. So I’ll quickly recap by saying that fibers and threads are different beasts and play different roles in node.

Fibers introduce powerful programming abstractions like generators and fix the callback pyramid problem. They address a usability issue.

Threads, on the other hand, fix a hole in node’s story, its inability to deal with CPU intensive operations, without having to dive into C/C++. They address a performance issue.

And the two blend well together (and — sponsored ad — they also blend with streamline.js), as this last example shows:

var pool = TAGG.createPool(16);
pool.all.eval(fibo);
console.log("fibo(30)=" + pool.any.eval("fibo(30)", _));

Kudos to Marcel and Jorge for making these amazing technologies available to the community.
