Asynchronous Javascript – the tale of Harry

Catching Up

Before telling you about Harry, I’ll start with a short account of what I went through recently so that those who know me from long ago can understand how I ended up being involved in asynchronous Javascript development, and what that means.

After many years of programming with mainstream OO languages (Java and C#), I eventually decided to give up in the summer of 2009 and I switched to Javascript. The trigger was jQuery. I used it to experiment with HTML 5 and it helped me realize that the world had changed and that many of my previous beliefs were just wrong. Functional programming, which I had put aside 20 years ago to focus on mainstream OO, was coming back, with new ideas and a huge potential. I also realized that strong typing, which I had been worshiping for years, was somewhat of a neurotic thing. It makes you feel safe but you pay a high price for it (you write a lot of code just to please the compiler) and it actually misses the point (you should be relying on unit tests that check the semantics rather than on compilers that only check formal constraints). It also introduces tight coupling between code modules, and makes the code more rigid than necessary. jQuery really made me rediscover the pleasure of functional programming and convinced me that Javascript was not a toy language but most likely a very important language for the future.

Then, I thought that if I was going to invest a lot in Javascript and jQuery for the client side, I might as well try to use it also on the server side. This would make it possible to reuse code between client and server. It would also make the development process simpler: one language to learn, common methodologies and tools, etc. This is how I ended up in the SSJS (server side Javascript) world.

So, about 18 months ago, we (I had taken the lead of a new team for a new project in the meantime) started working with Helma NG (now RingoJS). We quickly switched to Narwhal which seemed to have more traction at the time. And we were keeping an eye on a blip that was getting bigger on our radar screens: node.js. It looked amazing but I wondered if it would be wise to drag a Sage project into it. Selling SSJS as a viable platform for future applications had already been a bold move but with node.js we were crossing the line between being leading-edge and bleeding-edge!

But as we were moving forward with Narwhal, it became clearer every day that node.js was really where the future of SSJS was shaping up. There was a vibrant community around it. And an incredibly fast Javascript engine! So we ended up making the switch in the spring of last year.

Asynchronous Javascript in node.js

Node.js is really cool. Small, simple, very fast, etc. But it takes a radical approach on concurrency: no threads, asynchronous I/O instead. This has a profound impact on the way code is written.

Node.js provides two API styles to deal with asynchronous programming:

  • An event style
  • A callback style

The event style allows you to emit events from various points in your code, and set up listeners that catch these events and act upon them somewhere else. The flow of control is highly non-local and a bit similar to what we classically use for exception handling. It works really well for I/O handling (HTTP client and server for example) but I have a hard time imagining business application developers writing their mundane business logic with this kind of non-local flow of control.

So we would likely end up writing most of our code in the callback style, which is what node.js proposes for local flows that call asynchronous functions. The emblematic pattern for an asynchronous call is the following:

asyncFunc(args, function(err, result) {
  if (err)
    // error: propagate it or handle it
  else
    // do something with result
});
Every asynchronous function takes an extra argument which is a callback. When the asynchronous operation completes, the callback is executed. If the operation fails, an error object (err) is passed as first argument to the callback. If the operation succeeds, the first argument is set to null and an optional result may be passed through the second argument. A simple and straightforward design (like most things in node.js)! The new thing is that node.js is highly asynchronous, so this pattern is not anecdotal as it may have been before. In node.js, it is omnipresent.
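To make the convention concrete, here is a minimal sketch. squareAsync is an invented function, but its shape matches every asynchronous node.js API:

```javascript
// A hypothetical async function following node's convention:
// the last argument is a callback(err, result).
function squareAsync(n, callback) {
  // defer with setTimeout to simulate asynchronous completion
  setTimeout(function() {
    if (typeof n !== "number") callback(new Error("not a number"));
    else callback(null, n * n);
  }, 0);
}

squareAsync(4, function(err, result) {
  if (err) return console.error(err);
  console.log(result); // prints 16
});
```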

Harry’s first steps

So now comes the time to tell you my little story about Harry. Harry is an experienced programmer who is taking his first steps in node.js. He has done some async programming before but never with a system that is as pervasively asynchronous as node.

To get familiar with node’s APIs, Harry decides to implement a function that traverses directories to compute disk usage. In his previous life he would have written it as:

function du(path) {
  var total = 0;
  var stat = fs.stat(path);
  if (stat.isFile()) {
    total += fs.readFile(path).length;
  }
  else if (stat.isDirectory()) {
    var files = fs.readdir(path);
    for (var i = 0; i < files.length; i++) {
      total += du(path + "/" + files[i]);
    }
    console.log(path + ": " + total);
  }
  else {
    console.log(path + ": odd file");
  }
  return total;
}

In node.js, the fs.stat, fs.readFile and fs.readdir calls are asynchronous. So, the du function itself must be asynchronous too. Its signature becomes:

function du(path, callback)

where callback is a node.js callback function that du will use to return its result. The signature of the callback is the following:

callback(err, result)

So Harry tries to adapt his du implementation for node.js. He quickly reaches the following point:

function du(path, callback) {
  var total = 0;
  fs.stat(path, function(err, stat) {
    if (err) { callback(err); return; }
    if (stat.isFile()) {
      fs.readFile(path, function(err, data) {
        if (err) { callback(err); return; }
        total += data.length;
        // (a) what do I do here?
      });
      // (b) and here?
    }
    else if (stat.isDirectory()) {
      fs.readdir(path, function(err, files) {
        if (err) { callback(err); return; }
        // (c) is this right?
        for (var i = 0; i < files.length; i++) {
          du(path + "/" + files[i], function(err, len) {
            if (err) { callback(err); return; }
            total += len;
            // (d) what do I do here?
          });
          // (e) and here?
        }
        // (f) this does not sound right either!
        console.log(path + ": " + total);
      });
      // (g) what do I do here?
    }
    else {
      console.log(path + ": odd file");
      // (h) and here?
    }
  });
  // (i) sounds right, but not in the right place.
  callback(null, total);
}

He has started to introduce the callbacks but he is hitting some difficulties and a lot of questions have arisen. After a bit of thinking, Harry figures out the answers to many of his questions:

At spots (b), (e), (g) and (i) I should not have any code because these statements follow an async call.

(i) is misplaced. I need to make 3 copies of it. The first two will go to spots (a) and (h). The third copy will go somewhere in the for branch where I am a bit lost at this point. I note that I have been lucky in this example because there is a single statement after the if/else if/else branching sequence. If there had been more code, I would have had to copy a whole block 3 times. So I’ll probably need to package the trailing statements into a function the next time I hit this kind of branching code.

So Harry makes the changes and he is now left with (c), (d) and (f), that is the branch with the for loop. Let’s look at it again:

      fs.readdir(path, function(err, files) {
        if (err) { callback(err); return; }
        // (c) is this right?
        for (var i = 0; i < files.length; i++) {
          du(path + "/" + files[i], function(err, len) {
            if (err) { callback(err); return; }
            total += len;
            // (d) what do I do here?
          });
        }
        // (f) this does not sound right either!
        console.log(path + ": " + total);
      });

After a bit more investigation, Harry comes up with the following conclusions:

(c) is clearly wrong. du is an async function. So if I leave the loop like this, all the du calls will execute in parallel and I have no way of collecting all the results and continuing!

(d): this seems to be where I need to continue looping.

(f) is clearly misplaced as it will execute before any of the loop iterations get a chance to run. Maybe I’ll know what to do with it once I have fixed (c) and (d).

Fortunately, Harry is a smart guy. His observation at (d) leads him to conclude that he should restructure the loop and handle it recursively rather than iteratively. After a bit of time, he comes up with the following solution:

      fs.readdir(path, function(err, files) {
        if (err) { callback(err); return; }
        function loop(i) {
          if (i < files.length) {
            du(path + "/" + files[i], function(err, len) {
              if (err) { callback(err); return; }
              total += len;
              loop(i + 1);
            });
          }
          else {
            // loop is done. Execute last statement and then callback to return value
            console.log(path + ": " + total);
            callback(null, total);
          }
        }
        // start the loop
        loop(0);
      });

Bingo! Harry is happy. He can write simple algorithms like this with asynchronous functions. It just seems to require a few more brain cycles than before, but this is feasible.


A few weeks have gone by. Harry has written a few modules in node.js and he starts to feel more comfortable with callbacks. He actually came up with some patterns that help him write robust code without too many headaches.

The first pattern is actually a helper function for loops:

function asyncForEach(array, iterator, then) {
  function loop(i) {
    if (i < array.length) {
      iterator(array[i], function() {
        loop(i + 1);
      });
    }
    else {
      then();
    }
  }
  loop(0);
}

With this helper function, he can now write his loops as:

asyncForEach(array, function(item, next) {
  // body of my loop
  somethingAsync(function(err, result) {
    if (err) { callback(err); return; } // I'm starting to get tired of writing this one!
    // do something with item and result
    next(); // don't forget me at the end of every code path
  });
}, function() {
  // this is where execution resumes after the loop
});

He also came up with a small funny construct that helps him deal with branching code (he calls it the branch neutralizer):

(function(next) {
  // if or switch statement with branches that may mix sync and async calls.
  // All code paths must end up calling next() or callback(null, result)
})(function() {
  // this is where execution resumes after the branches
});

Also, he is now mentally wired to automatically replace return statements by statements like { callback(err, result); return; } that he often simplifies as return callback(err, result); as this is more compact and nobody actually cares about the values returned by asynchronous functions.

Harry feels more relaxed. He now has patterns and a methodology that lets him deal with node.js APIs without too many headaches. He has also published these patterns on the team’s wiki to foster consistent coding practices inside his team.

Looking back

Then summer arrives and Harry takes a well deserved vacation break on the French Riviera. When he comes back, he has not lost his newly acquired programming skills, but sometimes he wonders:

In my previous life, I could write algorithms in a very natural form. The code that I was writing was directly related to the problem I was trying to solve. There was no noise between the statements. Also, I could easily chain calls. For example I could write something as simple as:

total += fs.readFile(path).length;

Now, despite all the patterns that I have found, I still have to write something like:

fs.readFile(path, function(err, data) {
  total += data.length;
  // continue here ..
});

Isn’t there something wrong here? I can write the same kind of algorithms as before in this amazing node.js environment but the code that I write contains a lot more noise than before, natural chains are broken, etc.

Actually, I often feel like working with a crippled language. As soon as I have asynchronous calls in scope, I cannot use while and for loops any more. Of course, I cannot use break and continue either. I also need to neutralize my if and switch statements. And I have completely given up on try/catch/finally for now. I’ll see later if I can find a pattern for it. The only keyword which seemed to have survived (and actually prospered) is function.
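A tiny sketch makes the try/catch problem concrete. mightFail is an invented async function; the point is that the catch block never runs, because the try block has already been exited by the time the error is reported:

```javascript
// An invented async function that always fails, node-style.
function mightFail(callback) {
  setTimeout(function() {
    callback(new Error("boom"));
  }, 0);
}

var caught = false;
try {
  mightFail(function(err) {
    // the error arrives here, long after the try block has been exited
    if (err) console.log("reported via callback: " + err.message);
  });
} catch (e) {
  caught = true; // never reached: nothing threw synchronously
}
console.log(caught); // false
```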

Sounds like a regression. What am I getting in return?

Sure, I know. I may get a raise. I was a dull classical programmer and I am now part of an elite of bleeding edge programmers who know how to do asynchronous programming in this trendy node.js environment. My management needs to know that!

Also, I can write programs that run fast and consume very little system resources because they run in this amazing new server.

But instead of spending my time crafting clever algorithms with simple control flow statements, I now find myself spending a significant part of my time applying the same silly patterns over and over. The code that I write is harder to read and the beauty of my algorithms is buried in lots of callback noise. Also, I need a lot more concentration because as soon as I miss one of these next() calls, I break a callback chain and my code just goes nowhere without returning a result, and it can take time to find out where I have forgotten to call next().

Also I had a bit of hope that async programming would allow me to write more parallel code and that for example, I could take advantage of the (b), (e), (g) and (i) spots in my code to do something clever after firing the async calls but I find it hard to exploit those spots. I haven’t yet found how I could take advantage of them. Maybe I’m not looking in the right place. I feel frustrated.

Haven’t I become some kind of slave programmer? Wouldn’t it make sense to have a machine take care of all the tedious callback patterns for me, so that I could go back to writing beautiful algorithms?

Harry also remembered something from his CS classes and he added the following:

Node.js is actually a new run-time environment that I am targeting when I am writing code. When I write a function like du, what the function does is no different than what my old du function did in my old environment. Also, the library functions that I am calling (fs.stat, fs.readdir and fs.readFile) are similar to the ones I had before (what they do, not how they do it). And I am using a high level programming language which is supposed to screen me from differences between execution engines. So why should I write different code? It looks like something is interfering when it should not and that I’m being asked to deal with a problem that the language tools should handle. Am I still programming in a high level language?


I’ll leave Harry to his thoughts for a moment.

As you can guess, I went through this too. And what I ended up finding is that Harry is right. He is writing code that a machine could write. So why not let the machine write it for him?

I was actually hoping that a solution would emerge from the community some day. The first signs of hope came from the promise pattern. This is an attempt to solve the problem with special libraries rather than by transforming code. The promise pattern actually improves the situation when the code contains a sequence of async calls that don’t return values. But as soon as the flow hits an asynchronous call that returns a value, a callback pops up. So it only solves part of the problem. I tried to understand where the limit was and a simple topological consideration convinced me that even the smartest of us would not be able to fully solve Harry’s problem and eliminate all the callback noise with any library-based approach. We had to step out and transform the code.
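To illustrate where the limit lies, here is a sketch using the modern standard Promise as a stand-in for the promise libraries of the time. The sequencing reads naturally, but as soon as a value comes back, a callback (the then handler) reappears. statP and readFileP are hypothetical promise-returning wrappers:

```javascript
// Hypothetical promise-returning wrappers, stubbed out for illustration.
function statP(path) { return Promise.resolve({ size: 42 }); }
function readFileP(path) { return Promise.resolve("contents"); }

statP("/tmp/x")
  .then(function(stat) {    // a callback pops up to receive the value
    return readFileP("/tmp/x");
  })
  .then(function(data) {    // and another one for the next value
    console.log(data.length); // 8
  });
```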

The second sign of hope came from Neil Mix’s narrative.js. This is a compiler tool that relies on a small extension to the language (a special yielding operator) to convert synchronous-looking code into its callback-based equivalent. Unfortunately it suffers from two flaws:

  • It introduces new syntax in the language: this makes the source incompatible with all sorts of tools that understand the Javascript syntax
  • It is not modular: The functional beauty and modularity of the original code is destroyed by the narrative.js compiler.

So the project did not get traction and got abandoned.

More recently, I stumbled upon Oni Labs’ stratified.js. I was excited at first but I quickly realized that this project suffered from the same flaws as narrative.js: it extends the language and the compiler does not preserve the modular quality of the original code.

At this point it became very clear to me that the right way to help Harry was to create a tool that would just try to automate what Harry does by himself when he writes code. The tool would take the statements one by one and apply the patterns that Harry has found. This would preserve the modular beauty of Javascript. And if I could do this through a naming convention rather than through new syntax it would be less disruptive for Harry because he would be able to keep his favorite editing tools (in the long term it makes sense to have a language construct for this but in the short term a naming convention will work much better).

So I decided to give it a try. The last few weekends have been somewhat sacrificed and I had some sleepless nights too, but the tool started to shape up. It is not completely finished today but it now works well enough that I decided to publish it. I called it streamline.js and made it available on Sage’s GitHub site, here.

So I have good news for you Harry. You now have the best of both worlds. The old one because you can get back to writing real code with simple control flow and easy to read algorithms, and the new one because your code will run in this amazing and exciting node.js server. I’ve put your streamlined du function in diskUsage.js.
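To give a flavor of it (this is a sketch of the idea, not the actual contents of diskUsage.js, and it only runs after the streamline.js transformation), the streamlined du reads almost like the original synchronous version, with a trailing _ marking the spots where streamline.js injects the callback logic:

```javascript
function du(path, _) {
  var total = 0;
  var stat = fs.stat(path, _);
  if (stat.isFile()) {
    total += fs.readFile(path, _).length;
  }
  else if (stat.isDirectory()) {
    var files = fs.readdir(path, _);
    for (var i = 0; i < files.length; i++) {
      total += du(path + "/" + files[i], _);
    }
    console.log(path + ": " + total);
  }
  else {
    console.log(path + ": odd file");
  }
  return total;
}
```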

And by the way, I have a little extra goodie for you: you can now parallelize your code in a controlled way with two little functions that make parallel programming sound like gardening (or janitoring, your choice): spray and funnel. Just take a look at the diskUsage2.js example!
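For the curious, the idea behind funnel can be sketched in a few lines (a hypothetical illustration of the concept, not streamline's actual implementation): at most max operations run at once, and the acquire/release pair is hidden inside a single call so it cannot get unbalanced:

```javascript
// Hypothetical sketch of a funnel: a concurrency limiter for async calls.
function funnel(max) {
  var active = 0, queue = [];
  // returns a function that runs fn(cb) when a slot is free
  return function(callback, fn) {
    function run() {
      active++;
      fn(function(err, result) {
        active--;
        if (queue.length) queue.shift()(); // release: start next queued call
        callback(err, result);
      });
    }
    if (active < max) run();
    else queue.push(run); // acquire: wait for a free slot
  };
}
```

Usage might look like: `var limit = funnel(20); limit(callback, function(cb) { fs.readFile(path, cb); });` so that at most 20 files are read concurrently.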

Happy programming!

This entry was posted in Asynchronous JavaScript.

73 Responses to Asynchronous Javascript – the tale of Harry

  1. nalply says:

    Javascript is lacking continuations. Continuations are powerful abstractions of control flow. Control structures like loops, branches or exception handling could be re-implemented for the asynchronous case using continuations.

    V8 does not support continuations, therefore a CPS transformation is necessary. Javascript inherits the syntactic complexity of C, and C is very hard to CPS-transform. It is a pity. Also it might be rather slow (uneval to function source code, parse, transform and eval transformed function code), but implementable as a library. However, to give continuations full power, control structures should work with continuations, and this probably needs a language extension.

    Will continuations really provide the programmer with an asynchronity abstraction toolkit? I do find this idea very intriguing. Caveat emptor: Power corrupts the soul.

    • I think I get the gist of it. The situation might be better if Javascript had continuations.

      But I’m more in the logic of lowering the bar so that programmers who have standard Javascript skills can easily get into node than moving it higher so that PhDs in functional programming can go through the roof. Will continuations help the average JS programmer? Probably too much power here.

      • nalply says:

        You say streamline.js is doing only what a programmer would have to do. Now I have the distinct impression that it is in fact a specialized CPS transformation. Specialized because it only transforms asynchronous invocations and not the whole syntax tree.

        It’s hilarious. All the poor Node.js programmers who are struggling to do a CPS transformation!

      • Yes this is exactly what’s happening.

        I’m glad that someone eventually gets it.

        As I describe it, programming node.js made me feel like a real slave, repeating these idiotic patterns over and over. So I reacted by developing this tool. I’m only starting to use it and it’s a real relief. What’s strange is that the rest of the community does not seem to realize the alienation. On the contrary, they seem to like their current fate…


  2. Alex says:

    Just like some people prefer to code in assembly or C instead of C++, I can understand how you might prefer coding in straight JS rather than a higher-level language such as StratifiedJS.

    But I don’t get the part where you say that for StratifiedJS “the compiler does not preserve the modular quality of the original code”. What is this “modular quality” that you are referring to?

    Why does it even matter at all what the generated code looks like? The generated code is just an artifact of the implementation, not a replacement for the original SJS sources. Conceivably, StratifiedJS’ keywords could be added to a JS engine such as V8 or Spidermonkey directly, in which case there wouldn’t *be* any lower-level JS artifact. Would your “loss of modular quality” argument still be valid in this case? Or, to turn the question around, take C++: would you have rejected the original implementation of C++ (which compiled to C, rather than directly to assembly) with the same argument?

    • Well, I don’t advocate for straight JS with all its callback noise. What I advocate for is streamline.js, which has some similarities with StratifiedJS: it transforms the source and produces Javascript (just like CFront used to generate C – I actually did my first steps into C++ with it a long long time ago).

      But there are differences. The first one is that the streamline.js source is also valid Javascript. I could have introduced a new keyword or operator but I chose not to because I wanted the source to behave well in editors, etc.

      The second difference is that streamline.js does *not* come with any special runtime and that the compiler does not need to perform any kind of global code analysis to find out where asynchronous functions are called (as I understand it, StratifiedJS does that). Streamline.js does something much simpler: it takes the functions that have an underscore at the end of their name (and only those), and it transforms their bodies to inject the “callback noise” that is needed to make them work. And this “callback noise” is nothing else than what the programmer would have written by hand if he had been implementing these functions in plain Javascript. And, moreover, the transformed function has the usual node.js async signature. So it may be called from any other function, including functions that have not been transformed by streamline.js. As I understand it, the functions that you write in a .sjs source file cannot be called from a regular .js file. With streamline.js, this is not a problem.

      The central point that I wanted to make with my little tale, but I am not sure that people really picked it up, is what Harry hints at the end of his post-vacation ramblings: Javascript is not a high level language any more in node.js. Why? Because it does not shield the developer from things that happen in the runtime that are completely (or almost completely) orthogonal to the semantics expressed in the source (they should not “interfere”). If Javascript were a high level language in node.js, the programmer should be able to write his code the way he did before (or almost), and the compiler should be able to translate it, even if the execution model is different. Instead of that, the developer finds himself dealing with some kind of “intermediate language” (all these callbacks) and he cannot even use the natural features of the language (loops, exception handling, etc.).

      This is the problem that streamline.js fixes. Instead of letting the programmer deal with the necessary callback logic, streamline.js does it as part of a preprocessing step. The code which is generated by streamline.js is actually very close to the code that the programmer would have written by hand (a big difference with StratifiedJS). If you want to see examples of what the transformation produces, just take a look at the transform-test.js source. Of course, the programmer would probably have chosen different names for the intermediate variables but he would have written more or less the same code.

      So my claim is that streamline.js does not do anything really sophisticated. All it does is introduce the preprocessing pass which is necessary to turn Javascript back into a “high level programming language” for node.js.

      By high level, I mean a language in which you can use while loops, try/catch statements, etc. with their “normal” semantics to express “normal” algorithms the “normal” way. Not a language in which you have to twist your brain all the time. As Alan Kay said: “Simple things should be simple, complex things should be possible”.


  3. I had forgotten to put a link to diskUsage.js at the end of the story. I fixed it to make the story a bit more obvious.

    The link to diskUsage2.js seems to have misled some readers into thinking that streamline.js is a library. The interesting part is how the async calls are handled in diskUsage.js, not the spray and funnel functions that I’m demoing in diskUsage2.js. These functions are just icing on the cake.

  4. Markus says:


    While I understand the problem, I disagree with the solution, as it is basically a crude hack. I’d go for coroutines instead.
    If v8 does not support continuations, fine, but the v8 part does not do anything blocking anyway; the callback complexity comes with node.js’s io abstraction layer.

    So, put coroutines into node.js, and hide the complexity where it happens.
    If a call is blocking, and there is no callback, create a coro context, add the event to the loop, and once the callback comes in, continue with your coro context.
    This allows both writing callbacks and hiding the callbacks, using the same language.
    Basically you could write asynchronous code like synchronous code, if you do not provide a callback, and the call is blocking, the magic is done behind the curtains to make it work.
    I’d prefer such solution over everything else, as it is easy to understand, and fixes the problem in the place -or at least close to- where it happens, instead of adding more code layers.
    Of course coroutines have their own problems, one of them is portability, but I really doubt there will be a better solution.
    In case your platform does not provide any usable coroutine support, you are lost, but still coroutines would be much better for the majority of the users than any other solution.

    node.js could use libcoro, which is from the same hands as libev and libeio, which are used in node.js already 😉

    • nalply says:

      Just a note: Coroutines are an application of continuations. How about a funny hypothetical idea: Extend V8 with continuations but only expose them as coroutines!

      I agree, it is easier to use libcoro instead. Nice pointer. We should do some research whether a Node add-on could be possible with libcoro.

      • Nalply,

        I have the impression that continuations (and probably coroutines too) would ultimately provide more power. I’m very naive about the academic side of all this but I have the impression that continuations are the “ultimate goto” (or rather a “semantically correct” setjmp/longjmp). Very powerful but also very dangerous (your caveat emptor). Is this something that we really want to put in the hands of Javascript programmers?

        By giving them a coroutine library instead, we put things a bit under control. But what about restoring the language keywords so that they work the same way in sync-land and in async-land instead of introducing yet another library? Isn’t this sufficient?

        The big advantage that I see is that developers don’t have anything new to learn and they are guided by the structures that they’ve been using for years.

        Does this make any sense, or am I completely off?

      • nalply says:

        Bruno, I am an engineer and studying Law, so I am far away from Computer Science academia. I am just enthralled by continuations. I understand very well that production code needs a different mindset than pure ecstasy.

        Maybe it is possible to wrap a libcoro coroutine in a native Javascript object with a Node add-on. To switch or to exit a coroutine, call a method of the coroutine object.

        But note! See for a discussion about coroutines in Node.

      • Markus says:

        It is unlikely you’ll get coroutines in v8, if google wanted v8 to have coroutines, there would be coroutines already.
        I’d limit the use of coroutines to blocking io calls within node.js, so the user can’t use them directly, but they get used by node.js for calling back, jumping to the code, if there is no callback provided for a blocking call.

        if you write

        a =

        and do not provide a callback, coroutines are used to create a context and return once ‘something was read’ or an error occured.
        if you write

        a = ... function(data,err){

        you can handle the cb mess yourself.

  5. What I like about callbacks is that it reminds me “the good old days” when I could “juggle” with C pointers and thread race conditions.

    Half kidding.

    I am nearing 20K lines of nodejs javascript code… I got bitten sometimes, often in the way you describe, but I kind of enjoy that I survived.

    Regarding coroutines, I believe that the Icon programming language deserves a look.

  6. This is *awesome* stuff! I really love your approach, and your explanation was very eloquent. Thanks for delivering such a kickass combination of ideal and pragmatic.


  7. Great work! I am an average programmer and I hate asynchronous programming. Any attempt to eliminate/reduce the need of it in coding deserves a lot of applause. I found your article a pleasure to read, very easy to understand for a problem that is actually rather complex to solve. I especially appreciate that you didn’t use buzz words like “continuation” anywhere in your article. Not that we don’t understand what those words mean, but that, just like asynchronous programming source code, the presence of them greatly reduce readability. Keep up the good work!

  8. YuppY says:

    Great story!

    One note: funnel function is actually a Semaphore.

    So this version of diskUsage2_.js would look better:

    var fileSemaphore = new Semaphore(20);
    fileSemaphore.acquire(_);
    try {
      total += fs.readFile(path, _).length;
    } finally {
      fileSemaphore.release();
    }

    • Good point. I’ve worked a lot more with monitors than semaphores (Java, C#) and for some reason I had equated “semaphore” with “binary semaphore”.

      On the other hand, I prefer having a single call that pairs the acquire/release operations. It makes the API safer by removing the risk of having unbalanced calls. And I sorta like the “funnel” metaphor.


      • YuppY says:

        In Python acquire and release operations are paired via context manager protocol:

        with file_semaphore:
            total += len(open(file, 'r').read())

        Maybe this simple approach can be adopted to Javascript.

        Btw, do you know that fs.Stat has file size information (stat.size) and the fs.readFile call is not needed in this example? 😉

      • Regarding the pairing, it would probably be overkill to dedicate syntax for this in Javascript, as lambdas are really cheap.

        And, yes, the example is not very clever because you can do stat.size. Counting lines would be more meaningful:

        total += fs.readFile(path, _).split('\n').length;

  9. Dad says:

    I’m new to JavaScript, so please pardon my ignorance. On the surface of it, this looks quite interesting and useful. One suggestion based on something I learned at JSConf or NodeConf – make your anonymous functions have useful names. This makes stack traces dramatically more useful.

    I wonder, in this case, if you could make the callback function name be related to the function it was a callback from. So if, as in your diskUsage_.js example there’s a line that says:

    var stat = fs.stat(path, _);

    and if instead of:

    return fs.stat(path, __cb(_, function(__0, stat) {

    it generated:

    return fs.stat(path, __cb(_, function fs.stat_callback_1234(__0, stat) {

    where 1234 is the line number of the original source (or something else useful and also disambiguating since you might call fs.stat( in multiple places in the program).

    Your online interactive examples page is really neat. Suggestions:
    * It would be most interesting if the samples also showed the standard conventional callback version next to the streamlined one, for contrast.
    * It would help if the “show complete code” additional code were formatted so as to be readable, instead of lacking line returns and indentation.

    • Thanks for your very useful feedback.

      I like the idea of having meaningful callback names in stack traces. Including the function name seems a very good idea. I can also include the line number when the “mark line numbers” option is set, but I don’t want to do it systematically because it interferes with source code control systems (all the callbacks would get renamed when a line is inserted at the top of the file, which pollutes the diffs). There is also a question of increased code size but I can probably ignore it.
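      One detail: dots are not valid in function names, so fs.stat would have to be mangled into a valid identifier like fs_stat. A quick check that a named function expression really does surface in V8 stack traces:

```javascript
// Capture the stack trace produced by a throwing callback.
function traceOf(fn) {
  try { fn(); } catch (e) { return e.stack; }
}

var anonTrace  = traceOf(function () { throw new Error('boom'); });
var namedTrace = traceOf(function fs_stat_callback_1234() { throw new Error('boom'); });
// namedTrace mentions fs_stat_callback_1234; anonTrace shows an anonymous frame.
```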

      Regarding the demo, I wouldn’t be able to show the “manual callbacks” when the user modifies the source on the left. Also I don’t really feel like “manually coding” the complex cases like the try/catch/finally or the lazy operators. I’m not sure that they would look much better than the generated code. I’ll think more about it!

      I did not beautify the helper functions at the top because I did not want them to take too many lines. I could add a beautify button on the right side, or just tell the user to copy it, paste it to the left and beautify.

      Thanks again. And the best way to find out if this is useful or not is to try it: try to write something with standard callbacks and try to write it with streamline. Then compare your productivity, the readability and maintainability of the code, its robustness, etc.


      • Dad says:

        🙂 Sure! I have to learn JavaScript before I can dive into this – some of what you are doing I don’t completely follow. Like, why so many parentheses? It looks like more than you need, but then again, I don’t know JavaScript well yet, so likely I’m just ignorant.

        For example:
        console.error( ( ( ( "UNCAUGHT EXCEPTION: " + err.message ) + "\n" ) + err.stack ) );

      • I’m just using the narcissus decompiler to regenerate the source from the parse tree. Narcissus is also behind the “beautify” button.

        Narcissus has a tendency to add a few more parentheses than strictly necessary. And if you hit the beautify button twice you’ll see even more parentheses. It was probably written by a LISP programmer 🙂


      • Dad says:

        ha! Funny! (LISP programmer). Ok. Glad to hear JavaScript doesn’t _actually_ need all those parens… was starting to think I maybe didn’t want to use node.js after all… 😛

  10. Pingback: Streamlined asynchronous Javascript, with Bruno Jouhier – State of Code

  11. chrisjacob says:

    Really well written post and streamline.js looks like a really well thought out solution. I’m relatively new to JavaScript and want to dive into Node – and all that callback noise and asynchronous thinking is absolutely confusing to newbies.

    Your guiding rule of “Replace all callbacks by an underscore and write your code as if all functions were synchronous.” was music to my ears. I’ll be following your project and hope to get my hands dirty with it sometime soon.

    Keep up the great work and also these awesome blog posts – it’s people like you who really make programming a wonderful profession to be in.

    – Chris

  12. Pingback: A Node.js Experiment: Thinking Asynchronously, Using Recursion to Calculate the Total File Size in a Directory « Procbits

  13. Steven Garcia says:

    Very interesting solution you have here – glad to see you are still actively working on it. I am curious what your opinion is of async – seems like the most commonly used tool to address the callback issue, even though it is decidedly more complex than your proposal

    • Yes, I know caolan/async. It is a very clever piece of work and probably the best “library” for async programming.

      But, no matter how clever it is, a pure JS library will never be able to solve the “topological” issue that I tried to describe in my last post. It cannot transfer the conn variable from the inner scope to the outer scope. If an async call returns a value through a callback, there is nothing a library can do to avoid the callback. You need extra power to solve this problem: either a fiber or coroutine library with a yield call, a CPS transform like streamline, or direct support from the language (a yield operator).

      I tried various libraries before writing streamline. But they left me frustrated because 1) they only solved part of the problem (the level of “callback noise” was still too high) and 2) I found it overwhelming to have to learn so much API to do control flow when the language is supposed to provide it, and to have to give an “async API training” to every developer in the team.

  14. Pingback: Asynchronous Javascript – the tale of Harry « async I/O News

  15. Julian Knight says:

    Wow! Great explanation – thanks.
    It has been really useful to me to see the journey your (not so?) mythical programmer took – of course, it is mirroring my own experiences & has helped to provide a shortcut!
    Like most things in JavaScript land, it took me several reads and considerable experimentation before I “got” what you were saying but I think I’ve got there. I was going to start using the async library but I think that I’m going to give yours a go as you’ve suggested to others to see if it resolves real-world issues.

    I think what a lot of “developers” forget is that there are large groups of people out here who are not full-time “developers” but who do need to use programming to get things done. Personally, I’m not interested in coroutines, CPS transformations or any of the other “cruft” that stems from the theory – I need something I can get my head around NOW! It really shouldn’t be so hard in the 21st C to write a single-page client-server browser-based app.

  16. Pingback: Asynchronous Javascript | x443

  17. Pingback: A pure library approach to async/await in standard JavaScript « Smellegant Code

  18. Pingback: Streamlined Asynchronous JavaScript, with Bruno Jouhier - Zef.

  19. Eric des Courtis says:

    Sounds like a whole lot of work just to get what Erlang provides right out of the box. Granted the language on the browser is Javascript but wouldn’t it be better to get Erlang on the browser than Javascript on the server side?

    Keep in mind the great thing about NodeJS was that it used things like epoll under the hood (which solves the C10K problem). Having closures made coding in it less insane than doing it with a language like C, Java etc… Erlang does the exact same thing but shields you properly from the callback nightmare by having a pre-emptive scheduler.

  20. Michael says:

    I’m trying out streamline.js as an alternative to continuations.js, as I like being able to get return values, but I’ve run into a roadblock and am hoping there is a simple solution.

    The GitHub page says that “You have three options to use streamline in the browser: The first one is to compile the source with _node -c. The compiler generates vanilla Javascript code that you can load with directives in an HTML page.” This option matches the usage I would like to follow: compilation on the server, without any runtime dependencies in the actual code.

    However, when I do this I get code that is NOT vanilla Javascript, as it appears to depend on a function called “require” which my browser says is undefined. My understanding is that this is a node.js thing, so why is it being used when I’m trying to generate code for the browser?

  21. Pingback: Asynchronous programming – breaking the illusions of blocking code | chronodekar's comments

  22. Streamline needs a little runtime to support the generated code. In the early versions the runtime was systematically embedded into every transformed file. This was not optimal for large projects so the code generation was changed to require the runtime from a separate js file. This works well in node.js but not in the browser.

    To get it working in the browser you have two options:

    • compile your file with streamline --standalone -c myfile._js. This will embed the runtime into your transformed file. This is the best option if you only load a small number of streamline files.
    • load the streamline/lib/callbacks/require-stub.js and the other runtime files into your HTML page. See test/common/callbacks/flows-test.html for an example. This option is preferable if you load a lot of streamline source files in your page because the runtime code will be shared.

    Also, the --standalone option was broken in early 0.10.x versions and got fixed in 0.10.5. So check the version if you get a syntax error in the generated code.

  23. James Scott says:

    Thanks for the article Bruno! Really cool that you are writing tools to help devs with this kind of thing. Async programming is good, but the code we have to write to make it work is just awful. Let’s cross our fingers and hope the next version of ECMAScript fixes it…

  24. Hi! This thread looks like a good place to leave my question – and there’s more bandwidth than a tweet! However, I am going to try to keep it short. Also, please forgive me for any terminology errors, as I have minimal knowledge of JavaScript!

    The technology I invented/discovered about 45 years ago – Flow-Based Programming (FBP) – is finally starting to take off, even though at least one program was (as of Jan. 2014) in continuous production use for just under 40 years (it may still be, for all I know!). IBM and Microsoft are both developing platforms that look very much like FBP; Facebook has adopted a similar technology, which it says is superior to MVC.

    In 2012, some bright people built what was supposed to be an FBP implementation using Node.js – but their software has fundamental differences from “classical” FBP, which IMHO is creating a lot of confusion! It was therefore suggested that we (the “classicists”) should look into building an FBP JavaScript implementation using a different approach (this could still be based on Node.js, if that seems indicated). I have been reading for a few days now and, so far, I have come across ez-streams, streamline, continuations, narrative.js, waitfor, etc., so I’m now totally confused! However, it does look to me like this blog is a good source of useful ideas.

    The basic mechanism of FBP is dead simple: process ports are linked by connections which have finite capacities. “Receive” suspends if the connection being received from is empty, and “send” suspends if the connection being sent to is full. In fact I believe I saw something similar in one of Bruno’s posts… So, if I can get this working in JS, I will be quite happy – but I totally agree about callback hell – the style, as for all “classical” FBP implementations, has to be imperative (direct, not inverted).

    Given the above description, can I get a pointer (or two) on which JS infrastructure would be the best one to build a JSFBP implementation on. I would very much welcome feedback from any of the people reading this blog. BTW if anyone wants to look at my 3 implementations of “classical” FBP (functionally they are all much the same), they can be found at (DrawFBP is a diagramming tool, not an FBP implementation).

    I feel it is important to converge the two FBP directions in a clean way. Any help or pointers would be much appreciated!


    Paul M.

    • JeanHuguesRobert says:

      Hi Paul.

      Given the non-blocking *nature* of Javascript, there are basically two ways to implement reactive / flow-based programming: 1/ the synchronous way, 2/ the hacky way.

      The “synchronous” way is the one whose author you have been in contact with. In that model, the “ports” have no buffer at all; they are expected to handle the data as it… flows! Some “stateful processes” can choose to implement some buffering if they really need it, but it should generally be avoided.

      The “hacky” way covers all the other solutions that try to hide the lack of threads in Javascript. Some are cleaner than others. The most promising “standard” one at this stage is the hack based on the future ES6 generators (see for example). “Promises” come next; they are a recent addition to Javascript. The buzz “du jour” is stream based.

      Overall, it’s a big mess. I believe you should build whatever you need starting from the ground up: non-blocking Javascript.


      Jean Hugues

      PS: I would not advise “my” implementation, it’s just a prototype :

  25. PS I forgot to mention node-fibers, and I have just come across suspend!

  26. Hello Paul,

    I wasn’t aware of FBP but it looks very interesting and very forward-looking (45 years ago!)

    I would distinguish two different things here:

    1. Tools and libraries like (in historical order of appearance) narrative.js, stratified.js, streamline.js, fibers, waitfor, suspend, galaxy, co, etc.
      All these tools have been developed to fix the famous callback hell problem. This problem originates from the lack of an async/await construct in the JS language. If things go well (fingers crossed), this problem will be fixed by ES7’s async functions. This should clear this part of the mess.
    2. Libraries like ez-streams, min-streams, event-stream, etc. These libraries have been developed to ease the pain with node streams and provide a higher-level programming model. Some of them have similarities with lazy.js, RxJS and other reactive libraries. Worth mentioning too: WHATWG streams, which is an attempt to clean up and standardize the node streams model.

    One of the original visions behind node.js was that programs would be composed by assembling streams with pipes. Given this, it is surprising to see such a proliferation of stream libraries for node. I don’t know what motivated all these library authors but I can at least explain why I developed ez-streams. It came from my frustration with node’s stream API: the API is complex, brittle and, IMO, poorly composable.

    I just discovered FBP and I have only had the time to glance through the links that you posted but I sense a very high level of similarity between 1) FRP, 2) node’s original vision and 3) the spirit of many of the libraries that have been developed to ease streams programming in node.js.

    So there is convergence but also a great deal of confusion because of so many initiatives and a lot of passion around them.

    I’ve put a number of pointers in this reply but I don’t know what to recommend. Of course, I would highly recommend ez-streams because it fits my mental model (although I’d like to get rid of the underscores, but I’ll have to wait for ES7, and I also have plans to inject more laziness into it). But it’s worth investigating all the creative libraries, and also playing a bit with Haskell to get a better feel for lazy programming.

    And, last but not least, I’m totally thrilled by the perspective of true componentization and graphic programming. This has been a strong motivation behind ez-streams. I don’t want to disclose too many details but wait and see … (hint, I started watching the amazing jsplumb library a couple months ago).

    Anyway, I need to investigate FBP. Looks like it has been sitting there and it’s now ready to take off. Bravo!

    • I did not mention Jean-Hugues’s Water & fluid but it’s definitely along the same lines and worth a very close look!

      ez-streams is probably a bit dull in comparison but hey, I’m not doing it just for the fun of it, I’m also building products with it.

  27. Hi Bruno,

    Thanks for the kind words! And thanks, also Jean-Hugues, for your suggestions.

    I would very much like to ask some dumb questions, but first a few comments:

    The various streaming approaches all seem to me attempts to emulate *nix pipes. While I accept that ease of combining components trumps ease of writing them, the send/receive API (with blocking) seems to me ridiculously easy to write and reason about, and there is no callback hell! So that would be my focus in building a JS-FBP.

    I was very interested to read that the original intent of Node was to support some kind of streaming. It seems strange to me that they didn’t provide a basic suspend/resume mechanism!

    I have implemented 7 “classical” FBP implementations so far: 4 of which used green threads, and the other 3 “red” or native threads. As I understand it, JavaScript does not support multiple cores, so the latter don’t seem relevant to this discussion. The first 2 (IBM mainframe) used a simulated “longjmp”/”setjmp” (just save all the registers), and my first “C” implementation also used “longjmp”/”setjmp”. The second “C” implementation used Windows fibers – this has two basic functions: create fiber and switch fiber.

    So, dumb question: given that, if I understand correctly, Node adds some new function to JavaScript, written (I think) in C or C++, why can’t we simply add some support for “suspend” and “resume” to JavaScript? This would make life a whole lot easier! If so many of these stream implementations use an “enhanced” JavaScript, why can’t we enhance it in a different way?! Sorry to be obtuse, but nobody has been able to give me an answer on this!

    Another dumb question: my green thread implementations all had a “future events queue” (which held the status info for threads which were ready to run) – would this be similar to what Node.js uses? Or could it be adapted?

    Last point: my “green thread” implementations all assumed that the “sends” and “receives” could be issued at any point in the call stack, so I basically had to have multiple stacks (“Create fiber” does this for you), but, we could restrict “sends” and “receives” to the top level of the process’s stack, which might make stack management easier… I think that would be an interesting experiment!

    I would appreciate it very much if you or other readers could: either a) shoot holes in what I have naively written, or b) volunteer to give me a hand (I think a partnership might be fruitful) or all of the above!

    There is a good discussion going on about the differences between “classical” FBP and “FBP-like” systems such as NoFlo, at!topic/flow-based-programming/L_E7dEU6sN8 – especially the last half-dozen or so posts.

    Look forward to getting your reactions, and thanks again for your interest!

    Best regards,

    Paul M.

  28. JeanHuguesRobert says:

    Hi Paul.


    C’s setjmp/longjmp was initially designed to implement a poor man’s exception handling. It was later discovered that creative use of setjmp/longjmp could manipulate the CPU stack pointer to provide a kind of call/cc, leading the way to implementations of non-preemptive “tasks/threads/fibers”. There is nothing equivalent in Javascript, because the language “by design” does not support any solution for switching stacks.

    This is going to change with ES6, which introduces the notion of “generators”. It was soon discovered that creative use of “yield” and “function *” makes it possible to manipulate the stack pointer, much like with setjmp/longjmp (but with more “syntax clutter”, alas). Some impatient people have even implemented nice “transpilers” that generate ES5 code for ES6 constructs, usually by generating ugly, inefficient and hard-to-debug state machines. Please note that moving from sync to async code using generators means that *all* functions must become generators, much like moving from sync to CPS-style async code using callbacks means adding the infamous “cb” parameter to *all* functions that can “block” (“block” semantically, JS being non-blocking). I expect some “generator hell” in addition to “callback hell”!

    That “transpiler” approach was also used by other creative people to implement various schemes for non-preemptive concurrency; some of them are really smart. As Bruno signaled, ES7 may include native support for one of these solutions, the async/await one, originally introduced by M$ in C# I think.

    On node.js, thanks to possible extensions in C, “fibers” were implemented years ago: that’s the node-fibers native module. Unfortunately I believe there is no transpiler that would transform source code using such fibers so that it could run (however slowly) in the browser. This excludes “isomorphic” solutions where code using fibers can run unchanged either server side or client side.

    Why call/cc is still not part of Javascript is a mystery to me. One possible explanation is that it would break a lot of the optimizations that Javascript engines can do when the stack is known to be unique. See the section about “delimited continuations” in


    I suspect that in addition to the 7 previous implementations, you need to consider an 8th one, which should implement “classical” FBP efficiently in non-blocking environments where access to tasks/threads/fibers is not an option. I believe your best bet on this track is probably to leverage the ES7 async/await effort; that is what I would do if I were you. Async/await is basically syntactic sugar for CPS callbacks, so it does not “break” the non-blocking model of Javascript, and consequently I believe it will eventually be part of Javascript without much opposition. This may take a while, however, and transpilers may help in the meantime. See


    Jean Hugues

  29. Hi Jean-Hugues,

    Thanks for your very interesting reply! A lot to absorb!

    Some comments/questions:

    In the first paragraph, you said, “[JavaScript] ‘by design’ does not support any solution for switching stacks.” In my experience, *no* language provides support for switching stacks. In my implementation using setjmp/longjmp I still had to create stacks and do the first switch to them using Assembler language! Ugly! Fundamentally, the problem is the von Neumann orientation of just about all modern languages (and the associated mindset, which many of my colleagues can’t seem to shake off!).

    I finally found a page describing “function *” – but this still looks to me like a one-stack solution. I could implement this by letting yield be a pseudo-return, but one which hangs onto fibonacci’s data in the stack *below* the data of the routine that calls it. If I’m right, this is going to get pretty strange when you have several hundred coroutines! I also didn’t see any clean way of terminating fibonacci – it looks like it dies when its calling routine returns. That won’t generalize either!

    This discussion makes me wonder if any of the touted JS “multithreading” ideas are truly multithreaded. One contact of mine (who shall remain nameless!) talked about his efforts to implement a form of multithreading in NoFlo – – as “putting lipstick on a pig”! He has since left the project.

    I liked the example of the use of await/async in (I assume that’s the same) – I would assume that FBP “receive” will await until the connection is non-empty, while “send” will await until the connection is non-full. How would you turn these states into events that can be awaited? Et qu’est-ce que c’est un thunk?!

    At some point in this evolution, will JavaScript be using multiple stacks? I really don’t think any approach that turns one stack into a pretzel is going to be viable long term!

    I guess I have to ask again: what do you think of the possibility of using a “native” interface for suspend/resume, and, I guess, multiple stacks? Apart from the difficulty of getting it accepted by DP shops, of course…

    Thanks for the great feedback! I really appreciate your help so far, and think that a JavaScript/FBP project would be fascinating. If you or any of your colleagues would like to work on something like that, I would love to be involved!

    Thanks and best regards to you and Bruno

    Paul M.

    Je suis Charlie

    • Hi Paul,

      There are lots of questions. I’ll try to answer them as best I can.

      First, is JavaScript truly multi-threaded?

      The answer is a little complex:

      • JavaScript (or rather EcmaScript) is agnostic on this. The specs don’t impose anything and don’t specify any threading nor synchronization primitives. Implementations are free to support multiple threads or not.
      • Some JavaScript engines, for example the Rhino engine, are multi-threaded.
      • But most JavaScript engines are single-threaded. V8, which powers node.js, is single-threaded.
      • As V8 can be extended in C++, some people have managed to extend it with real threads (preemptive scheduling): threads-a-gogo and web workers. But these extensions are very different from typical threaded systems (C++, Java, C#): threads can exchange messages but they do not share any mutable data: if thread A created object O1, thread B can receive a string serialization (JSON) of O1 but it cannot access O1 directly. In technical terms, each thread runs in a different V8 isolate.
      • Some people have also managed to extend V8 with green threads (non preemptive): the node-fibers library. This library provides true coroutines and deep continuations: each coroutine has its own stack; switching is explicit (with explicit run and yield calls) and implemented with setjmp/longjmp (no threads but separate stacks).
      • Generators are not true coroutines. They only support shallow (single frame) continuations. There is only one stack. A generator can call other functions and can push and pop frames on the stack but it can only be suspended from its bottom stack frame. So it does not need a separate stack because there is only one stack frame to save and restore when it is suspended / resumed.
      • There is no way to run true (preemptive) threads inside a single V8 isolate. Of course, this could be attempted in a C++ extension but it will crash because (almost all) V8’s APIs are not thread-safe.

      So I would answer your second question, about the possibility of extending JavaScript with suspend/resume and multiple stacks, as follows:

      • Yes, if you use the Rhino engine. Unfortunately it is slow.
      • No, in general, if you use V8. You cannot implement a classical preemptive threading system in a single V8 isolate.
      • Yes, if you only want non-preemptive threads (explicit suspend/resume). This is what node-fibers gives you.
      • Yes, if you only want preemptive threads that don’t share any mutable state. They will run in separate “isolates”. This is what threads-a-gogo gives you.

      But the real question is: do you absolutely need true coroutines with separate stacks and deep continuations to implement FBP? Or do you only need async/await? My gut feeling is that async/await should be sufficient. Async/await is not (yet) available out-of-the-box in JS but there are several solutions that give you async/await in node.js today: some are based on fibers, some on pre-processors, some on generators.

      Streamline.js, my pet tool, is a pre-processor which gives you three ways to emulate async/await: on top of fibers, on top of generators, or on top of vanilla JS. The first two transforms are rather simple. The third one is a bit trickier.

      I hope this helps and I’ll be happy to continue the discussion (why not here as it might be interesting for others).

      And thanks for your support in these difficult days. We breathe a little better tonight and I hope we’ll get a strong demonstration in the streets on Sunday.


  30. Hi Bruno,
    I can’t tell you how much I appreciate your very complete answer to my questions – in fact, some of them are things I have been trying to find out for months (if not years!). I just had to go to the right place! I think I will frame your response!

    This will be a short note (sigh of relief on your part!). Based on what you describe, I plan to start playing with node-fibers (multiple stacks). I actually have a soft spot for fibers – my first, and probably most successful, FBP implementation was a fibers-like implementation written in IBM mainframe Assembly language, and it ran 80% of the batch code for a bank for at least 25 years (they started replacing it with conventional HLL code after I left the 2nd time, but I know for sure that one very complex program (approx. 60 nodes) was still running in production at the beginning of 2014).

    What is fascinating to me is that the thing that you like about fibers (“fibres” en anglais) is that it gets you away from the callback pyramid of doom, while what appealed to me about it (and FBP) was that I didn’t think that a bunch of large von Neumann-style programs could be delivered in the time we had available – or maintained reliably, once it was built. In fact, I told management that (in about 1970), and amazingly they agreed! At that time, I had been in the computer field for about 11 years – just shows you the power of conviction. It’s taken a bit longer for FBP really to take off, but I’m cautiously optimistic! A colleague in Russia recently referred to the “FBP brand” (we’re still trying to define exactly what it is, but I’m hopeful)!

    So mille remerciments, and I’ll struggle on for a bit. If I hit a dead end – hopefully not – you’ve given me lots of things to try! And, by the way, what is a “thunk”?!

    Best regards,

    Paul (aka sometimes as Jean-Paul)

    PS Good article on the importance of satirists, and cartoonists in particular (think Daumier) in my local paper – .

  31. Thanks Paul.

    It’s been a long day so I’ll only give you a quick answer on thunks. What people call thunks in node.js is exactly what I called futures in one of my posts:

    A thunk is a function F that encapsulates a computation in progress and that you can resolve with F(function(err, result) { ... }).

    Thunks are very similar to promises. Conceptually they do the same thing but thunks are much simpler. Promises have a rich API, which IMO is a bit of overkill. But promises have been standardized in EcmaScript 6 so it is probably better to bet a design on promises than on thunks today.

    Fibers are a low-level mechanism, and Marcel Laverdet, who designed the node-fibers library, added a futures library on top of fibers. If you start with fibers I strongly encourage you to play with his futures library. Futures/thunks/promises are a very clean way of encapsulating asynchronous computations.

  32. Hi Bruno and Jean-Hugues,

    Cautiously reporting first success using node-fibers! It took me a while to get the relationship between “run” and “yield” straight in my head, but I think I’ve got it. And this code works! I’m sure it’s not very elegant, but I feel (I hope) that the rest is just slogging. Hopefully, I won’t run into any show-stoppers along the way!

    Just for background, I am defining two FBP processes, ‘sender’ and ‘recvr’, with one shared connection (implemented by ‘array’). Data objects are shipped around as the contents of objects called Information Packets (IPs). The connection can hold up to 10 IPs. Of course, a complete FBP implementation has a lot more mechanism and parametrization, but I just wanted to prove feasibility at first. For instance, the fibers will actually be attributes of more normal objects called Process, etc., which are instances of components, which in turn should have proper component names (e.g. Sender, Receiver) – again, I assume this is just routine…

    I would really appreciate it if one or both of you, when you have time, could take a look and tell me if there are JavaScript or node-fibers constructs that would make this more readable, faster, or whatever – or indeed if there is anything here that will run afoul of JavaScript internals. I am a complete JavaScript beginner, so I have probably done things a JS expert would never do! 🙂

    Thanks again for all your help! If anything is not clear here, please let me know!

    Best regards,

    Paul M.

    var Fiber = require('fibers');

    // Information Packet: wraps the data travelling over a connection
    function IP(contents) {
        this.contents = contents;
    }

    // the shared connection: a circular buffer of 10 slots
    var array = [];
    for (var i = 0; i < 10; i++)
        array[i] = null;
    var nxtget = 0;     // next slot to read
    var nxtput = 0;     // next slot to write
    var closed = false;
    var queue = [];     // fibers ready to run

    var sender = new Fiber(function() {
        for (var i = 0; i < 25; i++) {
            var ip = new IP(i + '');
            send(ip);
        }
        close();
    });

    var recvr = new Fiber(function() {
        while (true) {
            var ip = receive();
            if (ip == null) break;    // connection closed and drained
            console.log('received: ' + ip.contents);
        }
    });

    function send(ip) {
        while (nxtget == nxtput && array[nxtget] != null) {
            // buffer full: schedule the receiver, suspend this fiber
            queue.push(recvr);
            Fiber.yield();
        }
        array[nxtput] = ip;
        nxtput++;
        if (nxtput > 9) nxtput = 0;
    }

    function receive() {
        while (nxtget == nxtput && array[nxtget] == null) {
            if (closed) return null;
            // buffer empty: schedule the sender, suspend this fiber
            queue.push(sender);
            Fiber.yield();
        }
        var ip = array[nxtget];
        array[nxtget] = null;
        nxtget++;
        if (nxtget > 9) nxtget = 0;
        return ip;
    }

    function close() {
        closed = true;
        queue.push(recvr);    // wake the receiver so it can terminate
    }

    // minimal scheduler: run ready fibers until none is left
    queue.push(sender);
    var x = queue.shift();
    while (x != undefined) {
        x.run();
        x = queue.shift();
    }
  33. This example has been significantly extended – initial versions of ports, connections, processes, and the test case has 3 cooperating processes. I think it’s much clearer now! It is at . It takes about 100 microsecs per send/receive pair on my desktop.

    I only have two globals: “processes” (an array of Process/fiber pairs) and “queue” (which holds the process statuses, roughly “setjmps”, and the start time). Any way of getting rid of the global called “processes”?

    Also any ideas about packaging would be much appreciated!

    Best regards,

    Paul M.

  34. Hi Bruno and Jean-Hugues, I was wrong – my test case takes approx. 50 microsecs per send/receive pair. I found a strange bug, so that may be the reason for the higher figure!

    Connection capacities are currently set to 5.

    Bruno, if you don’t like callbacks, you may find the Copier component interesting – .

    I have successfully chopped up the original big program into separate components: network definition, 3 components, and infrastructure (fbp.js). However, I suspect that it isn’t very elegant (in particular, having to qualify so many variables) – could one of you take a look and let me know if it can be done more elegantly…? It seems to me that one should be able to use heavily used functions like “send” and “receive” without qualification.

    I am hoping to announce this work fairly soon, so any help you can give in cleaning it up would be much appreciated!


    Paul M.

    PS It took me about 3-4 days to get it to this stage, so I am very impressed with Node-fibers, now that (I think) I understand it!

  35. Hello Paul,

    It took me time to reply because I had a busy week.

    The globals (processes and queue) are not a problem because they are not exported by your module.

    The main problem I see is that your code is not designed to interact with node.js APIs. It is all run from a single run() call which terminates before node’s event loop starts. For example, what if you wanted to replace copier by a function that loads additional data from a file; how would you hook fs.readFile into copier?


  36. Hi Bruno,

    Thanks for getting back to me! Actually file handling was next on my list of things to do!

    I tried readFile, and, as far as I got, it seems that Fiber.current is undefined in the callback. Since the callback is being called by that fiber, I thought Fiber.current might be preserved – I am currently using it to determine which Process is running.

    However, readFileSync works fine (I have promoted a reader component to Github) and I could try to remove the dependency on Fiber.current – sounds like I should do that anyway, as users may use callbacks, and I can’t really tell them not to! I’ll give that a try.

    Stylistically, did you see anything that looked like bad JavaScript?

    Thanks for any time you can spend on this.

    Best regards,


    • You cannot use readFileSync: node.js is single-threaded and your process will just sit there doing nothing while waiting for file I/O to complete. You have to make it work with readFile. Fiber.current isn’t preserved but you can propagate it with a closure:

      var fiber = Fiber.current;
      fs.readFile(path, function(err, data) {
        // you can use fiber here, e.g. fiber.run(data) to resume it
      });

      I’d rather discuss it in a GitHub issue on your repo. Can you open one? I’ll follow up there.


      • Sorry Bruno, didn’t see this note (about opening an issue)! I have to call it quits for the day, but will continue tomorrow! Also discovered I was having a problem iterating through an associative array – do you have a good link on that?

        Thanks and best regards,


  37. It seems like the more general problem is finding out which variables a callback has access to.

    If I replace the code in reader.js by

    var fbp = require('./fbp.js');
    var fs = require('fs');

    exports.reader = function () {
        fs.readFile('./readme.txt', "utf8", function (err, data) {
            if (err) throw err;
            var ip = fbp.create(data);
            fbp.send('OUT', ip);
        });
    };

    then I have problems accessing some of the FBP internal structures. Are you aware of any scope problems in callbacks running under fibers? I’m sorry, at this point I can’t be any more specific!

    Can I pass more parameters to the callback, or is its structure fixed by fs.readFile?



    • The parameters are fixed by fs.readFile but you can build a custom readFile with different parameters:

      function myReadFile(path, cb) {
        var fiber = Fiber.current;
        fs.readFile(path, function(err, data) {
          cb(err, data, fiber);    // pass the calling fiber as an extra parameter
        });
      }

      But note that the callback will execute after your main run() function has returned! I suggest you play a bit with node’s async APIs and callbacks first, to get used to node’s execution model.

  38. Right, I finally figured that out as well!

    What state is a fiber in while waiting for the callback to fire? In the multi-fiber case, there comes a time when no fiber can proceed as one or more are waiting for callbacks. In my other implementations, the process/fiber can call a “wait” service – does node-fibers or JavaScript have anything similar? Or should I just terminate the network and then restart it at callback time?

    Hope this makes sense!



  39. The problem is that you are calling run directly from your script. You should run it inside a fiber instead:

    Fiber(function() { run(); }).run();
  40. Does this let me “pause” the whole run until the callback fires? I don’t want to lose the script variable values – or should I just be willing to reinitialize?

  41. Hi Bruno, my async reader seems to be working, rather to my surprise! (I had to add some more smarts in fbp.js.) Thanks so much for pointing out this defect – I didn’t know enough JS to realize there was a problem! 🙂

    Quite a bit more stuff to be added – I’ll probably keep breaking it for the next several weeks, but, right now, my two test cases work! Fingers crossed! BTW I would love to hear about anything else you spot!

    Best regards,


    • Hi Paul, that’s good. Once you get one async scenario working it should not be too difficult to hook up more APIs. I’ll take a look at the code when I get a bit of time and I’ll give you feedback but I’ll do it directly with a GitHub issue.

      • That would be great! Presumably Github will let me know when comments arrive…

        As I put new stuff in, my code will begin to sprawl, so any suggestions about packaging would also be welcome!

        Best regards,


  42. Michael says:

    How does one use the standalone browser version of streamlined code with debugging? I’m using “_node --standalone -c mytest._js” and it works just fine, but when I add “--source-map” I get code that seems to depend on a “require” function – my code is not using any libraries (other than my own) and so this function does not exist. Furthermore, it seems to reference other files in the streamline hierarchy. Can I not do standalone browser code with source map support so I can understand the code in (e.g.) Chrome’s developer tools? What do I need to do for this to work?

    • Michael says:

      Wow I feel silly for asking the exact same question I did almost exactly a year ago which you already answered. Never mind! (If only I could delete my post!)

      • Michael says:

        Well in any case it appears my last problem was slightly different in that I wasn’t trying to get source maps included, but I will try to see if I can figure out what files need to get included for it to work.

    • Michael says:

      Success! I loaded the following files in this order:

      And it worked!

  43. Hi Michael. Often, asking the question gives you the answer! Glad it worked.

    If you have more questions, I suggest posting them to the mailing list (!forum/streamlinejs) and using GitHub issues for bug reports.
