Optimizing the Performance of a Node.js Function

Introduction

After letting it stagnate for a while, I decided to rework Street.js to use the techniques I have been working with in Node.js over the last year.  My main goals are as follows:

  • ES6ify the code base
  • Replace nasty callback code with Promises
  • Pass ESLint using JavaScript Standard Style config
  • Annotate types with Flow
  • Simplify the implementation

Node.js v4 has a lot of new ES6 features that are extremely helpful, fun, and performant.  I will be refactoring Street to use these new features.  Of course, like a good semver citizen, I will update my major version (to v1.0) when I publish the rewrite.

Setup

I am using Babel as a transpiler to paper over ES6 features not yet in Node.js, while blacklisting the transforms for features Node.js already supports natively.  Many ES6 features (e.g. generators, symbols, maps, sets, arrow functions) are more performant natively than via transpilation, and I do not care about supporting Node.js before version 4.  The following .babelrc configuration file shows the blacklist I am using:

{
  "blacklist": [
    "es3.memberExpressionLiterals",
    "es3.propertyLiterals",
    "es5.properties.mutators",
    "es6.blockScoping",
    "es6.classes",
    "es6.constants",
    "es6.arrowFunctions",
    "es6.spec.symbols",
    "es6.templateLiterals",
    "es6.literals",
    "regenerator"
  ],
  "optional": [
    "asyncToGenerator"
  ]
}

Case Study: Walking a Directory

In Street, I need to walk a directory of files so I can generate a manifest of file paths and their hashes for comparison to a previous manifest.  The directory walking code was hairy; most of it was from Stack Overflow.  Here’s the current state (cleaned up):

var fs = require('fs')
var path = require('path')

function oldFindFilePaths (dir, done) {
  var filePaths = []
  fs.readdir(dir, function(err, filelist) {
    if (err) return done(err)
    var i = 0

    ;(function next() {
      var file = filelist[i++]
      if (!file) return done(null, filePaths)

      file = path.join(dir, file)

      fs.stat(file, function(err, stat) {
        if (err) return done(err)
        if (stat && stat.isDirectory()) {
          oldFindFilePaths(file, function(err, res) {
            if (err) return done(err)
            filePaths = filePaths.concat(res)
            next()
          })
        } else {
          filePaths.push(file)
          next()
        }
      })
    })()
  })
}

I never really liked this because it is not intuitive to me. The function is an unwieldy set of multiple recursive calls that make me feel gross.  Once I got it working, I was wary of touching it again.

There must be a better way to do this! I can either spend some time refactoring this to make it nicer, or see if a rewrite is more elegant and perhaps performant. I am willing to take a small performance hit.

The following is my first iteration:

import fs from 'fs'
import path from 'path'

async function newFindFilePaths (dir: string): Promise<Array<string>> {
  var foundPaths = []
  var files = fs.readdirSync(dir)

  while (files.length > 0) {
    let file = files.pop()
    if (!file) break

    let filePath = path.join(dir, file)
    let stat = fs.statSync(filePath)

    if (stat.isDirectory()) {
      foundPaths = foundPaths.concat(await newFindFilePaths(filePath))
    } else {
      foundPaths.push(filePath)
    }
  }

  return foundPaths
}

Do not be thrown off by the type annotations.  I really enjoy Flow and find it useful for catching many kinds of bugs.  All those annotations get stripped during Babel transpilation.

This function is much clearer. I love ES7 async functions. They wrap a function’s logic in a Promise, which resolves with the returned value or rejects if an error is thrown. Inside an async function, you can await a Promise: if it resolves, await returns the resolved value; if it rejects, the rejection value (ideally an Error instance) is thrown.
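
A minimal illustration of that resolve/reject behavior (the function names here are made up for the example):

```javascript
// An async function always returns a Promise.
async function succeed () {
  return 42 // the returned Promise resolves with 42
}

async function fail () {
  throw new Error('boom') // the returned Promise rejects with this error
}

async function demo () {
  var value = await succeed() // await unwraps the resolved value
  console.log(value) // 42

  try {
    await fail() // awaiting a rejected Promise throws inside the async function
  } catch (err) {
    console.log(err.message) // boom
  }
}

demo()
```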

Notice that I replaced my asynchronous fs calls with synchronous ones. The callbacks were just too nasty, and since this is a CLI application, the asynchronous versions were not buying much performance the way I was using them.

This was clearer, but still not ideal to me. I am not a fan of while loops and prefer a more functional approach using map, filter, and reduce when possible. Calling fs.readdirSync was fine in this usage, but those fs.statSync calls seemed inefficient because each one blocks while waiting on the file system. Perhaps I could make them async again but run them in parallel.

This led me to my next iteration:

async function newFindFilePaths2 (dir: string): Promise<Array<string>> {
  var files = await new Promise((resolve, reject) => {
    fs.readdir(dir, (err, files) => err ? reject(err) : resolve(files))
  })

  var statResultPromises = files.map(file => new Promise((resolve, reject) => {
    var filepath = path.join(dir, file)
    fs.stat(filepath, (err, stat) => err ? reject(err) : resolve({filepath, stat}))
  }))

  var results = await Promise.all(statResultPromises)
  var {subDirs, foundPaths} = results.reduce((memo, result) => {
    if (result.stat.isDirectory()) {
      memo.subDirs.push(result.filepath)
    } else {
      memo.foundPaths.push(result.filepath)
    }
    return memo
  }, {subDirs: [], foundPaths: []})

  var subDirPaths = await Promise.all(subDirs.map(newFindFilePaths2))
  return foundPaths.concat(...subDirPaths)
}

Notice the while loop is gone, replaced with map and reduce, and the fs.stat calls now happen in parallel for the whole list of files. The fs.readdir call is also async because I recurse into all the subdirectories I find in parallel.

I am also a fan of destructuring and spreading. It makes for more concise and elegant code. My favorite example here is taking the results of the recursive newFindFilePaths2 calls, which are arrays of strings, and spreading them into arguments to the foundPaths.concat call to join all the paths into a single array.
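
As a quick sketch of that pattern, with made-up path data standing in for real results:

```javascript
// Hypothetical results: files found at this level, plus one array per subdirectory.
var foundPaths = ['a.txt', 'b.txt']
var subDirPaths = [['sub/c.txt'], ['sub2/d.txt', 'sub2/e.txt']]

// Spreading the array of arrays passes each inner array as its own
// argument to concat, which flattens them into a single array.
var allPaths = foundPaths.concat(...subDirPaths)
console.log(allPaths) // ['a.txt', 'b.txt', 'sub/c.txt', 'sub2/d.txt', 'sub2/e.txt']
```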

This is excellent, but can be cleaned up and broken into a few different functions. This brings me to my last iteration:

function listFiles (dir) {
  return new Promise((resolve, reject) => {
    fs.readdir(dir,
               (err, files) => err ? reject(err) : resolve(files))
  })
}

function getStatMapFn (dir) {
  return file => new Promise((resolve, reject) => {
    var filepath = path.join(dir, file)
    fs.stat(filepath,
            (err, stat) => err ? reject(err) : resolve({filepath, stat}))
  })
}

function partitionByType (memo, result) {
  if (result.stat.isDirectory()) {
    memo.subDirs.push(result.filepath)
  } else {
    memo.foundPaths.push(result.filepath)
  }
  return memo
}

async function newFindFilePaths3 (dir: string): Promise<Array<string>> {
  var files = await listFiles(dir)
  var results = await Promise.all(files.map(getStatMapFn(dir)))
  var {subDirs, foundPaths} = results.reduce(partitionByType,
                                             {subDirs: [], foundPaths: []})

  var subDirPaths = await Promise.all(subDirs.map(newFindFilePaths3))
  return foundPaths.concat(...subDirPaths)
}

Even though it is more lines of code, I prefer this to the previous iteration. A few pure helper functions, and one function that puts them all together concisely and elegantly. So beautiful!

Running Times Compared

Let’s check the execution times to see if we did any better.  This is meant as a quick-and-dirty comparison, not a rigorous benchmark.
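
A helper like the following, based on process.hrtime, can time a Promise-returning function; this is just a sketch of the idea, not the exact harness used for the numbers below:

```javascript
// Times a (possibly Promise-returning) function and resolves with
// its result plus the elapsed time in milliseconds.
function time (fn) {
  var start = process.hrtime()
  return Promise.resolve(fn()).then(result => {
    var diff = process.hrtime(start)
    var ms = diff[0] * 1e3 + diff[1] / 1e6 // seconds + nanoseconds -> ms
    return {result, ms}
  })
}

// Hypothetical usage:
// time(() => newFindFilePaths3('.')).then(({ms}) => console.log(ms.toFixed(1), 'ms'))
```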

Function                              Execution Time (11 files, 2 dirs)
oldFindFilePaths (callback hell)       1.8 ms
newFindFilePaths (while loop)         12.1 ms
newFindFilePaths2 (map/reduce)        13.3 ms
newFindFilePaths3 (final map/reduce)  13.4 ms

Darn! The old function appears to be the most performant with a small number of files.  The difference between my last two iterations is negligible, which makes sense because they are really the same thing, just refactored slightly.

But what happens when there are more files and subdirectories?

Function                              11 files, 2 dirs   1300 files, 200 dirs   10800 files, 2400 dirs
oldFindFilePaths (callback hell)       1.8 ms            41.8 ms                269.6 ms
newFindFilePaths (while loop)         12.1 ms            36.9 ms                182.6 ms
newFindFilePaths2 (map/reduce)        13.3 ms            60.8 ms                413.8 ms
newFindFilePaths3 (final map/reduce)  13.4 ms            61.5 ms                416.1 ms

Interesting!  The synchronous while loop version started beating all the others once the file count reached the thousands, spread over hundreds of subdirectories.

Conclusion

I think I will probably end up going with the while loop function because it is the simplest and has better performance at scale.  And in the end, I mainly just wanted something with a simple API that I could hide behind a promise.

My theory for its superior performance on large directories is that the synchronous file system queries act as a kind of naive throttle: they stop the VM from making thousands of concurrent function calls and file system queries, which would bog it down.  That’s just my intuition, though.
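
If that theory holds, capping the number of in-flight fs calls, rather than going fully synchronous, should claw back most of the difference. A naive limiter might look like this; it is a hypothetical sketch I have not benchmarked against the numbers above:

```javascript
// Hypothetical concurrency limiter: maps over items with at most
// `limit` tasks in flight at once, preserving result order.
function mapLimited (items, limit, mapFn) {
  var results = []
  var index = 0

  function next () {
    if (index >= items.length) return Promise.resolve()
    var i = index++
    return Promise.resolve(mapFn(items[i])).then(result => {
      results[i] = result
      return next() // pick up the next item once this one finishes
    })
  }

  var workers = []
  for (var n = 0; n < Math.min(limit, items.length); n++) workers.push(next())
  return Promise.all(workers).then(() => results)
}

// Hypothetical usage, limiting stat calls to 10 at a time:
// mapLimited(files, 10, getStatMapFn(dir)).then(results => { /* ... */ })
```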
