Headless Chrome

Google Chrome version 59 will ship with a headless option. This means you can test your web applications with Chrome without needing a running window manager or Xvfb.

There is one problem: the latest chromedriver (version 2.29) does not support versions of Chrome higher than 58.

The solution is to build the latest chromedriver that does support the latest chrome/chromium. Unfortunately, Google does not make nightly builds of chromedriver public. You have to download the chromium source and build chromedriver yourself.

UPDATE: It turns out that chromedriver still relies on XOrg for keycode decoding when handling sendKeys commands from selenium. Thus, you still need xvfb for real testing to work.

How all the pieces work together

In my case, I am running the ruby version of cucumber. Cucumber uses the capybara gem to send commands to selenium-webdriver. Selenium-webdriver in turn starts up your local copy of chromedriver, which then starts up chrome and controls the browser through a special debug port.

This means our system has to have a copy of the latest chromium installed, along with our custom chromedriver build.

To get the latest chromium, I used a tool on github that downloads the latest snapshot compiled for linux.

To get the latest chromedriver, I followed the build instructions but ran it all in a docker container in order to maintain a clean building environment. I then copied the built chromedriver and its shared libraries out of the container.

Once I had the built chrome and chromedriver, I was ready to run selenium/capybara tests against it.

Configuring the tests

When configuring capybara, you need to tell selenium-webdriver the path to your custom chromium binary and send the --headless flag, along with other flags you’ll likely need in a CI build node environment.

For running in docker:

Capybara.register_driver :headless_chromium do |app|
  caps = Selenium::WebDriver::Remote::Capabilities.chrome(
    "chromeOptions" => {
      'binary' => "/chromium-latest-linux/466395/chrome-linux/chrome",
      'args' => %w{headless no-sandbox disable-gpu}
    }
  )

  driver = Capybara::Selenium::Driver.new(
    app,
    browser: :chrome,
    desired_capabilities: caps
  )
end

Capybara.default_driver = :headless_chromium

Now, capybara will drive a headless chromium!

I’ve created a repo that lets you test this all out.


Accidental Continuous Delivery and Docker

When I started at Faraday in the summer of 2014, the company had begun a transition into a SaaS product company. Several individual apps had grown into web services and queue workers. One of my first goals was to set up a Continuous Integration system to run the tests for each individual project.

Because many of our services relied on system libraries for GIS work, and most of the tests were tightly coupled to a single postgres database, a lot of hosted CI solutions lost their appeal. Our services were written in varying versions of ruby and node, and as the team grew to include programmers from other specialties (e.g. data scientists using python or R), I realized that self-hosting a CI server configured to run all these different languages and platforms would become a mess, especially when configuring deployments of our CI system.

But, I thought, what if the CI server could just be a web server and a bunch of dumb agents, and the projects themselves could define the platform they ran on? What if each test could run in its own sandbox with any external services linked in as needed? This idea is what initially attracted me to docker.

First steps

In late 2014 I set up a Jenkins server to do this very thing. One by one, I converted each project to define a CI-specific Dockerfile. This mainly involved making each app 12-Factor compatible, configured by environment variables.
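Making an app 12-Factor compatible mostly came down to reading every piece of configuration from environment variables. A minimal sketch of the pattern (the variable names and defaults are illustrative, not from any of our actual services):

```ruby
# A 12-factor service reads all of its configuration from the environment,
# with development-friendly fallbacks. These names are hypothetical examples.
DATABASE_URL = ENV.fetch('DATABASE_URL', 'postgres://localhost/myapp_development')
REDIS_URL    = ENV.fetch('REDIS_URL',    'redis://localhost:6379')
```

With this in place, the CI Dockerfile and docker-compose files only need to set environment variables; no per-environment config files get baked into the image.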

I initially waded through the various Jenkins plugins that dealt with docker, but many of them targeted strange use cases I wasn’t considering yet, and none of them involved simply building images and running them on a docker host. As I looked around at other CI servers, I ran into GoCD. GoCD’s pipeline workflows really attracted me, as I recognized that Jenkins was pretty ill-equipped to handle the multiple stages needed for building and running test containers along with backing services. GoCD also relies less on plugins and more on per-project shell scripts for job setup. Since I would be doing a lot of manual service startup and docker commands anyway, I decided to switch to GoCD.

Continuous Delivery is a Thing

One thing I learned while using GoCD is that it is built quite specifically to support a pipelined Continuous Delivery workflow. Continuous Delivery was a concept I hadn’t been aware of, but I quickly grew to appreciate and adopt it. My coworker, Eric, was a big help in bringing CD ideas to fruition and pointed me to the excellent book on Continuous Delivery.

Some of the biggest changes and challenges in adopting CD were:

Making small changes directly to the mainline. It’s a major change in mindset to stop making huge branches on each service and instead make small, backwards-compatible changes to individual services, deprecating old functionality as older services are upgraded. This also meant I no longer needed to hammer GoCD into testing PRs and branches. Testing master is enough.

Treating CD build errors as “andon” events. That is, when tests break, it’s everyone’s immediate job to get the CD pipelines fixed. It’s interesting that the CD pipeline effectively becomes your “car factory” in the Toyota Production System/Lean Software sense. If there’s a problem in the pipeline, it is a blocker that prevents your team from pushing new software.

The docker/CD combination is pretty instrumental in the idea of “immutable infrastructure.” The CD system contains your entire app’s cloud configuration and you can deploy any version of your system (including the specific combinations of service versions) at any time.

From Test Images to Production

While our work on GoCD started as a way to test services, my team decided to move each service to run in docker in production. Our initial solution was to use docker-machine to spin up instances and docker-compose to define and run our cluster. This has worked exceedingly well, although it is not a zero-downtime or rapidly scalable solution. Luckily, services like Amazon ECS are providing easier ways to deploy dockerized apps. We provide a docker-compose.yml configuration and they mostly do the rest.

Development Environments

When I joined the team, all of the services were managed by some shell scripts sitting in a repo called “cage.” These scripts checked out branches on each service and used foreman to start the whole app cluster locally. As we migrated to docker, this cage tool became an app all its own.

One of the most difficult things about docker, I think, is using it in development. There are several tradeoffs involving how to install gems as you develop and making the service available for local testing/browsing.

I wasn’t attracted to the idea of rebuilding docker images and restarting the cluster each time I made a change. Instead, I came up with a way to run each app where local code changes appear immediately and libraries are shared in order to shorten image build/startup time. Initially I wrote separate Dockerfiles for dev, test, CI, and production environments, but later found that a single Dockerfile for all environments is sufficient.

Here’s an example docker-compose.yml and Dockerfile for a Rails app:

# docker-compose.yml
myservice:
  build: .
  volumes:
    - myservice:/app
    - /var/lib/docker/gems:/usr/local/bundle
  environment:
    RAILS_ENV: development

myservicetest:
  build: .
  volumes:
    - myservice:/app
    - /var/lib/docker/gems:/usr/local/bundle
  environment:
    RAILS_ENV: test

# Dockerfile
FROM ruby:2.1.5

WORKDIR /app

ADD Gemfile /app/Gemfile
ADD Gemfile.lock /app/Gemfile.lock

RUN bundle

ADD . /app

CMD rails s -p 5000

The ADD Gemfile / ADD Gemfile.lock steps followed by RUN bundle allow us to make changes to the app code without forcing a full bundle install. Only a change to Gemfile or Gemfile.lock invalidates the cached bundle layer and triggers a bundle.

When interactively writing tests, all that’s necessary is to run them with docker-compose run myservicetest rspec. Our internal cage app makes this more succinct, but that is essentially what happens under the hood. Running docker-compose up -d myservice starts a running rails app that picks up local code changes.

While there have been, and still are, some bumps in the road in running a fully dockerized development environment, this approach adds several benefits. We can run every service at a CD-tested and approved version, so we don’t have to go into each service repo and check out the right branch or commit when running the cluster locally. We also have a tool that modifies the docker-compose.yml to use specific versions of each service (via docker image tags) derived from GoCD’s last good build. This lets a developer work on their specific service while resting assured that every other service is at “last known good.” It also lets us select specific release combinations to debug locally.
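Our pinning tool isn’t public, but its core job is easy to sketch: rewrite each service entry in docker-compose.yml to point at a tagged image. Everything below (the helper name, the registry, and the tag format) is hypothetical:

```ruby
require 'yaml'

# Hypothetical sketch: pin services in a docker-compose definition to the
# image tags from the CD server's last good build.
def pin_images(compose, pins)
  compose.each do |name, service|
    service['image'] = pins[name] if pins.key?(name)
  end
  compose
end

compose = YAML.load(<<~YML)
  myservice:
    image: myservice:latest
YML

pinned = pin_images(compose, 'myservice' => 'registry.example.com/myservice:build-123')
```

A developer’s own service can then be left unpinned while everything else in the cluster runs at “last known good.”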

We don’t need to manage the software configuration for every developer’s virtual machine. This used to be the case, where we had to be careful that the correct GIS libs, postgres version, ruby and node versions, etc. were installed on our vagrant VMs. Now, we just spin up a VM with docker-machine and let the containers do their own configurations.

Backing services like postgres, rabbitmq, etc. are easy to spin up for development. No more init scripts or arcane configs to worry about! Integration/acceptance tests can be run in a dedicated cluster. Using docker links, we can run an entire test suite (or even multiple concurrent suites) completely separate from the development cluster, without involving manual cluster management. It’s just a docker-compose up command away.


I feel a bit like Bilbo going on an Unexpected Journey through all of this, but it has been very rewarding. We’ve gone from weekly/monthly releases to at-will releases, usually once or twice a week (deployments still incur downtime, which is our biggest blocker at the moment). I love how the Continuous Delivery philosophy and tooling force us to break our changes into smaller, independent units. While that seems like an impediment, it has made us better able to address critical issues quickly, since we don’t have to wait for huge feature branches to be merged in.


Back to the Browser - a JavaScript Workflow for UNIX Nerds

When Apple announced Mac OS X Lion, their tagline was “Back to the Mac” as they were bringing some features from iOS into the desktop-oriented Mac OS. In the JavaScript world, a similar thing has happened: innovations in the Node.js space can be brought back to the browser. These innovations have made JavaScript development faster and cleaner with command-line tools and the npm packaging system.

As I began writing serious JavaScript libraries and apps, I wanted the same kind of workflow I enjoy when writing Ruby code. I wanted to write my code in vi, run tests on the command line, organize my code into classes and modules, and use versioned packages similar to Ruby gems. At the time, the standard way to write JavaScript was to manage separate files by hand and concatenate them into a single file. One of the few testing frameworks around was Jasmine, which required you to run tests in the browser. Since then, there has been an explosion of command-line packaging and testing frameworks in the Node.js community that lend themselves well to client-side development. What follows is the approach I find most productive.

Here’s a list of the tools that correspond to their Ruby world counterparts:

Task                 Ruby                 JavaScript
Testing              RSpec                vows, buster
Package management   rubygems, bundler    npm, browserify
Code organization    require              CommonJS
Build tools          jeweler, rubygems    browserify

By installing Node.js, you have access to a command-line JavaScript runtime, testing, package management, and application building. Running tests from the command-line allows you to more easily use tools like guard, run focused unit tests, and easily set up continuous integration.


Many JavaScripters run Jasmine in the browser for testing. While it does the job, its syntax is extremely verbose and it breaks the command-line-only workflow. There is a Node.js package for running Jasmine from the command line, but I have found it buggy and not as feature-rich as a typical command-line testing tool. Instead I prefer vows or buster.js. Each supports a simpler “hash” based syntax, as opposed to Jasmine’s verbosity:

    // Jasmine

describe('MyClass', function() {
  describe('#myMethod', function() {
    beforeEach(function() {
      this.instance = new MyClass('foo');
    });

    it('returns true by default', function() {
      expect(this.instance.myMethod()).toBe(true);
    });
    it('returns false sometimes', function() {
      expect(this.instance.myMethod()).toBe(false);
    });
  });
});

    // Vows

var vows = require('vows'),
    assert = require('assert');

vows.describe('MyClass').addBatch({
  '#myMethod': {
    topic: new MyClass('foo'),

    'returns true by default': function(instance) {
      assert.isTrue(instance.myMethod());
    },
    'returns false sometimes': function(instance) {
      assert.isFalse(instance.myMethod());
    }
  }
}).export(module);
Vows and buster can be used just like rspec to run tests from the command line:

> vows test/my-class-test.js
OK >> 22 honored

One advantage that buster has over vows is that it can run its tests both from the command line and from a browser in case you want to run some integration tests in a real browser environment.

For mocks and stubs, you can use the excellent sinon library, which is included by default with buster.js.

Integration testing

In addition to unit testing, it’s always good to run a full integration test. Since every browser has its own quirks, it’s best to run integration tests in each browser. I write cucumber tests using capybara to automatically drive either a “headless” (in-memory) webkit browser with capybara-webkit and/or GUI browsers like Firefox and Chrome with selenium.

In features/support/env.rb you can define which type of browser is used to run the tests by defining custom drivers:

    require 'selenium-webdriver'

    Capybara.register_driver :selenium_chrome do |app|
      Capybara::Selenium::Driver.new app, :browser => :chrome
    end

    Capybara.register_driver :selenium_firefox do |app|
      Capybara::Selenium::Driver.new app, :browser => :firefox
    end

    if ENV['BROWSER'] == 'chrome'
      Capybara.current_driver = :selenium_chrome
    elsif ENV['BROWSER'] == 'firefox'
      Capybara.current_driver = :selenium_firefox
    else
      require 'capybara-webkit'
      Capybara.default_driver = :webkit
    end
Now you can choose your browser with an environment variable: BROWSER=firefox cucumber features

If you are testing an app apart from a framework like Sinatra or Rails, you can use Rack to serve a static page that includes your built app in a <script> tag. For example, you could have an html directory with an index.html file in it:

    <html><head>
      <title>Test App</title>
      <script type="text/javascript" src="application.js"></script>
    </head>
    <body><div id="app"></div></body></html>

When you’re ready to run an integration test, compile your code into application.js using browserify:

> browserify -e lib/main.js -o html/application.js

Then tell cucumber to load your test file as the web app to test:

    # features/support/env.rb
    require 'rack'
    require 'rack/directory'

    Capybara.app = Rack::Builder.new do
      run Rack::Directory.new(File.expand_path('../../../html/', __FILE__))
    end

Once cucumber is set up, you can start writing integration tests just as you would with Rails:

# features/logging_in.feature

Feature: Logging in

Scenario: Successful log-in
  Given I am on the home page
  When I log in as derek
  Then I should see a welcome message
    # features/step_definitions/log_in_steps.rb

    Given %r{I am on the home page} do
      visit '/index.html'
    end

    When %r{I log in as derek} do
      find('#login').click
      fill_in 'username', :with => 'derek'
      fill_in 'password', :with => 'secret'
      find('input[type=submit]').click
    end

    Then %r{I should see a welcome message} do
      page.should have_content('Welcome, derek!')
    end

Package management

One of the joys of Ruby is its package manager, rubygems. With a simple gem install you can add a library to your app. There has been an explosion of JavaScript package managers lately. Each one adds the basic ability to gather all of your libraries and application code, resolve the dependencies, and concatenate them into a single application file. I prefer browserify over all the others for two reasons. First, you can use any Node.js package, which opens you up to many more utilities and libraries than other managers. Second, it uses Node.js’ CommonJS module system, which is a very simple and elegant module system.

In your project’s root, place a package.json file that defines the project’s dependencies:

      "dependencies": {
        "JSONPath": "0.4.2",
        "underscore": "*",
        "jquery": "1.8.1"
      "devDependencies": {
        "browserify": "*",
        "vows": "*"

Run npm install and all of your project’s dependencies will be installed into the node_modules directory. In your project you can then make use of these packages:

    var _ = require('underscore'),
        jsonpath = require('JSONPath'),
        myJson = "...";

    _.each(jsonpath(myJson, '$.books'), function(book) {
      // do something with each book
    });

If you’re looking for packages for a certain task, simply run npm search <whatever> to find packages related to your search terms. Packages specifically meant for client-side apps are often tagged with “browser,” so you can include “browser” in your search terms to limit your results accordingly. Many of the old standbys, like jquery, backbone, spine, and handlebars, are there.

Code organization

As JavaScript applications get more complex, it becomes prudent to split your code into separate modules, usually placed in separate files. In the Ruby world, this was easily done by require-ing each file. Node.js introduced many people (including me) to the CommonJS module system. It’s a simple and elegant way to modularize your code and allows you to separate each module into its own file. Browserify allows you to write your code in the CommonJS style and it will roll all of your code up into a single file appropriate for the browser.

Ruby structure

For example, my Ruby project may look like:

    lib/
      my_library.rb
      my_library/
        book.rb
    spec/
      helper.rb
      my_library/
        book_spec.rb
Where lib/my_library.rb looks like:

    require 'my_library/book'

    class MyLibrary
      def initialize(foo)
        @book = Book.parse(foo)
      end
    end

And lib/my_library/book.rb looks like:

    require 'jsonpath'

    class MyLibrary
      class Book
        def self.parse(foo)
          JSONPath.eval(foo, '$.store.book[0]')
        end
      end
    end

And spec/my_library/book_spec.rb looks like:

    require 'json'
    require 'helper'
    require 'my_library/book'

    describe MyLibrary::Book do
      describe '.parse' do
        it 'parses a book object' do
          json = File.read('support/book.json')
          book = MyLibrary::Book.parse(JSON.parse(json))
          book.title.should == "Breakfast at Tiffany's"
        end
      end
    end
JavaScript structure

A JavaScript project would look similar:

    lib/
      my-library.js
      my-library/
        book.js
    test/
      helper.js
      my-library/
        book-test.js
Where lib/my-library.js looks like:

    var Book = require('./my-library/book');

    var MyLibrary = function(foo) {
      this.book = Book.parse(foo);
    };

    module.exports = MyLibrary;

And lib/my-library/book.js looks like:

    var jsonpath = require('jsonpath');

    var Book = {
      parse: function(foo) {
        return jsonpath(foo, '$.store.book[0]');
      }
    };

    module.exports = Book;

And test/my-library/book-test.js looks like:

    var fs = require('fs'),
        assert = require('assert'),
        vows = require('vows'),
        helper = require('../helper'),
        Book = require('../../lib/my-library/book');
        // NOTE: there are ways to set up your modules
        // to be able to use relative require()s but
        // it is beyond the scope of this article

    vows.describe('Book').addBatch({
      '.parse': {
        'parses a book object': function() {
          var json = fs.readFileSync('support/book.json'),
              book = Book.parse(JSON.parse(json));
          assert.equal(book.title, "Breakfast at Tiffany's");
        }
      }
    }).export(module);

Build tools

Browserify will build concatenated JavaScript files when you’re ready to deploy your code on a website or as a general-purpose library. Its usage is simple:

> browserify -e <main_application_startup_code> -o <path_to_built_file>

Building a library

If we were building the library in the section above, we could run browserify -e lib/my-library.js -o build/my-library.js. Then any user of your library can pull it in with the require function:

    <script type="text/javascript" src="jquery.js"></script>
    <script type="text/javascript" src="my-library.js"></script>
    <script type="text/javascript">
      var myLibrary = require('my-library');
      $.ajax('/lib.json', function(data) {

You can also save the library user some time with a custom entry point for browsers:

    // in /browser.js
    window.MyLibrary = require('my-library');

Then run `browserify -e browser.js -o build/my-library.js`.

And the library user would use it thusly:

    <script type="text/javascript" src="jquery.js"></script>
    <script type="text/javascript" src="my-library.js"></script>
    <script type="text/javascript">
      $.ajax('/lib.json', function(data) {

Building a web app

A spine app might look something like:

    // in app/main.js
    var $ = require('jquery'),
        Spine = require('spine');

    Spine.$ = $;

    var MainController = require('./controllers/main-controller');

    var ApplicationController = Spine.Controller.sub({
      init: function() {
        var main = new MainController();
        this.routes({
          '/': function() { main.active(); }
        });
      }
    });

    new ApplicationController();
    Spine.Route.setup({ history: true });

It would be built with browserify -e app/main.js -o build/application.js and the application.js added to your website with a <script> tag.

You can extend browserify with plugins like templatify, which precompiles HTML/Handlebar templates into your app.

Together, npm packages, command-line testing and build tools, and modular code organization help you quickly build non-trivial JavaScript libraries and applications just as easily as in Ruby land. I’ve developed several in-production projects using this workflow, such as our CM1 JavaScript client library, our flight search browser plugin, and hootroot.com.



Graphite - Beyond the Basics

The Graphite and statsd systems have been popular choices lately for recording system statistics, but there isn’t much written beyond how to get the basic system set up. Here are a few tips that will make your life easier.

Graphite + statsd - the rundown

The graphite and statsd system consists of three main applications:

  • carbon: a service that receives and stores statistics
  • statsd: a node server that provides an easier and more performant, UDP-based protocol for receiving stats which are passed off to carbon
  • graphite: a web app that creates graphs out of the statistics recorded by carbon
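The statsd wire format is tiny: each stat is a single UDP datagram of the form name:value|type (c for counters, ms for timings, g for gauges). A minimal hand-rolled sketch; the host, port, and stat name are only the conventional defaults, not anything this setup requires:

```ruby
require 'socket'

# Build a statsd datagram: "name:value|type".
def statsd_packet(name, value, type)
  "#{name}:#{value}|#{type}"
end

# Fire-and-forget over UDP; statsd conventionally listens on port 8125.
sock = UDPSocket.new
sock.send(statsd_packet('my_app.home-page.hits', 1, 'c'), 0, 'localhost', 8125)
```

Because it’s UDP, the app never blocks or errors when statsd is down, which is why it’s safe to instrument hot code paths.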

Use graphiti

Several alternative front-ends to graphite have been written. I chose to use graphiti because it had the most customizable graphs. Note that graphiti is just a facade on top of graphite - you still need the graphite web app running for it to work. Graphiti makes it easy to quickly create graphs. I’ll cover this later.

The flow looks like:

|App| ==[UDP]==> |statsd| ==> |carbon| ==> |.wsp file|

|.wsp file| ==> |graphite| ==> |graphiti| ==> |pretty graphs on your dashboard|

Use chef-solo to install it

If you’re familiar with chef, you can use the cookbooks that the community has already developed for installing graphite and friends. If not, this would be a good opportunity to learn. You can use chef-solo to easily deploy graphite to a single server. I plan to write a “getting started with chef-solo” post soon, so stay tuned!

Chef saved me a ton of time setting up python, virtualenv, graphite, carbon, whisper, statsd, and many other tools since there are no OS-specific packages for some of these.

Use sensible storage schemas

The default chef setup of graphite stores all stats with the following storage schema rule:

priority = 0
pattern = ^.*
retentions = 60:100800,900:63000

The retentions setting is the most important. It’s a comma-delimited list of data resolutions and amounts.

  • The number before the colon is the size of the bucket that holds data in seconds. A value of 60 means that 60 seconds worth of data is grouped together in the bucket. A larger number means the data is less granular, but more space efficient.
  • The number after the colon is the number of data buckets to store at that granularity. 100800 buckets will cover (100800 * 60 seconds) = 70 days of data. At roughly 12 bytes per data point, that’s (100800 * 12 bytes) ≈ 1.2MiB of space for those 70 days. A bigger number means more disk space and longer seek times.

Alternatively, you can specify retentions using time format shortcuts. For example, 1m:7d means “store 7 days worth of 1-minute granular data.”
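These retention rules are easy to sanity-check with a few lines of arithmetic. The sketch below assumes the commonly cited figure of roughly 12 bytes per whisper data point (treat it as an approximation):

```ruby
POINT_BYTES = 12  # approximate size of one whisper data point

# Rough cost of one retention rule, "seconds_per_point:points".
def retention_cost(seconds_per_point, points)
  {
    coverage_days: seconds_per_point * points / 86_400.0,
    size_mib: points * POINT_BYTES / (1024.0 * 1024)
  }
end

retention_cost(60, 100_800)   # the 60:100800 rule: ~70 days, ~1.2MiB
retention_cost(900, 63_000)   # the 900:63000 rule: ~656 days of 15-minute data
```

Remember that every distinct stat name gets its own whisper file at this size, which is how disk usage creeps up on you.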

Use a good stats client

In the ruby world, there are two popular client libraries: fozzie and statsd-ruby. Both provide the standard operations like counting events, timing, and gauging values.

Fozzie differs in that it integrates with Rails or rack apps by adding a rack middleware that automatically tracks timing statistics for every path in your web app. This can save time, but it also has the downside of sending too much noise to your statsd server and can cause excessive disk space consumption unless you implement tight storage schema rules. It also adds a deep hierarchy of namespaces based on the client machine name, app name, and current environment. This can be an issue on heroku web apps where the machine name changes frequently.

If you want more control over your namespacing, statsd-ruby is the way to go. Otherwise, fozzie may be worth using for its added conveniences.

Make sure you don’t run out of disk space

Seriously, if you do run out of disk, the graphite (whisper) data files can become corrupted and force you to delete them and start over. I learned this the hard way :) Make sure your storage schemas are strict enough because each separate stat requires its own file that can be several megabytes in size.

Use graphiti for building graphs and dashboards

Graphiti has a great interface for building graphs. You can even fork it and deploy your own custom version that fits your company’s needs and/or style. It’s a small rack app that uses redis to store graph and dashboard settings. There’s even a chef cookbook for it!

When setting up graphiti, remember to set up a cron job to run rake graphiti:metrics periodically so that you can search for metric namespaces from graphiti.

Use graphite’s built-in functions for summarizing and calculating data

Graphite provides a wealth of functions that run aggregate operations on data before it is graphed.

For example, let’s say we’re tracking hit counts on our app’s home page. We’re using several web servers for load balancing and our stats data is namespaced by server under stats.my_app.server-a.production.home-page.hits and stats.my_app.server-b.production.home-page.hits. If we told graphite to graph results for stats.my_app.*.production.home-page.hits we would get two graph lines – one for server-a and one for server-b. To combine them into a single measurement, use the sumSeries function. You can then use the alias function to give it a friendlier display name like “Home page.”
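Outside graphiti, the same combination is written as nested graphite functions in the target string; something like:

```
alias(sumSeries(stats.my_app.*.production.home-page.hits), "Home page")
```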

Graphiti has a peculiar way of specifying which function to use. In a normal series list, you have the following structure:

"targets": [

The {} is an object used to specify the list of functions to apply, in order, on the series specified in the parent array. Each graphite function is specified as a key and its parameters as the value. A true value indicates the function needs no parameters and an array is provided if the function requires multiple parameters.

You’ll notice in the function documentation that each function usually takes two initial arguments, a context and a series name. In graphiti, you won’t need to specify those first two arguments.

Here’s an example of sumSeries and alias used together. Note that the order matters!

"targets": [
      "sumSeries": true,
      "alias": "Homepage hits"

Different graph areaMode for different applications

While not well documented, graphite has a few areaMode options for displaying graph lines. The “stacked” area mode stacks each measurement on top of the others into an area chart with uniquely shaded bands, which is good for seeing grand totals. Leaving areaMode blank plots each measurement as a line on a line chart, which is preferable for comparing measurements.

Different metrics for different events

Each stats recording method provided by statsd-ruby and fozzie has different behavior, which isn’t well documented anywhere.

  • Stats.count is the base method for sending a count of some event for a given instance. It’s rarely used alone.
  • Stats.increment and Stats.decrement will adjust a count of an event. It’s useful for counting things like number of hits on a page, number of times an activity occurs, etc. It will be graphed as “average number of events per second”. So if your web app runs Stats.increment 'hits' 8 times over a 1 second period, the graph will draw a value of 8 for that second. Sometimes you will see fractional numbers charted. This is because graphite may average the data over a time period based on your schema storage settings and charting resolution.
  • Stats.timing will take a block and store the amount of time the code in the block took to execute. It also keeps track of average, min, and max times, as well as standard deviation and total number of occurrences.
  • Stats.gauge tracks absolute values over time. This is useful for tracking measurements like CPU, memory, and disk usage.
  • Fozzie provides Stats.event 'party' to track when an event happens. This is useful for tracking things like deploys or restarts. Equivalent functionality can be obtained in statsd-ruby by running Stats.count 'party', Time.now.to_i.
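The fractional values mentioned for increment come from averaging; the arithmetic can be sketched with made-up numbers (illustrative only, not real statsd output):

```ruby
# Per-second counts recorded for a stat: 8 hits in second one, then quieter.
hits_per_second = [8, 0, 4, 0]

# When the chart draws one point per 4 seconds, graphite can average the
# four 1-second buckets, producing a fractional-looking value on the graph.
averaged = hits_per_second.sum / hits_per_second.size.to_f
```

Here the charted point would be 3.0 even though every recorded second held a whole number of hits.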

Bonus tip: Graphs on your Mac dashboard

If you’re using a mac, you can add your favorite graphs to your dashboard. Create a graph in graphiti, then view it on the graphiti dashboard with Safari. Click File->Open in Dashboard… and select the graph image with the select box. Now, you can quickly see important graphs at the press of a button!

Overall, statsd is a great tool and can add great visibility into your applications.


Add node.js/CommonJS style require() to client-side JavaScript with browserify

While working on Careplane recently, I ran into a problem that was made easier by JSONPath. I was already enjoying the ability to test my JavaScript with Node.js, Jasmine, and other npm packages. It would be nice if I could get the same package system I have with npm and use it with my client-side JavaScript.

With browserify, now I can!

There are three major benefits to using browserify, which I’ll detail below.

Using npm Modules in Your Client-Side Scripts

In Careplane, I wanted to use the JSONPath npm package. With browserify, I can write the following into my client-side code:

/* clientside.js */
var jsonpath = require('JSONPath');
var $ = require('jquery-browserify');

var owned_data = jsonpath.eval(myData, '$.paths[*].to.data[?(@.owner == "Derek")]');

From the command-line, I can create a single JavaScript file (I’ll call it application.js) that contains my code and includes the JSONPath package. I can even bake jQuery into application.js!

> npm install browserify
> npm install JSONPath
> npm install jquery-browserify
> ./node_modules/.bin/browserify -r JSONPath -r jquery-browserify -e clientside.js -o application.js

Now that I have a nicely wrapped application.js, I can include it from a web page, or in this case, the browser plugin I’m developing.

Here’s an example of use from a web page:

<script type="text/javascript" src="application.js"></script>

N.B. Not all npm packages will work client-side, especially ones that depend on core Node.js modules like fs or http.

Organize Your Code

Careplane has a ton of JavaScript split into a separate file for each class. Previously, I was using simple concatenation and a hand-crafted file list to construct an application.js and to determine script load order. Now it can be managed automatically by browserify because each of my class files can require other class files.

For instance, the Kayak class in src/drivers/Kayak.js depends on the Driver class in src/Driver.js. I can now do this:

/* src/Driver.js */

var Driver = function() {};

/* ... */

module.exports = Driver;
/* src/Kayak.js */
var Driver = require('../Driver')

var Kayak = function(foo) { this.foo = foo; };
Kayak.prototype = new Driver();

/* ... */

module.exports = Kayak;
/* src/main.js */

var Kayak = require('./drivers/Kayak'); // paths are relative to src/, where main.js lives

> ./node_modules/.bin/browserify -e src/main.js -o application.js

Super simple!

Uniformity With Testing Environment

I like to run my Jasmine JavaScript tests from the command-line and so does our Continuous Integration system. With jasmine-node, I had to maintain a list of files in my src directory that were loaded in a certain order in order to run my tests. Now, each spec file can use Node.js’ CommonJS require statement to require files from my src directory, and all dependencies are automatically managed.

/* spec/javascripts/drivers/KayakSpec.js */

describe('Kayak', function() {
  var Kayak = require('src/drivers/Kayak');
  /* ... */
});
> rake examples[KayakSpec]
Starting tests

When I run my specs from the command-line with Node.js (I have an examples rake task that runs node and jasmine), Node’s built-in CommonJS require is used and it recognizes the require statements I wrote into src/drivers/Kayak.js.

I can also run my Jasmine specs from the standard Jasmine web server. All I have to do is create a browserified script that is included before Jasmine runs its tests in my browser.

/* src/jasmine.js */
var Kayak = require('./drivers/Kayak');  // browserify needs a starting point to resolve all dependencies
# spec/javascripts/support/jasmine.yml
src_files:
  - spec/javascripts/support/application.js
> ./node_modules/.bin/browserify -e src/jasmine.js -o spec/javascripts/support/application.js
> rake jasmine:server
your tests are here: http://localhost:8888/
[2011-08-04 16:15:56] INFO  WEBrick 1.3.1
[2011-08-04 16:15:56] INFO  ruby 1.9.2 (2011-02-18) [x86_64-darwin10.4.0]
[2011-08-04 16:15:56] INFO  WEBrick::HTTPServer#start: pid=61833 port=8888

When I visit http://localhost:8888 in my browser, my Jasmine tests all run!

The only drawback is that my code has to be re-browserified whenever it changes. This only adds about a second of overhead while I wait for my watchr script to run the browserification.


If you’d like to take a look at the environment I set up for Careplane, you can check out the source.

I hope you enjoy browserify as much as I have!