Browserify handbook



This document covers how to use browserify to build modular applications.

browserify is a tool for compiling node-flavored commonjs modules for the browser.

You can use browserify to organize your code and use third-party libraries even if you don’t use node itself in any other capacity except for bundling and installing packages with npm.

The module system that browserify uses is the same as node, so packages published to npm that were originally intended for use in node but not browsers will work just fine in the browser too.

Increasingly, people are publishing modules to npm which are intentionally designed to work in both node and in the browser using browserify and many packages on npm are intended for use in just the browser. npm is for all javascript, front or backend alike.

node packaged manuscript

You can install this handbook with npm, appropriately enough. Just do:

npm install -g browserify-handbook

Now you will have a browserify-handbook command that will open this readme file in your $PAGER. Otherwise, you may continue reading this document as you are presently doing.

node packaged modules

Before we can dive too deeply into how to use browserify and how it works, it is important to first understand how the node-flavored version of the commonjs module system works.


In node, there is a require() function for loading code from other files.

If you install a module with npm:

npm install uniq

Then in a file nums.js we can require('uniq'):

var uniq = require('uniq');
var nums = [ 5, 2, 1, 3, 2, 5, 4, 2, 0, 1 ];
console.log(uniq(nums));

The output of this program when run with node is:

$ node nums.js
[ 0, 1, 2, 3, 4, 5 ]

You can require relative files by requiring a string that starts with a "." character. For example, to load a file foo.js from main.js, in main.js you can do:

var foo = require('./foo.js');

If foo.js was in the parent directory, you could use ../foo.js instead:

var foo = require('../foo.js');

or likewise for any other kind of relative path. Relative paths are always resolved with respect to the invoking file’s location.

Note that require() returned a function and we assigned that return value to a variable called uniq. We could have picked any other name and it would have worked the same. require() returns the exports of the module name that you specify.

How require() works is unlike many other module systems where imports are akin to statements that expose themselves as globals or file-local lexicals with names declared in the module itself outside of your control. Under the node style of code import with require(), someone reading your program can easily tell where each piece of functionality came from. This approach scales much better as the number of modules in an application grows.


To export a single thing from a file so that other files may import it, assign over the value at module.exports:

module.exports = function (n) {
    return n * 111
};

Now when some module main.js loads your foo.js, the return value of require('./foo.js') will be the exported function:

var foo = require('./foo.js');
console.log(foo(5));

This program will print:

555

You can export any kind of value with module.exports, not just functions.

For example, this is perfectly fine:

module.exports = 555

and so is this:

var numbers = [];
for (var i = 0; i < 100; i++) numbers.push(i);

module.exports = numbers;

There is another form of doing exports specifically for exporting items onto an object. Here, exports is used instead of module.exports:

exports.beep = function (n) { return n * 1000 }
exports.boop = 555

This program is the same as:

module.exports.beep = function (n) { return n * 1000 }
module.exports.boop = 555

because module.exports is the same as exports and is initially set to an empty object.

Note however that you can’t do:

// this doesn't work
exports = function (n) { return n * 1000 }

because the export value lives on the module object, and so assigning a new value for exports instead of module.exports masks the original reference.

Instead if you are going to export a single item, always do:

// instead
module.exports = function (n) { return n * 1000 }

If you’re still confused, try to understand how modules work in the background:

var module = {
  exports: {}
};

// If you require a module, it's basically wrapped in a function
(function(module, exports) {
  exports = function (n) { return n * 1000 };
}(module, module.exports))

console.log(module.exports); // it's still an empty object :(

Most of the time, you will want to export a single function or constructor with module.exports because it’s usually best for a module to do one thing.

The exports feature was originally the primary way of exporting functionality and module.exports was an afterthought, but module.exports proved to be much more useful in practice at being more direct, clear, and avoiding duplication.

In the early days, this style used to be much more common:

foo.js:

exports.foo = function (n) { return n * 111 }

main.js:

var foo = require('./foo.js');
console.log(foo.foo(5));

but note that the foo.foo is a bit superfluous. Using module.exports it becomes more clear:

foo.js:

module.exports = function (n) { return n * 111 }

main.js:

var foo = require('./foo.js');
console.log(foo(5));

bundling for the browser

To run a module in node, you’ve got to start from somewhere.

In node you pass a file to the node command to run a file:

$ node robot.js
beep boop

In browserify, you do this same thing, but instead of running the file, you generate a stream of concatenated javascript files on stdout that you can write to a file with the > operator:

$ browserify robot.js > bundle.js

Now bundle.js contains all the javascript that robot.js needs to work. Just plop it into a single script tag in some html:

    <script src="bundle.js"></script>

Bonus: if you put your script tag right before the </body>, you can use all of the dom elements on the page without waiting for a dom onready event.

There are many more things you can do with bundling. Check out the bundling section elsewhere in this document.

how browserify works

Browserify starts at the entry point files that you give it and searches for any require() calls it finds using static analysis of the source code’s abstract syntax tree.

For every require() call with a string in it, browserify resolves those module strings to file paths and then searches those file paths for require() calls recursively until the entire dependency graph is visited.

Each file is concatenated into a single javascript file with a minimal require() definition that maps the statically-resolved names to internal IDs.

This means that the bundle you generate is completely self-contained and has everything your application needs to work with a pretty negligible overhead.

For more details about how browserify works, check out the compiler pipeline section of this document.

how node_modules works

node has a clever algorithm for resolving modules that is unique among rival platforms.

Instead of resolving packages from an array of system search paths like how $PATH works on the command line, node’s mechanism is local by default.

If you require('./foo.js') from /beep/boop/bar.js, node will look for ./foo.js in /beep/boop/foo.js. Paths that start with a ./ or ../ are always local to the file that calls require().

If however you require a non-relative name such as require('xyz') from /beep/boop/foo.js, node searches these paths in order, stopping at the first match and raising an error if nothing is found:

/beep/boop/node_modules/xyz
/beep/node_modules/xyz
/node_modules/xyz

For each xyz directory that exists, node will first look for a xyz/package.json to see if a "main" field exists. The "main" field defines which file should take charge if you require() the directory path.

For example, if /beep/node_modules/xyz is the first match and /beep/node_modules/xyz/package.json has:

{
  "name": "xyz",
  "version": "1.2.3",
  "main": "lib/abc.js"
}
then the exports from /beep/node_modules/xyz/lib/abc.js will be returned by require('xyz').

If there is no package.json or no "main" field, index.js is assumed:

/beep/node_modules/xyz/index.js


If you need to, you can reach into a package to pick out a particular file. For example, to load the lib/clone.js file from the dat package, just do:

var clone = require('dat/lib/clone.js')

The recursive node_modules resolution will find the first dat package up the directory hierarchy, then the lib/clone.js file will be resolved from there. This require('dat/lib/clone.js') approach will work from any location where you can require('dat').

node also has a mechanism for searching an array of paths, but this mechanism is deprecated and you should be using node_modules/ unless you have a very good reason not to.

The great thing about node’s algorithm and how npm installs packages is that you can never have a version conflict, unlike most every other platform. npm installs the dependencies of each package into node_modules.

Each library gets its own local node_modules/ directory where its dependencies are stored and each dependency’s dependencies has its own node_modules/ directory, recursively all the way down.

This means that packages can successfully use different versions of libraries in the same application, which greatly decreases the coordination overhead necessary to iterate on APIs. This feature is very important for an ecosystem like npm where there is no central authority to manage how packages are published and organized. Everyone may simply publish as they see fit and not worry about how their dependency version choices might impact other dependencies included in the same application.

You can leverage how node_modules/ works to organize your own local application modules too. See the avoiding ../../../../../../.. section for more.



development

Concatenation has some downsides, but these can be very adequately addressed with development tooling.

source maps

Browserify supports a --debug/-d flag and opts.debug parameter to enable source maps. Source maps tell the browser to convert line and column offsets for exceptions thrown in the bundle file back into the offsets and filenames of the original sources.

The source maps include all the original file contents inline so that you can simply put the bundle file on a web server and not need to ensure that all the original source contents are accessible from the web server with paths set up correctly.


The downside of inlining all the source files into the inline source map is that the bundle is twice as large. This is fine for debugging locally but not practical for shipping source maps to production. However, you can use exorcist to pull the inline source map out into a separate file:

browserify main.js --debug | exorcist bundle.js.map > bundle.js


Running a command to recompile your bundle every time can be slow and tedious. Luckily there are many tools to solve this problem. Some of these tools support live-reloading to various degrees and others have a more traditional manual refresh cycle.

These are just a few of the tools you can use, but there are many more on npm! There are many different tools here that encompass many different tradeoffs and development styles. It can be a little bit more work up-front to find the tools that resonate most strongly with your own personal expectations and experience, but I think this diversity helps programmers to be more effective and provides more room for creativity and experimentation. I think diversity in tooling and a smaller browserify core is healthier in the medium to long term than picking a few “winners” by including them in browserify core (which creates all kinds of havoc in meaningful versioning and bitrot in core).

That said, here are a few modules you might want to consider for setting up a browserify development workflow. But keep an eye out for other tools not (yet) on this list!


watchify

You can use watchify interchangeably with browserify but instead of writing to an output file once, watchify will write the bundle file and then watch all of the files in your dependency graph for changes. When you modify a file, the new bundle file will be written much more quickly than the first time because of aggressive caching.

You can use -v to print a message every time a new bundle is written:

$ watchify browser.js -d -o static/bundle.js -v
610598 bytes written to static/bundle.js  0.23s
610606 bytes written to static/bundle.js  0.10s
610597 bytes written to static/bundle.js  0.14s
610606 bytes written to static/bundle.js  0.08s
610597 bytes written to static/bundle.js  0.08s
610597 bytes written to static/bundle.js  0.19s

Here is a handy configuration for using watchify and browserify with the package.json “scripts” field:

{
  "scripts": {
    "build": "browserify browser.js -o static/bundle.js",
    "watch": "watchify browser.js -o static/bundle.js --debug --verbose"
  }
}

To build the bundle for production do npm run build and to watch files during development do npm run watch.

Learn more about npm run.


beefy

If you would rather spin up a web server that automatically recompiles your code when you modify it, check out beefy.

Just give beefy an entry file:

beefy main.js

and it will set up shop on an http port.


wzrd

In a similar spirit to beefy but in a more minimal form is wzrd.

Just npm install -g wzrd then you can do:

wzrd app.js

and open up http://localhost:9966 in your browser.

browserify-middleware, enchilada

If you are using express, check out browserify-middleware or enchilada.

They both provide middleware you can drop into an express application for serving browserify bundles.


livereactload

livereactload is a tool for react that automatically updates your web page state when you modify your code.

livereactload is just an ordinary browserify transform that you can load with -t livereactload, but you should consult the project readme for more information.


budo

budo is a browserify development server with a focus on incremental bundling and live reloading, including for css.

First make sure the watchify command is installed along with budo:

npm install -g watchify budo

then tell budo to watch a file and listen on http://localhost:9966

budo app.js

Now every time you update app.js or any other file in your dependency graph, the code will update after a refresh.

or to automatically reload the page live when a file changes, you can do:

budo app.js --live

Check out budo-chrome for a way to configure budo to update the code live without even reloading the page (sometimes called hot reloading).

using the api directly

You can just use the API directly from an ordinary http.createServer() for development too:

var browserify = require('browserify');
var http = require('http');

http.createServer(function (req, res) {
    if (req.url === '/bundle.js') {
        res.setHeader('content-type', 'application/javascript');
        var b = browserify(__dirname + '/main.js').bundle();
        b.on('error', console.error);
        b.pipe(res);
    }
    else res.writeHead(404, 'not found')
}).listen(5000);


grunt

If you use grunt, you’ll probably want to use the grunt-browserify plugin.


gulp

If you use gulp, you should use the browserify API directly.

Here is a guide for getting started with gulp and browserify.

Here is a guide on how to make browserify builds fast with watchify using gulp from the official gulp recipes.


builtins

In order to make more npm modules originally written for node work in the browser, browserify provides many browser-specific implementations of node core libraries:

events, stream, url, path, and querystring are particularly useful in a browser environment.

Additionally, if browserify detects the use of Buffer, process, global, __filename, or __dirname, it will include a browser-appropriate definition.

So even if a module does a lot of buffer and stream operations, it will probably just work in the browser, so long as it doesn’t do any server IO.

If you haven’t done any node before, here are some examples of what each of those globals can do. Note too that these globals are only actually defined when you or some module you depend on uses them.


Buffer

In node all the file and network APIs deal with Buffer chunks. In browserify the Buffer API is provided by buffer, which uses augmented typed arrays in a very performant way with fallbacks for old browsers.

Here’s an example of using Buffer to convert a base64 string to hex:

var buf = Buffer('YmVlcCBib29w', 'base64');
var hex = buf.toString('hex');
console.log(hex);

This example will print:

6265657020626f6f70


process

In node, process is a special object that handles information and control for the running process such as environment, signals, and standard IO streams.

Of particular consequence is the process.nextTick() implementation that interfaces with the event loop.

In browserify the process implementation is handled by the process module which just provides process.nextTick() and little else.

Here’s what process.nextTick() does:

setTimeout(function () {
    console.log('setTimeout');
}, 0);

process.nextTick(function () {
    console.log('nextTick');
});

This script will output:

nextTick
setTimeout

process.nextTick(fn) is like setTimeout(fn, 0), but faster because setTimeout is artificially slower in javascript engines for compatibility reasons.


global

In node, global is the top-level scope where global variables are attached similar to how window works in the browser. In browserify, global is just an alias for the window object.


__filename

__filename is the path to the current file, which is different for each file.

To prevent disclosing system path information, this path is rooted at the opts.basedir that you pass to browserify(), which defaults to the current working directory.

If we have a main.js:

var bar = require('./foo/bar.js');

console.log('here in main.js, __filename is:', __filename);
bar();

and a foo/bar.js:

module.exports = function () {
    console.log('here in foo/bar.js, __filename is:', __filename);
};

then running browserify starting at main.js gives this output:

$ browserify main.js | node
here in main.js, __filename is: /main.js
here in foo/bar.js, __filename is: /foo/bar.js


__dirname

__dirname is the directory of the current file. Like __filename, __dirname is rooted at the opts.basedir.

Here’s an example of how __dirname works:


main.js:

require('./x/y/z/abc.js');
console.log('in main.js __dirname=' + __dirname);

x/y/z/abc.js:

console.log('in abc.js, __dirname=' + __dirname);


$ browserify main.js | node
in abc.js, __dirname=/x/y/z
in main.js __dirname=/


transforms

Instead of browserify baking in support for everything, it supports a flexible transform system that is used to convert source files in-place.

This way you can require() files written in coffee script or templates and everything will be compiled down to javascript.

To use coffeescript for example, you can use the coffeeify transform. Make sure you’ve installed coffeeify first with npm install coffeeify then do:

$ browserify -t coffeeify main.coffee > bundle.js

or with the API you can do:

var b = browserify('main.coffee');
b.transform('coffeeify');

The best part is, if you have source maps enabled with --debug or opts.debug, the bundle.js will map exceptions back into the original coffee script source files. This is very handy for debugging with firebug or chrome inspector.

writing your own

Transforms implement a simple streaming interface. Here is a transform that replaces $CWD with the process.cwd():

var through = require('through2');

module.exports = function (file) {
    return through(function (buf, enc, next) {
        this.push(buf.toString('utf8').replace(/\$CWD/g, process.cwd()));
        next();
    });
};

The transform function fires for every file in the current package and returns a transform stream that performs the conversion. Browserify writes the original file contents into the stream and reads the converted contents back out of it.

Simply save your transform to a file or make a package and then add it with -t ./your_transform.js.

For more information about how streams work, check out the stream handbook.


browser field

You can define a "browser" field in the package.json of any package that will tell browserify to override lookups for the main field and for individual modules.

If you have a module with a main entry point of main.js for node but have a browser-specific entry point at browser.js, you can do:

{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browser": "browser.js"
}

Now when somebody does require('mypkg') in node, they will get the exports from main.js, but when they do require('mypkg') in a browser, they will get the exports from browser.js.

Splitting up whether you are in the browser or not with a "browser" field in this way is greatly preferable to checking whether you are in a browser at runtime because you may want to load different modules based on whether you are in node or the browser. If the require() calls for both node and the browser are in the same file, browserify’s static analysis will include everything whether you use those files or not.

You can do more with the “browser” field as an object instead of a string.

For example, if you only want to swap out a single file in lib/ with a browser-specific version, you could do:

{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browser": {
    "lib/foo.js": "lib/browser-foo.js"
  }
}

or if you want to swap out a module used locally in the package, you can do:

{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browser": {
    "fs": "level-fs-browser"
  }
}

You can ignore files (setting their contents to the empty object) by setting their values in the browser field to false:

{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browser": {
    "winston": false
  }
}

The browser field only applies to the current package. Any mappings you put will not propagate down to its dependencies or up to its dependents. This isolation is designed to protect modules from each other so that when you require a module you won’t need to worry about any system-wide effects it might have. Likewise, you shouldn’t need to worry about how your local configuration might adversely affect modules far away deep into your dependency graph.

browserify.transform field

You can configure transforms to be automatically applied when a module is loaded in a package’s browserify.transform field. For example, we can automatically apply the brfs transform with this package.json:

{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browserify": {
    "transform": [ "brfs" ]
  }
}

Now in our main.js we can do:

var fs = require('fs');
var src = fs.readFileSync(__dirname + '/foo.txt', 'utf8');

module.exports = function (x) { return src.replace(x, 'zzz') };

and the fs.readFileSync() call will be inlined by brfs without consumers of the module having to know. You can apply as many transforms as you like in the transform array and they will be applied in order.

Like the "browser" field, transforms configured in package.json will only apply to the local package for the same reasons.

configuring transforms

Sometimes a transform takes configuration options on the command line. To apply these from package.json you can do the following.

on the command line

browserify -t coffeeify \
           -t [ browserify-ngannotate --ext .coffee ] \
  index.coffee > index.js

in package.json

"browserify": {
  "transform": [
    ["browserify-ngannotate", {"ext": ".coffee"}]
  ]
}

finding good modules

Here are some useful heuristics for finding good modules on npm that work in the browser:

  • I can install it with npm
  • code snippet on the readme using require() – from a quick glance I should see how to integrate the library into what I’m presently working on
  • has a very clear, narrow idea about scope and purpose
  • knows when to delegate to other libraries – doesn’t try to do too many things itself
  • written or maintained by authors whose opinions about software scope, modularity, and interfaces I generally agree with (often a faster shortcut than reading the code/docs very closely)
  • inspecting which modules depend on the library I’m evaluating – this is baked into the package page for modules published to npm

Other metrics like number of stars on github, project activity, or a slick landing page, are not as reliable.

module philosophy

People used to think that exporting a bunch of handy utility-style things would be the main way that programmers would consume code because that is the primary way of exporting and importing code on most other platforms and indeed still persists even on npm.

However, this kitchen-sink mentality toward including a bunch of thematically-related but separable functionality into a single package appears to be an artifact of the difficulty of publishing and discovery in a pre-github, pre-npm era.

There are two other big problems with modules that try to export a bunch of functionality all in one place under the auspices of convenience: demarcation turf wars and finding which modules do what.

Packages that are grab-bags of features waste a ton of time policing boundaries about which new features belong and don’t belong. There is no clear natural boundary of the problem domain in this kind of package about what the scope is, it’s all somebody’s smug opinion.

Node, npm, and browserify are not that. They are avowedly ala-carte, participatory, and would rather celebrate disagreement and the dizzying proliferation of new ideas and approaches than try to clamp down in the name of conformity, standards, or “best practices”.

Nobody who needs to do gaussian blur ever thinks “hmm I guess I’ll start checking generic mathematics, statistics, image processing, and utility libraries to see which one has gaussian blur in it. Was it stats2 or image-pack-utils or maths-extra or maybe underscore has that one?” No. None of this. Stop it. They npm search gaussian and they immediately see ndarray-gaussian-filter and it does exactly what they want and then they continue on with their actual problem instead of getting lost in the weeds of somebody’s neglected grand utility fiefdom.

organizing modules

avoiding ../../../../../../..

Not everything in an application properly belongs on the public npm and the overhead of setting up a private npm or git repo is still rather large in many cases. Here are some approaches for avoiding the ../../../../../../../ relative paths problem.


symlink

The simplest thing you can do is to symlink your app root directory into your node_modules/ directory.

Did you know that symlinks work on windows too?

To link a lib/ directory in your project root into node_modules, do:

ln -s ../lib node_modules/app

and now from anywhere in your project you’ll be able to require files in lib/ by doing require('app/foo.js') to get lib/foo.js.


node_modules

People sometimes object to putting application-specific modules into node_modules because it is not obvious how to check in your internal modules without also checking in third-party modules from npm.

The answer is quite simple! If you have a .gitignore file that ignores node_modules:

node_modules

You can just add an exception with ! for each of your internal application modules:

node_modules/*
!node_modules/foo
!node_modules/bar

Please note that you can’t unignore a subdirectory, if the parent is already ignored. So instead of ignoring node_modules, you have to ignore every directory inside node_modules with thenode_modules/* trick, and then you can add your exceptions.

Now anywhere in your application you will be able to require('foo') or require('bar') without having a very large and fragile relative path.

If you have a lot of modules and want to keep them more separate from the third-party modules installed by npm, you can just put them all under a directory in node_modules such as node_modules/app:

node_modules/app/foo.js
node_modules/app/bar.js

Now you will be able to require('app/foo') or require('app/bar') from anywhere in your application.

In your .gitignore, just add an exception for node_modules/app:

node_modules/*
!node_modules/app

If your application had transforms configured in package.json, you’ll need to create a separate package.json with its own transform field in your node_modules/foo or node_modules/app/foo component directory because transforms don’t apply across module boundaries. This will make your modules more robust against configuration changes in your application and it will be easier to independently reuse the packages outside of your application.

custom paths

You might see some places talk about using the $NODE_PATH environment variable or opts.paths to add directories for node and browserify to look in to find modules.

Unlike most other platforms, using a shell-style array of path directories with $NODE_PATH is not as favorable in node compared to making effective use of the node_modules directory.

This is because your application is more tightly coupled to a runtime environment configuration so there are more moving parts and your application will only work when your environment is setup correctly.

node and browserify both support but discourage the use of $NODE_PATH.

non-javascript assets

There are many browserify transforms you can use to do many things. Commonly, transforms are used to include non-javascript assets into bundle files.


brfs

One way of including any kind of asset that works in both node and the browser is brfs.

brfs uses static analysis to compile the results of fs.readFile() and fs.readFileSync() calls down to source contents at compile time.

For example, this main.js:

var fs = require('fs');
var html = fs.readFileSync(__dirname + '/robot.html', 'utf8');

applied through brfs would become something like:

var fs = require('fs');
var html = "<b>beep boop</b>";

This is handy because you can reuse the exact same code in node and the browser, which makes sharing modules and testing much simpler.

fs.readFile() and fs.readFileSync() accept the same arguments as in node, which makes including inline image assets as base64-encoded strings very easy:

var fs = require('fs');
var imdata = fs.readFileSync(__dirname + '/image.png', 'base64');
var img = document.createElement('img');
img.setAttribute('src', 'data:image/png;base64,' + imdata);

If you have some css you want to inline into your bundle, you can do that too with the assistance of a module such as insert-css:

var fs = require('fs');
var insertStyle = require('insert-css');

var css = fs.readFileSync(__dirname + '/style.css', 'utf8');
insertStyle(css);

Inserting css this way works fine for small reusable modules that you distribute with npm because they are fully-contained, but if you want a more holistic approach to asset management using browserify, check out atomify and parcelify.




reusable components

Putting these ideas about code organization together, we can build a reusable UI component that we can reuse across our application or in other applications.

Here is a bare-bones example of an empty widget module:

module.exports = Widget;

function Widget (opts) {
    if (!(this instanceof Widget)) return new Widget(opts);
    this.element = document.createElement('div');
}

Widget.prototype.appendTo = function (target) {
    if (typeof target === 'string') target = document.querySelector(target);
    target.appendChild(this.element);
};

Handy javascript constructor tip: you can include a this instanceof Widget check like above to let people consume your module with new Widget or Widget(). It’s nice because it hides an implementation detail from your API and you still get the performance benefits and indentation wins of using prototypes.

To use this widget, just use require() to load the widget file, instantiate it, and then call .appendTo() with a css selector string or a dom element.

Like this:

var Widget = require('./widget.js');
var w = Widget();
w.appendTo('#container');

and now your widget will be appended to the DOM.

Creating HTML elements procedurally is fine for very simple content but gets very verbose and unclear for anything bigger. Luckily there are many transforms available to ease importing HTML into your javascript modules.

Let’s extend our widget example using brfs. We can also use domify to turn the string thatfs.readFileSync() returns into an html dom element:

var fs = require('fs');
var domify = require('domify');

var html = fs.readFileSync(__dirname + '/widget.html', 'utf8');

module.exports = Widget;

function Widget (opts) {
    if (!(this instanceof Widget)) return new Widget(opts);
    this.element = domify(html);
}

Widget.prototype.appendTo = function (target) {
    if (typeof target === 'string') target = document.querySelector(target);
    target.appendChild(this.element);
};

and now our widget will load a widget.html, so let’s make one:

<div class="widget">
  <h1 class="name"></h1>
  <div class="msg"></div>
</div>

It’s often useful to emit events. Here’s how we can emit events using the built-in events module and the inherits module:

var fs = require('fs');
var domify = require('domify');
var inherits = require('inherits');
var EventEmitter = require('events').EventEmitter;

var html = fs.readFileSync(__dirname + '/widget.html', 'utf8');

inherits(Widget, EventEmitter);
module.exports = Widget;

function Widget (opts) {
    if (!(this instanceof Widget)) return new Widget(opts);
    this.element = domify(html);
}

Widget.prototype.appendTo = function (target) {
    if (typeof target === 'string') target = document.querySelector(target);
    target.appendChild(this.element);
    this.emit('append', target);
};

Now we can listen for 'append' events on our widget instance:

var Widget = require('./widget.js');
var w = Widget();
w.on('append', function (target) {
    console.log('appended to: ' + target.outerHTML);
});
w.appendTo('#container');

We can add more methods to our widget to set elements on the html:

var fs = require('fs');
var domify = require('domify');
var inherits = require('inherits');
var EventEmitter = require('events').EventEmitter;

var html = fs.readFileSync(__dirname + '/widget.html', 'utf8');

inherits(Widget, EventEmitter);
module.exports = Widget;

function Widget (opts) {
    if (!(this instanceof Widget)) return new Widget(opts);
    this.element = domify(html);
}

Widget.prototype.appendTo = function (target) {
    if (typeof target === 'string') target = document.querySelector(target);
    target.appendChild(this.element);
    this.emit('append', target);
};

Widget.prototype.setName = function (name) {
    this.element.querySelector('.name').textContent = name;
};

Widget.prototype.setMessage = function (msg) {
    this.element.querySelector('.msg').textContent = msg;
};

If setting element attributes and content gets too verbose, check out hyperglue.

Now finally, we can toss our widget.js and widget.html into node_modules/app-widget. Since our widget uses the brfs transform, we can create a package.json with:

{
  "name": "app-widget",
  "version": "1.0.0",
  "private": true,
  "main": "widget.js",
  "browserify": {
    "transform": [ "brfs" ]
  },
  "dependencies": {
    "brfs": "^1.1.1",
    "inherits": "^2.0.1"
  }
}

And now whenever we require('app-widget') from anywhere in our application, brfs will be applied to our widget.js automatically! Our widget can even maintain its own dependencies. This way we can update dependencies in one widget without worrying about breaking changes cascading over into other widgets.

Make sure to add an exclusion in your .gitignore for node_modules/app-widget:

node_modules/*
!node_modules/app-widget

You can read more about shared rendering in node and the browser if you want to learn about sharing rendering logic between node and the browser using browserify and some streaming html libraries.

testing in node and the browser

Testing modular code is very easy! One of the biggest benefits of modularity is that your interfaces become much easier to instantiate in isolation and so it’s easy to make automated tests.

Unfortunately, few testing libraries play nicely out of the box with modules and tend to roll their own idiosyncratic interfaces with implicit globals and obtuse flow control that get in the way of a clean design with good separation.

People also make a huge fuss about “mocking” but it’s usually not necessary if you design your modules with testing in mind. Keeping IO separate from your algorithms, carefully restricting the scope of your module, and accepting callback parameters for different interfaces can all make your code much easier to test.

For example, if you have a library that does both IO and speaks a protocol, consider separating the IO layer from the protocol using an interface like streams.

Your code will be easier to test and reusable in different contexts that you didn’t initially envision. This is a recurring theme of testing: if your code is hard to test, it is probably not modular enough or contains the wrong balance of abstractions. Testing should not be an afterthought, it should inform your whole design and it will help you to write better interfaces.

testing libraries


tape

Tape was specifically designed from the start to work well in both node and browserify. Suppose we have an index.js with an async interface:

module.exports = function (x, cb) {
    setTimeout(function () {
        cb(x * 100);
    }, 1000);
};

Here’s how we can test this module using tape. Let’s put this file in test/beep.js:

var test = require('tape');
var hundreder = require('../');

test('beep', function (t) {
    t.plan(1);

    hundreder(5, function (n) {
        t.equal(n, 500, '5*100 === 500');
    });
});

Because the test file lives in test/, we can require the index.js in the parent directory by doing require('../'). index.js is the default place that node and browserify look for a module if there is no package.json in that directory with a main field.

We can require() tape like any other library after it has been installed with npm install tape.

The string 'beep' is an optional name for the test. The 3rd argument to t.equal() is a completely optional description.

The t.plan(1) says that we expect 1 assertion. If there are not enough assertions or too many, the test will fail. An assertion is a comparison like t.equal(). tape has assertion primitives for:

  • t.equal(a, b) – compare a and b strictly with ===
  • t.deepEqual(a, b) – compare a and b recursively
  • t.ok(x) – fail if x is not truthy

and more! You can always add an additional description argument.

Running our module is very simple! To run the module in node, just run node test/beep.js:

$ node test/beep.js
TAP version 13
# beep
ok 1 5*100 === 500

# tests 1
# pass  1

# ok

The output is printed to stdout and the exit code is 0.

To run our code in the browser, just do:

$ browserify test/beep.js > bundle.js

then plop bundle.js into a <script> tag:

<script src="bundle.js"></script>

and load that html in a browser. The output will be in the debug console which you can open with F12, ctrl-shift-j, or ctrl-shift-k depending on the browser.

Running our tests in a browser this way is a bit cumbersome, but you can install the testling command to help. First do:

npm install -g testling

And now just do browserify test/beep.js | testling:

$ browserify test/beep.js | testling

TAP version 13
# beep
ok 1 5*100 === 500

# tests 1
# pass  1

# ok

testling will launch a real browser headlessly on your system to run the tests.

Now suppose we want to add another file, test/boop.js:

var test = require('tape');
var hundreder = require('../');

test('fraction', function (t) {
    t.plan(1);

    hundreder(1/20, function (n) {
        t.equal(n, 5, '1/20th of 100');
    });
});

test('negative', function (t) {
    t.plan(1);

    hundreder(-3, function (n) {
        t.equal(n, -300, 'negative number');
    });
});

Here our test has 2 test() blocks. The second test block won’t start to execute until the first is completely finished, even though it is asynchronous. You can even nest test blocks by using t.test().

We can run test/boop.js with node directly as with test/beep.js, but if we want to run both tests, there is a minimal command-runner we can use that comes with tape. To get the tape command do:

npm install -g tape

and now you can run:

$ tape test/*.js
TAP version 13
# beep
ok 1 5*100 === 500
# fraction
ok 2 1/20th of 100
# negative
ok 3 negative number

# tests 3
# pass  3

# ok

and you can just pass test/*.js to browserify to run your tests in the browser:

$ browserify test/* | testling

TAP version 13
# beep
ok 1 5*100 === 500
# fraction
ok 2 1/20th of 100
# negative
ok 3 negative number

# tests 3
# pass  3

# ok

Putting together all these steps, we can configure package.json with a test script:

{
  "name": "hundreder",
  "version": "1.0.0",
  "main": "index.js",
  "devDependencies": {
    "tape": "^2.13.1",
    "testling": "^1.6.1"
  },
  "scripts": {
    "test": "tape test/*.js",
    "test-browser": "browserify test/*.js | testling"
  }
}

Now you can do npm test to run the tests in node and npm run test-browser to run the tests in the browser. You don’t need to worry about installing commands with -g when you use npm run: npm automatically sets up the $PATH for all packages installed locally to the project.

If you have some tests that only run in node and some tests that only run in the browser, you could have subdirectories in test/ such as test/server and test/browser with the tests that run both places just in test/. Then you could just add the relevant directory to the globs:

{
  "name": "hundreder",
  "version": "1.0.0",
  "main": "index.js",
  "devDependencies": {
    "tape": "^2.13.1",
    "testling": "^1.6.1"
  },
  "scripts": {
    "test": "tape test/*.js test/server/*.js",
    "test-browser": "browserify test/*.js test/browser/*.js | testling"
  }
}

and now server-specific and browser-specific tests will be run in addition to the common tests.

If you want something even slicker, check out prova once you have gotten the basic concepts.


assert

The core assert module is a fine way to write simple tests too, although it can sometimes be tricky to ensure that the correct number of callbacks have fired.

You can solve that problem with tools like macgyver but it is appropriately DIY.


code coverage



bundling

This section covers bundling in more detail.

Bundling is the step where starting from the entry files, all the source files in the dependency graph are walked and packed into a single output file.
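In miniature, that walk is a depth-first traversal over a table of files and their dependencies (a toy sketch only; browserify's real resolver lives in module-deps):

```javascript
// toy dependency graph: file -> the deps it requires
var graph = {
    'main.js': ['./foo.js', './bar.js'],
    './foo.js': [],
    './bar.js': ['./foo.js']
};

// depth-first walk from the entry file, collecting each file once
function walk (entry, seen) {
    seen = seen || {};
    if (seen[entry]) return [];
    seen[entry] = true;
    var files = [entry];
    graph[entry].forEach(function (dep) {
        files = files.concat(walk(dep, seen));
    });
    return files;
}

console.log(walk('main.js')); // [ 'main.js', './foo.js', './bar.js' ]
```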

saving bytes

One of the first things you’ll want to tweak is how the files that npm installs are placed on disk to avoid duplicates.

When you do a clean install in a directory, npm will ordinarily factor out similar versions into the topmost directory where 2 modules share a dependency. However, as you install more packages, new packages will not be factored out automatically. You can however use the npm dedupe command to factor out packages for an already-installed set of packages in node_modules/. You could also remove node_modules/ and install from scratch again if problems with duplicates persist.

browserify will not include the same exact file twice, but compatible versions may differ slightly. browserify is also not version-aware, it will include the versions of packages exactly as they are laid out in node_modules/ according to the require() algorithm that node uses.

You can use the browserify --list and browserify --deps commands to further inspect which files are being included to scan for duplicates.


standalone

You can generate UMD bundles with --standalone that will work in node, the browser with globals, and AMD environments.

Just add --standalone NAME to your bundle command:

$ browserify foo.js --standalone xyz > bundle.js

This command will export the contents of foo.js under the external module name xyz. If a module system is detected in the host environment, it will be used. Otherwise a window global named xyz will be exported.

You can use dot-syntax to specify a namespace hierarchy:

$ browserify foo.js --standalone foo.bar > bundle.js

If there is already a foo or a foo.bar in the host environment in window global mode, browserify will attach its exports onto those objects. The AMD and module.exports modules will behave the same.

Note however that standalone only works with a single entry or directly-required file.

external bundles

ignoring and excluding

In browserify parlance, “ignore” means: replace the definition of a module with an empty object. “exclude” means: remove a module completely from a dependency graph.

Another way to achieve many of the same goals as ignore and exclude is the “browser” field in package.json, which is covered elsewhere in this document.
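For example, the ignore half can be expressed declaratively: mapping a module name to false in the "browser" field stubs it out with an empty module. A sketch of such a package.json (mkdirp is just an example name here):

```
{
  "browser": {
    "mkdirp": false
  }
}
```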


ignoring

Ignoring is an optimistic strategy designed to stub in an empty definition for node-specific modules that are only used in some codepaths. For example, if a module requires a library that only works in node but for a specific chunk of the code:

var fs = require('fs');
var path = require('path');
var mkdirp = require('mkdirp');

exports.convert = convert;
function convert (src) {
    return src.replace(/beep/g, 'boop');
}

exports.write = function (src, dst, cb) {
    fs.readFile(src, function (err, src) {
        if (err) return cb(err);
        mkdirp(path.dirname(dst), function (err) {
            if (err) return cb(err);
            var out = convert(src);
            fs.writeFile(dst, out, cb);
        });
    });
};

browserify already “ignores” the 'fs' module by returning an empty object, but the .write() function here won’t work in the browser without an extra step like a static analysis transform or a runtime storage fs abstraction.

However, if we really want the convert() function but don’t want to see mkdirp in the final bundle, we can ignore mkdirp with b.ignore('mkdirp') or browserify --ignore mkdirp. The code will still work in the browser if we don’t call write() because require('mkdirp') won’t throw an exception, just return an empty object.

Generally speaking it’s not a good idea for modules that are primarily algorithmic (parsers, formatters) to do IO themselves but these tricks can let you use those modules in the browser anyway.

To ignore foo on the command-line do:

browserify --ignore foo

To ignore foo from the api with some bundle instance b do:

b.ignore('foo')

excluding

Another related thing we might want is to completely remove a module from the output so that require('modulename') will fail at runtime. This is useful if we want to split things up into multiple bundles that will defer in a cascade to previously-defined require() definitions.

For example, if we have a vendored standalone bundle for jquery that we don’t want to appear in the primary bundle:

$ npm install jquery
$ browserify -r jquery --standalone jquery > jquery-bundle.js

then we want to just require('jquery') in a main.js:

var $ = require('jquery');
$(window).click(function () { document.body.bgColor = 'red' });

deferring to the jquery dist bundle so that we can write:

<script src="jquery-bundle.js"></script>
<script src="bundle.js"></script>

and not have the jquery definition show up in bundle.js, then while compiling the main.js, you can --exclude jquery:

browserify main.js --exclude jquery > bundle.js

To exclude foo on the command-line do:

browserify --exclude foo

To exclude foo from the api with some bundle instance b do:

b.exclude('foo')
browserify cdn


Unfortunately, some packages are not written with node-style commonjs exports. For modules that export their functionality with globals or AMD, there are packages that can help automatically convert these troublesome packages into something that browserify can understand.


browserify-shim

One way to automatically convert non-commonjs packages is with browserify-shim.

browserify-shim is loaded as a transform and also reads a "browserify-shim" field from package.json.

Suppose we need to use a troublesome third-party library we’ve placed in ./vendor/foo.js that exports its functionality as a window global called FOO. We can set up our package.json with:

{
  "browserify": {
    "transform": "browserify-shim"
  },
  "browserify-shim": {
    "./vendor/foo.js": "FOO"
  }
}

and now when we require('./vendor/foo.js'), we get the FOO variable that ./vendor/foo.js tried to put into the global scope, but that attempt was shimmed away into an isolated context to prevent global pollution.

We could even use the browser field to make require('foo') work instead of always needing to use a relative path to load ./vendor/foo.js:

{
  "browser": {
    "foo": "./vendor/foo.js"
  },
  "browserify": {
    "transform": "browserify-shim"
  },
  "browserify-shim": {
    "foo": "FOO"
  }
}

Now require('foo') will return the FOO export that ./vendor/foo.js tried to place on the global scope.


Most of the time, the default method of bundling where one or more entry files map to a single bundled output file is perfectly adequate, particularly considering that bundling minimizes latency down to a single http request to fetch all the javascript assets.

However, sometimes this initial penalty is too high for parts of a website that are rarely or never used by most visitors such as an admin panel. This partitioning can be accomplished with the technique covered in the ignoring and excluding section, but factoring out shared dependencies manually can be tedious for a large and fluid dependency graph.

Luckily, there are plugins that can automatically factor browserify output into separate bundle payloads.


factor-bundle

factor-bundle splits browserify output into multiple bundle targets based on entry-point. For each entry-point, an entry-specific output file is built. Files that are needed by two or more of the entry files get factored out into a common bundle.

For example, suppose we have 2 pages: /x and /y. Each page has an entry point, x.js for /x and y.js for /y.

We then generate page-specific bundles bundle/x.js and bundle/y.js with bundle/common.js containing the dependencies shared by both x.js and y.js:

browserify x.js y.js -p [ factor-bundle -o bundle/x.js -o bundle/y.js ] \
  -o bundle/common.js

Now we can simply put 2 script tags on each page. On /x we would put:

<script src="/bundle/common.js"></script>
<script src="/bundle/x.js"></script>

and on page /y we would put:

<script src="/bundle/common.js"></script>
<script src="/bundle/y.js"></script>

You could also load the bundles asynchronously with ajax or by inserting a script tag into the page dynamically but factor-bundle only concerns itself with generating the bundles, not with loading them.


partition-bundle

partition-bundle handles splitting output into multiple bundles like factor-bundle, but includes a built-in loader using a special loadjs() function.

partition-bundle takes a json file that maps source files to bundle files:

{
  "entry.js": ["./a"],
  "common.js": ["./b"],
  "common/extra.js": ["./e", "./d"]
}

Then partition-bundle is loaded as a plugin and the mapping file, output directory, and destination url path (required for dynamic loading) are passed in:

browserify -p [ partition-bundle --map mapping.json \
  --output output/directory --url directory ]

Now you can add:

<script src="entry.js"></script>

to your page to load the entry file. From inside the entry file, you can dynamically load other bundles with a loadjs() function:

a.addEventListener('click', function() {
  loadjs(['./e', './d'], function(e, d) {
    console.log(e, d);
  });
});

compiler pipeline

Since version 5, browserify exposes its compiler pipeline as a labeled-stream-splicer.

This means that transformations can be added or removed directly into the internal pipeline. This pipeline provides a clean interface for advanced customizations such as watching files or factoring bundles from multiple entry points.

For example, we could replace the built-in integer-based labeling mechanism with hashed IDs by first injecting a pass-through transform after the “deps” have been calculated to hash source files. Then we can use the hashes we captured to create our own custom labeler, replacing the built-in “label” transform:

var browserify = require('browserify');
var through = require('through2');
var shasum = require('shasum');

var b = browserify('./main.js');

var hashes = {};
var hasher = through.obj(function (row, enc, next) {
    hashes[row.id] = shasum(row.source);
    this.push(row);
    next();
});
b.pipeline.get('deps').push(hasher);

var labeler = through.obj(function (row, enc, next) {
    row.id = hashes[row.id];

    Object.keys(row.deps).forEach(function (key) {
        row.deps[key] = hashes[row.deps[key]];
    });

    this.push(row);
    next();
});
b.pipeline.get('label').splice(0, 1, labeler);

b.bundle().pipe(process.stdout);


Now instead of getting integers for the IDs in the output format, we get file hashes:

$ node bundle.js
(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o<r.length;o++)s(r[o]);return s})({"5f0a0e3a143f2356582f58a70f385f4bde44f04b":[function(require,module,exports){
var foo = require('./foo.js');
var bar = require('./bar.js');

console.log(foo(3) + bar(4));

module.exports = function (n) { return n * 100 };

module.exports = function (n) { return n + 1 };


Note that the built-in labeler does other things like checking for the external, excluded configurations, so replacing it will be difficult if you depend on those features. This example just serves to show the kinds of things you can do by hacking into the compiler pipeline.

build your own browserify

labeled phases

Each phase in the browserify pipeline has a label that you can hook onto. Fetch a label with .get(name) to return a labeled-stream-splicer handle at the appropriate label. Once you have a handle, you can .push(), .pop(), .shift(), .unshift(), and .splice() your own transform streams into the pipeline or remove existing transform streams.


The recorder is used to capture the inputs sent to the deps phase so that they can be replayed on subsequent calls to .bundle(). Unlike in previous releases, v5 can generate bundle output multiple times. This is very handy for tools like watchify that re-bundle when a file has changed.


The deps phase expects entry and require() files or objects as input and calls module-deps to generate a stream of json output for all of the files in the dependency graph.

module-deps is invoked with some customizations here such as:

  • setting up the browserify transform key for package.json
  • filtering out external, excluded, and ignored files
  • setting the default extensions for .js and .json plus options configured in the opts.extensions parameter in the browserify constructor
  • configuring a global insert-module-globals transform to detect and implement process, Buffer, global, __dirname, and __filename
  • setting up the list of node builtins which are shimmed by browserify


This transform adds module.exports = in front of files with a .json extension.
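Conceptually the transform just prefixes the file's text so that the JSON parses as a commonjs module (a one-line sketch of the idea, not the transform's actual source):

```javascript
// wrap raw JSON text so it can be require()'d as a module
function jsonToModule (src) {
    return 'module.exports=' + src;
}

console.log(jsonToModule('{"beep":"boop"}'));
// module.exports={"beep":"boop"}
```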


This transform removes byte order markers, which are sometimes used by windows text editors to indicate the endianness of files. These markers are ignored by node, so browserify ignores them for compatibility.
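Stripping a BOM is simple; this sketch shows what the step does to each file's source text (illustrative, not the transform's actual source):

```javascript
// remove a leading byte order mark, if present
function stripBOM (src) {
    return src.charCodeAt(0) === 0xFEFF ? src.slice(1) : src;
}

console.log(stripBOM('\uFEFFvar x = 5;')); // var x = 5;
```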


This transform checks for syntax errors using the syntax-error package to give informative syntax errors with line and column numbers.


This phase uses deps-sort to sort the rows written to it in order to make the bundles deterministic.


The transform at this phase uses dedupe information provided by deps-sort in the sort phase to remove files that have duplicate contents.


This phase converts file-based IDs which might expose system path information and inflate the bundle size into integer-based IDs.

The label phase will also normalize path names based on the opts.basedir or process.cwd() to avoid exposing system path information.


This phase emits a 'dep' event for each row after the label phase.


If opts.debug was given to the browserify() constructor, this phase will transform input to add sourceRoot and sourceFile properties which are used by browser-pack in the pack phase.


This phase converts rows with 'id' and 'source' parameters as input (among others) and generates the concatenated javascript bundle as output using browser-pack.


This is an empty phase at the end where you can easily tack on custom post transformations without interfering with existing mechanics.


browser-unpack converts a compiled bundle file back into a format very similar to the output of module-deps.

This is very handy if you need to inspect or transform a bundle that has already been compiled.

For example:

$ browserify src/main.js | browser-unpack
{"id":1,"source":"module.exports = function (n) { return n * 100 };","deps":{}}
{"id":2,"source":"module.exports = function (n) { return n + 1 };","deps":{}}
{"id":3,"source":"var foo = require('./foo.js');\nvar bar = require('./bar.js');\n\nconsole.log(foo(3) + bar(4));","deps":{"./bar.js":1,"./foo.js":2},"entry":true}

This decomposition is needed by tools such as factor-bundle and bundle-collapser.


When loaded, plugins have access to the browserify instance itself.

using plugins

Plugins should be used sparingly and only in cases where a transform or global transform is not powerful enough to perform the desired functionality.

You can load a plugin with -p on the command-line:

$ browserify main.js -p foo > bundle.js

would load a plugin called foo. foo is resolved with require(), so to load a local file as a plugin, preface the path with a ./ and to load a plugin from node_modules/foo, just do -p foo.

You can pass options to plugins with square brackets around the entire plugin expression, including the plugin name as the first argument:

$ browserify one.js two.js \
  -p [ factor-bundle -o bundle/one.js -o bundle/two.js ] \
  > common.js

This command-line syntax is parsed by the subarg package.

To see a list of browserify plugins, browse npm for packages with the keyword “browserify-plugin”:

authoring plugins

To author a plugin, write a package that exports a single function that will receive a bundle instance and options object as arguments:

// example plugin

module.exports = function (b, opts) {
  // ...
};

Plugins operate on the bundle instance b directly by listening for events or splicing transforms into the pipeline. Plugins should not overwrite bundle methods unless they have a very good reason.

Writing modular javascript


Writing Modular JavaScript With AMD, CommonJS & ES Harmony

Modularity: The Importance Of Decoupling Your Application

When we say an application is modular, we generally mean it’s composed of a set of highly decoupled, distinct pieces of functionality stored in modules. As you probably know, loose coupling facilitates easier maintainability of apps by removing dependencies where possible. When this is implemented efficiently, it’s quite easy to see how changes to one part of a system may affect another.

Unlike some more traditional programming languages however, the current iteration of JavaScript (ECMA-262) doesn’t provide developers with the means to import such modules of code in a clean, organized manner. It’s one of the concerns with specifications that haven’t required great thought until more recent years where the need for more organized JavaScript applications became apparent.

Instead, developers at present are left to fall back on variations of the module or object literal patterns. With many of these, module scripts are strung together in the DOM with namespaces being described by a single global object where it’s still possible to incur naming collisions in your architecture. There’s also no clean way to handle dependency management without some manual effort or third party tools.
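The object literal pattern in question looks roughly like this (an illustrative sketch; the names are made up):

```javascript
// a single global object serves as the namespace
var app = app || {};

app.utils = {
    double: function (n) { return n * 2; }
};

// other scripts reach functionality through the shared global,
// which is where naming collisions can creep in
console.log(app.utils.double(21)); // 42
```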

Whilst native solutions to these problems will be arriving in ES Harmony, the good news is that writing modular JavaScript has never been easier and you can start doing it today.

In this article, we’re going to look at three formats for writing modular JavaScript: AMD,CommonJS and proposals for the next version of JavaScript, Harmony.

Prelude: A Note On Script Loaders

It’s difficult to discuss AMD and CommonJS modules without talking about the elephant in the room – script loaders. At present, script loading is a means to a goal, that goal being modular JavaScript that can be used in applications today – for this, use of a compatible script loader is unfortunately necessary. In order to get the most out of this article, I recommend gaining a basic understanding of how popular script loading tools work so the explanations of module formats make sense in context.

There are a number of great loaders for handling module loading in the AMD and CJS formats, but my personal preferences are RequireJS and curl.js. Complete tutorials on these tools are outside the scope of this article, but I can recommend reading John Hann’s post about curl.js and James Burke’s RequireJS API documentation for more.

From a production perspective, the use of optimization tools (like the RequireJS optimizer) to concatenate scripts is recommended for deployment when working with such modules. Interestingly, with the Almond AMD shim, RequireJS doesn’t need to be rolled in the deployed site and what you might consider a script loader can be easily shifted outside of development.

That said, James Burke would probably say that being able to dynamically load scripts after page load still has its use cases and RequireJS can assist with this too. With these notes in mind, let’s get started.

AMD: A Format For Writing Modular JavaScript In The Browser

The overall goal for the AMD (Asynchronous Module Definition) format is to provide a solution for modular JavaScript that developers can use today. It was born out of Dojo’s real world experience using XHR+eval and proponents of this format wanted to avoid any future solutions suffering from the weaknesses of those in the past.

The AMD module format itself is a proposal for defining modules where both the module and dependencies can be asynchronously loaded. It has a number of distinct advantages including being both asynchronous and highly flexible by nature which removes the tight coupling one might commonly find between code and module identity. Many developers enjoy using it and one could consider it a reliable stepping stone towards the module system proposed for ES Harmony.

AMD began as a draft specification for a module format on the CommonJS list but as it wasn’t able to reach full consensus, further development of the format moved to the amdjs group.

Today it’s embraced by projects including Dojo (1.7), MooTools (2.0), Firebug (1.8) and even jQuery (1.7). Although the term CommonJS AMD format has been seen in the wild on occasion, it’s best to refer to it as just AMD or Async Module support as not all participants on the CJS list wished to pursue it.

Getting Started With Modules

The two key concepts you need to be aware of here are the idea of a define method for facilitating module definition and a require method for handling dependency loading. define is used to define named or unnamed modules based on the proposal using the following signature:

define(
    module_id /*optional*/, 
    [dependencies] /*optional*/, 
    definition function /*function for instantiating the module or object*/
);

As you can tell by the inline comments, the module_id is an optional argument which is typically only required when non-AMD concatenation tools are being used (there may be some other edge cases where it’s useful too). When this argument is left out, we call the module anonymous.

When working with anonymous modules, the idea of a module’s identity is DRY, making it trivial to avoid duplication of filenames and code. Because the code is more portable, it can be easily moved to other locations (or around the file-system) without needing to alter the code itself or change its ID. The module_id is equivalent to folder paths in simple packages and when not used in packages. Developers can also run the same code on multiple environments just by using an AMD optimizer that works with a CommonJS environment such as r.js.

Back to the define signature, the dependencies argument represents an array of dependencies which are required by the module you are defining and the third argument (‘definition function’) is a function that’s executed to instantiate your module. A barebone module could be defined as follows:

Understanding AMD: define()

// A module_id (myModule) is used here for demonstration purposes only

define('myModule', 
    ['foo', 'bar'], 
    // module definition function
    // dependencies (foo and bar) are mapped to function parameters
    function ( foo, bar ) {
        // return a value that defines the module export
        // (i.e the functionality we want to expose for consumption)

        // create your module here
        var myModule = {
            doStuff: function(){
                console.log('Yay! Stuff');
            }
        };

        return myModule;
});

// An alternative example could be..
define('myModule', 
    ['math', 'graph'], 
    function ( math, graph ) {
        // Note that this is a slightly different pattern
        // With AMD, it's possible to define modules in a few
        // different ways as it's relatively flexible with
        // certain aspects of the syntax
        return {
            plot: function(x, y){
                return graph.drawPie(math.randomGrid(x,y));
            }
        };
});

require on the other hand is typically used to load code in a top-level JavaScript file or within a module should you wish to dynamically fetch dependencies. An example of its usage is:

Understanding AMD: require()

// Consider 'foo' and 'bar' are two external modules
// In this example, the 'exports' from the two modules loaded are passed as
// function arguments to the callback (foo and bar)
// so that they can similarly be accessed
require(['foo', 'bar'], function ( foo, bar ) {
        // rest of your code here
});

Dynamically-loaded Dependencies

define(function ( require ) {
    var isReady = false, foobar;

    // note the inline require within our module definition
    require(['foo', 'bar'], function (foo, bar) {
        isReady = true;
        foobar = foo() + bar();
    });

    // we can still return a module
    return {
        isReady: isReady,
        foobar: foobar
    };
});

Understanding AMD: plugins

The following is an example of defining an AMD-compatible plugin:

// With AMD, it's possible to load in assets of almost any kind
// including text-files and HTML. This enables us to have template
// dependencies which can be used to skin components either on
// page-load or dynamically.
define(['./templates', 'text!./', 'css!./template.css'],
    function( templates, template ){
        // do some fun template stuff here.
    }
);

Loading AMD Modules Using require.js

require(['app/myModule'],
    function( myModule ){
        // start the main module which in-turn
        // loads other modules
        var module = new myModule();
        module.doStuff();
});

Loading AMD Modules Using curl.js

curl(['app/myModule.js'],
    function( myModule ){
        // start the main module which in-turn
        // loads other modules
        var module = new myModule();
        module.doStuff();
});

Modules With Deferred Dependencies

// This could be compatible with jQuery's Deferred implementation,
// futures.js (slightly different syntax) or any one of a number
// of other implementations
define(['lib/Deferred'], function( Deferred ){
    var defer = new Deferred();
    require(['lib/templates/?index.html', 'lib/data/?stats'],
        function( template, data ){
            defer.resolve({ template: template, data: data });
        }
    );
    return defer.promise();
});

Why Is AMD A Better Choice For Writing Modular JavaScript?

  • Provides a clear proposal for how to approach defining flexible modules.
  • Significantly cleaner than the present global namespace and <script> tag solutions many of us rely on. There’s a clean way to declare stand-alone modules and dependencies they may have.
  • Module definitions are encapsulated, helping us to avoid pollution of the global namespace.
  • Works better than some alternative solutions (eg. CommonJS, which we’ll be looking at shortly). Doesn’t have issues with cross-domain, local or debugging and doesn’t have a reliance on server-side tools to be used. Most AMD loaders support loading modules in the browser without a build process.
  • Provides a ‘transport’ approach for including multiple modules in a single file. Other approaches like CommonJS have yet to agree on a transport format.
  • It’s possible to lazy load scripts if this is needed.

Related Reading

The RequireJS Guide To AMD

What’s the fastest way to load AMD modules?

AMD vs. CJS, what’s the better format?

AMD Is Better For The Web Than CommonJS Modules

The Future Is Modules Not Frameworks

AMD No Longer A CommonJS Specification

On Inventing JavaScript Module Formats And Script Loaders

The AMD Mailing List

AMD Modules With jQuery

The Basics

Unlike Dojo, jQuery really only comes with one file, however given the plugin-based nature of the library, we can demonstrate how straight-forward it is to define an AMD module that uses it below.

define(['jquery', 'jquery.color', 'underscore'],
    function($, colorPlugin, _){
        // Here we've passed in jQuery, the color plugin and Underscore
        // None of these will be accessible in the global scope, but we
        // can easily reference them below.

        // Pseudo-randomize an array of colors, selecting the first
        // item in the shuffled array
        var shuffleColor = _.first(_.shuffle(['#666','#333','#111']));

        // Animate the background-color of any elements with the class
        // 'item' on the page using the shuffled color
        $('.item').animate({'backgroundColor': shuffleColor });

        // What we return can be used by other modules
        return {};
});

There is however something missing from this example and it’s the concept of registration.

Registering jQuery As An Async-compatible Module

One of the key features that landed in jQuery 1.7 was support for registering jQuery as an asynchronous module. There are a number of compatible script loaders (including RequireJS and curl) which are capable of loading modules using an asynchronous module format and this means fewer hacks are required to get things working.

As a result of jQuery’s popularity, AMD loaders need to take into account multiple versions of the library being loaded into the same page as you ideally don’t want several different versions loading at the same time. Loaders have the option of either specifically taking this issue into account or instructing their users that there are known issues with third party scripts and their libraries.

What the 1.7 addition brings to the table is that it helps avoid issues with other third party code on a page accidentally loading up a version of jQuery on the page that the owner wasn’t expecting. You don’t want other instances clobbering your own and so this can be of benefit.

The way this works is that the script loader being employed indicates that it supports multiple jQuery versions by specifying that a property, define.amd.jQuery is equal to true. For those interested in more specific implementation details, we register jQuery as a named module as there is a risk that it can be concatenated with other files which may use AMD’s define() method, but not use a proper concatenation script that understands anonymous AMD module definitions.

The named AMD provides a safety blanket of being both robust and safe for most use-cases.

// Account for the existence of more than one global 
// instances of jQuery in the document, cater for testing 
// .noConflict()

var jQuery = this.jQuery || "jQuery", 
$ = this.$ || "$",
originaljQuery = jQuery,
original$ = $,
amdDefined;

define(['jquery'] , function ($) {
    $('.items').css('background','green');
    return function () {};
});

// The very easy to implement flag stating support which 
// would be used by the AMD loader
define.amd = {
    jQuery: true
};

Smarter jQuery Plugins

I’ve recently discussed some ideas and examples of how jQuery plugins could be written using Universal Module Definition (UMD) patterns here. UMDs define modules that can work on both the client and server, as well as with all popular script loaders available at the moment. Whilst this is still a new area with a lot of concepts still being finalized, feel free to look at the code samples in the section titled AMD && CommonJS below and let me know if you feel there’s anything we could do better.

What Script Loaders & Frameworks Support AMD?



AMD Conclusions

The above are very trivial examples of just how useful AMD modules can truly be, but they hopefully provide a foundation for understanding how they work.

You may be interested to know that many visible large applications and companies currently use AMD modules as a part of their architecture. These include IBM and the BBC iPlayer, which highlight just how seriously this format is being considered by developers at an enterprise-level.

For more reasons why many developers are opting to use AMD modules in their applications, you may be interested in this post by James Burke.

CommonJS: A Module Format Optimized For The Server

CommonJS is a volunteer working group which aims to design, prototype and standardize JavaScript APIs. To date they’ve attempted to ratify standards for both modules and packages. The CommonJS module proposal specifies a simple API for declaring modules server-side and, unlike AMD, attempts to cover a broader set of concerns such as io, filesystem, promises and more.

Getting Started

From a structure perspective, a CJS module is a reusable piece of JavaScript which exports specific objects made available to any dependent code – there are typically no function wrappers around such modules (so you won’t see define used here for example).

At a high-level they basically contain two primary parts: a free variable named exports which contains the objects a module wishes to make available to other modules and a require function that modules can use to import the exports of other modules.

Understanding CJS: require() and exports

// package/lib is a dependency we require
var lib = require('package/lib');

// some behaviour for our module
function foo(){
    lib.log('hello world!');
}

// export (expose) foo to other modules = foo;

Basic consumption of exports

// define more behaviour we would like to expose
function foobar(){ = function(){
                console.log('Hello foo');
        } = function(){
                console.log('Hello bar');
        }
}

// expose foobar to other modules
exports.foobar = foobar;

// an application consuming 'foobar'

// access the module relative to the path
// where both usage and module files exist
// in the same directory
var foobar = require('./foobar').foobar,
    test   = new foobar();; // 'Hello bar'

AMD-equivalent Of The First CJS Example

define(['package/lib'], function(lib){
    // some behaviour for our module
    function foo(){
        lib.log('hello world!');
    }

    // export (expose) foo for other modules
    return {
        foobar: foo
    };
});

Consuming Multiple Dependencies

// app.js
var modA = require('./foo');
var modB = require('./bar'); = function(){
    console.log('Im an application!');
} = function(){
    return modA.helloWorld();
}

// bar.js = 'bar';

// foo.js
require('./bar');
exports.helloWorld = function(){
    return 'Hello World!!';
}

What Loaders & Frameworks Support CJS?


Is CJS Suitable For The Browser?

There are developers that feel CommonJS is better suited to server-side development, which is one reason there’s currently a level of disagreement over which format should and will be used as the de facto standard in the pre-Harmony age moving forward. Some of the arguments against CJS include a note that many CommonJS APIs address server-oriented features which one would simply not be able to implement at a browser-level in JavaScript – for example, io, system and js could be considered unimplementable by the nature of their functionality.

That said, it’s useful to know how to structure CJS modules regardless so that we can better appreciate how they fit in when defining modules which may be used everywhere. Modules which have applications on both the client and server include validation, conversion and templating engines. The way some developers are approaching choosing which format to use is opting for CJS when a module can be used in a server-side environment and using AMD if this is not the case.

As AMD modules are capable of using plugins and can define more granular things like constructors and functions this makes sense. CJS modules are only able to define objects which can be tedious to work with if you’re trying to obtain constructors out of them.

Although it’s beyond the scope of this article, you may have also noticed that there were different types of ‘require’ methods mentioned when discussing AMD and CJS.

The concern with a similar naming convention is of course confusion and the community are currently split on the merits of a global require function. John Hann’s suggestion here is that rather than calling it ‘require’, which would probably fail to achieve the goal of informing users about the difference between a global and inner require, it may make more sense to rename the global loader method something else (e.g. the name of the library). It’s for this reason that a loader like curl.js uses curl() as opposed to require.

Related Reading

Demystifying CommonJS Modules

JavaScript Growing Up

The RequireJS Notes On CommonJS

Taking Baby Steps With Node.js And CommonJS – Creating Custom Modules

Asynchronous CommonJS Modules for the Browser

The CommonJS Mailing List

AMD && CommonJS: Competing, But Equally Valid Standards

Whilst this article has placed more emphasis on using AMD over CJS, the reality is that both formats are valid and have a use.

AMD adopts a browser-first approach to development, opting for asynchronous behaviour and simplified backwards compatibility but it doesn’t have any concept of File I/O. It supports objects, functions, constructors, strings, JSON and many other types of modules, running natively in the browser. It’s incredibly flexible.

CommonJS on the other hand takes a server-first approach, assuming synchronous behaviour, no global baggage (as John Hann refers to it) and it attempts to cater for the future (on the server). What we mean by this is that because CJS supports unwrapped modules, it can feel a little closer to the specifications, freeing you of the define() wrapper that AMD enforces. CJS modules however only support objects as modules.

Although the idea of yet another module format may be daunting, you may be interested in some samples of work on hybrid AMD/CJS and Universal AMD/CJS modules.
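A minimal sketch of such a universal wrapper is shown below: it feature-detects an AMD define, then CommonJS, then falls back to a global. The myUmdModule name and hello export are placeholders, and the myUmd capture exists only so the example can print its result:

```javascript
// Minimal UMD sketch: one file that can work as an AMD module, a
// CommonJS module, or a browser global. Names here are illustrative.
var myUmd; // captured only so the demonstration below can run

(function (root, factory) {
  if (typeof define === 'function' && define.amd) {
    // An AMD loader is present: register with it
    define([], factory);
  } else if (typeof module === 'object' && module.exports) {
    // CommonJS environment (e.g. node): export directly
    myUmd = module.exports = factory();
  } else {
    // Neither: fall back to a property on the global object
    myUmd = root.myUmdModule = factory();
  }
}(typeof globalThis !== 'undefined' ? globalThis : this, function () {
  // The module body itself stays format-agnostic
  return {
    hello: function (name) { return 'Hello, ' + name; }
  };
}));

console.log(myUmd.hello('UMD')); // prints "Hello, UMD"
```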

GIT: submodules



It often happens that while working on one project, you need to use another project from within it. Perhaps it’s a library that a third party developed or that you’re developing separately and using in multiple parent projects. A common issue arises in these scenarios: you want to be able to treat the two projects as separate yet still be able to use one from within the other.

Here’s an example. Suppose you’re developing a web site and creating Atom feeds. Instead of writing your own Atom-generating code, you decide to use a library. You’re likely to have to either include this code from a shared library like a CPAN install or Ruby gem, or copy the source code into your own project tree. The issue with including the library is that it’s difficult to customize the library in any way and often more difficult to deploy it, because you need to make sure every client has that library available. The issue with vendoring the code into your own project is that any custom changes you make are difficult to merge when upstream changes become available.

Git addresses this issue using submodules. Submodules allow you to keep a Git repository as a subdirectory of another Git repository. This lets you clone another repository into your project and keep your commits separate.

Starting with Submodules

We’ll walk through developing a simple project that has been split up into a main project and a few sub-projects.

Let’s start by adding an existing Git repository as a submodule of the repository that we’re working on. To add a new submodule you use the git submodule add command with the URL of the project you would like to start tracking. In this example, we’ll add a library called “DbConnector”.

$ git submodule add
Cloning into 'DbConnector'...
remote: Counting objects: 11, done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 11 (delta 0), reused 11 (delta 0)
Unpacking objects: 100% (11/11), done.
Checking connectivity... done.

By default, submodules will add the subproject into a directory named the same as the repository, in this case “DbConnector”. You can add a different path at the end of the command if you want it to go elsewhere.

If you run git status at this point, you’ll notice a few things.

$ git status
On branch master
Your branch is up-to-date with 'origin/master'.

Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

	new file:   .gitmodules
	new file:   DbConnector

First you should notice the new .gitmodules file. This is a configuration file that stores the mapping between the project’s URL and the local subdirectory you’ve pulled it into:

$ cat .gitmodules
[submodule "DbConnector"]
	path = DbConnector
	url =

If you have multiple submodules, you’ll have multiple entries in this file. It’s important to note that this file is version-controlled with your other files, like your .gitignore file. It’s pushed and pulled with the rest of your project. This is how other people who clone this project know where to get the submodule projects from.

Since the URL in the .gitmodules file is what other people will first try to clone/fetch from, make sure to use a URL that they can access if possible. For example, if you use a different URL to push to than others would to pull from, use the one that others have access to. You can overwrite this value locally with git config submodule.DbConnector.url PRIVATE_URL for your own use.

The other listing in the git status output is the project folder entry. If you run git diff on that, you see something interesting:

$ git diff --cached DbConnector
diff --git a/DbConnector b/DbConnector
new file mode 160000
index 0000000..c3f01dc
--- /dev/null
+++ b/DbConnector
@@ -0,0 +1 @@
+Subproject commit c3f01dc8862123d317dd46284b05b6892c7b29bc

Although DbConnector is a subdirectory in your working directory, Git sees it as a submodule and doesn’t track its contents when you’re not in that directory. Instead, Git sees it as a particular commit from that repository.

If you want a little nicer diff output, you can pass the --submodule option to git diff.

$ git diff --cached --submodule
diff --git a/.gitmodules b/.gitmodules
new file mode 100644
index 0000000..71fc376
--- /dev/null
+++ b/.gitmodules
@@ -0,0 +1,3 @@
+[submodule "DbConnector"]
+       path = DbConnector
+       url =
Submodule DbConnector 0000000...c3f01dc (new submodule)

When you commit, you see something like this:

$ git commit -am 'added DbConnector module'
[master fb9093c] added DbConnector module
 2 files changed, 4 insertions(+)
 create mode 100644 .gitmodules
 create mode 160000 DbConnector

Notice the 160000 mode for the DbConnector entry. That is a special mode in Git that basically means you’re recording a commit as a directory entry rather than a subdirectory or a file.

Facebook SDK API


Quickstart: Facebook SDK for JavaScript

The Facebook SDK for JavaScript provides a rich set of client-side functionality that:

  • Enables you to use the Like Button and other Social Plugins on your site.
  • Enables you to use Facebook Login to lower the barrier for people to sign up on your site.
  • Makes it easy to call into Facebook’s Graph API.
  • Lets you launch Dialogs that allow people to perform various actions like sharing stories.
  • Facilitates communication when you’re building a game or an app tab on Facebook.

The SDK, social plugins and dialogs work on both desktop and mobile web browsers.

This quickstart will show you how to set up the SDK and get it to make some basic Graph API calls. If you don’t want to set up just yet, you can use our JavaScript test console to use all of the SDK methods, and explore some examples (you can skip the setup steps, but the rest of this quickstart can be tested in the console).

Basic Setup

The Facebook SDK for JavaScript doesn’t have any standalone files that need to be downloaded or installed; instead, you simply need to include a short piece of regular JavaScript in your HTML that will asynchronously load the SDK into your pages. The async load means that it does not block loading other elements of your page.

The following snippet of code will give the basic version of the SDK where the options are set to their most common defaults. You should insert it directly after the opening <body> tag on each page you want to load it:

      window.fbAsyncInit = function() {
          appId      : 'your-app-id',
          xfbml      : true,
          version    : 'v2.3'
        });
      };

      (function(d, s, id){
         var js, fjs = d.getElementsByTagName(s)[0];
         if (d.getElementById(id)) {return;}
         js = d.createElement(s); = id;
         js.src = "//";
         fjs.parentNode.insertBefore(js, fjs);
       }(document, 'script', 'facebook-jssdk'));

This code will load and initialize the SDK. You must replace the value in your-app-id with the ID of your own Facebook App. You can find this ID using the App Dashboard.

Advanced Setup

As mentioned, the code above uses the common defaults for the options available when initializing the SDK. You can customize some of these options, if useful.

Changing SDK Language

In the basic setup snippet, the en_US version of the SDK is initialized, which means that all of the Facebook-generated buttons and plugins used on your site will be in US English. (However, pop-up dialogs generated by Facebook like the Login Dialog will be in the language the person has chosen on Facebook, even if they differ from what you’ve selected.) You can change this language by changing the js.src value in the snippet. Take a look at Localization to see the different locales that can be used. For example, if your site is in Spanish, using the following code to load the SDK will cause all Social Plugins to be rendered in Spanish.

      (function(d, s, id){
         var js, fjs = d.getElementsByTagName(s)[0];
         if (d.getElementById(id)) return;
         js = d.createElement(s); = id;
         js.src = "//";
         fjs.parentNode.insertBefore(js, fjs);
       }(document, 'script', 'facebook-jssdk'));

Login Status Check

If you set status to true in the FB.init() call, the SDK will attempt to get info about the current user immediately after init. Doing this can reduce the time it takes to check for the state of a logged in user if you’re using Facebook Login, but isn’t useful for pages that only have social plugins on them.

You can use FB.getLoginStatus to get a person’s login state. Read on for more about using Facebook Login with the JavaScript SDK.

Disabling XFBML Parsing

With xfbml set to true, the SDK will parse your page’s DOM to find and initialize any social plugins that have been added using XFBML. If you’re not using social plugins on the page, setting xfbml to false will improve page load times. You can find out more about this by looking at Social Plugins.

Triggering Code when the SDK loads

The function assigned to window.fbAsyncInit is run as soon as the SDK has completed loading. Any code that you want to run after the SDK is loaded should be placed within this function and after the call to FB.init. Any kind of JavaScript can be used here, but any SDK functions must be called after FB.init.


To improve performance, the JavaScript SDK is loaded minified. You can also load a debug version of the JavaScript SDK that includes more logging and stricter argument checking as well as being non-minified. To do so, change the js.src value in your loading code to this:

js.src = "//";

More Initialization Options

The reference doc for the FB.init function provides a full list of available initialization options.

Using the SDK to add Social Plugins

Now that you’ve got the SDK setup, we can use it to perform a few common tasks. Social Plugins such as the Like Button and Comments Plugin can be inserted into HTML pages using the JavaScript SDK.

Let’s try adding a Like button, just copy and paste the line of code below anywhere inside the <body> of your page:

<div class="fb-like" data-share="true" data-width="450" data-show-faces="true"></div>


Reload your page, and you should see a Like button on it.

Using the SDK to trigger a Share dialog

The Share Dialog allows someone using a page to post a link to their timeline, or create an Open Graph story. Dialogs displayed using the JavaScript SDK are automatically formatted for the context in which they are loaded – mobile web, or desktop web.

Here we’ll show you how the FB.ui() method of the SDK can be used to invoke a really basic Share dialog. Add this snippet after the FB.init() call in the basic setup code:

FB.ui({
  method: 'share',
  href: ''
}, function(response){});

Now when you reload your page, you’ll see a Share dialog appear over the top of the page. Let’s add a few extra parameters to the FB.ui call in order to make the Share dialog make a more complex call to publish an Open Graph action:

FB.ui({
     method: 'share_open_graph',
     action_type: 'og.likes',
     action_properties: JSON.stringify({
       object: ''
     })
   }, function(response){});

Now when you reload your page, you’ll see a Share dialog again, but this time with a preview of the Open Graph story. Once the dialog has been closed, either by posting the story or by cancelling, the response function will be triggered.

Read the FB.ui reference doc to see a full list of parameters that can be used, and the structure of the response object.

Using the SDK for Facebook Login

Facebook Login allows users to register or sign in to your app with their Facebook identity.

We have a full guide on how to use the JS SDK to implement Facebook Login. But for now, let’s just use some basic sample code, so you can see how it works. Insert the following after your original FB.init call:

FB.getLoginStatus(function(response) {
  if (response.status === 'connected') {
    console.log('Logged in.');
  }
  else {
    FB.login();
  }
});

Read the Login guide to learn exactly what is happening here, but when you reload your page you should be prompted with the Login dialog for your app, if you haven’t already granted it permission.

Using the SDK to call the Graph API

To read or write data to the Graph API, you’ll use the JS SDK’s FB.api() method. The version parameter in the FB.init call is used to determine which Graph API version is used.

We have another quickstart guide for the Graph API, however here we’ll show you how the FB.api() method can publish a story on your behalf.

First, we need to get publish_actions permission in order to make publishing API calls. So add a line after FB.init like this:

FB.login(function(){}, {scope: 'publish_actions'});

This will trigger a login dialog that’ll request the relevant permissions. Next, now that your app can, let’s make the API call to publish. Add the API code into the response function of the FB.login call you added above:

FB.login(function(){
  FB.api('/me/feed', 'post', {message: 'Hello, world!'});
}, {scope: 'publish_actions'});

Now, when you reload your page, you’ll be asked for permissions (if you haven’t granted them already) and then a status message will be posted to your profile:

Congratulations, you’ve learned how to use the JavaScript SDK to perform a number of common tasks. Dig deeper into the guides linked in each section to learn more about specific methods, or other parts of Facebook Platform.
[Source: Facebook developer documentation]

Facebook Login for the Web with the JavaScript SDK

Facebook apps can use one of several login flows, depending on the target device and the project. This guide takes you step-by-step through the login flow for web apps. The steps in this guide use Facebook’s JavaScript SDK, which is the recommended method to add Facebook Login to your website.

If for some reason you can’t use our JavaScript SDK you can also implement login without it. We’ve built a separate guide to follow if you need to implement login manually.


Later in this doc we will guide you through the login flow step-by-step and explain each step clearly – this will help you if you are trying to integrate Facebook Login into an existing login system, or just to integrate it with any server-side code you’re running. But before we do that, it’s worth showing how little code is required to implement login in a web application using the JavaScript SDK.

You will need a Facebook App ID before you start using the SDK, which you can create and retrieve on the App Dashboard. You’ll also need somewhere to host HTML files. If you don’t have hosting, you can get set up quickly with Parse.

This code will load and initialize the JavaScript SDK in your HTML page. Use your app ID where indicated.

<!DOCTYPE html>
<title>Facebook Login JavaScript Example</title>
<meta charset="UTF-8">
  // This is called with the results from FB.getLoginStatus().
  function statusChangeCallback(response) {
    // The response object is returned with a status field that lets the
    // app know the current login status of the person.
    // Full docs on the response object can be found in the documentation
    // for FB.getLoginStatus().
    if (response.status === 'connected') {
      // Logged into your app and Facebook.
    } else if (response.status === 'not_authorized') {
      // The person is logged into Facebook, but not your app.
      document.getElementById('status').innerHTML = 'Please log ' +
        'into this app.';
    } else {
      // The person is not logged into Facebook, so we're not sure if
      // they are logged into this app or not.
      document.getElementById('status').innerHTML = 'Please log ' +
        'into Facebook.';
    }
  }

  // This function is called when someone finishes with the Login
  // Button.  See the onlogin handler attached to it in the sample
  // code below.
  function checkLoginState() {
    FB.getLoginStatus(function(response) {
      statusChangeCallback(response);
    });
  }

  window.fbAsyncInit = function() {
      appId      : '{your-app-id}',
      cookie     : true,  // enable cookies to allow the server to access 
                          // the session
      xfbml      : true,  // parse social plugins on this page
      version    : 'v2.2' // use version 2.2
    });

    // Now that we've initialized the JavaScript SDK, we call 
    // FB.getLoginStatus().  This function gets the state of the
    // person visiting this page and can return one of three states to
    // the callback you provide.  They can be:
    // 1. Logged into your app ('connected')
    // 2. Logged into Facebook, but not your app ('not_authorized')
    // 3. Not logged into Facebook and can't tell if they are logged into
    //    your app or not.
    // These three cases are handled in the callback function.

    FB.getLoginStatus(function(response) {
      statusChangeCallback(response);
    });
  };

  // Load the SDK asynchronously
  (function(d, s, id) {
    var js, fjs = d.getElementsByTagName(s)[0];
    if (d.getElementById(id)) return;
    js = d.createElement(s); = id;
    js.src = "//";
    fjs.parentNode.insertBefore(js, fjs);
  }(document, 'script', 'facebook-jssdk'));

  // Here we run a very simple test of the Graph API after login is
  // successful.  See statusChangeCallback() for when this call is made.
  function testAPI() {
    console.log('Welcome!  Fetching your information.... ');
    FB.api('/me', function(response) {
      console.log('Successful login for: ' +;
      document.getElementById('status').innerHTML =
        'Thanks for logging in, ' + + '!';
    });
  }

  <!--
    Below we include the Login Button social plugin. This button uses
    the JavaScript SDK to present a graphical Login button that triggers
    the FB.login() function when clicked.
  -->

<fb:login-button scope="public_profile,email" onlogin="checkLoginState();">
</fb:login-button>

<div id="status">
</div>

Now you can test your app by going to the URL where you uploaded this HTML. Open your JavaScript console, and you’ll see the testAPI() function display a message with your name in the console log.

Congratulations, at this stage you’ve actually built a really basic page with Facebook Login. You can use this as the starting point for your own app, but it will be useful to read on and understand what is happening in the code above.

AngularJS: Modules


What is a Module?

You can think of a module as a container for the different parts of your app – controllers, services, filters, directives, etc.


Most applications have a main method that instantiates and wires together the different parts of the application. Angular apps don’t have a main method. Instead modules declaratively specify how an application should be bootstrapped. There are several advantages to this approach:

  • The declarative process is easier to understand.
  • You can package code as reusable modules.
  • The modules can be loaded in any order (or even in parallel) because modules delay execution.
  • Unit tests only have to load relevant modules, which keeps them fast.
  • End-to-end tests can use modules to override configuration.

The Basics

I’m in a hurry. How do I get a Hello World module working?


<div ng-app="myApp">
    {{ 'World' | greet }}
</div>


// declare a module
var myAppModule = angular.module('myApp', []);

// configure the module.
// in this example we will create a greeting filter
myAppModule.filter('greet', function() {
  return function(name) {
    return 'Hello, ' + name + '!';
  };
});


it('should add Hello to the name', function() {
  expect(element(by.binding("'World' | greet")).getText()).toEqual('Hello, World!');
});


AngularJS: Directive resolve dependencies

If we want a directive to wait for some server data before it links, this is the pattern:

    .directive('trs', ['trs', '$http', '$q', '$log', 'wordbee', 'wordbeeStrings',
        function(trs, $http, $q, $log, wordbee, wordbeeStrings) {

  'use strict';

  var translateArgs = function(str) {
    try {
      if (str[0] != '"' && str[0] != "'") {
        // Bare argument: pass the string straight through to the
        // translation service.
        return trs(str);
      } else {
        // Strips the " or ' on the start and end
        // Used to be eval("trs(" + str + ")");
        return trs(str.slice(1, str.length - 1));
      }
    } catch (err) {
      $log.error('Reference error, trs directive shouldn\'t have dynamic vars: ' + str);
      throw err;
    }
  };

  // Store the data so you don't load it twice.
  var directiveData,
      // Declare a variable for your promise.
      dataPromise;

  // Set up a promise that will be used to load the data.
  function loadData() {

    // If we already have a promise, just return that
    // so it doesn't run twice.
    if (dataPromise) {
      return dataPromise;
    }

    var deferred = $q.defer();
    dataPromise = deferred.promise;

    if (!_.isEmpty(wordbeeStrings)) {
      // If we already have data, resolve with that.
      deferred.resolve(wordbeeStrings);
    } else {
      console.log("TRS directive loadlanguage");
      // loadLanguage() is assumed to be the wordbee service call that
      // fetches the translation strings from the server.
      wordbee.loadLanguage()
        .then(function(data) {
          directiveData = data;
          wordbeeStrings = directiveData;
          // Un-escape HTML entities in every string.
          _.each(wordbeeStrings, function(val, key) {
            val = val.replace(/&lt;/g, '<');
            val = val.replace(/&gt;/g, '>');
            wordbeeStrings[key] = val;
          });
          console.log("trs directive Language loaded!!!");
          deferred.resolve(wordbeeStrings);
        });
    }
    return dataPromise;
  }

  return {
    restrict: 'EA',
    scope: false,
    link: function(scope, elm, attrs) {
      // Load the data, or check if it's loaded, and apply it.
      loadData().then(function(data) {
        // Success! Set your scope values and do whatever dom/plugin
        // stuff you need to do here. An $apply() may be necessary
        // in some cases.
        if (attrs.hasOwnProperty('trs')) {
          // trs="..." attribute syntax: translate the attribute value.
          elm.text(translateArgs(attrs.trs));
        } else {
          // Otherwise we use <trs> </trs> syntax.
          elm.text(translateArgs(elm.text()));
        }
      }, function() {
        // Failure! Update something to show failure.
        // Again, $apply() may be necessary.
        elm.text('ERROR: failed to load data.');
      });
    }
  };
}]);

AngularJS UI router


The de-facto solution to flexible routing with nested views

AngularUI Router is a routing framework for AngularJS, which allows you to organize the parts of your interface into a state machine. Unlike the $route service in the Angular ngRoute module, which is organized around URL routes, UI-Router is organized around states, which may optionally have routes, as well as other behavior, attached.

States are bound to named, nested, and parallel views, allowing you to powerfully manage your application’s interface.
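To make “organized around states” concrete, here is a toy sketch in plain JavaScript (not UI-Router’s actual API) of how dotted state names imply a chain of active nested states, each one filling the ui-view of its parent:

```javascript
// Toy illustration of nested-state names (not UI-Router's real API):
// activating 'state1.list' also activates its ancestor 'state1'.
function activeChain(stateName) {
  var parts = stateName.split('.');
  var chain = [];
  for (var i = 0; i < parts.length; i++) {
    // Each prefix of the dotted name is an ancestor state.
    chain.push(parts.slice(0, i + 1).join('.'));
  }
  return chain;
}
```

For example, `activeChain('state1.list')` yields `['state1', 'state1.list']`, which mirrors how UI-Router renders the parent template first and the child inside it.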

Check out the sample app.

Note: UI-Router is under active development. As such, while this library is well-tested, the API may change. Consider using it in production applications only if you’re comfortable following a changelog and updating your usage accordingly.

Get Started

(1) Get UI-Router in one of the following ways:

  • clone & build this repository
  • download the release (or minified)
  • via Bower: by running $ bower install angular-ui-router from your console
  • or via npm: by running $ npm install angular-ui-router from your console
  • or via Component: by running $ component install angular-ui/ui-router from your console

(2) Include angular-ui-router.js (or angular-ui-router.min.js) in your index.html, after including Angular itself (For Component users: ignore this step)

(3) Add 'ui.router' to your main module’s list of dependencies (For Component users: replace 'ui.router' with require('angular-ui-router'))

When you’re done, your setup should look similar to the following:

<!doctype html>
<html ng-app="myApp">
<head>
    <script src="//"></script>
    <script src="js/angular-ui-router.min.js"></script>
    <script>
        var myApp = angular.module('myApp', ['ui.router']);
        // For Component users, it should look like this:
        // var myApp = angular.module('myApp', [require('angular-ui-router')]);
    </script>
</head>
<body>
</body>
</html>

Nested States & Views

The majority of UI-Router’s power is in its ability to nest states & views.

(1) First, follow the setup instructions detailed above.

(2) Then, add a ui-view directive to the <body /> of your app.

<!-- index.html -->
<body>
    <div ui-view></div>
    <!-- We'll also add some navigation: -->
    <a ui-sref="state1">State 1</a>
    <a ui-sref="state2">State 2</a>
</body>

(3) You’ll notice we also added some links with ui-sref directives. In addition to managing state transitions, this directive auto-generates the href attribute of the <a /> element it’s attached to, if the corresponding state has a URL. Next we’ll add some templates. These will plug into the ui-view within index.html. Notice that they have their own ui-view as well! That is the key to nesting states and views.

<!-- partials/state1.html -->
<h1>State 1</h1>
<a ui-sref="state1.list">Show List</a>
<div ui-view></div>
<!-- partials/state2.html -->
<h1>State 2</h1>
<a ui-sref="state2.list">Show List</a>
<div ui-view></div>

(4) Next, we’ll add some child templates. These will get plugged into the ui-view of their parent state templates.

<!-- partials/state1.list.html -->
<h3>List of State 1 Items</h3>
<ul>
  <li ng-repeat="item in items">{{ item }}</li>
</ul>

(5) Finally, we’ll wire it all up with $stateProvider. Set up your states in the module config, as in the following:

myApp.config(function($stateProvider, $urlRouterProvider) {
  // For any unmatched url, redirect to /state1
  $urlRouterProvider.otherwise("/state1");
  // Now set up the states
  $stateProvider
    .state('state1', {
      url: "/state1",
      templateUrl: "partials/state1.html"
    })
    .state('state1.list', {
      url: "/list",
      templateUrl: "partials/state1.list.html",
      controller: function($scope) {
        $scope.items = ["A", "List", "Of", "Items"];
      }
    })
    .state('state2', {
      url: "/state2",
      templateUrl: "partials/state2.html"
    })
    .state('state2.list', {
      url: "/list",
      templateUrl: "partials/state2.list.html",
      controller: function($scope) {
        $scope.things = ["A", "Set", "Of", "Things"];
      }
    });
});
(6) See this quick start example in action.

Go to Quick Start Plunker for Nested States & Views

(7) This only scratches the surface.

Dive Deeper!


React: A javascript library for building user interfaces



Lots of people use React as the V in MVC. Since React makes no assumptions about the rest of your technology stack, it’s easy to try it out on a small feature in an existing project.


React uses a virtual DOM diff implementation for ultra-high performance. It can also render on the server using Node.js — no heavy browser DOM required.
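The diffing idea can be illustrated with a toy in plain JavaScript — this is only a sketch of the concept, not React’s actual reconciliation algorithm: compare two lightweight element trees and collect the minimal set of updates to apply to the real DOM.

```javascript
// Toy virtual-DOM diff (concept sketch only, not React's algorithm):
// walk two element trees and record the updates needed to turn
// the old tree into the new one.
function diff(oldNode, newNode, path, patches) {
  path = path || 'root';
  patches = patches || [];
  if (!oldNode) {
    patches.push({ path: path, op: 'create', node: newNode });
  } else if (!newNode) {
    patches.push({ path: path, op: 'remove' });
  } else if (oldNode.tag !== newNode.tag) {
    patches.push({ path: path, op: 'replace', node: newNode });
  } else if (oldNode.text !== newNode.text) {
    patches.push({ path: path, op: 'setText', text: newNode.text });
  } else {
    // Same tag and text: recurse into the children by position.
    var oldKids = oldNode.children || [];
    var newKids = newNode.children || [];
    var len = Math.max(oldKids.length, newKids.length);
    for (var i = 0; i < len; i++) {
      diff(oldKids[i], newKids[i], path + '.' + i, patches);
    }
  }
  return patches;
}
```

Because only the recorded patches touch the real DOM, the expensive document mutations stay proportional to what actually changed.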


React implements one-way reactive data flow which reduces boilerplate and is easier to reason about than traditional data binding.

Getting Started


The easiest way to start hacking on React is using the following JSFiddle Hello World examples:

Starter Kit

Download the starter kit to get started.

In the root directory of the starter kit, create a helloworld.html with the following contents.

<!DOCTYPE html>
<html>
  <head>
    <script src="build/react.js"></script>
    <script src="build/JSXTransformer.js"></script>
  </head>
  <body>
    <div id="example"></div>
    <script type="text/jsx">
      React.render(
        <h1>Hello, world!</h1>,
        document.getElementById('example')
      );
    </script>
  </body>
</html>

The XML syntax inside of JavaScript is called JSX; check out the JSX syntax to learn more about it. In order to translate it to vanilla JavaScript we use <script type="text/jsx"> and include JSXTransformer.js to actually perform the transformation in the browser.

Separate File

Your React JSX code can live in a separate file. Create the following src/helloworld.js.

React.render(
  <h1>Hello, world!</h1>,
  document.getElementById('example')
);

Then reference it from helloworld.html:

<script type="text/jsx" src="src/helloworld.js"></script>

Offline Transform

First install the command-line tools (requires npm):

npm install -g react-tools

Then, translate your src/helloworld.js file to plain JavaScript:

jsx --watch src/ build/

The file build/helloworld.js is autogenerated whenever you make a change.

React.render(
  React.createElement('h1', null, 'Hello, world!'),
  document.getElementById('example')
);

Update your HTML file as below:

<!DOCTYPE html>
<html>
  <head>
    <title>Hello React!</title>
    <script src="build/react.js"></script>
    <!-- No need for JSXTransformer! -->
  </head>
  <body>
    <div id="example"></div>
    <script src="build/helloworld.js"></script>
  </body>
</html>

Want CommonJS?

If you want to use React with browserify, webpack, or another CommonJS-compatible module system, just use the react npm package. In addition, the jsx build tool can be integrated into most packaging systems (not just CommonJS) quite easily.

Next Steps

Check out the tutorial and the other examples in the starter kit’s examples directory to learn more.

We also have a wiki where the community contributes with workflows, UI-components, routing, data management etc.

Good luck, and welcome!

A Simple Component

React components implement a render() method that takes input data and returns what to display. This example uses an XML-like syntax called JSX. Input data that is passed into the component can be accessed by render() via this.props.

JSX is optional and not required to use React. Try clicking on “Compiled JS” to see the raw JavaScript code produced by the JSX compiler.

This code displays “Hello John”:

var HelloMessage = React.createClass({
  render: function() {
    return <div>Hello {this.props.name}</div>;
  }
});

React.render(<HelloMessage name="John" />, mountNode);

A Stateful Component

In addition to taking input data (accessed via this.props), a component can maintain internal state data (accessed via this.state). When a component’s state data changes, the rendered markup will be updated by re-invoking render().

var Timer = React.createClass({
  getInitialState: function() {
    return {secondsElapsed: 0};
  },
  tick: function() {
    this.setState({secondsElapsed: this.state.secondsElapsed + 1});
  },
  componentDidMount: function() {
    this.interval = setInterval(this.tick, 1000);
  },
  componentWillUnmount: function() {
    clearInterval(this.interval);
  },
  render: function() {
    return (
      <div>Seconds Elapsed: {this.state.secondsElapsed}</div>
    );
  }
});

React.render(<Timer />, mountNode);

An Application

Using props and state, we can put together a small Todo application. This example uses state to track the current list of items as well as the text that the user has entered. Although event handlers appear to be rendered inline, they will be collected and implemented using event delegation.

var TodoList = React.createClass({
  render: function() {
    var createItem = function(itemText) {
      return <li>{itemText}</li>;
    };
    return <ul>{this.props.items.map(createItem)}</ul>;
  }
});
var TodoApp = React.createClass({
  getInitialState: function() {
    return {items: [], text: ''};
  },
  onChange: function(e) {
    this.setState({text: e.target.value});
  },
  handleSubmit: function(e) {
    e.preventDefault();
    var nextItems = this.state.items.concat([this.state.text]);
    var nextText = '';
    this.setState({items: nextItems, text: nextText});
  },
  render: function() {
    return (
      <div>
        <TodoList items={this.state.items} />
        <form onSubmit={this.handleSubmit}>
          <input onChange={this.onChange} value={this.state.text} />
          <button>{'Add #' + (this.state.items.length + 1)}</button>
        </form>
      </div>
    );
  }
});

React.render(<TodoApp />, mountNode);

A Component Using External Plugins

React is flexible and provides hooks that allow you to interface with other libraries and frameworks. This example uses Showdown, an external Markdown library, to convert the textarea’s value in real-time.

var converter = new Showdown.converter();

var MarkdownEditor = React.createClass({
  getInitialState: function() {
    return {value: 'Type some *markdown* here!'};
  },
  handleChange: function() {
    this.setState({value: this.refs.textarea.getDOMNode().value});
  },
  render: function() {
    return (
      <div className="MarkdownEditor">
        <textarea
          onChange={this.handleChange}
          ref="textarea"
          defaultValue={this.state.value} />
        <div
          dangerouslySetInnerHTML={{
            __html: converter.makeHtml(this.state.value)
          }}
        />
      </div>
    );
  }
});

React.render(<MarkdownEditor />, mountNode);

More info

Cross Site Request Forgery (CSRF)


CSRF (Cross-Site Request Forgery) is a type of malicious exploit of a website in which unauthorized commands are transmitted on behalf of a user whom the website trusts. This vulnerability is also known by other names, such as XSRF, hostile linking, one-click attack, session riding, and automatic attack.


A classic example occurs when a website has a user administration system in which, once an administrator is logged in, issuing the following GET request deletes the user with ID 63:

One way to exploit the CSRF vulnerability would be for another, attacker-controlled website to add the following HTML to one of its pages: <img src="">

When the administrator (still logged into the vulnerable site) browses the attacker’s site, the browser will try to load an image from that URL, and the resulting GET request will delete user 63.

404 on jquery min map file

If Chrome DevTools is reporting a 404 for a .map file (often a jQuery one, but it can happen with any minified script), the first thing to know is that this file is only requested when the DevTools are open. Your users will not be hitting this 404.
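The request happens because the minified file ends with a source-map pragma that DevTools follows automatically; the file name below is illustrative:

```javascript
// Last line of a minified file; DevTools fetches the referenced .map
// file only while the developer tools are open (file name illustrative).
//# sourceMappingURL=jquery.min.map
```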

Now you can fix this or disable the sourcemap functionality.

Fix: get the files

It’s an easy fix. Head to the jQuery download page and click the Download the map file link for your version; you’ll want the uncompressed file downloaded as well.


Having the map file in place allows you to debug your minified jQuery via the original sources, which will save a lot of time and frustration if you don’t like dealing with variable names like a and c.

More about sourcemaps here: An Introduction to JavaScript Source Maps

Dodge: disable sourcemaps

Instead of getting the files, you can alternatively disable JavaScript source maps entirely in the DevTools settings. This is a fine choice if you never plan on debugging JavaScript on this page. Click the cog icon in the bottom right of the DevTools to open Settings, then uncheck “Enable JavaScript source maps”.