Author Archives: admin

Zuora: Hosted pages advanced integration

[Source: https://knowledgecenter.zuora.com/CA_Commerce/G_Hosted_Commerce_Pages/B_Payment_Pages_2.0/H_Integrate_Payment_Pages_2.0/A_Advanced_Integration_of_Payment_Pages_2.0]

When you want full control over payment submission and the interaction with Zuora, you can implement a separate function in your client for Payment Page submission.

You then tie this function to an external submit button so that the function is invoked when the button is clicked. The response from payment creation is redirected to the callback page that you specified during the Payment Pages 2.0 configuration.

This advanced integration is supported only for the inline style Payment Page forms.

Below is an additional checklist to implement an inline Payment Page form with the external submit button. See Integrate Payment Pages 2.0 for the standard implementation steps.

The following is modified sample code from Integrate Payment Pages 2.0 to Website, updated to render an inline Payment Page form with an external submit button.

<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">
<!-- Zuora Public javascript library -->
<script type="text/javascript" src="https://static.zuora.com/Resources/libs/hosted/1.0.0/zuora-min.js"/></script>
<script>
// Optional params in case prepopulation of certain fields is desired
var prepopulateFields = {
   creditCardAddress1:"123 Any Street",
   creditCardAddress2:"Suite #999",
   creditCardCountry:"USA",
   creditCardHolderName:"John Doe"
};
 
// Sample params for rendering iFrame on the client side
var params = {
   tenantId:123,
   id:"ff80808145b3bf9d0145b3c6812b0008", <!-- pageId-->
   token:"qJ52b1iCyPXyZTcuQbfZa2qmKhD4qBGz",
   signature:"MjJmYjBmNTY3ZWI3ZjcyZTRmMjZlZWVhMTJhZDhiYWI1ZjUyMGRkNQ==",
   style:"inline",
   key:"MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC59DglWlsd82ooSVYyXoniF5rln3edz+5tdPLVBXXPDVXDCI9w7sneaj+XQs5LFaHZby117XzE8CFmoskVd2tsGLvXr83gEQ7eCXUrY0NDBFlAs0t+ChkB18VXG2DBbeUCI2poZJpCbpQm4rSvqUeY+8H/+/Stf4hXFWVPEEWyjwIDQAB",
   submitEnabled:"false", 
   locale:"fr_FR", 
   url:"http://www.zuora.com/apps/PublicHostedPageLite.do",
   paymentGateway: "DefaultGateway" //payment gateway name
};

function forwardCallbackURL(response) {
   var callbackUrl = "<%=request.getContextPath()%>/callback?";
   for(id in response) {
      callbackUrl = callbackUrl+id+"="+encodeURIComponent(response[id])+"&";
   }
   window.location.replace(callbackUrl);
}

var callback = function (response) {
   if(!response.success) {
      // Requesting Payment Page failed. Error handling code should be added here. Simply forward to the callback url in sample code.
      forwardCallbackURL(response);
   }
};

function loadHostedPage() {
   Z.render(
      params,
      prepopulateFields,
      callback
   );
}

//External button: function to submit form
function submitPage() {
    Z.submit();
}
</script>
</head>

<body onload="loadHostedPage();">
   <div id="zuora_payment" ></div>
   <!-- Add additional form fields as needed -->
   
   <!-- External button: button to submit form -->
   <button onclick="submitPage()">Submit</button>
</body>
</html>

Implement the Callback Page

After the Payment Pages 2.0 form has been submitted, your user is redirected to the callback page. Zuora returns the following results to the callback page by appending them to the callback URL.

  • When a payment method is successfully created, the following are returned:
    • success
    • refId: Represents the newly created payment method ID in Zuora.
    • Any pass-through parameters specified in the client
    • Any form field marked as Returned in Response in the Payment Page configuration. Only the Credit Card type Payment Pages can have fields returned in response.

Example callback URL: http://s1.hpm.com:8000/thanks?success=true&refId=2c92c0f84979bc700149830f938c7e66&phone=%2B1%28123%29456-7896&email=mickey%40zuora.com&creditCardCountry=NPL&token=L8Zh6xjRyVuXQzg0s29VPSLZHL3nQKkG&refId=2c92c0f84979bc700149830f938c7e66&field_passthrough1=Residential

  • When a Payment Page submission fails, the following are returned:
    • success
    • errorCode
    • errorMessage

Example callback URL: http://s1.hpm.com:8000/thanks?success=false&errorCode=HostedPageFieldValidationError&errorMessage=creditCardType%3ANullValue%2C%0A&errorField_creditCardType=NullValue

You need to code the callback page to process the response back from Zuora and any pass through parameters from Payment Pages 2.0.

See Configure Payment Pages 2.0 for setting the Callback Path for the Payment Pages.

Here is sample code that implements a callback page. This page parses the response information from the callback, composes a message, and displays it. Then it calls JavaScript methods, submitSucceed and submitFail, defined in the parent window for further processing. For example, if the Payment Page submission failed, it calls submitFail, which re-renders the Payment Page using Z.renderWithErrorHandler and displays a customized error message using Z.runAfterRender.
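Before looking at the JSP, here is a minimal sketch of what submitSucceed and submitFail might look like in the parent window, that is, in the page that renders the Payment Page in the first sample above. The redirect target, the showErrorsFromQueryString helper, and the serverErrorMessageCallback handler are illustrative assumptions, not part of the Zuora sample:

<script type="text/javascript">
function submitSucceed() {
   // The payment method was created; continue with your own flow,
   // for example by moving to the next checkout step (URL is illustrative).
   window.location.href = "/checkout/confirmation";
}

function submitFail(queryString) {
   // queryString contains the error details appended by Zuora (see the example
   // callback URLs below); parsing and displaying it is left to your own code.
   showErrorsFromQueryString(queryString); // hypothetical helper

   // Re-render the Payment Page with customized error handling, as described
   // in "Customize Error Messages in Advanced Integration" below.
   Z.renderWithErrorHandler(params, prepopulateFields, callback);
   Z.runAfterRender(serverErrorMessageCallback); // handler defined by you
}
</script>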

<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<%@ page import="com.zuora.hosted.lite.util.HPMHelper" %>
<%
    String message = "";
    if("Response_From_Submit_Page".equals(request.getParameter("responseFrom"))) {
        // Callback for submitting hosted page.
        if("true".equals(request.getParameter("success"))) {
            // Submitting hosted page succeeded.
            try {
                // Validate signature. The signature expires after 30 minutes.
                HPMHelper.validSignature(request.getParameter("signature"), 1000 * 60 * 30);
            } catch(Exception e) {
                // TODO: Error handling code should be added here.            
                
                throw e;
            }
            message = "Hosted Page submitted successfully. The payment method id is " + request.getParameter("refId") + ".";
        } else {
            // Submitting hosted page failed.
            message = "Hosted Page failed to submit. The reason is: " + request.getParameter("errorMessage");
        }        
    } else {
        // Callback for requesting hosted page. And requesting hosted page failed.
        message = "Hosted Page failed to load. The reason is: " + request.getParameter("errorMessage");
    }
%>
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">
<link href="css/hpm2samplecode.css" rel="stylesheet" type="text/css" />
<title>Result</title>
</head>
<body>
    <div class="firstTitle"><font size="4"><%=message%></font></div>
</body>
</html>
<script type="text/javascript">
<%
if("Response_From_Submit_Page".equals(request.getParameter("responseFrom"))) {
    if("true".equals(request.getParameter("success"))) {
        // Submitting hosted page succeeded.
%>
        window.parent.submitSucceed();
<%
    } else {
        // Submitting hosted page failed.
%>
        window.parent.submitFail("<%=request.getQueryString()%>");
<%
    }
}
%>
</script>
See Callback.jsp in the Zuora GitHub repository for a complete sample implementation that covers all three styles of Payment Pages 2.0: overlay, inline with the button inside, and inline with the button outside.

Callback Function vs Callback Page

The Callback URL in the advanced integration is not the same as the callback JavaScript function required for the basic integration. See Callback Function for the cases when you need to implement the JavaScript callback function.

The following table summarizes the Payment Pages 2.0 callback model to help you correctly design a callback function and/or a callback page.

Form with the Submit Button Inside
  • Additional fields required when configuring the Payment Page in the Zuora application: None
  • When the JavaScript callback function is called:
    • When the Payment Page request fails
    • When the Payment Page submission fails
    • When the payment method is successfully created
    • When the payment method creation fails
  • What the JavaScript callback function handles: all success or error responses from the Z.render function.
  • What happens after the Payment Page is submitted and the payment method is created: your user is redirected to the callback function, which processes the response back from Zuora and any pass-through parameters from Payment Pages 2.0.

Form with the Submit Button Outside
  • Additional fields required when configuring the Payment Page in the Zuora application: Callback Path field
  • When the JavaScript callback function is called:
    • When a Payment Page request fails
  • What the JavaScript callback function handles: only the error responses to the Payment Page request from the Z.render function.
  • What happens after the Payment Page is submitted and the payment method is created: your user is redirected to the callback page, which processes the response back from Zuora and any pass-through parameters from Payment Pages 2.0.

Customize Error Messages in Advanced Integration

If you implement Payment Pages with the Submit button outside, some errors will be generated after the Payment Page is submitted. Zuora will forward these error messages based on your callback URL.

Use the Z.runAfterRender function to handle the server-side error messages: Z.runAfterRender(serverErrorMessageCallback)

The function takes one input parameter which is the name of your custom function that handles the server-side error messages.

To customize error messages for Payment Pages with an external submit button:

  1. Define the error handling function.
  2. Use Z.renderWithErrorHandler to handle the client-side error messages.
  3. Use Z.runAfterRender to handle the server-side error messages.
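Putting those three steps together, a minimal sketch might look like the following. The handler name comes from the Z.runAfterRender description above; the argument it receives and the zuora_errors element are assumptions for illustration:

<script type="text/javascript">
// Step 1: define the error handling function (the payload shape is an assumption).
function serverErrorMessageCallback(errorMessage) {
   document.getElementById("zuora_errors").textContent = errorMessage;
}

function loadHostedPageWithErrorHandling() {
   // Step 2: render with client-side error handling instead of plain Z.render.
   // params, prepopulateFields and callback are the objects defined in the
   // rendering sample above.
   Z.renderWithErrorHandler(params, prepopulateFields, callback);
   // Step 3: register the server-side error handler.
   Z.runAfterRender(serverErrorMessageCallback);
}
</script>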

 

Pixel tagging

[Source: http://www.signal.co/resources/tag-management-101/]

Tag Management 101

Tags are the medium used to facilitate the collection and sharing of data between your website and the various technologies you utilize that rely upon this data (e.g. analytics platforms, marketing vendors). With the rapid expansion of the digital marketing ecosystem, there are thousands of marketing vendors, each with their own tag (or multiple tags) that they want you to add to your site so they can turn on their technologies and support your digital marketing initiatives.

Today sites can incorporate dozens to hundreds of these third party tags, and with this come challenges pertaining to implementation agility and speed, data quality, transparency and control over data collection, performance loss from the loading of tags, etc. Tag management solutions (TMS) have been developed to solve for a myriad of these challenges and to make the management of tags simpler and easier for both technical and non-technical users.

What is a tag?

A tag — sometimes referred to as a pixel or beacon — is the means by which data is collected on a website.

What do tags look like?

A tag may be a simple 1×1 transparent pixel or image tag loaded onto the web page or it could take the form of JavaScript code that allows for more advanced data collection.

Where are all these tags?

Tags are incorporated into the HTML/JavaScript code delivered to a web browser or app when a web page loads. Here’s a sample of what you might see if you look at the source code on a typical website using third-party tags.
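The original post shows this as a screenshot; as a stand-in, an illustrative example (the vendor domains and parameters are made up) might look like this:

<!-- Illustrative only: vendor domains and parameters below are made up. -->

<!-- A simple 1x1 image ("pixel") tag: -->
<img src="https://pixel.vendor-example.com/track?account=12345&event=pageview" width="1" height="1" style="display:none" alt="">

<!-- A JavaScript tag that loads a vendor's data-collection script: -->
<script type="text/javascript" src="https://cdn.analytics-example.com/collect.js" async></script>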

What do tags do?

Tags power online marketing and analytics. Specifically, they can:

  • Instruct web browsers to collect data;
  • Set cookies;
  • Extend audiences between multiple websites;
  • Integrate third-party content into a website (e.g. social media widgets, video players, ads, etc.).
  • Everything from ad campaigns to Google Analytics runs on tags.

What’s the difference between a tag and a cookie?

Tags are not cookies and cookies are not tags. Rather, a tag can be used to set a cookie. Cookies are text-only strings of code placed on a computer or device for a variety of purposes including remembering a user’s preferences or the contents of their online shopping cart.

What kind of data can be collected through a tag?

Tags can capture any action or event on a website or device. This may include:

  • User Context: Implicit information such as the IP address of your mobile phone, the type of web browser you are using or how you were referred to the site (e.g. search, clickthrough from an ad, etc.)
  • User Profile: Anonymous data stored in cookies such as a Profile ID or targeting criteria
  • User Behavior: Data including the products, content or ads you viewed, links clicked, time on the page, etc.

How many tags are on a typical website?

The average enterprise website may have anywhere from 50-150 third-party tags on the site at any given time. This number doesn’t reflect the high volume of fourth-party/piggybacked tags that are often appended to existing tags already in place. Third-party tags represent a wide range of data-driven applications, ranging from analytics services such as Omniture and Webtrends to digital marketing services that power social media, video, advertising, retargeting, search, etc. Some service providers offer more than one tag for various products in their portfolio, which can contribute to the number of tags on a brand’s site.

How do tags collect and send data?

When a user’s browser loads a webpage, the tag tells the browser to connect to a third-party marketing or analytics provider’s server for the purposes of data collection. The multi-step process looks something like this:
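The diagram from the original post is not reproduced here, but the mechanism can be sketched in a few lines of JavaScript (the collector URL and parameter names are illustrative, not a real vendor API):

(function () {
  // Data the tag has been told to collect (user context and behavior).
  var data = {
    page: window.location.href,
    referrer: document.referrer,
    event: "pageview"
  };

  // Serialize the data into a query string...
  var query = Object.keys(data).map(function (key) {
    return encodeURIComponent(key) + "=" + encodeURIComponent(data[key]);
  }).join("&");

  // ...and request a 1x1 pixel from the vendor's server. The request both
  // delivers the data and lets the vendor set or read its cookie.
  var pixel = new Image(1, 1);
  pixel.src = "https://collector.vendor-example.com/t.gif?" + query;
})();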

What are some of the pain points associated with tags?

Despite a tremendous amount of innovation in online marketing, the tag-driven method of gathering data from website activity has not changed since the 1990s. Tags often create internal conflict between marketing, IT and privacy stakeholders and can introduce pain points for everyone involved in online marketing.

  • Control and Ownership: When a site owner puts third-party code on their site, control over the data collection process is ceded to the third-party provider. The more tags, the more third parties with control over the site owner’s data.
  • Implementation: The traditional process of managing multiple third-party tags is a well-known operational challenge that requires site owners to place custom code on specific pages of their site. For many large brands, this typically requires the IT team to make the changes as part of a scheduled deployment process which can delay campaigns and lead to lost revenue opportunities.
  • Privacy: Multiple tags on a website put privacy at risk because third parties have access to the data collected on the site (see Control and Ownership above). Also, many brands must adapt their sites to comply with privacy regulation across markets and geographies which becomes increasingly difficult when data collection is in the hands of third parties.
  • Performance: Every new tag added to a site can introduce additional latency and degrade the customer experience. It is the end user visiting the website who bears the brunt of this overhead when they load pages containing hundreds of lines of third-party code.
  • Data loss: Sometimes tags fail to fire. For every failed tag, data is not collected and revenue opportunities may be lost.
  • Piggybacking: It is possible for tags to be chained together through a process called “piggybacking.” This enables tags to be appended to existing tags already in place on the website without making any changes to the page code. Piggybacking can add dozens of tags to a site and introduce services that the site owner may not be aware are on the site. Read more about the history of tags, tag containers and piggybacking on the “History of Tags” page of our website.

What is a tag container?

The tag container was originally introduced a decade ago by the major ad networks as a way to add a lot of tags to a website and manage them all in one place. Most online marketing professionals are familiar with Doubleclick’s Floodlight tag or the Atlas Universal Action Tag (UAT).

Containers are intended to make it easy to add a lot of tags to a website by injecting them into the browser through JavaScript or an invisible frame. Many companies – including Signal – offer more sophisticated versions of the tag container to address the growing complexity and operational challenges introduced by multiple third-party tags.

What are the limitations of tag containers?

At its core, a tag container is still sending third-party code through the end user’s browser, putting data ownership, performance and privacy at risk. There is also a danger of data loss for the site owner and third-party provider when browser tags don’t fire properly. Tag containers do not solve for the inevitability of tags going dark. When it comes to scalability and performance, tag containers depend on optimizations to the browser, which is yet another third-party service the site owner does not control.

Is there an alternative to the tag container and tag-driven approaches to data collection?

Yes, we’re so glad you asked! Signal’s vision is to fundamentally solve the problems that have emerged from the industry’s now outdated marketing infrastructure. Through technical integration with third-party service providers, Signal makes browser-based optimizations a non-issue by eliminating third-party tags from the browser and delivering data directly to partners through the cloud.

Technology has come a long way since the 1990s and Signal believes the future is a world where tags no longer get in the way of faster websites, better marketing programs and innovation for everyone.

SEO/SEM guide

[Source: http://www.dealerspan.com/themes/corp/tpl/content/services/SEO_Learning_Center.pdf]

SEO Definition

Search engine optimization (SEO) is the process of improving the volume and quality of traffic to a web site from search engines via “natural” (“organic” or “algorithmic”) search results. Usually, the earlier a site is presented in the search results, or the higher it “ranks”, the more searchers will visit that site. SEO can also target different kinds of search, including image search, local search, and industry-specific vertical search engines.

SEO is not necessarily an appropriate strategy for every website, and other Internet marketing strategies can be much more effective, depending on the site operator’s goals.[36] A successful Internet marketing campaign may drive organic search results to pages, but it also may involve the use of paid advertising on search engines and other pages, building high quality web pages to engage and persuade, addressing technical issues that may keep search engines from crawling and indexing those sites, setting up analytics programs to enable site owners to measure their successes, and improving a site’s conversion rate.

SEM Definition

Search Engine Marketing, or SEM, is a form of Internet Marketing that seeks to promote websites by increasing their visibility in the Search Engine results pages (SERPs) and has a proven ROI (Return on Investment). According to the Search Engine Marketing Professionals Organization, SEM methods include: Search Engine Optimization (or SEO), paid placement, and paid inclusion. Other sources, including the New York Times, define SEM as the practice of buying paid search listings, as distinct from SEO, which seeks to obtain better free search listings.

Industry Standards and Our Application

The search industry is always changing, updating and re-inventing itself, and as industry standards and trends change, we will update our evaluation criteria to give you the best overall information possible. Our goal is to provide reports that will allow you to see your Meta and keyword data, so that you have the information about your Website to begin evaluating your SEO needs. We urge you to learn as much about SEO as possible and apply it; therefore, we are providing basic information to act as a launch point for those who want to pursue it further. Much SEO data is open to debate and opinion, even among respected experts in the field, so we will take a safe, moderate and general approach to give our users the best possible overview for use over the greatest number of applications.

SEO vs SEM

Search engine marketing involves click costs. Search engine optimization works through free traffic. Those two facts are the basis of a popular myth: that it’s easier to get good ROI through SEO than it is to get the same ROI through SEM. In SEM, you decide the landing page your visitors see. In SEO, a search engine spider decides on the landing page visitors see. That’s a difference in control, and that difference makes all the difference.

3 Tags

Next we talk about three different tags: the Title tag, the Description tag and the Keyword tag. These tags are housed in the source code; if you right-click on any website and choose ‘View Source’, you will see them towards the top of the document.

Title Tags

Title tags are what the user sees at the top of the browser window, above the browser menu (File, Edit, View, etc.). For example, go to www.rimrockauto.com and you will see Rimrock Auto Group | New and Used Cars in Billings Montana. Search engines will grab any or all of those words and index them. When a person types in ‘New Cars Billings’, a search engine will know that rimrockauto.com has those words indexed and it will catapult your search engine rankings towards the top based on relevance. Relevance here means: all of those words were in the title tag. This is overly generalized, but a nice illustration.

Title tags are currently considered the most important Meta tag from a search engine perspective; the Title tag is a MUST. Getting it right can take a little time and research, but it is worth the effort. And, as with all Meta tags, place the most important keywords at the start and the lesser ones at the end of the tag.

Recommended length varies by search engine and directory. The average accepted length is about 70 characters (this includes letters and spaces). One major engine considers 60 the right length, where several of the other major players prefer 70 characters. Some larger and more moderate size engines and directories allow up to 130 characters.

What if your tag is too long? Most engines will simply truncate (or chop off) whatever characters go over their limits, which is why it is very important to place the best keywords at the front of the tag. If you submit your site to a large variety of search indexes and engines, remember that anything over 60-70 characters could be removed, and plan accordingly.

Best way to start? Figure out the best keywords for your Web page. Go through and list search terms you would like new visitors to use to find your page, check the search popularity of the words/terms, and look at your competitor’s sites and see what their tags say, and how they rank in the search engines. This will all help you build a short list of highly relevant keywords and search phrases. Once you have the list, prioritize them by importance, whether it is your importance or by search popularity is up to you. Use the very best ones in your title tag. You no longer need to include your Website name or domain in a title tag, unless you have a highly recognizable name or domain. You want to focus on describing the page and its products in your Title tag. Write it like a title, with each word capitalized, but do not write the title in all caps. It is also not recommended to use punctuation in the Title tag.

Description Tags

Not considered as important as it used to be, the description tag still has a valid purpose. Even though it has less value to a spider or “bot” driven search engine these days, it is still highly useful for search directories and various online listings. Think of it as a short, keyword-rich “classified ad.” In just a few words you want to convey your Web page’s message and do it with as many rich keywords as possible. Write it in a basic sentence structure, beginning with a capital letter and ending with a period. (A couple short sentences are fine, too, as long as the character count stays good.)

Recommended length varies by search engine and directory. The average accepted length is about 200 characters (this includes letters and spaces). One major engine considers 170 the right length, where several of the other major players prefer 200 characters. Some larger and more moderate size engines and directories allow up to 250 characters, with one having no limits. Keep in mind that there are a growing number of search engines that do not use Description tags, and it is mostly valued by search indexes and directories.

Keyword Tags

Currently considered less important than the Title and Description Meta tags, it is debated whether Keyword tags are of much use at all anymore. However, right now it does not hurt to have them, and there are still some engines and directories that do take them into account to some degree.

Recommended length varies by search engine and directory. The average accepted length is approximately 800 – 900 characters (this includes letters and spaces). Some allow 1000 characters, where others prefer under 800.

It is best to use the right keywords, not over use them, and stay relevant for the page. If only two or three words fit that guideline, then use them. Do not worry about trying to fill the Keywords tag to make it longer. The key is relevance, not length. You do not want to waste any opportunity to use relevant keywords, but if you do not need the space, you will not be penalized for it.

What is the best way to format this tag? There are two accepted ways to format the keywords. They both agree to separate each keyword and keyword phrase with a comma. Where they differ is, some advocate to use a comma and space (keyword, word, word2), and others prefer commas and no spaces (keyword,word,word2). Either format is accepted by the majority of search engines that use Keyword tags.
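To tie the three tags together, here is a rough sketch of how they might appear in a page’s head section (the values are illustrative and reuse the dealership example from above):

<head>
  <!-- Title tag: most important keywords first, roughly 60-70 characters -->
  <title>New and Used Cars in Billings Montana | Rimrock Auto Group</title>

  <!-- Description tag: a short, keyword-rich "classified ad" of about 200 characters -->
  <meta name="description" content="Shop new and used cars, trucks and SUVs in Billings, Montana, with financing, service and a large local inventory.">

  <!-- Keyword tag: only relevant terms, separated by commas -->
  <meta name="keywords" content="new cars billings, used cars billings, montana auto dealer">
</head>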

Why are Keyword tags losing their value? Search engines are always adjusting and changing their algorithms, methodology and criteria, always seeking to improve search technology and eliminate unfair ranking and cheating. The current trend is leaning away from a Keywords Meta tag and putting higher value on keywords used in the copywriting of a Web page’s textual content. The desire of the search industry is to bring a closer match to what the site visitor sees and what a search engine uses to rank a site, all in the goal of making the search experience more precise for the Internet user.

White hat versus black hat

SEO techniques are classified by some into two broad categories: techniques that search engines recommend as part of good design, and those techniques that search engines do not approve of and attempt to minimize the effect of, referred to as spamdexing. Some industry commentators classify these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO. White hats tend to produce results that last a long time, whereas black hats anticipate that their sites will eventually be banned once the search engines discover what they are doing.

A SEO tactic, technique or method is considered white hat if it conforms to the search engines’ guidelines and involves no deception. As the search engine guidelines are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see.

White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to game the algorithm. White hat SEO is in many ways similar to web development that promotes accessibility, although the two are not identical.

Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception. One black hat technique uses text that is hidden, either as text colored similarly to the background, placed in a region invisible to the user such as a hidden div, or positioned off screen. Another method serves a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.

Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines’ algorithms, or by a manual site review.

One infamous example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices. Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google’s list.

Dealerspan’s Strategies for SEM, SEO

1. Build our websites to meet current SEO strategies.
   a. Strategically written Title tags on EVERY page
   b. Strategically written Keyword tags on every page
   c. Strategically written Descriptions on every page
   d. Alt tags on every image
   e. Text on every page that can be indexed; no image-only pages
   f. Static URLs as much as possible
   g. If dynamic URLs are needed, make them as recognizable as possible, never going past 2 levels of directories
   h. Index-able inventory (non-Ajax)
   i. Standards-compliant markup
   j. Try to use links whenever possible (outbound and inbound)
   k. Site maps
   l. Submit site maps to search engines

2. Stay in tune with latest SEO trends and become astute to what this means.

3. Have someone monitor search engine paid placement. This is the core of SEM. This would be a full-time position that simply makes sure dealerships have paid placement in major search engines. In essence, we would be managing dealerships’ paid placements on search engines, either hands-on or through tools provided to the dealership.

4. In-house monitoring of SEO standards on websites. This means we do not put out under-par websites.

Search Engine Marketing Guide

How To Tips for Online Marketing

Our Search Engine Marketing Guide will help you understand the basics of modern online marketing and show you ways to increase your website’s popularity on the Internet. SiteGround’s SEM experts prepared this tutorial based on our own experience and years of hard work. We hope to save you time, hassle, and money when fighting for better search engine rankings, lowering your pay-per-click costs, and getting exposure on social media.

AngularJS : Unit Testing in AngularJS: Services, Controllers & Providers

[Source: http://www.sitepoint.com/unit-testing-angularjs-services-controllers-providers/]

AngularJS is designed with testability in mind. Dependency injection is one of the prominent features of the framework that makes unit testing easier. AngularJS defines a way to neatly modularize the application and divide it into different components such as controllers, directives, filters or animations. This model of development means that the individual pieces work in isolation and the application can scale easily over a long period of time. As extensibility and testability go hand-in-hand, it is easy to test AngularJS code.

As per the definition of unit testing, the system under test should be tested in isolation. So, any external objects needed by the system have to be replaced with mock objects. As the name itself says, the mock objects do not perform an actual task; rather they are used to meet the expectations of the system under test. If you need a refresher on mocking, please refer to one of my previous articles: Mocking Dependencies in AngularJS Tests.

In this article, I will share a set of tips on testing services, controllers and providers in AngularJS. The code snippets have been written using Jasmine and can be run with the Karma test runner. You can download the code used in this article from our GitHub repo, where you will also find instructions on running the tests.

Testing Services

Services are one of the most common components in an AngularJS application. They provide a way to define re-usable logic in a central place so that one doesn’t need to repeat the same logic over and over. The singleton nature of the service makes it possible to share the same piece of data across multiple controllers, directives and even other services.

A service can depend on a set of other services to perform its task. Say, a service named A depends on the services B, C and D to perform its task. While testing the service A, the dependencies B, C and D have to be replaced with mocks.

We generally mock all the dependencies, except certain utility services like $rootScope and $parse. We create spies on the methods that have to be inspected in the tests (in Jasmine, mocks are referred to as spies) using jasmine.createSpy() which will return a brand new function.

Let’s consider the following service:

angular.module('services', [])
  .service('sampleSvc', ['$window', 'modalSvc', function($window, modalSvc){
    this.showDialog = function(message, title){
      if(title){
        modalSvc.showModalDialog({
          title: title,
          message: message
        });
      } else {
        $window.alert(message);
      }
    };
  }]);

This service has just one method (showDialog). Depending on the value of the input this method receives, it calls one of two services that are injected into it as dependencies ($window or modalSvc).

To test sampleSvc we need to mock both of the dependent services, load the angular module that contains our service and get references to all the objects:

var mockWindow, mockModalSvc, sampleSvcObj;
beforeEach(function(){
  module(function($provide){
    $provide.service('$window', function(){
      this.alert= jasmine.createSpy('alert');
    });
    $provide.service('modalSvc', function(){
      this.showModalDialog = jasmine.createSpy('showModalDialog');
    });
  });
  module('services');
});
beforeEach(inject(function($window, modalSvc, sampleSvc){
  mockWindow=$window;
  mockModalSvc=modalSvc;
  sampleSvcObj=sampleSvc;
}));

Now we can test the behavior of the showDialog method. The two test cases we can write for the method are as follows:

  • it calls alert if no title parameter is passed in
  • it calls showModalDialog if both title and message parameters are present

The following snippet shows these tests:

it('should show alert when title is not passed into showDialog', function(){
  var message="Some message";
  sampleSvcObj.showDialog(message);
  expect(mockWindow.alert).toHaveBeenCalledWith(message);
  expect(mockModalSvc.showModalDialog).not.toHaveBeenCalled();
});
it('should show modal when title is passed into showDialog', function(){
  var message="Some message";
  var title="Some title";
  sampleSvcObj.showDialog(message, title);
  expect(mockModalSvc.showModalDialog).toHaveBeenCalledWith({
    message: message,
    title: title
  });
  expect(mockWindow.alert).not.toHaveBeenCalled();
});

This method doesn’t have a lot of logic to test, whereas the services in typical web apps would normally contain a lot of functionality. You can use the technique demonstrated in this tip for mocking and getting the references to services. The service tests should cover every possible scenario that was assumed while writing the service.

Factories and values can also be tested using the same technique.
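For instance, a value can be provided and verified with the same pattern (the value name and contents below are illustrative, not part of the sample project):

var appConfig;

beforeEach(module(function($provide){
  // Register an illustrative value in place of the real one.
  $provide.value('appConfig', { apiRoot: '/api' });
}));

beforeEach(inject(function(_appConfig_){
  appConfig = _appConfig_;
}));

it('should expose the configured API root', function(){
  expect(appConfig.apiRoot).toBe('/api');
});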

Testing Controllers

The set-up process for testing a controller is quite different from that of a service. This is because controllers are not injectable; rather, they are instantiated automatically when a route loads or an ng-controller directive is compiled. As we don’t have the views loading in tests, we need to manually instantiate the controller under test.

As the controllers are generally tied to a view, the behavior of methods in the controllers depends on the views. Also, some additional objects may get added to the scope after the view has been compiled. One of the most common examples of this is a form object. In order to make the tests work as expected, these objects have to be manually created and added to the controller.

A controller can be of one of the following types:

  • Controller used with $scope
  • Controller used with Controller as syntax

If you’re not sure on the difference, you can read more about it here. Either way, we will discuss both of these cases.

Testing Controllers with $scope

Consider the following controller:

angular.module('controllers',[])
  .controller('FirstController', ['$scope','dataSvc', function($scope, dataSvc) {
    $scope.saveData = function () {
      dataSvc.save($scope.bookDetails).then(function (result) {
        $scope.bookDetails = {};
        $scope.bookForm.$setPristine();
      });
    };
    $scope.numberPattern = /^\d*$/;
  }]);

To test this controller, we need to create an instance of the controller by passing in a $scope object and a mocked object of the service (dataSvc). As the service contains an asynchronous method, we need to mock that using the mocking promise technique I outlined in a previous article.

The following snippet mocks the dataSvc service:

var passPromise; // flag set by each spec to control whether the mocked save() resolves or rejects

module(function($provide){
  $provide.factory('dataSvc', ['$q', function($q){
    function save(data){
      if(passPromise){
        return $q.when();
      } else {
        return $q.reject();
      }
    }
    return {
      save: save
    };
  }]);
});

We can then create a new scope for the controller using the $rootScope.$new method. After creating an instance of the controller, we have all the fields and methods on this new $scope.

beforeEach(inject(function($rootScope, $controller, dataSvc){
  scope=$rootScope.$new();
  mockDataSvc=dataSvc;
  spyOn(mockDataSvc,'save').andCallThrough();
  firstController = $controller('FirstController', {
    $scope: scope,
    dataSvc: mockDataSvc
  });
}));

As the controller adds a field and a method to $scope, we can check if they are set to right values and if the methods have the correct logic. The sample controller above adds a regular expression to check for a valid number. Let’s add a spec to test the behavior of the regular expression:

it('should have assigned right pattern to numberPattern', function(){
    expect(scope.numberPattern).toBeDefined();
    expect(scope.numberPattern.test("100")).toBe(true);
    expect(scope.numberPattern.test("100aa")).toBe(false);
});

If a controller initializes any objects with default values, we can check their values in the spec.

To test the saveData method, we need to set some values for the bookDetails and bookForm objects. These objects would be bound to UI elements, so they are created at runtime when the view is compiled. As already mentioned, we need to manually initialize them with some values before calling the saveData method.

The following snippet tests this method:

it('should call save method on dataSvc on calling saveData', function(){
    scope.bookDetails = {
      bookId: 1,
      name: "Mastering Web application development using AngularJS",
      author:"Peter and Pawel"
    };
    scope.bookForm = {
      $setPristine: jasmine.createSpy('$setPristine')
    };
    passPromise = true;
    scope.saveData();
    scope.$digest();
    expect(mockDataSvc.save).toHaveBeenCalled();
    expect(scope.bookDetails).toEqual({});
    expect(scope.bookForm.$setPristine).toHaveBeenCalled();
});

Testing Controllers with ‘Controller as’ Syntax

Testing a controller which uses the Controller as syntax is easier than testing the one using $scope. In this case, an instance of the controller plays the role of a model. Consequently, all actions and objects are available on this instance.

Consider the following controller:

angular.module('controllers',[])
  .controller('SecondController', function(dataSvc){
    var vm=this;
    vm.saveData = function () {
      dataSvc.save(vm.bookDetails).then(function(result) {
        vm.bookDetails = {};
        vm.bookForm.$setPristine();
      });
    };
    vm.numberPattern = /^\d*$/;
  });

The process of invoking this controller is similar to the process discussed earlier. The only difference is that we don’t need to create a $scope.

beforeEach(inject(function($controller){
  secondController = $controller('SecondController', {
    dataSvc: mockDataSvc
  });
}));

As all members and methods in the controller are added to this instance, we can access them using the instance reference.

The following snippet tests the numberPattern field added to the above controller:

it('should have set pattern to match numbers', function(){
  expect(secondController.numberPattern).toBeDefined();
  expect(secondController.numberPattern.test("100")).toBe(true);
  expect(secondController.numberPattern.test("100aa")).toBe(false);
});

Assertions of the saveData method remain the same. The only difference in this approach is with the way we initialize values to the bookDetails and bookForm objects.

The following snippet shows the spec:

it('should call save method on dataSvc on calling saveData', function () {
  secondController.bookDetails = {
    bookId: 1,
    name: "Mastering Web application development using AngularJS",
    author: "Peter and Pawel"
  };
  secondController.bookForm = {
    $setPristine: jasmine.createSpy('$setPristine')
  };
  passPromise = true;
  secondController.saveData();
  rootScope.$digest();
  expect(mockDataSvc.save).toHaveBeenCalled();
  expect(secondController.bookDetails).toEqual({});
  expect(secondController.bookForm.$setPristine).toHaveBeenCalled();
});

Testing Providers

Providers are used to expose an API for application-wide configuration that must be made before the application starts. Once the configuration phase of an AngularJS application is over, interaction with providers is disallowed. Consequently, providers are only accessible in config blocks or other provider blocks. We cannot obtain a provider instance using an inject block; rather, we need to pass a callback to the module block.

Let’s consider the following provider, which depends on a constant (appConstants) and a second provider (anotherProvider):

angular.module('providers', [])
  .provider('sample', function(appConstants, anotherProvider){
    this.configureOptions = function(options){
      if(options.allow){
        anotherProvider.register(appConstants.ALLOW);
      } else {
        anotherProvider.register(appConstants.DENY);
      }
    };
    this.$get = function(){};
  });
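For context, in application code a provider like this is configured during the config phase, which is the only place the provider instance can be injected. A minimal sketch (the module name is illustrative):

// Configuration phase: the 'sample' provider is injected here as 'sampleProvider'.
angular.module('app', ['providers'])
  .config(function(sampleProvider){
    sampleProvider.configureOptions({ allow: true });
  });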

In order to test this, we first need to mock the dependencies. You can see how to do this in the sample code.
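A sketch of one way this mocking might look is shown below; the constant values and the mock provider’s shape are assumptions, and this block has to be registered before the providers module is loaded so that the provider’s dependencies can be resolved:

beforeEach(function(){
  module(function($provide){
    // Illustrative constant values used by the provider under test.
    $provide.constant('appConstants', { ALLOW: 'allow', DENY: 'deny' });

    // Mock 'another' provider exposing a spied-on register() method.
    $provide.provider('another', function(){
      this.register = jasmine.createSpy('register');
      this.$get = function(){ return {}; };
    });
  });
});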

Before testing the provider, we need to ensure that the module is loaded and ready. In tests, loading of the modules is deferred until an inject block is executed or the first test is executed. In a couple of projects, I have seen tests which use an empty first test to load the module. I am not a fan of this approach, as the test doesn’t do anything and adds to your total number of tests. Instead, I use an empty inject block to get the modules loaded.

The following snippet gets the references and loads the modules:

beforeEach(module("providers"));
beforeEach(function(){
  module(function(anotherProvider, appConstants, sampleProvider){
    anotherProviderObj=anotherProvider;
    appConstantsObj=appConstants;
    sampleProviderObj=sampleProvider;
  });
});
beforeEach(inject());

Now that we have all of the references, we can call methods defined in the providers and test them:

it('should call register with allow', function(){
  sampleProviderObj.configureOptions({allow:true});
  expect(anotherProviderObj.register).toHaveBeenCalled();
  expect(anotherProviderObj.register).toHaveBeenCalledWith(appConstantsObj.ALLOW);
});

Conclusion

Unit testing becomes tricky at times, but it is worth spending the time on it as it ensures the correctness of the application. AngularJS makes it easier to unit test code written using the framework. I hope this article gives you enough ideas to expand and enhance the tests in your applications. In a future article we will continue looking at how to test other pieces of your code.

Javascript unit testing : How to setup Karma JavaScript test runner

[Source: http://toon.io/how-to-setup-karma-javascript-test-runner/]

Karma is a test runner developed by the Angular team that aims to bring a productive test environment to developers. As a test runner, it allows us to run our client-side JavaScript tests in real browsers from the command line.

Though there is plenty of information about Karma on its website, I found it not at all obvious what the different configuration settings in its config file do. Let’s try to explain them using an example project.

JAVASCRIPT PROJECT WITH KARMA, MOCHA AND CHAI

The project has the following directory structure (get it from GitHub):

// Contains all source files
src/js/
src/vendor/

// Contains the tests
tests/unit/

// Contains nodejs modules
node_modules/

INSTALL TESTING LIBS

First, install Karma and some plugins:

// Install karma, mocha adapter and chai
npm install karma karma-mocha chai --save-dev

Everything is installed in the node_modules directory. Karma is the test runner. As a test runner it:

  • Starts a webserver to serve our JavaScript source and test files
  • Loads all files in the correct order
  • Spins up browsers to run our tests in

Mocha is a test framework. It is responsible for running our tests and reporting back which ones failed.

Chai is an assertion library. It provides a nice api to check whether our code does the things we expect it to do.
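To make this concrete, a minimal spec using Mocha and Chai might look like the following (the file name and the add function under test are made up for illustration):

// tests/unit/add.test.js
var expect = chai.expect;

describe('add', function () {
  it('adds two numbers', function () {
    expect(add(2, 3)).to.equal(5);
  });

  it('treats a missing second argument as zero', function () {
    // Behavior assumed for the illustrative add() defined somewhere in src/js/.
    expect(add(2)).to.equal(2);
  });
});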

CREATE KARMA CONFIG FILE

Next we create the Karma config file in our directory root running this command:

// Create Karma config file
./node_modules/karma/bin/karma init

This will prompt us with some questions and generate a config file. Items of interest to us are:

basePath: '',

The base path tells Karma where to load our files from. Setting it to '' means all files will be looked up from the root of our project.

files: [
    "node_modules/chai/chai.js",
    "src/**/*.js",
    "tests/**/*.js"
],

Files tells Karma which files it should load relative to the base path. These are:

  • All test related libraries
  • Our source code to test
  • The tests themselves

Note we instruct Karma to load chai.js. Forgetting this means chai will not be loaded.

The karma init command generates a karma.conf.js file containing this configuration.

What about Mocha? Doesn’t it need to be loaded too?

Yes it should. However, Karma will autoload all sibling node modules starting with karma-. This means if Karma is installed in the node_modules directory, all other modules starting with karma- will be autoloaded for us. And mocha is fetched as the karma-mocha node module. No need to manually load it ourselves.

frameworks: ['mocha'],

Frameworks instructs Karma to use the mocha testing framework.

browsers: ['Chrome'],

Browsers tells Karma to run the tests in Chrome. Note, Chrome needs to be installed on our computer. Running the tests from the command line will spin up a new Chrome window.

The karma-chrome-launcher node module needs to be installed too; the karma init command will auto-install it for us. If you add a browser later on, don’t forget to install its launcher too.

port: 9876,

Karma will start its own server on this port to serve our JavaScript files and tests. The default is ok, only change it when we’re already using that port.

autoWatch: true,

Auto watch instructs Karma to rerun the tests whenever a file changes.

singleRun: false

Single run tells Karma to shut down the browser when all tests have run. This is useful for Continuous Integration systems where only one run is needed. For development, set it to false, as this will keep the browser window open. This means no time is wasted spinning up a new browser every time the tests need to be executed.
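Putting the options above together, the generated karma.conf.js for this project would look roughly like this (trimmed to the settings discussed here):

// karma.conf.js
module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['mocha'],
    files: [
      'node_modules/chai/chai.js',
      'src/**/*.js',
      'tests/**/*.js'
    ],
    browsers: ['Chrome'],
    port: 9876,
    autoWatch: true,
    singleRun: false
  });
};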

RUN THE TESTS

Now we can run the tests like:

./node_modules/karma/bin/karma start

Notice we can also run the tests with ./node_modules/karma/bin/karma run. This will not spin up the server to load our test files. Use it in combination with karma start: start will spin up the browser and load the files, while run will instruct Karma to rerun the tests.

USE KARMA-CLI

It’s tiresome to always do ./node_modules/karma/bin/karma start instead of just karma start. Therefore install karma-cli globally.

npm install -g karma-cli

This tool executes the Karma commands for the project in the current directory. By exposing the commands in a separate module from the Karma implementation, we can use different versions of Karma for different projects on our computer.

Note: we might have installed Karma globally before. If so, we need to delete it first as it will mess up which plugins it will autoload. For example: we might have installed karma-mocha in our project dir but not globally. Executing Karma (as global installed module) will fail to load karma-mocha (since “global” Karma will only look into the globally installed modules). Just uninstall karma as a global module and install karma-cli globally instead.


Javascript : Qunit : Introduction to unit testing

[Source: https://qunitjs.com/intro/]

You probably know that testing is good, but the first hurdle to overcome when trying to write unit tests for client-side code is the lack of any actual units; JavaScript code is written for each page of a website or each module of an application and is closely intermixed with back-end logic and related HTML. In the worst case, the code is completely mixed with HTML, as inline event handlers.

This is likely the case when no JavaScript library for some DOM abstraction is being used; writing inline event handlers is much easier than using the DOM APIs to bind those events. More and more developers are picking up a library such as jQuery to handle the DOM abstraction, allowing them to move those inline events to distinct scripts, either on the same page or even in a separate JavaScript file. However, putting the code into separate files doesn’t mean that it is ready to be tested as a unit.

What is a unit anyway? In the best case, it is a pure function that you can deal with in some way — a function that always gives you the same result for a given input. This makes unit testing pretty easy, but most of the time you need to deal with side effects, which here means DOM manipulations. It’s still useful to figure out which units we can structure our code into and to build unit tests accordingly.

Building Unit Tests

With that in mind, we can obviously say that starting with unit testing is much easier when starting something from scratch. But that’s not what this article is about. This article is to help you with the harder problem: extracting existing code and testing the important parts, potentially uncovering and fixing bugs in the code.

The process of extracting code and putting it into a different form, without modifying its current behavior, is called refactoring. Refactoring is an excellent method of improving the code design of a program; and because any change could actually modify the behaviour of the program, it is safest to do when unit tests are in place.

This chicken-and-egg problem means that to add tests to existing code, you have to take the risk of breaking things. So, until you have solid coverage with unit tests, you need to continue manually testing to minimize that risk.

That should be enough theory for now. Let’s look at a practical example, testing some JavaScript code that is currently mixed in with and connected to a page. The code looks for links with title attributes, using those titles to display when something was posted, as a relative time value, like “5 days ago”:

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <meta name="generator" content=
  "HTML Tidy for Linux/x86 (vers 25 March 2009), see www.w3.org" />
  <meta charset="utf-8" />

  <title>Mangled date examples</title>
  <script type="text/javascript">
//<![CDATA[

function prettyDate(time) {
    var date = new Date(time || ""),
        diff = (((new Date()).getTime() - date.getTime()) / 1000),
        day_diff = Math.floor(diff / 86400);

    if (isNaN(day_diff) || day_diff < 0 || day_diff >= 31)
        return;

    return day_diff == 0 && (
            diff < 60 && "just now" ||
            diff < 120 && "1 minute ago" ||
            diff < 3600 && Math.floor(diff / 60) +
            " minutes ago" ||
            diff < 7200 && "1 hour ago" ||
            diff < 86400 && Math.floor(diff / 3600) +
            " hours ago") ||
        day_diff == 1 && "Yesterday" ||
        day_diff < 7 && day_diff + " days ago" ||
        day_diff < 31 && Math.ceil(day_diff / 7) +
        " weeks ago";
}
window.onload = function() {
    var links = document.getElementsByTagName("a");
    for (var i = 0; i < links.length; i++) {
        if (links[i].title) {
            var date = prettyDate(links[i].title);
            if (date) {
                links[i].innerHTML = date;
            }
        }
    }
};

  //]]>
  </script>
</head>

<body>
  <ul>
    <li class="entry">
      <p>blah blah blah...</p><small class="extra">Posted <span class="time"><a href=
      "#2008/01/blah/57/" title="2008-01-28T20:24:17Z"><span>January 28th,
      2008</span></a></span> by <span class="author"><a href="#john/">John
      Resig</a></span></small>
    </li><!-- more list items -->
  </ul>
</body>
</html>

If you ran that example, you’d see a problem: none of the dates get replaced. The code works, though. It loops through all anchors on the page and checks for a title property on each. If there is one, it passes it to the prettyDate function. If prettyDate returns a result, it updates the innerHTML of the link with the result.

Make Things Testable

The problem is that for any date older than 31 days, prettyDate just returns undefined (implicitly, with a single return statement), leaving the text of the anchor as is. So, to see what’s supposed to happen, we can hardcode a “current” date:

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Mangled date examples</title>
  <script>
  function prettyDate(now, time){
    var date = new Date(time || ""),
      diff = (((new Date(now)).getTime() - date.getTime()) / 1000),
      day_diff = Math.floor(diff / 86400);

    if ( isNaN(day_diff) || day_diff < 0 || day_diff >= 31 )
      return;

    return day_diff == 0 && (
        diff < 60 && "just now" ||
        diff < 120 && "1 minute ago" ||
        diff < 3600 && Math.floor( diff / 60 ) +
          " minutes ago" ||
        diff < 7200 && "1 hour ago" ||
        diff < 86400 && Math.floor( diff / 3600 ) +
          " hours ago") ||
      day_diff == 1 && "Yesterday" ||
      day_diff < 7 && day_diff + " days ago" ||
      day_diff < 31 && Math.ceil( day_diff / 7 ) +
        " weeks ago";
  }
  window.onload = function() {
    var links = document.getElementsByTagName("a");
    for ( var i = 0; i < links.length; i++ ) {
      if ( links[i].title ) {
        var date = prettyDate("2008-01-28T22:25:00Z",
          links[i].title);
        if ( date ) {
          links[i].innerHTML = date;
        }
      }
    }
  };
  </script>
</head>
<body>

<ul>
  <li class="entry">
    <p>blah blah blah...</p>
    <small class="extra">
      Posted <span class="time">
        <a href="#2008/01/blah/57/" title="2008-01-28T20:24:17Z">
          <span>January 28th, 2008</span>
        </a>
      </span>
      by <span class="author"><a href="#john/">John Resig</a></span>
    </small>
  </li>
  <!-- more list items -->
</ul>

</body>
</html>

Now, the links should say “2 hours ago,” “Yesterday” and so on. That’s something, but still not an actual testable unit. So, without changing the code further, all we can do is try to test the resulting DOM changes. Even if that did work, any small change to the markup would likely break the test, resulting in a really bad cost-benefit ratio for a test like that.

Refactoring, Stage 0

Instead, let’s refactor the code just enough to have something that we can unit test.

We need to make two changes for this to happen: pass the current date to the prettyDate function as an argument, instead of having it just use new Date, and extract the function to a separate file so that we can include the code on a separate page for unit tests.

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Refactored date examples</title>
  <script src="prettydate.js"></script>
  <script>
  window.onload = function() {
    var links = document.getElementsByTagName("a");
    for ( var i = 0; i < links.length; i++ ) {
      if ( links[i].title ) {
        var date = prettyDate("2008-01-28T22:25:00Z",
          links[i].title);
        if ( date ) {
          links[i].innerHTML = date;
        }
      }
    }
  };
  </script>
</head>
<body>

<ul>
  <li class="entry">
    <p>blah blah blah...</p>
    <small class="extra">
      Posted <span class="time">
        <a href="#2008/01/blah/57/" title="2008-01-28T20:24:17Z">
          <span>January 28th, 2008</span>
        </a>
      </span>
      by <span class="author"><a href="#john/">John Resig</a></span>
    </small>
  </li>
  <!-- more list items -->
</ul>

</body>
</html>

Here’s the contents of prettydate.js:

function prettyDate(now, time){
  var date = new Date(time || ""),
    diff = (((new Date(now)).getTime() - date.getTime()) / 1000),
    day_diff = Math.floor(diff / 86400);

  if ( isNaN(day_diff) || day_diff < 0 || day_diff >= 31 )
    return;

  return day_diff == 0 && (
      diff < 60 && "just now" ||
      diff < 120 && "1 minute ago" ||
      diff < 3600 && Math.floor( diff / 60 ) +
        " minutes ago" ||
      diff < 7200 && "1 hour ago" ||
      diff < 86400 && Math.floor( diff / 3600 ) +
        " hours ago") ||
    day_diff == 1 && "Yesterday" ||
    day_diff < 7 && day_diff + " days ago" ||
    day_diff < 31 && Math.ceil( day_diff / 7 ) +
      " weeks ago";
}

Now that we have something to test, let’s write some actual unit tests:

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Refactored date examples</title>
  <script src="prettydate.js"></script>
  <script>
  function test(then, expected) {
    results.total++;
    var result = prettyDate("2008/01/28 22:25:00", then);
    if (result !== expected) {
      results.bad++;
      console.log("Expected " + expected +
        ", but was " + result);
    }
  }
  var results = {
    total: 0,
    bad: 0
  };
  test("2008/01/28 22:24:30", "just now");
  test("2008/01/28 22:23:30", "1 minute ago");
  test("2008/01/28 21:23:30", "1 hour ago");
  test("2008/01/27 22:23:30", "Yesterday");
  test("2008/01/26 22:23:30", "2 days ago");
  test("2007/01/26 22:23:30", undefined);
  console.log("Of " + results.total + " tests, " +
    results.bad + " failed, " +
    (results.total - results.bad) + " passed.");
  </script>
</head>
<body>

</body>
</html>
  • Run this example. (Make sure to enable a console such as Firebug or Chrome’s Web Inspector.)

This will create an ad-hoc testing framework, using only the console for output. It has no dependencies on the DOM at all, so you could just as well run it in a non-browser JavaScript environment, such as Node.js or Rhino, by extracting the code in the script tag to its own file.
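
For example, here is a minimal sketch of the same tests as a standalone Node.js script (the file name test-node.js is our own; because prettydate.js only defines a global function rather than exporting anything, it is evaluated into scope here):

// test-node.js -- run with: node test-node.js
var fs = require("fs");
// prettydate.js defines a global prettyDate function, so evaluate it into scope
eval(fs.readFileSync(__dirname + "/prettydate.js", "utf8"));

var results = { total: 0, bad: 0 };
function test(then, expected) {
  results.total++;
  var result = prettyDate("2008/01/28 22:25:00", then);
  if (result !== expected) {
    results.bad++;
    console.log("Expected " + expected + ", but was " + result);
  }
}

test("2008/01/28 22:24:30", "just now");
test("2007/01/26 22:23:30", undefined);

console.log("Of " + results.total + " tests, " + results.bad + " failed, " +
  (results.total - results.bad) + " passed.");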

If a test fails, it will output the expected and actual result for that test. In the end, it will output a test summary with the total, failed and passed number of tests.

If all tests have passed, like they should here, you would see the following in the console:

Of 6 tests, 0 failed, 6 passed.

To see what a failed assertion looks like, we can change something to break it:

Expected 2 day ago, but was 2 days ago.

Of 6 tests, 1 failed, 5 passed.

While this ad-hoc approach is interesting as a proof of concept (you really can write a test runner in just a few lines of code), it’s much more practical to use an existing unit testing framework that provides better output and more infrastructure for writing and organizing tests.

The QUnit JavaScript Test Suite

The choice of framework is mostly a matter of taste. For the rest of this article, we’ll use QUnit (pronounced “q-unit”), because its style of describing tests is close to that of our ad-hoc test framework.

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Refactored date examples</title>

  <link rel="stylesheet" href="//code.jquery.com/qunit/qunit-1.18.0.css">
  <script src="//code.jquery.com/qunit/qunit-1.18.0.js"></script>
  <script src="prettydate.js"></script>

  <script>
  QUnit.test("prettydate basics", function( assert ) {
    var now = "2008/01/28 22:25:00";
    assert.equal(prettyDate(now, "2008/01/28 22:24:30"), "just now");
    assert.equal(prettyDate(now, "2008/01/28 22:23:30"), "1 minute ago");
    assert.equal(prettyDate(now, "2008/01/28 21:23:30"), "1 hour ago");
    assert.equal(prettyDate(now, "2008/01/27 22:23:30"), "Yesterday");
    assert.equal(prettyDate(now, "2008/01/26 22:23:30"), "2 days ago");
    assert.equal(prettyDate(now, "2007/01/26 22:23:30"), undefined);
  });
  </script>
</head>
<body>

<div id="qunit"></div>

</body>
</html>

Three sections are worth a closer look here. Along with the usual HTML boilerplate, we have three included files: two files for QUnit (qunit.css and qunit.js) and the previous prettydate.js.

Then, there’s another script block with the actual tests. The test method is called once, passing a string as the first argument (naming the test) and passing a function as the second argument (which will run the actual code for this test). This code then defines the now variable, which gets reused below, then calls the equal method a few times with varying arguments. The equal method is one of several assertions that QUnit provides through the first parameter in the callback function of the test block. The first argument is the result of a call to prettyDate, with the now variable as the first argument and a date string as the second. The second argument to equal is the expected result. If the two arguments to equal are the same value, then the assertion will pass; otherwise, it will fail.

Finally, in the body element is some QUnit-specific markup. These elements are optional. If present, QUnit will use them to output the test results.

The result is the QUnit test runner page, listing the test with all of its assertions passing.

With a failed test, the runner output would look something like this:

Because the test contains a failing assertion, QUnit doesn’t collapse the results for that test, and we can see immediately what went wrong. Along with the output of the expected and actual values, we get a diff between the two, which can be useful for comparing larger strings. Here, it’s pretty obvious what went wrong.

Refactoring, Stage 1

The assertions are currently somewhat incomplete because we aren’t yet testing the n weeks ago variant. Before adding it, we should consider refactoring the test code. Currently, we are calling prettyDate for each assertion and passing the now argument. We could easily refactor this into a custom assertion method:

QUnit.test("prettydate basics", function( assert ) {
  function date(then, expected) {
    assert.equal(prettyDate("2008/01/28 22:25:00", then), expected);
  }
  date("2008/01/28 22:24:30", "just now");
  date("2008/01/28 22:23:30", "1 minute ago");
  date("2008/01/28 21:23:30", "1 hour ago");
  date("2008/01/27 22:23:30", "Yesterday");
  date("2008/01/26 22:23:30", "2 days ago");
  date("2007/01/26 22:23:30", undefined);
});

Here we’ve extracted the call to prettyDate into the date function, inlining the now variable into the function. We end up with just the relevant data for each assertion, making it easier to read, while the underlying abstraction remains pretty obvious.

Testing The DOM Manipulation

Now that the prettyDate function is tested well enough, let’s shift our focus back to the initial example. Along with the prettyDate function, it also selected some DOM elements and updated them, within the window load event handler. Applying the same principles as before, we should be able to refactor that code and test it. In addition, we’ll introduce a module for these two functions, to avoid cluttering the global namespace and to be able to give these individual functions more meaningful names.

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Refactored date examples</title>
  <link rel="stylesheet" href="//code.jquery.com/qunit/qunit-1.18.0.css">
  <script src="//code.jquery.com/qunit/qunit-1.18.0.js"></script>
  <script src="prettydate2.js"></script>
  <script>
  QUnit.test("prettydate.format", function( assert ) {
    function date(then, expected) {
      assert.equal(prettyDate.format("2008/01/28 22:25:00", then),
        expected);
    }
    date("2008/01/28 22:24:30", "just now");
    date("2008/01/28 22:23:30", "1 minute ago");
    date("2008/01/28 21:23:30", "1 hour ago");
    date("2008/01/27 22:23:30", "Yesterday");
    date("2008/01/26 22:23:30", "2 days ago");
    date("2007/01/26 22:23:30", undefined);
  });

  QUnit.test("prettyDate.update", function( assert ) {
    var links = document.getElementById("qunit-fixture")
      .getElementsByTagName("a");
    assert.equal(links[0].innerHTML, "January 28th, 2008");
    assert.equal(links[2].innerHTML, "January 27th, 2008");
    prettyDate.update("2008-01-28T22:25:00Z");
    assert.equal(links[0].innerHTML, "2 hours ago");
    assert.equal(links[2].innerHTML, "Yesterday");
  });

  QUnit.test("prettyDate.update, one day later", function( assert ) {
    var links = document.getElementById("qunit-fixture")
      .getElementsByTagName("a");
    assert.equal(links[0].innerHTML, "January 28th, 2008");
    assert.equal(links[2].innerHTML, "January 27th, 2008");
    prettyDate.update("2008/01/29 22:25:00");
    assert.equal(links[0].innerHTML, "Yesterday");
    assert.equal(links[2].innerHTML, "2 days ago");
  });
  </script>
</head>
<body>

<div id="qunit"></div>
<div id="qunit-fixture">

<ul>
  <li class="entry">
    <p>blah blah blah...</p>
    <small class="extra">
      Posted <span class="time">
        <a href="#2008/01/blah/57/" title="2008-01-28T20:24:17Z"
          >January 28th, 2008</a>
      </span>
      by <span class="author"><a href="#john/">John Resig</a></span>
    </small>
  </li>
  <li class="entry">
    <p>blah blah blah...</p>
    <small class="extra">
      Posted <span class="time">
        <a href="#2008/01/blah/57/" title="2008-01-27T22:24:17Z"
          >January 27th, 2008</a>
      </span>
      by <span class="author"><a href="#john/">John Resig</a></span>
    </small>
  </li>
</ul>

</div>

</body>
</html>

Here’s the contents of prettydate2.js:

var prettyDate = {
  format: function(now, time){
    var date = new Date(time || ""),
      diff = (((new Date(now)).getTime() - date.getTime()) / 1000),
      day_diff = Math.floor(diff / 86400);

    if ( isNaN(day_diff) || day_diff < 0 || day_diff >= 31 )
      return;

    return day_diff === 0 && (
        diff < 60 && "just now" ||
        diff < 120 && "1 minute ago" ||
        diff < 3600 && Math.floor( diff / 60 ) +
          " minutes ago" ||
        diff < 7200 && "1 hour ago" ||
        diff < 86400 && Math.floor( diff / 3600 ) +
          " hours ago") ||
      day_diff === 1 && "Yesterday" ||
      day_diff < 7 && day_diff + " days ago" ||
      day_diff < 31 && Math.ceil( day_diff / 7 ) +
        " weeks ago";
  },

  update: function(now) {
    var links = document.getElementsByTagName("a");
    for ( var i = 0; i < links.length; i++ ) {
      if ( links[i].title ) {
        var date = prettyDate.format(now, links[i].title);
        if ( date ) {
          links[i].innerHTML = date;
        }
      }
    }
  }
};

The new prettyDate.update function is an extract of the initial example, but with the now argument to pass through to prettyDate.format. The QUnit-based test for that function starts by selecting all a elements within the #qunit-fixture element. In the updated markup in the body element, the <div id="qunit-fixture">…</div> is new. It contains an extract of the markup from our initial example, enough to write useful tests against. By putting it in the #qunit-fixture element, we don’t have to worry about DOM changes from one test affecting other tests, because QUnit will automatically reset the markup after each test.

Let’s look at the first test for prettyDate.update. After selecting those anchors, two assertions verify that these have their initial text values. Then prettyDate.update is called, passing along a fixed date (the same as in previous tests). Afterwards, two more assertions are run, now verifying that the innerHTML property of these elements has the correctly formatted date, “2 hours ago” and “Yesterday.”

Refactoring, Stage 2

The next test, prettyDate.update, one day later, does nearly the same thing, except that it passes a different date to prettyDate.update and, therefore, expects different results for the two links. Let’s see if we can refactor these tests to remove the duplication.

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Refactored date examples</title>
  <link rel="stylesheet" href="//code.jquery.com/qunit/qunit-1.18.0.css">
  <script src="//code.jquery.com/qunit/qunit-1.18.0.js"></script>
  <script src="prettydate2.js"></script>
  <script>
  QUnit.test("prettydate.format", function( assert ) {
    function date(then, expected) {
      assert.equal(prettyDate.format("2008/01/28 22:25:00", then),
        expected);
    }
    date("2008/01/28 22:24:30", "just now");
    date("2008/01/28 22:23:30", "1 minute ago");
    date("2008/01/28 21:23:30", "1 hour ago");
    date("2008/01/27 22:23:30", "Yesterday");
    date("2008/01/26 22:23:30", "2 days ago");
    date("2007/01/26 22:23:30", undefined);
  });

  function domtest(name, now, first, second) {
    QUnit.test(name, function( assert ) {
      var links = document.getElementById("qunit-fixture")
        .getElementsByTagName("a");
      assert.equal(links[0].innerHTML, "January 28th, 2008");
      assert.equal(links[2].innerHTML, "January 27th, 2008");
      prettyDate.update(now);
      assert.equal(links[0].innerHTML, first);
      assert.equal(links[2].innerHTML, second);
    });
  }
  domtest("prettyDate.update", "2008-01-28T22:25:00Z",
    "2 hours ago", "Yesterday");
  domtest("prettyDate.update, one day later", "2008/01/29 22:25:00",
    "Yesterday", "2 days ago");
  </script>
</head>
<body>

<div id="qunit"></div>
<div id="qunit-fixture">

<ul>
  <li class="entry">
    <p>blah blah blah...</p>
    <small class="extra">
      Posted <span class="time">
        <a href="#2008/01/blah/57/" title="2008-01-28T20:24:17Z"
          >January 28th, 2008</a>
      </span>
      by <span class="author"><a href="#john/">John Resig</a></span>
    </small>
  </li>
  <li class="entry">
    <p>blah blah blah...</p>
    <small class="extra">
      Posted <span class="time">
        <a href="#2008/01/blah/57/" title="2008-01-27T22:24:17Z"
          >January 27th, 2008</a>
      </span>
      by <span class="author"><a href="#john/">John Resig</a></span>
    </small>
  </li>
</ul>

</div>

</body>
</html>

Here we have a new function called domtest, which encapsulates the logic of the two previous calls to test, introducing arguments for the test name, the date string and the two expected strings. It then gets called twice.

Back To The Start

With that in place, let’s go back to our initial example and see what that looks like now, after the refactoring.

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Final date examples</title>
  <script src="prettydate2.js"></script>
  <script>
  window.onload = function() {
    prettyDate.update("2008-01-28T22:25:00Z");
  };
  </script>
</head>
<body>

<ul>
  <li class="entry">
    <p>blah blah blah...</p>
    <small class="extra">
      Posted <span class="time">
        <a href="#2008/01/blah/57/" title="2008-01-28T20:24:17Z">
          <span>January 28th, 2008</span>
        </a>
      </span>
      by <span class="author"><a href="#john/">John Resig</a></span>
    </small>
  </li>
  <!-- more list items -->
</ul>

</body>
</html>

For a non-static example, we’d remove the argument to prettyDate.update. All in all, the refactoring is a huge improvement over the first example. And thanks to the prettyDate module that we introduced, we can add even more functionality without clobbering the global namespace.
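
A sketch of that non-static version (this snippet is ours, not from the article), passing the real current time instead of the hardcoded string:

  window.onload = function() {
    // new Date() is accepted by prettyDate.format, because new Date(now)
    // handles a Date object as well as a date string
    prettyDate.update(new Date());
  };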

Conclusion

Testing JavaScript code is not just a matter of using some test runner and writing a few tests; it usually requires some heavy structural changes when applied to code that has been tested only manually before. We’ve walked through an example of how to change the code structure of an existing module to run some tests using an ad-hoc testing framework, and then replace that with a more full-featured framework to get useful visual results.

QUnit itself has a lot more to offer, with specific support for testing asynchronous code such as timeouts, AJAX and events. Its visual test runner helps to debug code by making it easy to rerun specific tests and by providing stack traces for failed assertions and caught exceptions. For further reading, check out the QUnit Cookbook.

Originally published on Smashing Magazine, June 2012

The Basics of Object-Oriented JavaScript

[Fuente: http://code.tutsplus.com/tutorials/the-basics-of-object-oriented-javascript–net-7670 ]

Over recent years, JavaScript has increasingly gained popularity, partly due to libraries that are developed to make JavaScript apps/effects easier to create for those who may not have fully grasped the core language yet.

While it was once a common argument that JavaScript was a basic, ‘slapdash’ language with no real foundation, this is no longer the case, especially with the introduction of large-scale web applications and ‘adaptations’ such as JSON (JavaScript Object Notation).

JavaScript can have all that an Object-Oriented language has to offer, albeit with some extra effort outside of the scope of this article.

Let’s Create an Object

    function myObject(){

    };

Congratulations, you just created an object. There are two ways to create a JavaScript object: ‘Constructor functions’ and ‘Literal notation’. The one above is a Constructor function. I’ll explain the difference shortly, but before I do, here is what an Object definition looks like using literal notation.

    var myObject = {

    };

Literal notation is preferred for namespacing, so that your JavaScript code doesn’t interfere with other scripts running on the page (or vice versa), and also when you are using this object as a single object and don’t require more than one instance of it. Constructor-function notation is preferred if you need to do some initial work before the object is created, or if you require multiple instances of the object, where each instance can be changed during the lifetime of the script. Let’s continue to build on both our objects simultaneously so we can observe what the differences are.

Defining Methods and Properties

Constructor version:

    function myObject(){
        this.iAm = 'an object';
        this.whatAmI = function(){
            alert('I am ' + this.iAm);
        };
    };

Literal version:

    var myObject = {
        iAm : 'an object',
        whatAmI : function(){
            alert('I am ' + this.iAm);
        }
    }

For each of the objects we have created a property ‘iAm’, which contains a string value that is used in our object’s method ‘whatAmI’, which alerts a message.

Properties are variables created inside an object and methods are functions created inside an object.

Now is probably as good a time as any to explain how to use properties and methods (although you would already have done so if you are familiar with a library).

To use a property first you type what object it belongs to – so in this case it’s myObject – and then to reference its internal properties, you put a full stop and then the name of the property so it will eventually look like myObject.iAm (this will return ‘an object’).

For methods, it is the same except that, to execute the method, as with any function, you must put parentheses after it; otherwise you will just be returning a reference to the function and not what the function actually returns. So it will look like myObject.whatAmI() (this will alert ‘I am an object’).

Now for the differences:

  • The constructor object has its properties and methods defined with the keyword ‘this’ in front of it, whereas the literal version does not.
  • In the constructor object the properties/methods have their ‘values’ defined after an equal sign ‘=’ whereas in the literal version, they are defined after a colon ‘:’.
  • The constructor function can have (optional) semi-colons ‘;’ at the end of each property/method declaration whereas in the literal version if you have more than one property or method, they MUST be separated with a comma ‘,’, and they CANNOT have semi-colons after them, otherwise JavaScript will return an error.

There is also a difference between the way these two types of object declarations are used.

To use a literally notated object, you simply use it by referencing its variable name, so wherever it is required you call it by typing:

    myObject.whatAmI();

With constructor functions you need to instantiate (create a new instance of) the object first; you do this by typing:

    var myNewObject = new myObject();
    myNewObject.whatAmI();

Using a Constructor Function

Let’s use our previous constructor function and build upon it so it performs some basic (but dynamic) operations when we instantiate it.

    function myObject(){
        this.iAm = 'an object';
        this.whatAmI = function(){
            alert('I am ' + this.iAm);
        };
    };

Just like any JavaScript function, we can use arguments with our constructor function:

function myObject(what){
        this.iAm = what;
        this.whatAmI = function(language){
                alert('I am ' + this.iAm + ' of the ' + language + ' language');
        };
};

Now let’s instantiate our object and call its whatAmI method, filling in the required fields as we do so.

    var myNewObject = new myObject('an object');
    myNewObject.whatAmI('JavaScript');

This will alert ‘I am an object of the JavaScript language.’

To Instantiate or not to Instantiate

I mentioned earlier the difference between Object Constructors and Object Literals: when a change is made to an Object Literal it affects that object across the entire script, whereas when a Constructor function is instantiated and a change is then made to that instance, it won’t affect any other instances of that object. Let’s try an example:

First we will create an Object literal:

 var myObjectLiteral = {
        myProperty : 'this is a property'
    }

    //alert current myProperty
    alert(myObjectLiteral.myProperty); //this will alert 'this is a property'

    //change myProperty
    myObjectLiteral.myProperty = 'this is a new property';

    //alert current myProperty
    alert(myObjectLiteral.myProperty); //this will alert 'this is a new property', as expected

Even if you create a new variable and point it towards the object, it will have the same effect.

 var myObjectLiteral = {
        myProperty : 'this is a property'
    }

    //alert current myProperty
    alert(myObjectLiteral.myProperty); //this will alert 'this is a property'

    //define new variable with object as value
    var sameObject = myObjectLiteral;

    //change myProperty
    myObjectLiteral.myProperty = 'this is a new property';

    //alert current myProperty
    alert(sameObject.myProperty); //this will still alert 'this is a new property'

Now let’s try a similar exercise with a Constructor function.

    //this is one other way of creating a Constructor function
    var myObjectConstructor = function(){
        this.myProperty = 'this is a property';
    }

    //instantiate our Constructor
    var constructorOne = new myObjectConstructor();

    //instantiate a second instance of our Constructor
    var constructorTwo = new myObjectConstructor();

    //alert current myProperty of constructorOne instance
    alert(constructorOne.myProperty); //this will alert 'this is a property'

     //alert current myProperty of constructorTwo instance
    alert(constructorTwo.myProperty); //this will alert 'this is a property'

So as expected, both return the correct value, but let’s change the myProperty for one of the instances.

    //this is one other way of creating a Constructor function
    var myObjectConstructor = function(){
        this.myProperty = 'this is a property';
    }

    //instantiate our Constructor
    var constructorOne = new myObjectConstructor();

    //change myProperty of the first instance
    constructorOne.myProperty = 'this is a new property';

    //instantiate a second instance of our Constructor
    var constructorTwo = new myObjectConstructor();

    //alert current myProperty of constructorOne instance
    alert(constructorOne.myProperty); //this will alert 'this is a new property'

     //alert current myProperty of constructorTwo instance
    alert(constructorTwo.myProperty); //this will still alert 'this is a property'

As you can see from this example, even though we changed the property of constructorOne it didn’t affect myObjectConstructor and therefore didn’t affect constructorTwo. Even if constructorTwo was instantiated before we changed the myProperty property of constructorOne, it would still not affect the myProperty property of constructorTwo as it is a completely different instance of the object within JavaScript’s memory.

So which one should you use? Well it depends on the situation, if you only need one object of its kind for your script (as you will see in our example at the end of this article), then use an object literal, but if you need several instances of an object, where each instance is independent of the other and can have different properties or methods depending on the way it’s constructed, then use a constructor function.

This and That

While explaining constructor functions, there were a lot of ‘this’ keywords being thrown around and I figure what better time to talk about scope!

Now you might be asking ‘what is this scope you speak of?’ Scope in JavaScript is function/object based, so that means if you’re outside of a function, you can’t use a variable that is defined inside a function (unless you use a closure).

There is however a scope chain, which means that a function inside another function can access a variable defined in its parent function. Let’s take a look at some example code.

<script type="text/javascript">

var var1 = 'this is global and is available to everyone';

function function1(){

        var var2 = 'this is only available inside function1 and function2';     

        function function2(){

                var var3 = 'this is only available inside function2';

        }               

}

</script>

As you can see in this example, var1 is defined in the global object and is available to all functions and objects. var2 is defined inside function1 and is available to function1 and function2, but if you try to reference it from the global object it will give you the error ‘var2 is undefined’. var3 is only accessible to function2.
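
As a small sketch of the closure escape hatch mentioned earlier (this example is an addition, not part of the original code), an inner function can carry its parent’s variable out with it even though the variable itself stays private:

function function1(){

        var var2 = 'this is only available inside function1 and function2';

        //returning an inner function creates a closure over var2
        return function function2(){
                return 'via closure: ' + var2;
        };

}

var getVar2 = function1();
alert(getVar2()); //alerts 'via closure: this is only available inside function1 and function2'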

So what does ‘this’ reference?

  • Well in a browser, ‘this’ references the window object, so technically the window is our global object.
  • If we’re inside an object, ‘this’ will refer to the object itself; however, if you’re inside a function, ‘this’ will still refer to the window object,
  • and likewise if you’re inside a method that is within an object, ‘this’ will refer to the object.
  • Due to our scope chain, if we’re inside a sub-object (an object inside an object), ‘this’ will refer to the sub-object and not the parent object.

As a side note, it’s also worth adding that when you execute a function or method via setInterval, setTimeout or eval, ‘this’ refers to the window object, as these are methods of window; so setInterval() and window.setInterval() are the same.
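
A quick sketch to illustrate that side note (again, an addition of ours): calling a method directly gives it the object as ‘this’, but handing the same function to setTimeout leaves ‘this’ pointing at the window object.

    var myTimer = {
        name : 'myTimer',
        whoAmI : function(){
            //'this' is myTimer when called as myTimer.whoAmI(),
            //but the window object when the function is run by setTimeout
            alert(this === myTimer ? 'this is myTimer' : 'this is window');
        }
    };

    myTimer.whoAmI();              //alerts 'this is myTimer'
    setTimeout(myTimer.whoAmI, 0); //alerts 'this is window'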

Ok now that we have that out of the way, let’s do a real world example and create a form validation object!

Real world Usage: A Form Validation Object

First I must introduce you to the addEvent function, which we will create; it is a combination of the standard DOM addEventListener() method (Firefox, Safari, etc.) and the attachEvent() method from older versions of Internet Explorer.

    function addEvent(to, type, fn){
        if(document.addEventListener){
            to.addEventListener(type, fn, false);
        } else if(document.attachEvent){
            to.attachEvent('on'+type, fn);
        } else {
            to['on'+type] = fn;
        }       
    };

This creates a new function with three arguments: to, the DOM object we are attaching the event to; type, the type of event; and fn, the function run when the event is triggered. It first checks whether addEventListener is supported; if so, it will use that. If not, it will check for attachEvent, and if all else fails you are probably using IE5 or something equally obsolete, so we will add the event directly onto its event property (note: the third option will overwrite any existing function that may have been attached to the event property, while the first two will add it as an additional function to its event property).

Now let’s set up our document so it is similar to what you might see when you develop jQuery stuff.

In jQuery you would have;

    $(document).ready(function(){
        //all our code that runs after the page is ready goes here
    });

Using our addEvent function we have;

    addEvent(window, 'load', function(){
                //all our code that runs after the page is ready goes here
        });

Now for our Form object.

var Form = {

        validClass : 'valid',

        fname : {
                minLength : 1,          
                maxLength : 15, 
                fieldName : 'First Name'
        },

        lname : {
                minLength : 1,          
                maxLength : 25,
                fieldName : 'Last Name'
        },

        validateLength : function(formEl, type){
                if(formEl.value.length > type.maxLength || formEl.value.length < type.minLength ){        
                        formEl.className = formEl.className.replace(' '+Form.validClass, '');
                        return false;
                } else {
                        if(formEl.className.indexOf(' '+Form.validClass) == -1)
                        formEl.className += ' '+Form.validClass;
                        return true;
                }
        },

        validateEmail : function(formEl){
                var regEx = /^([0-9a-zA-Z]([-.\w]*[0-9a-zA-Z])*@([0-9a-zA-Z][-\w]*[0-9a-zA-Z]\.)+[a-zA-Z]{2,9})$/;
                var emailTest = regEx.test(formEl.value);                
                if (emailTest) {
                        if(formEl.className.indexOf(' '+Form.validClass) == -1)                 
                        formEl.className += ' '+Form.validClass;            
                        return true;
                } else {
                        formEl.className = formEl.className.replace(' '+Form.validClass, '');
                        return false;
                }                       
        },              

        getSubmit : function(formID){    
                var inputs = document.getElementById(formID).getElementsByTagName('input');
                for(var i = 0; i < inputs.length; i++){
                        if(inputs[i].type == 'submit'){
                                return inputs[i];
                        }               
                }               
                return false;
        }                       

};

So this is quite basic but can easily be expanded upon.

To break this down: first we create a new property which is just the string name of our ‘valid’ CSS class that, when applied to a form field, adds valid effects such as a green border. We also define our two sub-objects, fname and lname, so we can give them their own properties to be used by methods elsewhere. These properties are minLength, the minimum number of characters the field can have; maxLength, the maximum number of characters the field can have; and fieldName, which doesn’t actually get used, but could be grabbed for things like identifying the field with a user-friendly string in an error message (e.g. ‘First Name field is required.’).
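
As a small sketch of how fieldName could be used (this helper is hypothetical and not part of the Form object above):

function lengthError(type){
        return type.fieldName + ' must be between ' + type.minLength +
                ' and ' + type.maxLength + ' characters.';
}

alert(lengthError(Form.fname)); //alerts 'First Name must be between 1 and 15 characters.'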

Next we create a validateLength method that accepts two arguments: formEl, the DOM element to validate, and type, which refers to one of the sub-objects to use (i.e. fname or lname). This function checks whether the length of the field is within the minLength and maxLength range; if it’s not, we remove our valid class (if it exists) from the element and return false, otherwise we add the valid class and return true.

Then we have a validateEmail method, which accepts a DOM element as an argument; we then test this DOM element’s value against an email-type regular expression. Again, if it passes we add our class and return true, and vice versa.

Finally we have a getSubmit method. This method is given the id of the form and then loops through all input elements inside the specified form to find which one has a type of submit (type=”submit”). The reason for this method is to return the submit button so we can disable it until the form is ready to submit.

Let’s put this validator object to work on a real form. First we need our HTML.

    <body>

    <form id="ourForm">
        <label>First Name</label><input type="text" /><br />
        <label>Last Name</label><input type="text" /><br />
        <label>Email</label><input type="text" /><br />
        <input type="submit" value="submit" />
    </form>

    </body>

Now let’s access these input objects using JavaScript and validate them when the form submits.

addEvent(window, 'load', function(){

        var ourForm = document.getElementById('ourForm');       
        var submit_button = Form.getSubmit('ourForm');
        submit_button.disabled = 'disabled';

        function checkForm(){
                var inputs = ourForm.getElementsByTagName('input');
                if(Form.validateLength(inputs[0], Form.fname)){
                        if(Form.validateLength(inputs[1], Form.lname)){
                                if(Form.validateEmail(inputs[2])){                                       

                                                submit_button.disabled = false;
                                                return true;

                                }
                        }
                }

                submit_button.disabled = 'disabled';
                return false;

        };

        checkForm();            
        addEvent(ourForm, 'keyup', checkForm);
        addEvent(ourForm, 'submit', checkForm);

});

Let’s break down this code.

We wrap our code in the addEvent function so when the window is loaded this script runs. First we grab our form using its ID and put it in a variable named ourForm, then we grab our submit button (using our Form object’s getSubmit method) and put it in a variable named submit_button, and then set the submit button’s disabled attribute to ‘disabled’.

Next we define a checkForm function. This stores all the inputs inside the form as a collection and we attach it to a variable named.. you guessed it.. inputs! Then it defines some nested if statements which test each of the fields inside the inputs collection against our Form methods. This is the reason we returned true or false in our methods: if a check returns true, we pass that if statement and continue onto the next, but if it returns false, we exit the if statements.

Following our function definition, we execute the checkForm function when the page initially loads and also attach the function to a keyup event and a submit event.

You might be asking, why attach to submit if we disabled the submit button. Well if you are focused on an input field and hit the enter key, it will attempt to submit the form and we need to test for this, hence the reason our checkForm function returns true (submits the form) or false (doesn’t submit form).
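
One caveat worth adding (this snippet is an addition, not from the original article): with addEventListener, returning false from the handler does not cancel the submission; you would cancel it explicitly, for example:

addEvent(ourForm, 'submit', function(e){
        if(!checkForm()){
                if(e && e.preventDefault){
                        e.preventDefault(); //standard browsers
                } else {
                        window.event.returnValue = false; //older IE (attachEvent model)
                }
        }
});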

Conclusion

So we learned how to define the different object types within JavaScript and create properties and methods within them. We also learned a nifty addEvent function and got to use our object in a basic real world example.

Browserify handbook

[Fuente: https://github.com/substack/browserify-handbook]

introduction

This document covers how to use browserify to build modular applications.

browserify is a tool for compiling node-flavored commonjs modules for the browser.

You can use browserify to organize your code and use third-party libraries even if you don’t use node itself in any other capacity except for bundling and installing packages with npm.

The module system that browserify uses is the same as node, so packages published to npm that were originally intended for use in node but not browsers will work just fine in the browser too.

Increasingly, people are publishing modules to npm which are intentionally designed to work in both node and in the browser using browserify and many packages on npm are intended for use in just the browser. npm is for all javascript, front or backend alike.

node packaged manuscript

You can install this handbook with npm, appropriately enough. Just do:

npm install -g browserify-handbook

Now you will have a browserify-handbook command that will open this readme file in your $PAGER. Otherwise, you may continue reading this document as you are presently doing.

node packaged modules

Before we can dive too deeply into how to use browserify and how it works, it is important to first understand how the node-flavored version of the commonjs module system works.

require

In node, there is a require() function for loading code from other files.

If you install a module with npm:

npm install uniq

Then in a file nums.js we can require('uniq'):

var uniq = require('uniq');
var nums = [ 5, 2, 1, 3, 2, 5, 4, 2, 0, 1 ];
console.log(uniq(nums));

The output of this program when run with node is:

$ node nums.js
[ 0, 1, 2, 3, 4, 5 ]

You can require relative files by requiring a string that starts with a ‘.’. For example, to load a file foo.js from main.js, in main.js you can do:

var foo = require('./foo.js');
console.log(foo(4));

If foo.js was in the parent directory, you could use ../foo.js instead:

var foo = require('../foo.js');
console.log(foo(4));

or likewise for any other kind of relative path. Relative paths are always resolved with respect to the invoking file’s location.

Note that require() returned a function and we assigned that return value to a variable called uniq. We could have picked any other name and it would have worked the same. require() returns the exports of the module name that you specify.

How require() works is unlike many other module systems where imports are akin to statements that expose themselves as globals or file-local lexicals with names declared in the module itself outside of your control. Under the node style of code import with require(), someone reading your program can easily tell where each piece of functionality came from. This approach scales much better as the number of modules in an application grows.

exports

To export a single thing from a file so that other files may import it, assign over the value at module.exports:

module.exports = function (n) {
    return n * 111
};

Now when some module main.js loads your foo.js, the return value of require('./foo.js') will be the exported function:

var foo = require('./foo.js');
console.log(foo(5));

This program will print:

555

You can export any kind of value with module.exports, not just functions.

For example, this is perfectly fine:

module.exports = 555

and so is this:

var numbers = [];
for (var i = 0; i < 100; i++) numbers.push(i);

module.exports = numbers;

There is another form of doing exports specifically for exporting items onto an object. Here, exports is used instead of module.exports:

exports.beep = function (n) { return n * 1000 }
exports.boop = 555

This program is the same as:

module.exports.beep = function (n) { return n * 1000 }
module.exports.boop = 555

because module.exports is the same as exports and is initially set to an empty object.

Note however that you can’t do:

// this doesn't work
exports = function (n) { return n * 1000 }

because the export value lives on the module object, and so assigning a new value for exports instead of module.exports masks the original reference.

Instead if you are going to export a single item, always do:

// instead
module.exports = function (n) { return n * 1000 }

If you’re still confused, try to understand how modules work in the background:

var module = {
  exports: {}
};

// If you require a module, it's basically wrapped in a function
(function(module, exports) {
  exports = function (n) { return n * 1000 };
}(module, module.exports))

console.log(module.exports); // it's still an empty object :(

Most of the time, you will want to export a single function or constructor with module.exports because it’s usually best for a module to do one thing.

The exports feature was originally the primary way of exporting functionality and module.exports was an afterthought, but module.exports proved to be much more useful in practice at being more direct, clear, and avoiding duplication.

In the early days, this style used to be much more common:

foo.js:

exports.foo = function (n) { return n * 111 }

main.js:

var foo = require('./foo.js');
console.log(foo.foo(5));

but note that the foo.foo is a bit superfluous. Using module.exports it becomes more clear:

foo.js:

module.exports = function (n) { return n * 111 }

main.js:

var foo = require('./foo.js');
console.log(foo(5));

bundling for the browser

To run a module in node, you’ve got to start from somewhere.

In node you pass a file to the node command to run a file:

$ node robot.js
beep boop

In browserify, you do this same thing, but instead of running the file, you generate a stream of concatenated javascript files on stdout that you can write to a file with the > operator:

$ browserify robot.js > bundle.js

Now bundle.js contains all the javascript that robot.js needs to work. Just plop it into a single script tag in some html:

<html>
  <body>
    <script src="bundle.js"></script>
  </body>
</html>

Bonus: if you put your script tag right before the </body>, you can use all of the dom elements on the page without waiting for a dom onready event.

There are many more things you can do with bundling. Check out the bundling section elsewhere in this document.

how browserify works

Browserify starts at the entry point files that you give it and searches for any require() calls it finds using static analysis of the source code’s abstract syntax tree.

For every require() call with a string in it, browserify resolves those module strings to file paths and then searches those file paths for require() calls recursively until the entire dependency graph is visited.

Each file is concatenated into a single javascript file with a minimal require() definition that maps the statically-resolved names to internal IDs.

This means that the bundle you generate is completely self-contained and has everything your application needs to work with a pretty negligible overhead.
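
To make that concrete, here is a much-simplified sketch of the idea; this is not browserify’s actual output, just an illustration of a bundle with a minimal require() definition and numeric internal IDs:

// NOT real browserify output -- a rough illustration of the concept
(function (modules, entry) {
    var cache = {};
    function require(id) {
        if (cache[id]) return cache[id].exports;
        var module = cache[id] = { exports: {} };
        modules[id](require, module, module.exports);
        return module.exports;
    }
    require(entry);
})({
    0: function (require, module, exports) { // robot.js
        var beep = require(1);
        console.log(beep());
    },
    1: function (require, module, exports) { // beep.js
        module.exports = function () { return 'beep boop'; };
    }
}, 0);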

For more details about how browserify works, check out the compiler pipeline section of this document.

how node_modules works

node has a clever algorithm for resolving modules that is unique among rival platforms.

Instead of resolving packages from an array of system search paths like how $PATH works on the command line, node’s mechanism is local by default.

If you require('./foo.js') from /beep/boop/bar.js, node will look for ./foo.js in /beep/boop/foo.js. Paths that start with a ./ or ../ are always local to the file that calls require().

If however you require a non-relative name such as require('xyz') from /beep/boop/foo.js, node searches these paths in order, stopping at the first match and raising an error if nothing is found:

/beep/boop/node_modules/xyz
/beep/node_modules/xyz
/node_modules/xyz

For each xyz directory that exists, node will first look for a xyz/package.json to see if a "main" field exists. The "main" field defines which file should take charge if you require() the directory path.

For example, if /beep/node_modules/xyz is the first match and /beep/node_modules/xyz/package.json has:

{
  "name": "xyz",
  "version": "1.2.3",
  "main": "lib/abc.js"
}

then the exports from /beep/node_modules/xyz/lib/abc.js will be returned by require('xyz').

If there is no package.json or no "main" field, index.js is assumed:

/beep/node_modules/xyz/index.js

If you need to, you can reach into a package to pick out a particular file. For example, to load the lib/clone.js file from the dat package, just do:

var clone = require('dat/lib/clone.js')

The recursive node_modules resolution will find the first dat package up the directory hierarchy, then the lib/clone.js file will be resolved from there. This require('dat/lib/clone.js') approach will work from any location where you can require('dat').

node also has a mechanism for searching an array of paths, but this mechanism is deprecated and you should be using node_modules/ unless you have a very good reason not to.

The great thing about node’s algorithm and how npm installs packages is that you can never have a version conflict, unlike most every other platform. npm installs the dependencies of each package into node_modules.

Each library gets its own local node_modules/ directory where its dependencies are stored and each dependency’s dependencies has its own node_modules/ directory, recursively all the way down.

This means that packages can successfully use different versions of libraries in the same application, which greatly decreases the coordination overhead necessary to iterate on APIs. This feature is very important for an ecosystem like npm where there is no central authority to manage how packages are published and organized. Everyone may simply publish as they see fit and not worry about how their dependency version choices might impact other dependencies included in the same application.
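
For example, with two hypothetical packages pkg-a and pkg-b that each depend on a different major version of the same (also hypothetical) library some-lib, npm lays the tree out roughly like this, and each package resolves its own copy:

node_modules/
  pkg-a/
    node_modules/
      some-lib/      (version 1.x, seen only by pkg-a)
  pkg-b/
    node_modules/
      some-lib/      (version 2.x, seen only by pkg-b)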

You can leverage how node_modules/ works to organize your own local application modules too. See the avoiding ../../../../../../.. section for more.

 

development

Concatenation has some downsides, but these can be very adequately addressed with development tooling.

source maps

Browserify supports a --debug/-d flag and opts.debug parameter to enable source maps. Source maps tell the browser to convert line and column offsets for exceptions thrown in the bundle file back into the offsets and filenames of the original sources.

The source maps include all the original file contents inline so that you can simply put the bundle file on a web server and not need to ensure that all the original source contents are accessible from the web server with paths set up correctly.
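
For example, assuming an entry file named main.js, a bundle with an inline source map can be generated like this:

browserify main.js --debug > bundle.js

The resulting bundle.js ends with a sourceMappingURL comment that browser developer tools pick up automatically.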

exorcist

The downside of inlining all the source files into the inline source map is that the bundle is twice as large. This is fine for debugging locally but not practical for shipping source maps to production. However, you can use exorcist to pull the inline source map out into a separate bundle.map.js file:

browserify main.js --debug | exorcist bundle.js.map > bundle.js

auto-recompile

Running a command to recompile your bundle every time can be slow and tedious. Luckily there are many tools to solve this problem. Some of these tools support live-reloading to various degrees and others have a more traditional manual refresh cycle.

These are just a few of the tools you can use, but there are many more on npm! There are many different tools here that encompass many different tradeoffs and development styles. It can be a little bit more work up-front to find the tools that resonate most strongly with your own personal expectations and experience, but I think this diversity helps programmers to be more effective and provides more room for creativity and experimentation. I think diversity in tooling and a smaller browserify core is healthier in the medium to long term than picking a few “winners” by including them in browserify core (which creates all kinds of havoc in meaningful versioning and bitrot in core).

That said, here are a few modules you might want to consider for setting up a browserify development workflow. But keep an eye out for other tools not (yet) on this list!

watchify

You can use watchify interchangeably with browserify but instead of writing to an output file once, watchify will write the bundle file and then watch all of the files in your dependency graph for changes. When you modify a file, the new bundle file will be written much more quickly than the first time because of aggressive caching.

You can use -v to print a message every time a new bundle is written:

$ watchify browser.js -d -o static/bundle.js -v
610598 bytes written to static/bundle.js  0.23s
610606 bytes written to static/bundle.js  0.10s
610597 bytes written to static/bundle.js  0.14s
610606 bytes written to static/bundle.js  0.08s
610597 bytes written to static/bundle.js  0.08s
610597 bytes written to static/bundle.js  0.19s

Here is a handy configuration for using watchify and browserify with the package.json “scripts” field:

{
  "scripts": {
    "build": "browserify browser.js -o static/bundle.js",
    "watch": "watchify browser.js -o static/bundle.js --debug --verbose"
  }
}

To build the bundle for production do npm run build and to watch files during development do npm run watch.

Learn more about npm run.

beefy

If you would rather spin up a web server that automatically recompiles your code when you modify it, check out beefy.

Just give beefy an entry file:

beefy main.js

and it will set up shop on an http port.

wzrd

In a similar spirit to beefy but in a more minimal form is wzrd.

Just npm install -g wzrd then you can do:

wzrd app.js

and open up http://localhost:9966 in your browser.

browserify-middleware, enchilada

If you are using express, check out browserify-middleware or enchilada.

They both provide middleware you can drop into an express application for serving browserify bundles.

livereactload

livereactload is a tool for react that automatically updates your web page state when you modify your code.

livereactload is just an ordinary browserify transform that you can load with -t livereactload, but you should consult the project readme for more information.

budo

budo is a browserify development server with a focus on incremental bundling and live reloading, including for css.

First make sure the watchify command is installed along with budo:

npm install -g watchify budo

then tell budo to watch a file and listen on http://localhost:9966

budo app.js

Now every time you update app.js or any other file in your dependency graph, the code will update after a refresh.

or to automatically reload the page live when a file changes, you can do:

budo app.js --live

Check out budo-chrome for a way to configure budo to update the code live without even reloading the page (sometimes called hot reloading).

using the api directly

You can just use the API directly from an ordinary http.createServer() for development too:

var browserify = require('browserify');
var http = require('http');

http.createServer(function (req, res) {
    if (req.url === '/bundle.js') {
        res.setHeader('content-type', 'application/javascript');
        var b = browserify(__dirname + '/main.js').bundle();
        b.on('error', console.error);
        b.pipe(res);
    }
    else res.writeHead(404, 'not found')
});

grunt

If you use grunt, you’ll probably want to use the grunt-browserify plugin.

gulp

If you use gulp, you should use the browserify API directly.

Here is a guide for getting started with gulp and browserify.

Here is a guide on how to make browserify builds fast with watchify using gulp from the official gulp recipes.

builtins

In order to make more npm modules originally written for node work in the browser, browserify provides many browser-specific implementations of node core libraries:

events, stream, url, path, and querystring are particularly useful in a browser environment.

Additionally, if browserify detects the use of Buffer, process, global, __filename, or __dirname, it will include a browser-appropriate definition.

So even if a module does a lot of buffer and stream operations, it will probably just work in the browser, so long as it doesn’t do any server IO.

If you haven’t done any node before, here are some examples of what each of those globals can do. Note too that these globals are only actually defined when you or some module you depend on uses them.

Buffer

In node all the file and network APIs deal with Buffer chunks. In browserify the Buffer API is provided by buffer, which uses augmented typed arrays in a very performant way with fallbacks for old browsers.

Here’s an example of using Buffer to convert a base64 string to hex:

var buf = Buffer('YmVlcCBib29w', 'base64');
var hex = buf.toString('hex');
console.log(hex);

This example will print:

6265657020626f6f70

process

In node, process is a special object that handles information and control for the running process such as environment, signals, and standard IO streams.

Of particular consequence is the process.nextTick() implementation that interfaces with the event loop.

In browserify the process implementation is handled by the process module which just provides process.nextTick() and little else.

Here’s what process.nextTick() does:

setTimeout(function () {
    console.log('third');
}, 0);

process.nextTick(function () {
    console.log('second');
});

console.log('first');

This script will output:

first
second
third

process.nextTick(fn) is like setTimeout(fn, 0), but faster because setTimeout is artificially slower in javascript engines for compatibility reasons.

global

In node, global is the top-level scope where global variables are attached similar to how window works in the browser. In browserify, global is just an alias for the window object.

__filename

__filename is the path to the current file, which is different for each file.

To prevent disclosing system path information, this path is rooted at the opts.basedir that you pass to browserify(), which defaults to the current working directory.

If we have a main.js:

var bar = require('./foo/bar.js');

console.log('here in main.js, __filename is:', __filename);
bar();

and a foo/bar.js:

module.exports = function () {
    console.log('here in foo/bar.js, __filename is:', __filename);
};

then running browserify starting at main.js gives this output:

$ browserify main.js | node
here in main.js, __filename is: /main.js
here in foo/bar.js, __filename is: /foo/bar.js

__dirname

__dirname is the directory of the current file. Like __filename, __dirname is rooted at the opts.basedir.

Here’s an example of how __dirname works:

main.js:

require('./x/y/z/abc.js');
console.log('in main.js __dirname=' + __dirname);

x/y/z/abc.js:

console.log('in abc.js, __dirname=' + __dirname);

output:

$ browserify main.js | node
in abc.js, __dirname=/x/y/z
in main.js __dirname=/

transforms

Instead of browserify baking in support for everything, it supports a flexible transform system that is used to convert source files in-place.

This way you can require() files written in coffee script or templates and everything will be compiled down to javascript.

To use coffeescript for example, you can use the coffeeify transform. Make sure you’ve installed coffeeify first with npm install coffeeify then do:

$ browserify -t coffeeify main.coffee > bundle.js

or with the API you can do:

var b = browserify('main.coffee');
b.transform('coffeeify');

The best part is, if you have source maps enabled with --debug or opts.debug, the bundle.js will map exceptions back into the original coffee script source files. This is very handy for debugging with firebug or chrome inspector.

writing your own

Transforms implement a simple streaming interface. Here is a transform that replaces $CWD with the process.cwd():

var through = require('through2');

module.exports = function (file) {
    return through(function (buf, enc, next) {
        this.push(buf.toString('utf8').replace(/\$CWD/g, process.cwd()));
        next();
    });
};

The transform function fires for every file in the current package and returns a transform stream that performs the conversion. The stream is written to by browserify with the original file contents, and browserify reads from the stream to obtain the new contents.

Simply save your transform to a file or make a package and then add it with -t ./your_transform.js.
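
The same transform can also be applied through the API; here is a small sketch, assuming the transform above was saved as your_transform.js next to an entry file main.js:

var browserify = require('browserify');

var b = browserify('./main.js');
b.transform('./your_transform.js'); // equivalent to -t ./your_transform.js
b.bundle().pipe(process.stdout);    // write the bundled output to stdout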

For more information about how streams work, check out the stream handbook.

package.json

browser field

You can define a "browser" field in the package.json of any package that will tell browserify to override lookups for the main field and for individual modules.

If you have a module with a main entry point of main.js for node but have a browser-specific entry point at browser.js, you can do:

{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browser": "browser.js"
}

Now when somebody does require('mypkg') in node, they will get the exports from main.js, but when they do require('mypkg') in a browser, they will get the exports from browser.js.

Splitting up whether you are in the browser or not with a "browser" field in this way is greatly preferable to checking whether you are in a browser at runtime because you may want to load different modules based on whether you are in node or the browser. If the require() calls for both node and the browser are in the same file, browserify’s static analysis will include everything whether you use those files or not.

You can do more with the “browser” field as an object instead of a string.

For example, if you only want to swap out a single file in lib/ with a browser-specific version, you could do:

{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browser": {
    "lib/foo.js": "lib/browser-foo.js"
  }
}

or if you want to swap out a module used locally in the package, you can do:

{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browser": {
    "fs": "level-fs-browser"
  }
}

You can ignore files (setting their contents to the empty object) by setting their values in the browser field to false:

{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browser": {
    "winston": false
  }
}

The browser field only applies to the current package. Any mappings you put will not propagate down to its dependencies or up to its dependents. This isolation is designed to protect modules from each other so that when you require a module you won’t need to worry about any system-wide effects it might have. Likewise, you shouldn’t need to worry about how your local configuration might adversely affect modules far away deep into your dependency graph.

browserify.transform field

You can configure transforms to be automatically applied when a module is loaded in a package’s browserify.transform field. For example, we can automatically apply the brfs transform with this package.json:

{
  "name": "mypkg",
  "version": "1.2.3",
  "main": "main.js",
  "browserify": {
    "transform": [ "brfs" ]
  }
}

Now in our main.js we can do:

var fs = require('fs');
var src = fs.readFileSync(__dirname + '/foo.txt', 'utf8');

module.exports = function (x) { return src.replace(x, 'zzz') };

and the fs.readFileSync() call will be inlined by brfs without consumers of the module having to know. You can apply as many transforms as you like in the transform array and they will be applied in order.

Like the "browser" field, transforms configured in package.json will only apply to the local package for the same reasons.

configuring transforms

Sometimes a transform takes configuration options on the command line. To apply these from package.json you can do the following.

on the command line

browserify -t coffeeify \
           -t [ browserify-ngannotate --ext .coffee ] \
           index.coffee > index.js

in package.json

"browserify": {
  "transform": [
    "coffeeify",
    ["browserify-ngannotate", {"ext": ".coffee"}]
  ]
}

finding good modules

Here are some useful heuristics for finding good modules on npm that work in the browser:

  • I can install it with npm
  • code snippet on the readme using require() – from a quick glance I should see how to integrate the library into what I’m presently working on
  • has a very clear, narrow idea about scope and purpose
  • knows when to delegate to other libraries – doesn’t try to do too many things itself
  • written or maintained by authors whose opinions about software scope, modularity, and interfaces I generally agree with (often a faster shortcut than reading the code/docs very closely)
  • inspecting which modules depend on the library I’m evaluating – this is baked into the package page for modules published to npm

Other metrics like number of stars on github, project activity, or a slick landing page, are not as reliable.

module philosophy

People used to think that exporting a bunch of handy utility-style things would be the main way that programmers would consume code because that is the primary way of exporting and importing code on most other platforms and indeed still persists even on npm.

However, this kitchen-sink mentality toward including a bunch of thematically-related but separable functionality into a single package appears to be an artifact of the difficulty of publishing and discovery in a pre-github, pre-npm era.

There are two other big problems with modules that try to export a bunch of functionality all in one place under the auspices of convenience: demarcation turf wars and finding which modules do what.

Packages that are grab-bags of features waste a ton of time policing boundaries about which new features belong and don’t belong. There is no clear natural boundary of the problem domain in this kind of package about what the scope is, it’s all somebody’s smug opinion.

Node, npm, and browserify are not that. They are avowedly ala-carte, participatory, and would rather celebrate disagreement and the dizzying proliferation of new ideas and approaches than try to clamp down in the name of conformity, standards, or “best practices”.

Nobody who needs to do gaussian blur ever thinks “hmm I guess I’ll start checking generic mathematics, statistics, image processing, and utility libraries to see which one has gaussian blur in it. Was it stats2 or image-pack-utils or maths-extra or maybe underscore has that one?” No. None of this. Stop it. They npm search gaussian and they immediately see ndarray-gaussian-filter and it does exactly what they want and then they continue on with their actual problem instead of getting lost in the weeds of somebody’s neglected grand utility fiefdom.

organizing modules

avoiding ../../../../../../..

Not everything in an application properly belongs on the public npm and the overhead of setting up a private npm or git repo is still rather large in many cases. Here are some approaches for avoiding the ../../../../../../../ relative paths problem.

symlink

The simplest thing you can do is to symlink your app root directory into your node_modules/ directory.

Did you know that symlinks work on windows too?

To link a lib/ directory in your project root into node_modules, do:

ln -s ../lib node_modules/app

and now from anywhere in your project you’ll be able to require files in lib/ by doing require('app/foo.js') to get lib/foo.js.

node_modules

People sometimes object to putting application-specific modules into node_modules because it is not obvious how to check in your internal modules without also checking in third-party modules from npm.

The answer is quite simple! If you have a .gitignore file that ignores node_modules:

node_modules

You can just add an exception with ! for each of your internal application modules:

node_modules/*
!node_modules/foo
!node_modules/bar

Please note that you can’t unignore a subdirectory if the parent is already ignored. So instead of ignoring node_modules, you have to ignore every directory inside node_modules with the node_modules/* trick, and then you can add your exceptions.

Now anywhere in your application you will be able to require('foo') or require('bar') without having a very large and fragile relative path.

If you have a lot of modules and want to keep them more separate from the third-party modules installed by npm, you can just put them all under a directory in node_modules such as node_modules/app:

node_modules/app/foo
node_modules/app/bar

Now you will be able to require('app/foo') or require('app/bar') from anywhere in your application.

In your .gitignore, just add an exception for node_modules/app:

node_modules/*
!node_modules/app

If your application had transforms configured in package.json, you’ll need to create a separate package.json with its own transform field in your node_modules/foo or node_modules/app/foo component directory because transforms don’t apply across module boundaries. This will make your modules more robust against configuration changes in your application and it will be easier to independently reuse the packages outside of your application.

custom paths

You might see some places talk about using the $NODE_PATH environment variable or opts.paths to add directories for node and browserify to look in to find modules.

Unlike most other platforms, using a shell-style array of path directories with $NODE_PATH is not as favorable in node compared to making effective use of the node_modules directory.

This is because your application is more tightly coupled to a runtime environment configuration so there are more moving parts and your application will only work when your environment is setup correctly.

node and browserify both support but discourage the use of $NODE_PATH.
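
If you do want to experiment with it anyway, usage looks roughly like this (a sketch; lib/foo.js is a hypothetical module that main.js loads with require('foo')):

$ NODE_PATH=lib node main.js
$ NODE_PATH=lib browserify main.js > bundle.js

Both commands only work when $NODE_PATH happens to be set, which is exactly the kind of environment coupling the node_modules approach avoids.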

non-javascript assets

There are many browserify transforms you can use to do many things. Commonly, transforms are used to include non-javascript assets into bundle files.

brfs

One way of including any kind of asset that works in both node and the browser is brfs.

brfs uses static analysis to compile the results of fs.readFile() and fs.readFileSync() calls down to source contents at compile time.

For example, this main.js:

var fs = require('fs');
var html = fs.readFileSync(__dirname + '/robot.html', 'utf8');
console.log(html);

applied through brfs would become something like:

var fs = require('fs');
var html = "<b>beep boop</b>";
console.log(html);

when run through brfs.

This is handy because you can reuse the exact same code in node and the browser, which makes sharing modules and testing much simpler.

fs.readFile() and fs.readFileSync() accept the same arguments as in node, which makes including inline image assets as base64-encoded strings very easy:

var fs = require('fs');
var imdata = fs.readFileSync(__dirname + '/image.png', 'base64');
var img = document.createElement('img');
img.setAttribute('src', 'data:image/png;base64,' + imdata);
document.body.appendChild(img);

If you have some css you want to inline into your bundle, you can do that too with the assistance of a module such as insert-css:

var fs = require('fs');
var insertStyle = require('insert-css');

var css = fs.readFileSync(__dirname + '/style.css', 'utf8');
insertStyle(css);

Inserting css this way works fine for small reusable modules that you distribute with npm because they are fully-contained, but if you want a more holistic approach to asset management using browserify, check out atomify and parcelify.

hbsify

jadeify

reactify

reusable components

Putting these ideas about code organization together, we can build a reusable UI component that we can reuse across our application or in other applications.

Here is a bare-bones example of an empty widget module:

module.exports = Widget;

function Widget (opts) {
    if (!(this instanceof Widget)) return new Widget(opts);
    this.element = document.createElement('div');
}

Widget.prototype.appendTo = function (target) {
    if (typeof target === 'string') target = document.querySelector(target);
    target.appendChild(this.element);
};

Handy javascript constructor tip: you can include a this instanceof Widget check like above to let people consume your module with new Widget or Widget(). It’s nice because it hides an implementation detail from your API and you still get the performance benefits and indentation wins of using prototypes.

To use this widget, just use require() to load the widget file, instantiate it, and then call .appendTo() with a css selector string or a dom element.

Like this:

var Widget = require('./widget.js');
var w = Widget();
w.appendTo('#container');

and now your widget will be appended to the DOM.

Creating HTML elements procedurally is fine for very simple content but gets very verbose and unclear for anything bigger. Luckily there are many transforms available to ease importing HTML into your javascript modules.

Let’s extend our widget example using brfs. We can also use domify to turn the string that fs.readFileSync() returns into an html dom element:

var fs = require('fs');
var domify = require('domify');

var html = fs.readFileSync(__dirname + '/widget.html', 'utf8');

module.exports = Widget;

function Widget (opts) {
    if (!(this instanceof Widget)) return new Widget(opts);
    this.element = domify(html);
}

Widget.prototype.appendTo = function (target) {
    if (typeof target === 'string') target = document.querySelector(target);
    target.appendChild(this.element);
};

and now our widget will load a widget.html, so let’s make one:

<div class="widget">
  <h1 class="name"></h1>
  <div class="msg"></div>
</div>

It’s often useful to emit events. Here’s how we can emit events using the built-in events module and the inherits module:

var fs = require('fs');
var domify = require('domify');
var inherits = require('inherits');
var EventEmitter = require('events').EventEmitter;

var html = fs.readFileSync(__dirname + '/widget.html', 'utf8');

inherits(Widget, EventEmitter);
module.exports = Widget;

function Widget (opts) {
    if (!(this instanceof Widget)) return new Widget(opts);
    this.element = domify(html);
}

Widget.prototype.appendTo = function (target) {
    if (typeof target === 'string') target = document.querySelector(target);
    target.appendChild(this.element);
    this.emit('append', target);
};

Now we can listen for 'append' events on our widget instance:

var Widget = require('./widget.js');
var w = Widget();
w.on('append', function (target) {
    console.log('appended to: ' + target.outerHTML);
});
w.appendTo('#container');

We can add more methods to our widget to set elements on the html:

var fs = require('fs');
var domify = require('domify');
var inherits = require('inherits');
var EventEmitter = require('events').EventEmitter;

var html = fs.readFileSync(__dirname + '/widget.html', 'utf8');

inherits(Widget, EventEmitter);
module.exports = Widget;

function Widget (opts) {
    if (!(this instanceof Widget)) return new Widget(opts);
    this.element = domify(html);
}

Widget.prototype.appendTo = function (target) {
    if (typeof target === 'string') target = document.querySelector(target);
    target.appendChild(this.element);
};

Widget.prototype.setName = function (name) {
    this.element.querySelector('.name').textContent = name;
}

Widget.prototype.setMessage = function (msg) {
    this.element.querySelector('.msg').textContent = msg;
}

If setting element attributes and content gets too verbose, check out hyperglue.

Now finally, we can toss our widget.js and widget.html into node_modules/app-widget. Since our widget uses the brfs transform, we can create a package.json with:

{
  "name": "app-widget",
  "version": "1.0.0",
  "private": true,
  "main": "widget.js",
  "browserify": {
    "transform": [ "brfs" ]
  },
  "dependencies": {
    "brfs": "^1.1.1",
    "inherits": "^2.0.1"
  }
}

And now whenever we require('app-widget') from anywhere in our application, brfs will be applied to our widget.js automatically! Our widget can even maintain its own dependencies. This way we can update dependencies in one widget without worrying about breaking changes cascading over into other widgets.

Make sure to add an exclusion in your .gitignore for node_modules/app-widget:

node_modules/*
!node_modules/app-widget

You can read more about shared rendering in node and the browser if you want to learn about sharing rendering logic between node and the browser using browserify and some streaming html libraries.

testing in node and the browser

Testing modular code is very easy! One of the biggest benefits of modularity is that your interfaces become much easier to instantiate in isolation and so it’s easy to make automated tests.

Unfortunately, few testing libraries play nicely out of the box with modules and tend to roll their own idiosyncratic interfaces with implicit globals and obtuse flow control that get in the way of a clean design with good separation.

People also make a huge fuss about “mocking” but it’s usually not necessary if you design your modules with testing in mind. Keeping IO separate from your algorithms, carefully restricting the scope of your module, and accepting callback parameters for different interfaces can all make your code much easier to test.

For example, if you have a library that does both IO and speaks a protocol, consider separating the IO layer from the protocol using an interface like streams.

Your code will be easier to test and reusable in different contexts that you didn’t initially envision. This is a recurring theme of testing: if your code is hard to test, it is probably not modular enough or contains the wrong balance of abstractions. Testing should not be an afterthought, it should inform your whole design and it will help you to write better interfaces.
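
For instance, a toy newline-delimited protocol parser can be written as a pure transform stream (a sketch using through2, not from the handbook), so the same module can be fed a socket in production and a plain string in a test:

var through = require('through2');

// protocol.js: parses newline-delimited messages and does no IO of its own
module.exports = function protocol () {
    var buffered = '';
    return through.obj(function (chunk, enc, next) {
        buffered += chunk.toString('utf8');
        var lines = buffered.split('\n');
        buffered = lines.pop();
        for (var i = 0; i < lines.length; i++) this.push(lines[i]);
        next();
    });
};

In production you might pipe net.connect() into protocol(); in a test you just write a string to it and assert on the rows that come out.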

testing libraries

tape

Tape was specifically designed from the start to work well in both node and browserify. Suppose we have an index.js with an async interface:

module.exports = function (x, cb) {
    setTimeout(function () {
        cb(x * 100);
    }, 1000);
};

Here’s how we can test this module using tape. Let’s put this file in test/beep.js:

var test = require('tape');
var hundreder = require('../');

test('beep', function (t) {
    t.plan(1);

    hundreder(5, function (n) {
        t.equal(n, 500, '5*100 === 500');
    });
});

Because the test file lives in test/, we can require the index.js in the parent directory by doing require('../'). index.js is the default place that node and browserify look for a module if there is no package.json in that directory with a main field.

We can require() tape like any other library after it has been installed with npm install tape.

The string 'beep' is an optional name for the test. The 3rd argument to t.equal() is a completely optional description.

The t.plan(1) says that we expect 1 assertion. If there are not enough assertions or too many, the test will fail. An assertion is a comparison like t.equal(). tape has assertion primitives for:

  • t.equal(a, b) – compare a and b strictly with ===
  • t.deepEqual(a, b) – compare a and b recursively
  • t.ok(x) – fail if x is not truthy

and more! You can always add an additional description argument.

Running our module is very simple! To run the module in node, just run node test/beep.js:

$ node test/beep.js
TAP version 13
# beep
ok 1 5*100 === 500

1..1
# tests 1
# pass  1

# ok

The output is printed to stdout and the exit code is 0.

To run our code in the browser, just do:

$ browserify test/beep.js > bundle.js

then plop bundle.js into a <script> tag:

<script src="bundle.js"></script>

and load that html in a browser. The output will be in the debug console which you can open with F12, ctrl-shift-j, or ctrl-shift-k depending on the browser.

Running tests in a browser this way is a bit cumbersome, but you can install the testling command to help. First do:

npm install -g testling

And now just do browserify test/beep.js | testling:

$ browserify test/beep.js | testling

TAP version 13
# beep
ok 1 5*100 === 500

1..1
# tests 1
# pass  1

# ok

testling will launch a real browser headlessly on your system to run the tests.

Now suppose we want to add another file, test/boop.js:

var test = require('tape');
var hundreder = require('../');

test('fraction', function (t) {
    t.plan(1);

    hundreder(1/20, function (n) {
        t.equal(n, 5, '1/20th of 100');
    });
});

test('negative', function (t) {
    t.plan(1);

    hundreder(-3, function (n) {
        t.equal(n, -300, 'negative number');
    });
});

Here our test has 2 test() blocks. The second test block won’t start to execute until the first is completely finished, even though it is asynchronous. You can even nest test blocks by using t.test(), as shown below.
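
For example, a nested layout might look like this (a quick sketch in the same spirit as the tests above):

var test = require('tape');
var hundreder = require('../');

test('grouped', function (t) {
    t.test('ten', function (st) {
        st.plan(1);
        hundreder(10, function (n) {
            st.equal(n, 1000, '10*100 === 1000');
        });
    });

    t.test('zero', function (st) {
        st.plan(1);
        hundreder(0, function (n) {
            st.equal(n, 0, '0*100 === 0');
        });
    });

    t.end();
});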

We can run test/boop.js with node directly as with test/beep.js, but if we want to run both tests, there is a minimal command-runner we can use that comes with tape. To get the tape command do:

npm install -g tape

and now you can run:

$ tape test/*.js
TAP version 13
# beep
ok 1 5*100 === 500
# fraction
ok 2 1/20th of 100
# negative
ok 3 negative number

1..3
# tests 3
# pass  3

# ok

and you can just pass test/*.js to browserify to run your tests in the browser:

$ browserify test/* | testling

TAP version 13
# beep
ok 1 5*100 === 500
# fraction
ok 2 1/20th of 100
# negative
ok 3 negative number

1..3
# tests 3
# pass  3

# ok

Putting together all these steps, we can configure package.json with a test script:

{
  "name": "hundreder",
  "version": "1.0.0",
  "main": "index.js",
  "devDependencies": {
    "tape": "^2.13.1",
    "testling": "^1.6.1"
  },
  "scripts": {
    "test": "tape test/*.js",
    "test-browser": "browserify test/*.js | testlingify"
  }
}

Now you can do npm test to run the tests in node and npm run test-browser to run the tests in the browser. You don’t need to worry about installing commands with -g when you use npm run: npm automatically sets up the $PATH for all packages installed locally to the project.

If you have some tests that only run in node and some tests that only run in the browser, you could have subdirectories in test/ such as test/server and test/browser with the tests that run both places just in test/. Then you could just add the relevant directory to the globs:

{
  "name": "hundreder",
  "version": "1.0.0",
  "main": "index.js",
  "devDependencies": {
    "tape": "^2.13.1",
    "testling": "^1.6.1"
  },
  "scripts": {
    "test": "tape test/*.js test/server/*.js",
    "test-browser": "browserify test/*.js test/browser/*.js | testlingify"
  }
}

and now server-specific and browser-specific tests will be run in addition to the common tests.

If you want something even slicker, check out prova once you have gotten the basic concepts.

assert

The core assert module is a fine way to write simple tests too, although it can sometimes be tricky to ensure that the correct number of callbacks have fired.

You can solve that problem with tools like macgyver but it is appropriately DIY.

mocha

code coverage

testling-ci

bundling

This section covers bundling in more detail.

Bundling is the step where starting from the entry files, all the source files in the dependency graph are walked and packed into a single output file.

saving bytes

One of the first things you’ll want to tweak is how the files that npm installs are placed on disk to avoid duplicates.

When you do a clean install in a directory, npm will ordinarily factor out similar versions into the topmost directory where 2 modules share a dependency. However, as you install more packages, new packages will not be factored out automatically. You can however use the npm dedupe command to factor out packages for an already-installed set of packages in node_modules/. You could also remove node_modules/ and install from scratch again if problems with duplicates persist.

browserify will not include the same exact file twice, but compatible versions may differ slightly. browserify is also not version-aware, it will include the versions of packages exactly as they are laid out in node_modules/ according to the require() algorithm that node uses.

You can use the browserify --list and browserify --deps commands to further inspect which files are being included to scan for duplicates.
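
For example, --list prints one resolved file path per line, so duplicated packages are easy to spot (the paths below are hypothetical):

$ browserify main.js --list
/project/main.js
/project/node_modules/foo/index.js
/project/node_modules/foo/node_modules/bar/index.js
/project/node_modules/bar/index.js

Here bar shows up at two different paths, which usually means two copies ended up in the bundle and npm dedupe may be able to collapse them.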

standalone

You can generate UMD bundles with --standalone that will work in node, the browser with globals, and AMD environments.

Just add --standalone NAME to your bundle command:

$ browserify foo.js --standalone xyz > bundle.js

This command will export the contents of foo.js under the external module name xyz. If a module system is detected in the host environment, it will be used. Otherwise a window global named xyz will be exported.
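
For instance, with no module system on the page, the exports are reachable from the window global (someExport() below is a hypothetical export of foo.js):

<script src="bundle.js"></script>
<script>
  // window.xyz now holds whatever foo.js assigned to module.exports
  xyz.someExport();
</script>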

You can use dot-syntax to specify a namespace hierarchy:

$ browserify foo.js --standalone foo.bar.baz > bundle.js

If there is already a foo or a foo.bar in the host environment in window global mode, browserify will attach its exports onto those objects. The AMD and module.exports modules will behave the same.

Note however that standalone only works with a single entry or directly-required file.

external bundles

ignoring and excluding

In browserify parlance, “ignore” means: replace the definition of a module with an empty object. “exclude” means: remove a module completely from a dependency graph.

Another way to achieve many of the same goals as ignore and exclude is the “browser” field in package.json, which is covered elsewhere in this document.

ignoring

Ignoring is an optimistic strategy designed to stub in an empty definition for node-specific modules that are only used in some codepaths. For example, if a module requires a library that only works in node but for a specific chunk of the code:

var fs = require('fs');
var path = require('path');
var mkdirp = require('mkdirp');

exports.convert = convert;
function convert (src) {
    return src.replace(/beep/g, 'boop');
}

exports.write = function (src, dst, cb) {
    fs.readFile(src, function (err, src) {
        if (err) return cb(err);
        mkdirp(path.dirname(dst), function (err) {
            if (err) return cb(err);
            var out = convert(src);
            fs.writeFile(dst, out, cb);
        });
    });
};

browserify already “ignores” the 'fs' module by returning an empty object, but the .write() function here won’t work in the browser without an extra step like a static analysis transform or a runtime storage fs abstraction.

However, if we really want the convert() function but don’t want to see mkdirp in the final bundle, we can ignore mkdirp with b.ignore('mkdirp') or browserify --ignore mkdirp. The code will still work in the browser if we don’t call write() because require('mkdirp') won’t throw an exception, just return an empty object.

Generally speaking it’s not a good idea for modules that are primarily algorithmic (parsers, formatters) to do IO themselves but these tricks can let you use those modules in the browser anyway.

To ignore foo on the command-line do:

browserify --ignore foo

To ignore foo from the api with some bundle instance b do:

b.ignore('foo')

excluding

Another related thing we might want is to completely remove a module from the output so that require('modulename') will fail at runtime. This is useful if we want to split things up into multiple bundles that will defer in a cascade to previously-defined require() definitions.

For example, if we have a vendored standalone bundle for jquery that we don’t want to appear in the primary bundle:

$ npm install jquery
$ browserify -r jquery --standalone jquery > jquery-bundle.js

then we want to just require('jquery') in a main.js:

var $ = require('jquery');
$(window).click(function () { document.body.bgColor = 'red' });

deferring to the jquery dist bundle so that we can write:

<script src="jquery-bundle.js"></script>
<script src="bundle.js"></script>

and not have the jquery definition show up in bundle.js, then while compiling the main.js, you can --exclude jquery:

browserify main.js --exclude jquery > bundle.js

To exclude foo on the command-line do:

browserify --exclude foo

To exclude foo from the api with some bundle instance b do:

b.exclude('foo')

browserify cdn

shimming

Unfortunately, some packages are not written with node-style commonjs exports. For modules that export their functionality with globals or AMD, there are packages that can help automatically convert these troublesome packages into something that browserify can understand.

browserify-shim

One way to automatically convert non-commonjs packages is with browserify-shim.

browserify-shim is loaded as a transform and also reads a "browserify-shim" field from package.json.

Suppose we need to use a troublesome third-party library we’ve placed in ./vendor/foo.js that exports its functionality as a window global called FOO. We can set up our package.json with:

{
  "browserify": {
    "transform": "browserify-shim"
  },
  "browserify-shim": {
    "./vendor/foo.js": "FOO"
  }
}

and now when we require('./vendor/foo.js'), we get the FOO variable that ./vendor/foo.js tried to put into the global scope, but that attempt was shimmed away into an isolated context to prevent global pollution.

We could even use the browser field to make require('foo') work instead of always needing to use a relative path to load ./vendor/foo.js:

{
  "browser": {
    "foo": "./vendor/foo.js"
  },
  "browserify": {
    "transform": "browserify-shim"
  },
  "browserify-shim": {
    "foo": "FOO"
  }
}

Now require('foo') will return the FOO export that ./vendor/foo.js tried to place on the global scope.
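
Consuming the shimmed module then looks like any other require (doSomething() is a hypothetical method on the FOO export):

var FOO = require('foo');
FOO.doSomething();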

partitioning

Most of the time, the default method of bundling where one or more entry files map to a single bundled output file is perfectly adequate, particularly considering that bundling minimizes latency down to a single http request to fetch all the javascript assets.

However, sometimes this initial penalty is too high for parts of a website that are rarely or never used by most visitors such as an admin panel. This partitioning can be accomplished with the technique covered in the ignoring and excluding section, but factoring out shared dependencies manually can be tedious for a large and fluid dependency graph.

Luckily, there are plugins that can automatically factor browserify output into separate bundle payloads.

factor-bundle

factor-bundle splits browserify output into multiple bundle targets based on entry-point. For each entry-point, an entry-specific output file is built. Files that are needed by two or more of the entry files get factored out into a common bundle.

For example, suppose we have 2 pages: /x and /y. Each page has an entry point, x.js for /x and y.js for /y.

We then generate page-specific bundles bundle/x.js and bundle/y.js with bundle/common.js containing the dependencies shared by both x.js and y.js:

browserify x.js y.js -p [ factor-bundle -o bundle/x.js -o bundle/y.js ] \
  -o bundle/common.js

Now we can simply put 2 script tags on each page. On /x we would put:

<script src="/bundle/common.js"></script>
<script src="/bundle/x.js"></script>

and on page /y we would put:

<script src="/bundle/common.js"></script>
<script src="/bundle/y.js"></script>

You could also load the bundles asynchronously with ajax or by inserting a script tag into the page dynamically but factor-bundle only concerns itself with generating the bundles, not with loading them.

partition-bundle

partition-bundle handles splitting output into multiple bundles like factor-bundle, but includes a built-in loader using a special loadjs() function.

partition-bundle takes a json file that maps source files to bundle files:

{
  "entry.js": ["./a"],
  "common.js": ["./b"],
  "common/extra.js": ["./e", "./d"]
}

Then partition-bundle is loaded as a plugin and the mapping file, output directory, and destination url path (required for dynamic loading) are passed in:

browserify -p [ partition-bundle --map mapping.json \
  --output output/directory --url directory ]

Now you can add:

<script src="entry.js"></script>

to your page to load the entry file. From inside the entry file, you can dynamically load other bundles with a loadjs() function:

a.addEventListener('click', function() {
  loadjs(['./e', './d'], function(e, d) {
    console.log(e, d);
  });
});

compiler pipeline

Since version 5, browserify exposes its compiler pipeline as a labeled-stream-splicer.

This means that transformations can be added or removed directly into the internal pipeline. This pipeline provides a clean interface for advanced customizations such as watching files or factoring bundles from multiple entry points.

For example, we could replace the built-in integer-based labeling mechanism with hashed IDs by first injecting a pass-through transform after the “deps” have been calculated to hash source files. Then we can use the hashes we captured to create our own custom labeler, replacing the built-in “label” transform:

var browserify = require('browserify');
var through = require('through2');
var shasum = require('shasum');

var b = browserify('./main.js');

var hashes = {};
var hasher = through.obj(function (row, enc, next) {
    hashes[row.id] = shasum(row.source);
    this.push(row);
    next();
});
b.pipeline.get('deps').push(hasher);

var labeler = through.obj(function (row, enc, next) {
    row.id = hashes[row.id];

    Object.keys(row.deps).forEach(function (key) {
        row.deps[key] = hashes[row.deps[key]];
    });

    this.push(row);
    next();
});
b.pipeline.get('label').splice(0, 1, labeler);

b.bundle().pipe(process.stdout);

Now instead of getting integers for the IDs in the output format, we get file hashes:

$ node bundle.js
(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o<r.length;o++)s(r[o]);return s})({"5f0a0e3a143f2356582f58a70f385f4bde44f04b":[function(require,module,exports){
var foo = require('./foo.js');
var bar = require('./bar.js');

console.log(foo(3) + bar(4));

},{"./bar.js":"cba5983117ae1d6699d85fc4d54eb589d758f12b","./foo.js":"736100869ec2e44f7cfcf0dc6554b055e117c53c"}],"cba5983117ae1d6699d85fc4d54eb589d758f12b":[function(require,module,exports){
module.exports = function (n) { return n * 100 };

},{}],"736100869ec2e44f7cfcf0dc6554b055e117c53c":[function(require,module,exports){
module.exports = function (n) { return n + 1 };

},{}]},{},["5f0a0e3a143f2356582f58a70f385f4bde44f04b"]);

Note that the built-in labeler does other things like checking for the external and excluded configurations, so replacing it will be difficult if you depend on those features. This just serves as an example of the kinds of things you can do by hacking into the compiler pipeline.

build your own browserify

labeled phases

Each phase in the browserify pipeline has a label that you can hook onto. Fetch a label with .get(name) to return a labeled-stream-splicer handle at the appropriate label. Once you have a handle, you can .push(), .pop(), .shift(), .unshift(), and .splice() your own transform streams into the pipeline or remove existing transform streams.

recorder

The recorder is used to capture the inputs sent to the deps phase so that they can be replayed on subsequent calls to .bundle(). Unlike in previous releases, v5 can generate bundle output multiple times. This is very handy for tools like watchify that re-bundle when a file has changed.

deps

The deps phase expects entry and require() files or objects as input and calls module-deps to generate a stream of json output for all of the files in the dependency graph.

module-deps is invoked with some customizations here such as:

  • setting up the browserify transform key for package.json
  • filtering out external, excluded, and ignored files
  • setting the default extensions for .js and .json plus options configured in the opts.extensions parameter in the browserify constructor
  • configuring a global insert-module-globals transform to detect and implement process, Buffer, global, __dirname, and __filename
  • setting up the list of node builtins which are shimmed by browserify

json

This transform adds module.exports= in front of files with a .json extension.
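
For example, given a hypothetical config.json containing {"port":8000}, you can require it directly, and the json phase is what turns that row into valid javascript inside the bundle:

var config = require('./config.json');
console.log(config.port); // 8000

// in the bundle, the row's source becomes roughly: module.exports={"port":8000}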

unbom

This transform removes byte order markers, which are sometimes used by windows text editors to indicate the endianness of files. These markers are ignored by node, so browserify ignores them for compatibility.

syntax

This transform checks for syntax errors using the syntax-error package to give informative syntax errors with line and column numbers.

sort

This phase uses deps-sort to sort the rows written to it in order to make the bundles deterministic.

dedupe

The transform at this phase uses dedupe information provided by deps-sort in the sort phase to remove files that have duplicate contents.

label

This phase converts file-based IDs which might expose system path information and inflate the bundle size into integer-based IDs.

The label phase will also normalize path names based on the opts.basedir or process.cwd() to avoid exposing system path information.

emit-deps

This phase emits a 'dep' event for each row after the label phase.

debug

If opts.debug was given to the browserify() constructor, this phase will transform input to add sourceRoot and sourceFile properties which are used by browser-pack in the pack phase.

pack

This phase converts rows with 'id' and 'source' parameters as input (among others) and generates the concatenated javascript bundle as output using browser-pack.

wrap

This is an empty phase at the end where you can easily tack on custom post transformations without interfering with existing mechanics.
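
For example, here is a sketch that appends a banner comment once the packed bundle has streamed through, by pushing a through stream onto the wrap phase:

var browserify = require('browserify');
var through = require('through2');

var b = browserify('./main.js');

// the wrap phase deals in plain output buffers, so no object mode is needed
b.pipeline.get('wrap').push(through(
    function (buf, enc, next) {
        this.push(buf); // pass the packed output straight through
        next();
    },
    function (done) {
        this.push('\n// bundle built at ' + new Date().toISOString() + '\n');
        done();
    }
));

b.bundle().pipe(process.stdout);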

browser-unpack

browser-unpack converts a compiled bundle file back into a format very similar to the output of module-deps.

This is very handy if you need to inspect or transform a bundle that has already been compiled.

For example:

$ browserify src/main.js | browser-unpack
[
{"id":1,"source":"module.exports = function (n) { return n * 100 };","deps":{}}
,
{"id":2,"source":"module.exports = function (n) { return n + 1 };","deps":{}}
,
{"id":3,"source":"var foo = require('./foo.js');\nvar bar = require('./bar.js');\n\nconsole.log(foo(3) + bar(4));","deps":{"./bar.js":1,"./foo.js":2},"entry":true}
]

This decomposition is needed by tools such as factor-bundle and bundle-collapser.

plugins

When loaded, plugins have access to the browserify instance itself.

using plugins

Plugins should be used sparingly and only in cases where a transform or global transform is not powerful enough to perform the desired functionality.

You can load a plugin with -p on the command-line:

$ browserify main.js -p foo > bundle.js

would load a plugin called foo. foo is resolved with require(), so to load a local file as a plugin, preface the path with ./, and to load a plugin from node_modules/foo, just do -p foo.

You can pass options to plugins with square brackets around the entire plugin expression, including the plugin name as the first argument:

$ browserify one.js two.js \
  -p [ factor-bundle -o bundle/one.js -o bundle/two.js ] \
  > common.js

This command-line syntax is parsed by the subarg package.

To see a list of browserify plugins, browse npm for packages with the keyword “browserify-plugin”: http://npmjs.org/browse/keyword/browserify-plugin

authoring plugins

To author a plugin, write a package that exports a single function that will receive a bundle instance and options object as arguments:

// example plugin

module.exports = function (b, opts) {
  // ...
}

Plugins operate on the bundle instance b directly by listening for events or splicing transforms into the pipeline. Plugins should not overwrite bundle methods unless they have a very good reason.
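
For example, a bare-bones (hypothetical) plugin that just reports each dependency row as it is resolved could look like this:

// log-deps.js
module.exports = function (b, opts) {
    // browserify emits a 'dep' event for each row in the dependency graph
    b.on('dep', function (row) {
        console.error('dep:', row.file || row.id);
    });
};

and would be loaded with browserify main.js -p ./log-deps.js > bundle.js.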

Writing modular javascript

[Fuente: http://addyosmani.com/writing-modular-js/]

Writing Modular JavaScript With AMD, CommonJS & ES Harmony

Modularity: The Importance Of Decoupling Your Application

When we say an application is modular, we generally mean it’s composed of a set of highly decoupled, distinct pieces of functionality stored in modules. As you probably know, loose coupling facilitates easier maintainability of apps by removing dependencies where possible. When this is implemented efficiently, it’s quite easy to see how changes to one part of a system may affect another.

Unlike some more traditional programming languages however, the current iteration of JavaScript (ECMA-262) doesn’t provide developers with the means to import such modules of code in a clean, organized manner. It’s one of the concerns with specifications that haven’t required great thought until more recent years where the need for more organized JavaScript applications became apparent.

Instead, developers at present are left to fall back on variations of the module or object literal patterns. With many of these, module scripts are strung together in the DOM with namespaces being described by a single global object where it’s still possible to incur naming collisions in your architecture. There’s also no clean way to handle dependency management without some manual effort or third party tools.

Whilst native solutions to these problems will be arriving in ES Harmony, the good news is that writing modular JavaScript has never been easier and you can start doing it today.

In this article, we’re going to look at three formats for writing modular JavaScript: AMD, CommonJS and proposals for the next version of JavaScript, Harmony.

Prelude A Note On Script Loaders

It’s difficult to discuss AMD and CommonJS modules without talking about the elephant in the room – script loaders. At present, script loading is a means to a goal, that goal being modular JavaScript that can be used in applications today – for this, use of a compatible script loader is unfortunately necessary. In order to get the most out of this article, I recommend gaining a basic understanding of how popular script loading tools work so the explanations of module formats make sense in context.

There are a number of great loaders for handling module loading in the AMD and CJS formats, but my personal preferences are RequireJS and curl.js. Complete tutorials on these tools are outside the scope of this article, but I can recommend reading John Hann’s post about curl.js and James Burke’s RequireJS API documentation for more.

From a production perspective, the use of optimization tools (like the RequireJS optimizer) to concatenate scripts is recommended for deployment when working with such modules. Interestingly, with the Almond AMD shim, RequireJS doesn’t need to be rolled in the deployed site and what you might consider a script loader can be easily shifted outside of development.

That said, James Burke would probably say that being able to dynamically load scripts after page load still has its use cases and RequireJS can assist with this too. With these notes in mind, let’s get started.

AMD A Format For Writing Modular JavaScript In The Browser

The overall goal for the AMD (Asynchronous Module Definition) format is to provide a solution for modular JavaScript that developers can use today. It was born out of Dojo’s real world experience using XHR+eval and proponents of this format wanted to avoid any future solutions suffering from the weaknesses of those in the past.

The AMD module format itself is a proposal for defining modules where both the module and dependencies can be asynchronously loaded. It has a number of distinct advantages including being both asynchronous and highly flexible by nature which removes the tight coupling one might commonly find between code and module identity. Many developers enjoy using it and one could consider it a reliable stepping stone towards the module system proposed for ES Harmony.

AMD began as a draft specification for a module format on the CommonJS list but as it wasn’t able to reach full consensus, further development of the format moved to the amdjs group.

Today it’s embraced by projects including Dojo (1.7), MooTools (2.0), Firebug (1.8) and even jQuery (1.7). Although the term CommonJS AMD format has been seen in the wild on occasion, it’s best to refer to it as just AMD or Async Module support as not all participants on the CJS list wished to pursue it.

Getting Started With Modules

The two key concepts you need to be aware of here are the idea of a define method for facilitating module definition and a require method for handling dependency loading. define is used to define named or unnamed modules based on the proposal using the following signature:

define(
    module_id /*optional*/, 
    [dependencies] /*optional*/, 
    definition function /*function for instantiating the module or object*/
);

As you can tell by the inline comments, the module_id is an optional argument which is typically only required when non-AMD concatenation tools are being used (there may be some other edge cases where it’s useful too). When this argument is left out, we call the module anonymous.

When working with anonymous modules, the idea of a module’s identity is DRY, making it trivial to avoid duplication of filenames and code. Because the code is more portable, it can be easily moved to other locations (or around the file-system) without needing to alter the code itself or change its ID. The module_id is equivalent to folder paths in simple packages and when not used in packages. Developers can also run the same code on multiple environments just by using an AMD optimizer that works with a CommonJS environment such as r.js.

Back to the define signature, the dependencies argument represents an array of dependencies which are required by the module you are defining and the third argument (‘definition function’) is a function that’s executed to instantiate your module. A barebone module could be defined as follows:

Understanding AMD: define()

// A module_id (myModule) is used here for demonstration purposes only
 
define('myModule', 
    ['foo', 'bar'], 
    // module definition function
    // dependencies (foo and bar) are mapped to function parameters
    function ( foo, bar ) {
        // return a value that defines the module export
        // (i.e the functionality we want to expose for consumption)
    
        // create your module here
        var myModule = {
            doStuff:function(){
                console.log('Yay! Stuff');
            }
        }
 
        return myModule;
});
 
// An alternative example could be..
define('myModule', 
    ['math', 'graph'], 
    function ( math, graph ) {
 
        // Note that this is a slightly different pattern
        // With AMD, it's possible to define modules in a few
        // different ways as it's relatively flexible with
        // certain aspects of the syntax
        return {
            plot: function(x, y){
                return graph.drawPie(math.randomGrid(x,y));
            }
        }
    }
});

require on the other hand is typically used to load code in a top-level JavaScript file or within a module should you wish to dynamically fetch dependencies. An example of its usage is:

Understanding AMD: require()

// Consider 'foo' and 'bar' are two external modules
// In this example, the 'exports' from the two modules loaded are passed as
// function arguments to the callback (foo and bar)
// so that they can similarly be accessed
 
require(['foo', 'bar'], function ( foo, bar ) {
        // rest of your code here
        foo.doSomething();
});

Dynamically-loaded Dependencies

define(function ( require ) {
    var isReady = false, foobar;
 
    // note the inline require within our module definition
    require(['foo', 'bar'], function (foo, bar) {
        isReady = true;
        foobar = foo() + bar();
    });
 
    // we can still return a module
    return {
        isReady: isReady,
        foobar: foobar
    };
});

Understanding AMD: plugins

The following is an example of defining an AMD-compatible plugin:

// With AMD, it's possible to load in assets of almost any kind
// including text-files and HTML. This enables us to have template
// dependencies which can be used to skin components either on
// page-load or dynamically.
 
define(['./templates', 'text!./template.md','css!./template.css'],
    function( templates, template ){
        console.log(templates);
        // do some fun template stuff here.
    }
);

Loading AMD Modules Using require.js

require(['app/myModule'], 
    function( myModule ){
        // start the main module which in-turn
        // loads other modules
        var module = new myModule();
        module.doStuff();
});

Loading AMD Modules Using curl.js

curl(['app/myModule.js'], 
    function( myModule ){
        // start the main module which in-turn
        // loads other modules
        var module = new myModule();
        module.doStuff();
});

Modules With Deferred Dependencies

// This could be compatible with jQuery's Deferred implementation,
// futures.js (slightly different syntax) or any one of a number
// of other implementations
define(['lib/Deferred'], function( Deferred ){
    var defer = new Deferred(); 
    require(['lib/templates/?index.html','lib/data/?stats'],
        function( template, data ){
            defer.resolve({ template: template, data:data });
        }
    );
    return defer.promise();
});

Why Is AMD A Better Choice For Writing Modular JavaScript?

  • Provides a clear proposal for how to approach defining flexible modules.
  • Significantly cleaner than the present global namespace and <script> tag solutions many of us rely on. There’s a clean way to declare stand-alone modules and dependencies they may have.
  • Module definitions are encapsulated, helping us to avoid pollution of the global namespace.
  • Works better than some alternative solutions (eg. CommonJS, which we’ll be looking at shortly). Doesn’t have issues with cross-domain, local or debugging and doesn’t have a reliance on server-side tools to be used. Most AMD loaders support loading modules in the browser without a build process.
  • Provides a ‘transport’ approach for including multiple modules in a single file. Other approaches like CommonJS have yet to agree on a transport format.
  • It’s possible to lazy load scripts if this is needed.

Related Reading

The RequireJS Guide To AMD

What’s the fastest way to load AMD modules?

AMD vs. CJS, what’s the better format?

AMD Is Better For The Web Than CommonJS Modules

The Future Is Modules Not Frameworks

AMD No Longer A CommonJS Specification

On Inventing JavaScript Module Formats And Script Loaders

The AMD Mailing List

AMD Modules With jQuery

The Basics

Unlike Dojo, jQuery really only comes with one file, however given the plugin-based nature of the library, we can demonstrate how straight-forward it is to define an AMD module that uses it below.

define(['js/jquery.js','js/jquery.color.js','js/underscore.js'],
    function($, colorPlugin, _){
        // Here we've passed in jQuery, the color plugin and Underscore
        // None of these will be accessible in the global scope, but we
        // can easily reference them below.
 
        // Pseudo-randomize an array of colors, selecting the first
        // item in the shuffled array
        var shuffleColor = _.first(_.shuffle(['#666','#333','#111']));
 
        // Animate the background-color of any elements with the class
        // 'item' on the page using the shuffled color
        $('.item').animate({'backgroundColor': shuffleColor });
        
        return {};
        // What we return can be used by other modules
    });

There is however something missing from this example and it’s the concept of registration.

Registering jQuery As An Async-compatible Module

One of the key features that landed in jQuery 1.7 was support for registering jQuery as an asynchronous module. There are a number of compatible script loaders (including RequireJS and curl) which are capable of loading modules using an asynchronous module format and this means fewer hacks are required to get things working.

As a result of jQuery’s popularity, AMD loaders need to take into account multiple versions of the library being loaded into the same page as you ideally don’t want several different versions loading at the same time. Loaders have the option of either specifically taking this issue into account or instructing their users that there are known issues with third party scripts and their libraries.

What the 1.7 addition brings to the table is that it helps avoid issues with other third party code on a page accidentally loading up a version of jQuery on the page that the owner wasn’t expecting. You don’t want other instances clobbering your own and so this can be of benefit.

The way this works is that the script loader being employed indicates that it supports multiple jQuery versions by specifying that a property, define.amd.jQuery is equal to true. For those interested in more specific implementation details, we register jQuery as a named module as there is a risk that it can be concatenated with other files which may use AMD’s define() method, but not use a proper concatenation script that understands anonymous AMD module definitions.

The named AMD provides a safety blanket of being both robust and safe for most use-cases.

// Account for the existence of more than one global 
// instances of jQuery in the document, cater for testing 
// .noConflict()

var jQuery = this.jQuery || "jQuery", 
$ = this.$ || "$",
originaljQuery = jQuery,
original$ = $,
amdDefined;

define(['jquery'] , function ($) {
    $('.items').css('background','green');
    return function () {};
});

// The very easy to implement flag stating support which 
// would be used by the AMD loader
define.amd = {
    jQuery: true
};

Smarter jQuery Plugins

I’ve recently discussed some ideas and examples of how jQuery plugins could be written using Universal Module Definition (UMD) patterns here. UMDs define modules that can work on both the client and server, as well as with all popular script loaders available at the moment. Whilst this is still a new area with a lot of concepts still being finalized, feel free to look at the code samples in the section titled AMD && CommonJS below and let me know if you feel there’s anything we could do better.
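
To give a flavour of the idea, here is a generic UMD-style wrapper sketch (not the article’s exact sample; myPlugin is a hypothetical export):

(function (root, factory) {
    if (typeof define === 'function' && define.amd) {
        // AMD
        define(['jquery'], factory);
    } else if (typeof module === 'object' && module.exports) {
        // CommonJS / node
        module.exports = factory(require('jquery'));
    } else {
        // browser globals
        root.myPlugin = factory(root.jQuery);
    }
}(this, function ($) {
    function myPlugin (el) {
        return $(el).addClass('my-plugin');
    }
    return myPlugin;
}));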

What Script Loaders & Frameworks Support AMD?

In-browser:

Server-side:

AMD Conclusions

The above are very trivial examples of just how useful AMD modules can truly be, but they hopefully provide a foundation for understanding how they work.

You may be interested to know that many visible large applications and companies currently use AMD modules as a part of their architecture. These include IBM and the BBC iPlayer, which highlight just how seriously this format is being considered by developers at an enterprise-level.

For more reasons why many developers are opting to use AMD modules in their applications, you may be interested in this post by James Burke.

CommonJS A Module Format Optimized For The Server

CommonJS is a volunteer working group which aims to design, prototype and standardize JavaScript APIs. To date they’ve attempted to ratify standards for both modules and packages. The CommonJS module proposal specifies a simple API for declaring modules server-side and, unlike AMD, attempts to cover a broader set of concerns such as io, filesystem, promises and more.

Getting Started

From a structure perspective, a CJS module is a reusable piece of JavaScript which exports specific objects made available to any dependent code – there are typically no function wrappers around such modules (so you won’t see define used here for example).

At a high-level they basically contain two primary parts: a free variable named exports which contains the objects a module wishes to make available to other modules and a require function that modules can use to import the exports of other modules.

Understanding CJS: require() and exports

// package/lib is a dependency we require
var lib = require('package/lib');
 
// some behaviour for our module
function foo(){
    lib.log('hello world!');
}
 
// export (expose) foo to other modules
exports.foo = foo;

Basic consumption of exports

// define more behaviour we would like to expose
function foobar(){
        this.foo = function(){
                console.log('Hello foo');
        }
 
        this.bar = function(){
                console.log('Hello bar');
        }
}
 
// expose foobar to other modules
exports.foobar = foobar;
 
 
// an application consuming 'foobar'
 
// access the module relative to the path
// where both usage and module files exist
// in the same directory
 
var foobar = require('./foobar').foobar,
    test   = new foobar();
 
test.bar(); // 'Hello bar'

AMD-equivalent Of The First CJS Example

define(['package/lib'], function(lib){
 
    // some behaviour for our module
    function foo(){
        lib.log('hello world!');
    } 
 
    // export (expose) foo for other modules
    return {
        foobar: foo
    };
});
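
For completeness, a dependent module could then consume this definition via its module ID; the ID 'fooModule' below is purely illustrative and would normally come from the file name or your loader configuration.

// A minimal sketch (the module ID 'fooModule' is hypothetical) of
// consuming the AMD module above with a loader such as RequireJS
require(['fooModule'], function (fooModule) {
    fooModule.foobar(); // logs 'hello world!' via package/lib
});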

Consuming Multiple Dependencies

app.js

var modA = require('./foo');
var modB = require('./bar');

exports.app = function(){
    console.log('Im an application!');
}

exports.foo = function(){
    return modA.helloWorld();
}

bar.js

exports.name = 'bar';

foo.js

require('./bar');
exports.helloWorld = function(){
    return 'Hello World!!';
}
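
To see the three files above interact, a fourth file can require app.js and be run with Node; the file name main.js is just an illustration.

// main.js -- a minimal sketch (file name is hypothetical) consuming the
// app module defined above; run with `node main.js`
var app = require('./app');

app.app();               // logs 'Im an application!'
console.log(app.foo());  // logs 'Hello World!!' via foo.js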

What Loaders & Frameworks Support CJS?

In-browser:

Server-side:

Is CJS Suitable For The Browser?

There are developers who feel CommonJS is better suited to server-side development, which is one reason there’s currently a level of disagreement over which format should and will be used as the de facto standard in the pre-Harmony age moving forward. Some of the arguments against CJS include a note that many CommonJS APIs address server-oriented features which one would simply not be able to implement at a browser-level in JavaScript – for example, io, system and js could be considered unimplementable by the nature of their functionality.

That said, it’s useful to know how to structure CJS modules regardless so that we can better appreciate how they fit in when defining modules which may be used everywhere. Modules which have applications on both the client and server include validation, conversion and templating engines. The way some developers are approaching choosing which format to use is opting for CJS when a module can be used in a server-side environment and using AMD if this is not the case.

As AMD modules are capable of using plugins and can define more granular things like constructors and functions, this makes sense. CJS modules are only able to define objects, which can be tedious to work with if you’re trying to obtain constructors out of them.
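
To make that difference concrete, the following illustrative sketches (the Widget name and file names are hypothetical) show an AMD module returning a constructor directly, while the CJS version must attach it to exports and consumers must dig it back out.

// widget-amd.js -- an AMD module can return the constructor itself
define([], function () {
    function Widget(name) {
        this.name = name;
    }
    return Widget; // consumers receive the constructor directly
});

// widget-cjs.js -- a CJS module can only export an object, so the
// constructor has to be attached to exports
function Widget(name) {
    this.name = name;
}
exports.Widget = Widget;
// consumer: var Widget = require('./widget-cjs').Widget;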

Although it’s beyond the scope of this article, you may have also noticed that there were different types of ‘require’ methods mentioned when discussing AMD and CJS.

The concern with a similar naming convention is of course confusion, and the community is currently split on the merits of a global require function. John Hann’s suggestion here is that rather than calling it ‘require’, which would probably fail to achieve the goal of informing users about the difference between a global and inner require, it may make more sense to rename the global loader method something else (e.g. the name of the library). It’s for this reason that a loader like curl.js uses curl() as opposed to require().

Related Reading

Demystifying CommonJS Modules

JavaScript Growing Up

The RequireJS Notes On CommonJS

Taking Baby Steps With Node.js And CommonJS – Creating Custom Modules

Asynchronous CommonJS Modules for the Browser

The CommonJS Mailing List

AMD && CommonJS: Competing, But Equally Valid Standards

Whilst this article has placed more emphasis on using AMD over CJS, the reality is that both formats are valid and have a use.

AMD adopts a browser-first approach to development, opting for asynchronous behaviour and simplified backwards compatibility, but it doesn’t have any concept of File I/O. It supports objects, functions, constructors, strings, JSON and many other types of modules, running natively in the browser. It’s incredibly flexible.

CommonJS on the other hand takes a server-first approach, assuming synchronous behaviour, no global baggage (as John Hann would refer to it) and an attempt to cater for the future (on the server). What we mean by this is that because CJS supports unwrapped modules, it can feel a little closer to the ES.next/Harmony specifications, freeing you of the define() wrapper that AMD enforces. CJS modules, however, only support objects as modules.

Although the idea of yet another module format may be daunting, you may be interested in some samples of work on hybrid AMD/CJS and Universal AMD/CJS modules.
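
As a taste of what such hybrid modules look like, here is a minimal UMD-style sketch (the global name myModule is purely illustrative): the same factory is registered with an AMD loader, with CommonJS, or as a browser global, depending on what the environment provides.

(function (root, factory) {
    if (typeof define === 'function' && define.amd) {
        // AMD loader detected
        define(['exports'], factory);
    } else if (typeof exports !== 'undefined') {
        // CommonJS / Node.js
        factory(exports);
    } else {
        // plain browser global fallback
        factory((root.myModule = {}));
    }
}(this, function (exports) {
    // the module body is written once and reused in every environment
    exports.hello = function () {
        return 'hello world!';
    };
}));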

GIT: submodules

[Source: http://git-scm.com/book/en/v2/Git-Tools-Submodules]

Submodules

It often happens that while working on one project, you need to use another project from within it. Perhaps it’s a library that a third party developed or that you’re developing separately and using in multiple parent projects. A common issue arises in these scenarios: you want to be able to treat the two projects as separate yet still be able to use one from within the other.

Here’s an example. Suppose you’re developing a web site and creating Atom feeds. Instead of writing your own Atom-generating code, you decide to use a library. You’re likely to have to either include this code from a shared library like a CPAN install or Ruby gem, or copy the source code into your own project tree. The issue with including the library is that it’s difficult to customize the library in any way and often more difficult to deploy it, because you need to make sure every client has that library available. The issue with vendoring the code into your own project is that any custom changes you make are difficult to merge when upstream changes become available.

Git addresses this issue using submodules. Submodules allow you to keep a Git repository as a subdirectory of another Git repository. This lets you clone another repository into your project and keep your commits separate.

Starting with Submodules

We’ll walk through developing a simple project that has been split up into a main project and a few sub-projects.

Let’s start by adding an existing Git repository as a submodule of the repository that we’re working on. To add a new submodule you use the git submodule add command with the URL of the project you would like to start tracking. In this example, we’ll add a library called “DbConnector”.

$ git submodule add https://github.com/chaconinc/DbConnector
Cloning into 'DbConnector'...
remote: Counting objects: 11, done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 11 (delta 0), reused 11 (delta 0)
Unpacking objects: 100% (11/11), done.
Checking connectivity... done.

By default, submodules will add the subproject into a directory named the same as the repository, in this case “DbConnector”. You can add a different path at the end of the command if you want it to go elsewhere.
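
For example, to place the submodule under a lib directory instead (the lib/DbConnector path here is just an illustration), you would run:

$ git submodule add https://github.com/chaconinc/DbConnector lib/DbConnector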

If you run git status at this point, you’ll notice a few things.

$ git status
On branch master
Your branch is up-to-date with 'origin/master'.

Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

	new file:   .gitmodules
	new file:   DbConnector

First you should notice the new .gitmodules file. This is a configuration file that stores the mapping between the project’s URL and the local subdirectory you’ve pulled it into:

$ cat .gitmodules
[submodule "DbConnector"]
	path = DbConnector
	url = https://github.com/chaconinc/DbConnector

If you have multiple submodules, you’ll have multiple entries in this file. It’s important to note that this file is version-controlled with your other files, like your .gitignore file. It’s pushed and pulled with the rest of your project. This is how other people who clone this project know where to get the submodule projects from.

Since the URL in the .gitmodules file is what other people will first try to clone/fetch from, make sure to use a URL that they can access if possible. For example, if you use a different URL to push to than others would to pull from, use the one that others have access to. You can overwrite this value locally with git config submodule.DbConnector.url PRIVATE_URL for your own use.
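
That local override is a single configuration command (PRIVATE_URL stands in for whatever URL you actually push to):

$ git config submodule.DbConnector.url PRIVATE_URL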

The other listing in the git status output is the project folder entry. If you run git diff on that, you see something interesting:

$ git diff --cached DbConnector
diff --git a/DbConnector b/DbConnector
new file mode 160000
index 0000000..c3f01dc
--- /dev/null
+++ b/DbConnector
@@ -0,0 +1 @@
+Subproject commit c3f01dc8862123d317dd46284b05b6892c7b29bc

Although DbConnector is a subdirectory in your working directory, Git sees it as a submodule and doesn’t track its contents when you’re not in that directory. Instead, Git sees it as a particular commit from that repository.

If you want a little nicer diff output, you can pass the --submodule option to git diff.

$ git diff --cached --submodule
diff --git a/.gitmodules b/.gitmodules
new file mode 100644
index 0000000..71fc376
--- /dev/null
+++ b/.gitmodules
@@ -0,0 +1,3 @@
+[submodule "DbConnector"]
+       path = DbConnector
+       url = https://github.com/chaconinc/DbConnector
Submodule DbConnector 0000000...c3f01dc (new submodule)

When you commit, you see something like this:

$ git commit -am 'added DbConnector module'
[master fb9093c] added DbConnector module
 2 files changed, 4 insertions(+)
 create mode 100644 .gitmodules
 create mode 160000 DbConnector

Notice the 160000 mode for the DbConnector entry. That is a special mode in Git that basically means you’re recording a commit as a directory entry rather than a subdirectory or a file.
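
You can see this gitlink entry directly in the tree; the following output is a sketch of what git ls-tree would show for the commit recorded above.

$ git ls-tree HEAD DbConnector
160000 commit c3f01dc8862123d317dd46284b05b6892c7b29bc	DbConnector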