
AngularJS: Things I Wish I Were Told About Angular.js


25 May 2013

Recently I have worked on a project using Angular.js. As of writing this post, it’s a medium-sized app (~10 modules, ~20 controllers, ~5 services and ~10 directives) with quite decent test coverage. When I look back, I find I have learned much more about Angular.js than before. It was not a smooth ride: I’ve gone through lots of refactoring and rewriting. And there are lots of things I wish I had been told before I started to work on Angular.js.

Heads up! The post is based on Angular.js stable branch (1.0.x). Some of the content might not apply if you choose to live on the edge (1.1.x).

About Learning Curve

Angular.js has a very different learning curve from Backbone.js. Backbone.js has a steep learning curve up front: to write even the simplest app, you need to know pretty much everything about it. And once you have nailed it, there is pretty much nothing left other than some common building blocks and best practices.

However, Angular.js is very different. The initial barrier to getting started is very low: after going through the examples on the homepage, you should be good to go. You don’t need to understand all the core concepts, like modules, services, dependency injection and scopes. Controllers and some directives can get you started. And the documentation for the quick start is good enough.

The problem is that when you dive into Angular.js and start to write a serious app, the learning curve suddenly becomes very steep, and the documentation is either incomplete or cumbersome. It took quite some time to get over this stage and start to benefit from all this learning. This is the first thing I wish I had been told, so that I would not have felt frustrated when I ran into all these problems.

Understand Modules Before You Start

Angular.js does not force you to use its module system. So when I started, I had a huge CoffeeScript file with lots of controllers and no segregation. It soon went out of control: it was almost impossible to navigate the file, and every page had to load this huge file even if it used only one of the controllers.

So I had to stop and refactor my code. I first split the controllers into different modules based on different pages, then extracted some common filters and services. Then I made sure all modules had their dependencies set up correctly and that components were successfully injected. Of course, you also need to understand dependency injection (DI): a concept that is quite popular in the Java and C# world but not in dynamic languages. Fortunately (or unfortunately, some of you may think), I’ve done quite a bit of Java and C#, so it did not take me very long to understand DI in Angular.js, although using DI in JavaScript did not feel natural at first. Once you get used to DI, you will love the concept: it makes your code less coupled and easier to maintain.

So if you do not want to go through this refactoring, learn and plan your modules before you start.

When Your Controllers Get Bigger, Learn Scope

I am sure you used $scope from the very beginning, but probably without understanding it. Well, you don’t need to: you can treat it as magic. However, here are some very common questions you might have encountered:

Why does my view not get updated?

function SomeCtrl($scope) {
	setTimeout(function() {
		$scope.message = "I am here!"; // Angular is not aware of this change
	}, 1000);
}

Why does ng-model="touched" not work as expected?

function SomeCtrl($scope) {
	$scope.items = [{ value: 2 }];
	$scope.touched = false;
}

<li ng-repeat="item in items">
	<input type="text" ng-model="item.value">
	<input type="checkbox" ng-model="touched">
</li>

These all have something to do with $scope. More importantly, when your controllers get bigger and bigger, it’s time to break them into sub-controllers, and inheritance is closely related to $scope. You need to understand how $scope works:

  1. How the magical data binding is related to $scope.$watch() and $scope.$apply();
  2. How a $scope is shared between a controller and its child;
  3. When Angular.js will implicitly create a new $scope;
  4. How events ($scope.$on, $scope.$emit and $scope.$broadcast) work as communication among scopes.

FYI, the answers to the two questions above are: for the first, you need $scope.$apply(); for the second, Angular.js implicitly creates a new scope in ng-repeat.
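To see why the first example fails, here is a toy version of Angular’s dirty-checking loop. The method names ($watch, $apply, $digest) mirror Angular’s API, but this is a deliberately simplified sketch, not the real implementation:

```javascript
// Minimal dirty-checking scope: watchers are only evaluated when a
// digest runs, which is exactly what a plain setTimeout callback skips.
function Scope() {
  this.$$watchers = [];
}
Scope.prototype.$watch = function (getter, listener) {
  this.$$watchers.push({ getter: getter, listener: listener, last: undefined });
};
Scope.prototype.$digest = function () {
  var self = this;
  this.$$watchers.forEach(function (w) {
    var current = w.getter(self);
    if (current !== w.last) {
      w.listener(current, w.last);
      w.last = current;
    }
  });
};
Scope.prototype.$apply = function (fn) {
  if (fn) fn(this);
  this.$digest(); // the part that a bare callback never triggers
};

var $scope = new Scope();
var view = '';
$scope.$watch(function (s) { return s.message; },
              function (val) { view = val; });

// A plain assignment (like the one inside setTimeout) changes the model,
// but the watcher never fires...
$scope.message = 'I am here!';
// ...until a digest runs. $apply triggers one:
$scope.$apply();
// view === 'I am here!'
```

Real Angular wraps its own event handlers (ng-click, $http, $timeout) in $apply for you; only callbacks that come from outside Angular need to call it themselves.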

When You Manipulate DOM in Controller, Write Directives

Angular.js advocates separation of controller and view (DOM). But there are cases when you might need access to the DOM in controllers. A typical example is jQuery plugins. Although jQuery plugins seem a little bit “outdated”, the ecosystem is not going away any time soon. However, it’s a really bad idea to use a jQuery plugin directly in your controller: you make your controller impossible to test and you can hardly reuse any logic.

The best way to deal with this is to write your own directive. I am sure you’ve used directives, but you might not think you can write one. In fact, directives are one of the most powerful concepts in Angular.js. Think of them as reusable components: not only for jQuery plugins, but also for any of your (sub) controllers that get used in multiple places. It’s kind of like shadow DOM. And the best part is, your controller does not need to know about the existence of directives: communication is achieved through scope sharing and $scope events.

However, the documentation on directives is probably one of the worst parts of the Angular.js documentation. It’s full of jargon, with very few examples and lots of abstract explanations without any context. So don’t freak out if you cannot understand it. The way I got over this was to start writing directives, even without fully understanding the documentation. When I reached a stage where I didn’t know what to do, I went back to the documentation or Googled around. By the time I had finished wrapping a jQuery plugin into my own directive, I had mastered most of the concepts. Learning by doing is probably the most efficient way when there is no good documentation.

For any of you looking for a word of wisdom, here is my one-sentence summary: put control logic in the directive controller and DOM logic in the link function; scope sharing is the glue.
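That summary maps onto a directive definition object like the sketch below. The `myDatepicker` name and the plugin it would wrap are hypothetical, and `app` is stubbed here so the shape can be shown standalone; in a real app it would be your angular.module(...) and the definition object would be consumed by Angular, not returned to you.

```javascript
// Stub standing in for an AngularJS module, just to show the shape.
var app = {
  directive: function (name, factory) { return factory(); }
};

var definition = app.directive('myDatepicker', function () {
  return {
    restrict: 'A',
    scope: { onSelect: '&' }, // scope sharing: the glue to the outside
    controller: function ($scope) {
      // control logic: state and behavior, no DOM access (easy to test)
      $scope.format = 'yyyy-mm-dd';
    },
    link: function (scope, element, attrs) {
      // DOM logic: wire up the jQuery plugin here, e.g.
      // $(element).datepicker({ onSelect: scope.onSelect });
    }
  };
});
```

Keeping the controller DOM-free is what lets you unit test it without a browser, while the link function stays a thin adapter around the plugin.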

Angular.js Router Might Not Be What You Expected

This is one of the gotchas that took me some time to figure out. The Angular.js router has a very specific usage. It’s not like the Backbone.js router, which monitors location.hash and calls a function when the route matches. The Angular.js router works more like a server-side router: it is built to work with ng-view. When a route is hit, a template is loaded and injected into ng-view, and a controller can be instantiated. So if you don’t like this behavior, you have to roll your own service (wrap some existing routing library into an Angular.js service).

This is not a generic tip; rather, it is very specific. However, I wasted lots of time on this, so I put it here in the hope that it might save you a few hours.

About Testing

Since Angular.js advocates separation of view and controller, writing unit tests for controllers is very easy: no DOM is needed at all. Again, the purpose of unit tests is not to guarantee that your app runs without problems; that’s the purpose of integration tests. Unit tests are there to quickly locate the buggy code when your integration tests fail. The only unit tests that require some DOM manipulation are directive tests.

Thanks to dependency injection, when writing your controller unit tests you can easily use a mock for any of the dependencies, which makes assertions much easier. A typical way to do this is to pass a mock when instantiating your controller. Or, since Angular.js DI uses a last-win rule, you can rebind (override) a dependency by registering a fake implementation in the tests.

Angular.js also has an E2E test framework, which might not be easy to set up and run, especially if your page is rendered by some server-side language before JavaScript takes control. After spending quite some time on setting this up, I eventually gave up and fell back to Selenium tests as a higher level of integration tests.

Although I have been using Angular.js a lot lately, I cannot call myself an expert. In fact, one of the reasons I wrote this post is to learn: I would appreciate it if you could correct my mistakes or misunderstandings. Please discuss on HN.

If you have comments, you can post them on HN (link can be found at the end of the essay), send me an email at ruoysun AT gmail DOT com or ping me on Twitter @insraq.


Angular JS Book: Chapter 7: Other Aspects

[Source: AngularJS book by Brad Green and Shyam Seshadri]

In this chapter, we will take a look at some other useful features that are present in AngularJS, but weren’t covered at all or in depth in the chapters and examples so far.


Up to now, you have seen quite a few examples of the $location service in AngularJS. Most of them would have been fleeting glances: an access here, a set there. In this section, we will dive into what exactly the $location service in AngularJS is for, and when you should and shouldn’t use it.

The $location service is a wrapper around the window.location that is present in any browser. So why would you want to use it instead of working directly with window.location?

  • Goodbye global state: window.location is a prime example of global state (actually, both the window and document objects in the browser are prime examples). The minute you have global state in your application, testing, maintaining and working with it becomes a hassle (if not now, then definitely in the long run). The $location service hides this nastiness (what we call global state), and allows you to test the browser’s location details by injecting mocks during your unit tests.
  • API: window.location gives you total access to the contents of the browser location. That is, window.location gives you a string, while $location gives you nice, jQuery-like setters and getters to work with it in a clean way.
  • AngularJS integration: If you use $location, you can use it however you want. But with window.location, you would be responsible for notifying AngularJS of changes, and for listening to changes as well.
  • HTML5 integration: The $location service is smart enough to realize when the HTML5 History API is available within a browser and use it. If it’s not available, it will fall back to the default usage.

So when should you use the $location service? Any time you want to react to a change in the URL (that is not covered by $routes, which you should primarily use for working with URL-based views), as well as to effect a change in the current URL in the browser.

Let’s consider a small example of how you would use the $location service in a real-world application. Consider a case where we have a datepicker, and when a date is selected, the app navigates to a certain URL. Let us take a look at how that might look:

// Assume that the datepicker calls $scope.dateSelected with the date
$scope.dateSelected = function(dateTxt) {
  $location.path('/filteredResults?startDate=' + dateTxt);
  // If this were being done in the callback for
  // an external library, like jQuery, then we would have to
  // call $scope.$apply() here for the change to take effect
};


*NOTE: To $apply, or Not to $apply?

There is confusion amongst AngularJS developers about when $scope.$apply() should be called and when it shouldn’t. Recommendations and rumors on the Internet are rampant. This section will make it crystal clear.

But first, let us try to put $apply in a simpler form.

Scope.$apply is like a lazy worker. It is told to do a lot of work, and it is responsible for making sure that the bindings are updated and the view reflects all those changes. But rather than doing this work all the time, it does it only when it feels it has enough work to be done. In all other cases, it just nods, and notes the work for later. It only actually does the work when you get its attention and tell it explicitly to work. AngularJS does this internally at regular intervals within its lifecycle, but if the call comes from outside (say a jQuery UI event), scope.$apply just takes note, but does nothing. That is why we have to call scope.$apply to tell it, “Hey! You need to do this right now, and not wait!”

Here are four quick tips about when (and how) to call $apply.

  • DO NOT call it all the time. Calling $apply when AngularJS is happily ticking away (in its $digest cycle, as we call it) will cause an exception. So “better safe than sorry” is not the approach you want to use.
  • DO CALL it when controls outside of AngularJS (DOM events, external callbacks such as jQuery UI controls, and so on) are calling AngularJS functions. At that point, you want to tell AngularJS to update itself (the models, the views, and so on), and $apply does just that.
  • Whenever possible, execute your code or function by passing it to $apply, rather than executing the function and then calling $apply(). For example, execute the following code:

$scope.$apply(function() {
  $scope.variable1 = 'some value';
});

instead of the following:

$scope.variable1 = 'some value';
$scope.$apply();

While both of these will have the same effect, they differ in one significant way. The first will capture any errors that happen while the passed-in function executes, while the latter will quietly ignore any such errors. You will get error notifications from AngularJS only when you do the first.

  • Consider using something like safeApply:
$scope.safeApply = function(fn) {
  var phase = this.$root.$$phase;
  if (phase == '$apply' || phase == '$digest') {
    if (fn && (typeof(fn) === 'function')) {
      fn();
    }
  } else {
    this.$apply(fn);
  }
};

You can monkey patch this into the topmost scope or the root scope, and then use the $scope.safeApply function everywhere. This has been under discussion, and hopefully in a future release this will be the default behavior.


What are the other methods available on the $location object? Table 7-1 contains a quick summary for you to use in a bind.

Let us take a look at how the $location service would behave if the URL in the browser (using as an example host) were!/path?param1=value1#hashValue.

Table 7-1. Functions on the $location service
Getter Function –> Getter Value –> Setter Function

  • absUrl() –>!/path?param1=value1#hashValue –> N/A
  • hash() –> hashValue –> hash('newHash')
  • host() –> –> N/A
  • path() –> /path –> path('/newPath')
  • protocol() –> http –> N/A
  • search() –> {'param1': 'value1'} –> search({'c': 'def'})
  • url() –> /path?param1=value1#hashValue –> url('/newPath?p2=v2')

The Setter Function column in Table 7-1 has some sample values that denote the type of object the setter function expects.

Note that the search() setter has a few modes of operation:

  • Calling search(searchObj) with an object<string, string> sets all the parameters and their values
  • Calling search(string) will set the URL params as q=String directly in the URL
  • Calling search(param, value) with a string param and a value sets (or, when called with null, removes) a particular search parameter in the URL

Note that using any one of the setters does not mean that window.location will get changed instantly!

The $location service plays well with the Angular lifecycle, so all changes to the location will accumulate and get applied together at the end of the cycle. So feel free to make those changes, one after the other, without fear that the user will see a URL that keeps flickering and changing underneath them.

HTML5 Mode and Hashbang Mode

The $location service can be configured using the $locationProvider (which can be injected, just like everything else in AngularJS). Of particular interest are two properties on this provider, which are:

  • html5Mode: A boolean value which dictates whether the $location service works in HTML5 mode or not
  • hashPrefix: A string value (actually a single character) that is used as the prefix for Hashbang URLs (in Hashbang mode or in legacy browsers in HTML5 mode). By default it is empty, so Angular’s hash prefix is just ‘#’. If the hashPrefix is set to ‘!’, then Angular uses what we call Hashbang URLs (‘#!’ followed by the URL).

You might ask, just what are these modes? Well, pretend that you have a super awesome website at (a placeholder host for this example) that uses AngularJS.

Let’s say you have a particular route (with some parameters and a hash), such as /foo?bar=123#baz.

In normal Hashbang mode (with the hashPrefix set to ‘!’), or in legacy browsers without HTML5 History API support, your URL would look something like:!/foo?bar=123#baz

While in HTML5 mode, the URL would simply look like this:

In both cases, location.path() would be /foo, would be bar=123, and location.hash() would be baz. So if that is the case, why wouldn’t you want to use the HTML5 mode?

The Hashbang approach works seamlessly across all browsers, and requires the least amount of configuration. You just need to set the hashPrefix (‘!’ is the convention) and you are good to go.

The HTML5 mode, on the other hand, talks to the browser URL through the use of HTML5 History API. The $location service is smart enough to figure out whether HTML5 mode is supported or not, and fall back to the Hashbang approach if necessary, so you don’t need to worry about additional work. But you do have to take care of the following:

Server-side configuration
Because HTML5 links look like any other URL on your application, you need to take care on the server side to route all links within your app to your main HTML (most likely the index.html). For example, if your app is the landing page for, and you have a route /amazing?who=me in your app, then the URL that the browser would show is

When you browse through your app, this will be fine, as the HTML5 History API kicks in and takes care of things. But if you try to browse directly to this URL, your server will look at you as if you have gone crazy, as there is no such known resource on its side. So you would have to ensure that all requests to /amazing get redirected to /index.html#!/amazing.

AngularJS will kick in from that point onward and take care of things. It will detect changes to the path and redirect to the correct AngularJS routes that were defined.

Link rewriting
You can easily specify URLs as follows:

<a href="/some?foo=bar">link</a>

Depending on whether you are using HTML5 mode or not, AngularJS will take care to redirect to /some?foo=bar or index.html#!/some?foo=bar, respectively. No additional steps are required on your part. Awesome, isn’t it?

But the following types of links will not be rewritten, and the browser will perform a full reload on the page:

a. Links that contain a target attribute, such as the following:

<a href="/some/link" target="_self">link</a>

b. Absolute links going to a different domain:

<a href="">link</a>

This is different because it is an absolute URL, while the previous example used the existing base URL.

c. Links starting with a different base path when one is already defined:

<a href="/some-other-base/link">link</a>

Relative Links
Be sure to check all relative links, images, scripts, and so on. You must either specify the URL base in the head of your main HTML file (<base href="/my-base">), or you must use absolute URLs (starting with /) everywhere, because relative URLs will be resolved to absolute URLs using the initial absolute URL of the document, which is often different from the root of the application.

Running Angular apps with the History API enabled from document root is strongly encouraged, as it takes care of all relative link issues.

AngularJS Module Methods

The AngularJS Module is responsible for defining how your application is bootstrapped. It also declaratively defines the pieces of your application. Let us take a look at how it accomplishes this.

Where’s the Main Method?

Those of you coming from a programming language like Java or even Python might be wondering: where is the main method in AngularJS? You know, the one that bootstraps everything, and is the first thing to get executed? The one that instantiates and wires everything together, then tells your application to go run?

AngularJS doesn’t have that. What it has instead is the concept of modules. Modules allow us to declaratively specify our application’s dependencies, and how the wiring and bootstrapping happens. The reason for this kind of approach is manifold:

  1. It is declarative. That means it is written in a way that is easier to write and understand. It reads like English!
  2. It is modular. It forces you to think about how you define your components and dependencies, and makes them explicit.
  3. It allows for easy testing. In your unit tests, you can selectively pull in modules, and avoid the untestable portions of your code. And in your scenario tests, you can load additional modules, which can make working with some components easier.

Let us first take a look at how you would use a module that you have defined, then take a look at how we would declare one.

Say we have a module, in fact, the module for our application, called “MyAwesomeApp.” In my HTML, I could just add the following to the <html> tag (or technically, any other tag):

<html ng-app="MyAwesomeApp">

The ng-app directive tells AngularJS to bootstrap your application using the MyAwesomeApp module.

So how would that module be defined? Well, for one, we recommend that you have separate modules for your services, directives, and filters. Your main module could then just declare the other modules as a dependency (just like we did in Chapter 4 with the RequireJS example).

This makes it easier to manage your modules, as they are nice, complete chunks of code. Each module has one and only one responsibility. This also allows your tests to load only the modules they care about, and thus reduces the amount of initialization that needs to happen. The tests can be small and focused.

Loading and Dependencies

Module loading happens in two distinct phases, and the functions reflect them. These are the Config and the Run blocks (or phases):

The Config block
AngularJS hooks up and registers all the providers in this phase. Because of this, only providers and constants can be injected into Config blocks. Services that may or may not have been initialized cannot be injected.

The Run block
Run blocks are used to kickstart your application, and start executing once the injector has finished being created. To prevent further system configuration from this point onwards, only instances and constants can be injected into Run blocks. The Run block is the closest thing you are going to find to a main method in AngularJS.
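The two phases can be sketched as a toy module loader: config blocks run first and see only providers, then the injector turns providers into instances, then run blocks execute with those instances. This is a simplified model, not Angular’s actual loader; the greeter provider here is an invented example.

```javascript
// Toy two-phase module: queue config and run blocks, then bootstrap.
function Module() {
  this.configBlocks = [];
  this.runBlocks = [];
}
Module.prototype.config = function (fn) { this.configBlocks.push(fn); return this; };
Module.prototype.run = function (fn) { this.runBlocks.push(fn); return this; };
Module.prototype.bootstrap = function (provider) {
  // Phase 1: config blocks may only touch the provider
  this.configBlocks.forEach(function (fn) { fn(provider); });
  // Injector "created": the configured provider yields an instance
  var instance = provider.$get();
  // Phase 2: run blocks get real instances — the closest thing to main()
  this.runBlocks.forEach(function (fn) { fn(instance); });
};

var log = [];
var greeterProvider = {
  salutation: 'Hello',
  $get: function () {
    var s = this.salutation;
    return { greet: function (name) { return s + ' ' + name; } };
  }
};

new Module()
  .config(function (p) { p.salutation = 'Namaste'; }) // before instances exist
  .run(function (greeter) { log.push(greeter.greet('world')); })
  .bootstrap(greeterProvider);
// log → ['Namaste world']
```

The ordering is the whole point: by the time any run block (or controller) sees the greeter, its configuration is frozen.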

Convenience Methods

What can you do with a module? You can create controllers, directives, filters, and services, but the module class allows you to do more, as you can see in Table 7-2:
Table 7-2. Module configuration methods
API Method –> Description

  • config(configFn) –> Use this method to register work that needs to be done when the module is loading.
  • constant(name, object) –> This happens first, so you can declare all your constants app-wide, and have them available in all configuration blocks (the first method in this list) and instance methods (all methods from here on, like controller, service, and so on).
  • controller(name, constructor) –> We have seen a lot of examples of this; it basically sets up a controller for use.
  • directive(name, directiveFactory) –> As discussed in Chapter 6, this allows you to create directives within your app.
  • filter(name, filterFactory) –> Allows you to create named AngularJS filters, as discussed in previous chapters.
  • run(initializationFn) –> Use this method when you want to perform work that needs to happen once the injector is set up, right before your application is available to the user.
  • value(name, object) –> Allows values to be injected across the application.
  • service(name, serviceFactory) –> Covered in the next section.
  • factory(name, factoryFn) –> Covered in the next section.
  • provider(name, providerFn) –> Covered in the next section.

You might realize that we are missing the details of three particular API calls from the preceding table: factory, provider, and service. There is a reason for that: it is quite easy to confuse the usage of the three, so we will dive into a small example that should better illustrate when (and how!) to use each one.

The Factory
The Factory API call is used whenever we have a class or object that needs some amount of logic or parameters before it can be initialized. A Factory is a function that is responsible for creating a certain value (or object). Let us take the example of a greeter function that needs to be initialized with its salutation:

function Greeter(salutation) {
	this.greet = function(name) {
		return salutation + ' ' + name;
	};
}

The greeter factory would look something like:

myApp.factory('greeter', function(salut) {
  return new Greeter(salut);
});

and it would be called as:

var myGreeter = greeter('Halo');

Another simpler example:

var app = angular.module('myApp', []);

// Factory
app.factory('testFactory', function(){
    return {
        sayHello: function(text){
            return "Factory says \"Hello " + text + "\"";
        }
    };
});

// AngularJS Controller that uses the factory
function HelloCtrl($scope, testFactory) {
    $scope.fromFactory = testFactory.sayHello("World");
}

The Service
What about services? Well, the difference between a Factory and a Service is that the Factory invokes the function passed to it and returns the result, while the Service invokes “new” on the constructor function passed to it and returns the result.

So the preceding greeter Factory could be replaced with a greeter Service as follows:

myApp.service('greeter', Greeter);

Every time I asked for a greeter, AngularJS would call new Greeter() and return that.

Another simpler example:

var app = angular.module('myApp', []);

// Service definition
app.service('testService', function(){
    this.sayHello = function(text){
        return "Service says \"Hello " + text + "\"";
    };
});

// AngularJS Controller that uses the service
function HelloCtrl($scope, testService) {
    $scope.fromService = testService.sayHello("World");
}
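The factory/service distinction fits in one sketch: the injector *calls* a factory function and uses its return value, while it *news* a service constructor. The `fromFactory`/`fromService` helpers below are stand-ins for the injector, not Angular APIs:

```javascript
// What the injector does with each registration kind (simplified):
function fromFactory(factoryFn) { return factoryFn(); }         // invoke it
function fromService(Constructor) { return new Constructor(); } // new it

function Greeter(salutation) {
  this.greet = function (name) { return salutation + ' ' + name; };
}

// factory: we control construction ourselves, so we can pass arguments
var g1 = fromFactory(function () { return new Greeter('Halo'); });

// service: the injector constructs it for us, with no arguments
function GreeterService() {
  this.greet = function (name) { return 'Hello ' + name; };
}
var g2 = fromService(GreeterService);

// g1.greet('Bob') → 'Halo Bob'
// g2.greet('Bob') → 'Hello Bob'
```

So when the object needs setup logic or constructor arguments, reach for a factory; when a plain `new` of your constructor is enough, a service is less ceremony.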

The Provider
This is the most complicated (and thus most configurable, obviously) of the lot. The Provider combines both the Factory and the Service, and also throws in the benefit of being able to configure the provider function before the injection system is fully in place (in the config blocks, that is).

Let’s see how a modified greeter Service using the Provider might look:

myApp.provider('greeter', function() {
  var salutation = 'Hello';
  this.setSalutation = function(s) {
    salutation = s;
  };
  function Greeter(a) {
    this.greet = function() {
      return salutation + ' ' + a;
    };
  }
  this.$get = function(a) {
    return new Greeter(a);
  };
});

This allows us to set the salutation at runtime (for example, based on the language of the user).

var myApp = angular.module('myApp', []).config(function(greeterProvider) {
  greeterProvider.setSalutation('Namaste');
});

AngularJS would internally call $get whenever someone asked for an instance of the greeter object.
* Warning!
There is a slight but significant difference between using:

angular.module('MyApp', [...])

and:

angular.module('MyApp')

The difference is that the first creates a new Angular module, pulling in the module dependencies listed in the square brackets ([…]). The second uses the existing module that has already been defined by the first call.

So you should make sure that you use the following code only once in your entire application:
angular.module('MyApp', [...]) // Or MyModule, if you are modularizing your app

If you do not plan to save it to a variable and refer to it across your application, then use angular.module('MyApp') in the rest of the files to ensure you get a handle to the correct AngularJS module. Everything on the module must be defined by accessing the variable, or be added on the spot where the module has been defined.

Communicating Between Scopes with $on, $emit, and $broadcast

AngularJS scopes have a very hierarchical and nested structure. There is one main $rootScope (per Angular app, or per ng-app, that is), which all other scopes either inherit from or are nested under. Quite often, you will find that scopes don’t share variables or do not prototypically inherit from one another.

In such a case, how do you communicate between scopes? One option is creating a service that is a singleton in the scope of the app, and processing all inter-scope communication through that service.

There is another option in AngularJS: communicating through events on the scope. There are some restrictions; for example, you cannot generally broadcast an event to all watching scopes. You have to be selective about whether you are communicating to your parents or to your children.

But before we discuss that, how do you listen to these events? Here is an example where our scope on any Star System is waiting and watching for an event we call “planetDestroyed.”

scope.$on('planetDestroyed', function(event, galaxy, planet) {
  // Custom event, so we know which planet was destroyed
  scope.alertNearbyPlanets(galaxy, planet);
});

Where do these additional arguments to the event listener come from? Let’s take a look at how an individual planet could communicate with its parent Star System.

scope.$emit('planetDestroyed', scope.myGalaxy, scope.myPlanet);

The additional arguments to $emit are passed on as function parameters to the listener functions. Also, $emit communicates only upwards from its current scope, so the poor denizens of the planet (if they had a scope to themselves) would not be notified if their planet was being destroyed.

Similarly, if a Galaxy wanted to communicate downwards to its child, the Star System scope, then it could communicate as follows:

scope.$broadcast('selfDestructSystem', targetSystem);

Then all Star Systems listening for the event could look at the target system, and decide if they should self-destruct, using these commands:

scope.$on('selfDestructSystem', function(event, targetSystem) {
  if (scope.mySystem === targetSystem) {
    scope.selfDestruct(); // Go Ka-boom!!
  }
});

Of course, as the event propagates all the way up (or down), it might become necessary at a certain level or scope to say, “Enough! You shall not PASS!” or to prevent what the event does by default. The event object passed to the listener has functions to handle all of the above, and more, so let us take a quick look at what you can get up to with the event object in Table 7-3.

Table 7-3. Event object properties and methods
Property of event –> Purpose

  • event.targetScope –> The scope which emitted or broadcasted the event originally
  • event.currentScope –> The scope currently handling the event
  • –> The name of the event
  • event.stopPropagation() –> A function which will prevent further event propagation (this is available only for events that were $emitted)
  • event.preventDefault() –> This actually doesn’t do anything but set defaultPrevented to true. It is up to the implementer of the listeners to check on defaultPrevented before taking action 
  • event.defaultPrevented –> true if preventDefault was called


What Is Scrum


Scrum is a process in which a set of best practices is applied on a regular basis to work collaboratively, as a team, and obtain the best possible result from a project. These practices support one another, and their selection originates from a study of the way highly productive teams work.

In Scrum, partial and regular deliveries of the final product are made, prioritized by the benefit they bring to the recipient of the project. For this reason, Scrum is especially suited to projects in complex environments, where results are needed soon, where requirements are changing or poorly defined, and where innovation, competitiveness, flexibility and productivity are fundamental.

Scrum is also used to resolve situations in which the client is not receiving what they need, when deliveries take too long, costs skyrocket or quality is unacceptable; when the ability to react to the competition is needed; when team morale is low and turnover is high; when it is necessary to systematically identify and resolve inefficiencies; or when one wants to work using a process specialized in product development.


En 1986 Hirotaka Takeuchi e Ikujiro Nonaka describieron una nueva aproximación holística que incrementa la rapidez y la flexibilidad en el desarrollo de nuevos productos comerciales.[1]Takeuchi y Nonaka comparan esta nueva aproximación holística, en la cual las fases se traslapan de manera intensa y el proceso completo es realizado por un equipo con funciones transversales, como en el rugby, donde los 8 delanteros del equipo «actúan como una unidad para intentar desplazar a los fowards del equipo contrario». Los casos de estudio provienen de las industrias automovilísticas, así como de fabricación de máquinas fotográficas, computadoras e impresoras.

En 1991 Peter DeGrace y Leslie Stahl en su libro Wicked Problems, Righteous Solutions (A problemas malvados, soluciones virtuosas),[2] se refirieron a esta aproximación como scrum (melé en inglés), un término propio del rugby mencionado en el artículo por Takeuchi y Nonaka.

A principios de los años 1990 Ken Schwaber empleó una aproximación que lo llevó a poner en práctica el scrum en su compañía, Advanced Development Methods.[cita requerida] Por aquel tiempo Jeff Sutherland desarrolló una aproximación similar en Easel Corporation y fue el primero en denominar scrum. En 1995 Schwaber y Sutherland, durante el OOPSLA ’95 desarrollado en Austin, presentaron en paralelo una serie de artículos describiendo scrum, siendo ésta la primera aparición pública de la metodología.[cita requerida] Durante los años siguientes,Schwaber y Sutherland, colaboraron para consolidar los artículos antes mencionados, así como sus experiencias y el conjunto de mejores prácticas de la industria que conforman a lo que ahora se le conoce como scrum.[cita requerida] En 2001, Schwaber y Mike Beedle describieron la metodología en el libro Agile Software Development with Scrum.

The Process

In Scrum, a project is executed in short, fixed time blocks (iterations of one calendar month, or as short as two weeks if needed). Each iteration has to produce a complete result, an increment of the final product that can be delivered to the client with minimal effort whenever requested.

The process starts from the prioritized list of product objectives/requirements, which acts as the project plan. In this list, the client prioritizes the objectives by balancing the value they deliver against their cost, and they are distributed across iterations and deliveries. On a regular basis, the client can maximize the utility of what is developed and the return on investment by re-planning the objectives at the start of each iteration.

The activities carried out in Scrum are the following:

Iteration planning

On the first day of the iteration, the iteration planning meeting is held. It has two parts:

  1. Requirements selection (4 hours maximum). The client presents the prioritized list of product or project requirements to the team. The team asks the client about any open questions and selects the highest-priority requirements it commits to completing in the iteration, so that they can be delivered if the client requests it.
  2. Iteration planning (4 hours maximum). The team draws up the list of iteration tasks needed to develop the requirements it has committed to. Effort is estimated jointly, and team members assign the tasks to themselves.

Iteration execution

Every day the team holds a synchronization meeting (15 minutes maximum). Each team member inspects the work the others are doing (dependencies between tasks, progress toward the iteration goal, obstacles that could block that goal) in order to make the adaptations needed to meet the commitment. In the meeting, each team member answers three questions:

  • What have I done since the last synchronization meeting?
  • What am I going to do from now on?
  • What impediments do I have or expect to have?

During the iteration, the Facilitator makes sure the team can meet its commitment and that its productivity is not diminished. The Facilitator:

Removes obstacles the team cannot resolve on its own.

Protects the team from external interruptions that could affect its commitment or productivity.

Inspection and adaptation

On the last day of the iteration, the iteration review meeting is held. It has two parts:

  1. Demonstration (4 hours maximum). The team presents to the client the requirements completed in the iteration, in the form of a product increment ready to be delivered with minimal effort. Based on the results shown and on any changes in the project context, the client makes the necessary adaptations objectively, from the very first iteration onward, re-planning the project.
  2. Retrospective (4 hours maximum). The team analyzes how it has been working and what problems might keep it from progressing properly, continuously improving its productivity. The Facilitator will be in charge of removing the identified obstacles.

AngularJS Book: Chapter 6: Directives

With directives, you can extend HTML to add declarative syntax to do whatever you like. By doing so, you can replace generic <div>s and <span>s with elements and attributes that actually mean something specific to your application. The ones that come with Angular provide basic functionality, but you can create your own to do things
specific to your application.

First we’re going to go over the directives API and how it fits within the Angular startup and runtime lifecycles. From there, we’ll use this knowledge to create several classes of directives. We’ll finish the chapter with how to write unit tests for directives and how to make these run quickly.

But first, a few notes on the syntax for using directives.

Directives and HTML Validation

Throughout this book, we’ve used Angular’s built-in directives with the ng-directive-name syntax. Examples include ng-repeat, ng-view, and ng-controller. Here, the ng portion is the namespace for Angular, and the part after the dash is the name of the directive.

While we prefer this syntax for ease of typing, it isn’t valid in many HTML validation schemes. To support these, Angular lets you invoke any directive in several ways. The following syntaxes, laid out in Table 6-1, are all equivalent to allow for your preferred validator to work properly:

Validator –> Format –> Example

  • none –> namespace-name –> ng-repeat='item in items'
  • XML –> namespace:name –> ng:repeat='item in items'
  • HTML5 –> data-namespace-name –> data-ng-repeat='item in items'
  • xHTML –> x-namespace-name –> x-ng-repeat='item in items'

Because you can use any of these, the Angular documentation lists directives with a camel-case format, instead of any of these options. For example, ng-repeat is found under the title ngRepeat. As you’ll see in a bit, you’ll use this naming format when defining your own directives.

If you don’t use an HTML validator (most folks don’t), you’ll be just fine using the namespace-directive syntax as you’ve seen in the examples so far.

API Overview

A basic pseudo-code template for creating any directive follows:

var myModule = angular.module(...);
myModule.directive('namespaceDirectiveName', function factory(injectables) {
  var directiveDefinitionObject = {
    restrict: string,
    priority: number,
    template: string,
    templateUrl: string,
    replace: bool,
    transclude: bool,
    scope: bool or object,
    controller: function controllerConstructor($scope, $element, $attrs, $transclude) { ... },
    require: string,
    link: function postLink(scope, iElement, iAttrs) { ... },
    compile: function compile(tElement, tAttrs, transclude) {
      return {
        pre: function preLink(scope, iElement, iAttrs, controller) { ... },
        post: function postLink(scope, iElement, iAttrs, controller) { ... }
      };
    }
  };
  return directiveDefinitionObject;
});
Some of the options are mutually exclusive, most of them are optional, and all of them have details that are worth explaining.

Table 6-2 provides an overview of when you’d use each of the options.
Table 6-2. Directive definition options
Property –> Purpose

  • restrict –> Declare how directive can be used in a template as an element, attribute, class, comment, or any combination.
  • priority –> Set the order of execution in the template relative to other directives on the element.
  • template –> Specify an inline template as a string. Not used if you’re specifying your template as a URL.
  • templateUrl –> Specify the template to be loaded by URL. This is not used if you’ve specified an inline template as a string.
  • replace –> If true, replace the current element. If false or unspecified, append this directive to the current element.
  • transclude –> Lets you move the original children of a directive to a location inside the new template.
  • scope –> Create a new scope for this directive rather than inheriting the parent scope.
  • controller –> Create a controller which publishes an API for communicating across directives.
  • require –> Require that another directive be present for this directive to function correctly.
  • link –> Programmatically modify resulting DOM element instances, add event listeners, and set up data binding.
  • compile –> Programmatically modify the DOM template for features across copies of a directive, as when used in ng-repeat. Your compile function can also return link functions to modify the resulting element instances.

Let’s dig into the details.

Naming Your Directive

You create a name for your directive with a module’s directive function, as in the following:

myModule.directive('directiveName', function factory(injectables) { ... });

Though you can name your directives anything you like, the convention is to pick a prefix namespace that identifies your directives and prevents them from colliding with external directives that you might include in your project.

You certainly wouldn’t want to name them with an ng- prefix, as that might collide with Angular’s bundled directives. If you work at SuperDuper MegaCorp, you could choose super-, superduper-, or even superduper-megacorp-, though you might choose the first option just for ease of typing.

As previously noted, Angular uses a normalized naming scheme for directives and will make camel-cased directive names available in templates in the five different validator-friendly varieties. For example, if you’ve picked super- as your prefix and you’re writing a date-picker component, you might name it superDatePicker. In templates, you could then use it as super-date-picker, super:date-picker, data-super-date-picker, or another variant.
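The normalization is mechanical: each capital letter in the camel-cased name expands to a dash followed by its lowercase form. A tiny illustrative helper (not Angular's internal code) shows the mapping:

```javascript
// Hypothetical helper mirroring how camel-cased directive names
// map to dashed names in templates.
function camelToDash(name) {
  return name.replace(/[A-Z]/g, function(letter) {
    return '-' + letter.toLowerCase();
  });
}

console.log(camelToDash('superDatePicker')); // super-date-picker
console.log(camelToDash('ngRepeat'));        // ng-repeat
```

Angular performs the reverse mapping when it scans your templates, which is why the documentation indexes directives under their camel-cased names.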

The Directive Definition Object

As previously mentioned, most of the options in the directive definition are optional. In fact, there are no hard requirements, and you can construct useful directives out of many subsets of the parameters. Let’s take a walk through what the options do.


The restrict property lets you specify the declaration style for your directive—that is, whether it can be used as an element name, attribute, class, or comment. You can specify one or more declaration styles using a character to represent each of them from the set in Table 6-3:
Table 6-3. Options for directive declaration usage

  • Character –> Declaration style –> Example
  • E –> element –> <my-menu title="Products"></my-menu>
  • A –> attribute –> <div my-menu="Products"></div>
  • C –> class –> <div class="my-menu: Products"></div>
  • M –> comment –> <!-- directive: my-menu Products -->

If you wanted to use your directive as either an element or an attribute, you’d pass EA as the restrict string.

If you omit the restrict property, the default is A, and your directive can be used only as an attribute.

If you plan to support IE8, attribute- and class-based directives are your best bet, as it requires extra effort to make new elements work properly. See the Angular documentation for full details on this.


In cases where you have multiple directives on a single DOM element and where the order in which they’re applied matters, you can use the priority property to order their application. Higher numbers run first. The default priority is 0 if you don’t specify one.

Needing to set priority will likely be a rare occurrence. One example of a directive that needs to set priority is ng-repeat. When repeating elements, we want Angular to make copies of the template element before other directives get applied. Without this, the other directives would get applied to the canonical template element rather than to the repeated elements we want in our app.

Though it’s not in the documentation, you can search the Angular source for the few other directives that use priority. For ng-repeat, we use a priority value of 1000, so there’s plenty of room for other priorities beneath it.
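The ordering rule itself is simple: directives collected on an element are applied in descending priority. The sketch below is a simplified model of that rule, not Angular internals, and the custom directive names and their priority values are hypothetical:

```javascript
// Directives collected on one element, sorted the way Angular applies
// them: higher priority first. ngRepeat's 1000 puts it ahead of the rest.
var directives = [
  { name: 'myHighlight', priority: 0 },    // default priority
  { name: 'ngRepeat',    priority: 1000 },
  { name: 'myTooltip',   priority: 100 }   // hypothetical custom value
];

directives.sort(function(a, b) { return b.priority - a.priority; });

var order = directives.map(function(d) { return d.name; });
console.log(order); // [ 'ngRepeat', 'myTooltip', 'myHighlight' ]
```

This is why ng-repeat's high value works: it guarantees the repeater stamps out copies before lower-priority directives ever run.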


When creating components, widgets, controls, and so on, Angular lets you replace or wrap the contents of an element with a template that you provide. For example, if you were to create a set of tabbed views in your UI, you would render something like Figure 6-1.
Figure 6-1. Tabbed views


Instead of having a bunch of <div>, <ul><li>, and <a> elements, you could create the directives <tab-set> and <tab>, which declare the structure of each tab respectively. Your HTML then does a much better job of expressing the intent of your template. The end result could look like:

<tab-set>
  <tab title='Home'>
    <p>Welcome home!</p>
  </tab>
  <tab title='Preferences'>
    <!-- preferences UI goes here -->
  </tab>
</tab-set>

You could also data bind the strings for title and the tab content via a controller on <tab> or <tabset>. And it’s not limited to tabs—you can do menus, accordions, pop-ups, dialog boxes, or anything else your app needs in this way.

You specify the replacement DOM elements through either the template or the templateUrl property. You’d use template to set the template content via a string, and templateUrl to refer to the template to be loaded from a file on the server. As you’ll see in the following example, you can pre-cache these templates to reduce the number of GET requests, potentially improving performance.

Let’s write a dumb directive: a <hello> element that just replaces itself with <div>Hi there</div>. In it, we’ll set restrict to allow elements and set template to what we want to display. As the default behavior is to append content to elements, we’ll set replace to true to replace the original template:

var appModule = angular.module('app', []);
appModule.directive('hello', function() {
  return {
    restrict: 'E',
    template: '<div>Hi there</div>',
    replace: true
  };
});

We’ll use it in a page like so:

<html lang='en' ng-app='app'>
  <body>
    <hello></hello>
    <script src='angular.js'></script>
  </body>
</html>

Loading it into a browser, we see “Hi there.”

If you were to view the page source, you’d still see the <hello></hello> on the page, but if you inspected the generated source (in Chrome, right-click on Hi there and select Inspect Element), you would see:

<div>Hi there</div>

The <hello></hello> was replaced by the <div> from the template.
If you were to remove the replace: true from the directive definition, you’d see <hello><div>Hi there</div></hello>.

You’ll usually want to use templateUrl instead of template, as typing HTML into strings isn’t much fun. The template property is usually only useful for very small templates. templateUrl is also useful because these templates can be cached by setting the appropriate headers. We could rewrite our hello directive example like so:

var appModule = angular.module('app', []);
appModule.directive('hello', function() {
  return {
    restrict: 'E',
    templateUrl: 'helloTemplate.html',
    replace: true
  };
});

and in helloTemplate.html, you would put:

<div>Hi there</div>

* If you are using Chrome as your browser and you are loading the page from your local filesystem, the “same origin policy” will prevent Chrome from loading these templates from file://, and you’ll get an error that says something like “Origin null is not allowed by Access-Control-Allow-Origin.” You have two options here:

  • Load your app through a web server
  • Set a flag on Chrome. You can do this by running Chrome from the command line as chrome --allow-file-access-from-files

Loading these files through templateUrl will, however, make your user wait until they load to see the directive. If you want to have the template load with the first page, you can include it as part of the page in a script tag, like so:

<script type='text/ng-template' id='helloTemplateInline.html'>
  <div>Hi there</div>
</script>

The id attribute here is important, as this is the URL key that Angular uses to store the template. You’ll use this id later in your directive’s templateUrl to specify which template to insert.

This version will load just fine without a server, as no XMLHttpRequest is necessary to fetch the content.

Finally, you could load the templates yourself over $http or another mechanism and then set them directly in the object Angular uses called the $templateCache. We want this template available in the cache before the directives run, so we’ll call it via a run function on our module.

var appModule = angular.module('app', []);
appModule.run(function($templateCache) {
  $templateCache.put('helloTemplateCached.html', '<div>Hi there</div>');
});
appModule.directive('hello', function() {
  return {
    restrict: 'E',
    templateUrl: 'helloTemplateCached.html',
    replace: true
  };
});

You would likely want to do this in production only as a technique to reduce the number of GET requests required. You’d run a script to concatenate all the templates into a single file, and load it in a new module that you then reference from your main application module.
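Such a concatenation script might look roughly like the sketch below: for each template file, emit a $templateCache.put(...) call inside a run block of a dedicated module. The module name and file names here are made up for illustration:

```javascript
// Sketch of a build step: wrap each template's HTML in a
// $templateCache.put(...) call inside a hypothetical 'templates' module.
function buildTemplateModule(moduleName, templates) {
  var puts = Object.keys(templates).map(function(url) {
    var html = templates[url].replace(/'/g, "\\'"); // escape single quotes
    return "  $templateCache.put('" + url + "', '" + html + "');";
  });
  return "angular.module('" + moduleName + "', [])" +
         ".run(function($templateCache) {\n" +
         puts.join('\n') + "\n});";
}

var js = buildTemplateModule('templates', {
  'helloTemplateCached.html': '<div>Hi there</div>'
});
console.log(js);
```

Your main application module would then list 'templates' as a dependency, and every templateUrl lookup would hit the cache instead of the network.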


In addition to replacing or appending the content, you can also move the original content within the new template through the transclude property. When set to true, the directive will delete the original content, but make it available for reinsertion within your template through a directive called ng-transclude.

We could change our example to use transclusion:

appModule.directive('hello', function() {
  return {
    template: '<div>Hi there <span ng-transclude></span></div>',
    transclude: true
  };
});

applying it as:

<div hello>Bob</div>

We would see:

“Hi there Bob.”

Compile and Link Functions

While inserting templates is useful, the really interesting work of any directive happens in its compile or its link function.

The compile and link functions are named after the two phases Angular uses to create the live view for your application. Let’s take a high-level view of Angular’s initialization process, in order:

Script loads

Angular loads and looks for the ng-app directive to find the application boundaries.

Compile phase

In this phase, Angular walks the DOM to identify all the registered directives in the template. For each directive, it then transforms the DOM based on the directive’s rules (template, replace, transclude, and so on), and calls the compile function if it exists. The result is a compiled template function, which will invoke the link functions collected from all of the directives.

Link phase

To make the view dynamic, Angular then runs a link function for each directive. The link functions typically create listeners on the DOM or the model. These listeners keep the view and the model in sync at all times.

So we’ve got the compile phase, which deals with transforming the template, and the link phase, which deals with modifying the data in the view. Along these lines, the primary difference between the compile and link functions in directives is that compile functions deal with transforming the template itself, and link functions deal with making a dynamic connection between model and view. It is in this second phase that scopes are attached to the compiled link functions, and the directive becomes live through data binding.

These two phases are separate for performance reasons. Compile functions execute only once in the compile phase, whereas link functions are executed many times, once for each instance of the directive. For example, let’s say you use ng-repeat over your directive. You don’t want to call compile, which causes a DOM walk, on each ng-repeat iteration. Instead, you want to compile once, then link.

While you should certainly learn the differences between compile and link and the capabilities of each, the majority of directives you’ll need to write will not need to transform the template; you’ll write mostly link functions.

Let’s take a look at the syntax for each of these again to compare. For compile, we have:

compile: function compile(tElement, tAttrs, transclude) {
  return {
    pre: function preLink(scope, iElement, iAttrs, controller) { ... },
    post: function postLink(scope, iElement, iAttrs, controller) { ... }
  };
}

And for link, it is:

link: function postLink(scope, iElement, iAttrs) { ... }

Notice that one difference here is that the link function gets access to a scope but compile does not. This is because during the compile phase, the scope doesn’t exist yet. You do, however, have the ability to return link functions from the compile function. These link functions do have access to the scope.

Notice also that both compile and link receive a reference to their DOM element and the list of attributes for that element. The difference here is that the compile function receives the template element and attributes from the template, and thus gets the t prefix. The link function receives them from the view instances created from the template, and thus gets the i prefix.

This distinction only matters when the directive is within some other directive that makes copies of the template. The ng-repeat directive is a good example.

<div ng-repeat='thing in things'>
  <my-widget config='thing'></my-widget>
</div>

Here, the compile function will be called exactly once, but the link function will be called once per copy of my-widget, equal to the number of elements in things. So, if my-widget needs to modify something common to all copies (instances) of my-widget, the right place to do this, for efficiency’s sake, is in a compile function.
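The call-count difference can be sketched outside Angular. This is a simplified model of the compile/link relationship, not Angular's internals:

```javascript
// compile runs once per template; the link function it returns runs
// once per instance, e.g. once per item stamped out by ng-repeat.
var compileCount = 0;
var linkCount = 0;

function compile(/* tElement, tAttrs */) {
  compileCount++;               // template-wide, one-time work goes here
  return function link(/* scope, iElement, iAttrs */) {
    linkCount++;                // per-instance work goes here
  };
}

var link = compile();
var things = ['a', 'b', 'c'];   // three repeated instances
things.forEach(function() { link(); });

console.log(compileCount, linkCount); // 1 3
```

Anything done inside compile is paid for once; anything done inside link is paid for per instance, which is the whole point of the split.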

You will also notice that the compile function receives a transclude function as a property. Here, you have an opportunity to write a function that programmatically transcludes content for situations where the simple template-based transclusion won’t suffice.

Lastly, compile can return both a preLink and a postLink function, whereas link specifies only a postLink function. preLink, as its name implies, runs after the compile phase, but before directives on the child elements are linked. Similarly, postLink runs after all the child element directives are linked. This means that if you need to change the DOM structure, you should do so in postLink. Doing it in preLink will confuse the attachment process and cause an error.


You will often want to access a scope from your directive to watch model values and make UI updates when they change, and to notify Angular when external events cause the model to change. This is most common when you’re wrapping some non-Angular component from jQuery, Closure, or another library, or implementing simple DOM events. You may also want to evaluate Angular expressions passed into your directive as attributes.

When you want a scope for one of these reasons, you have three options for the type of scope you’ll get:

1. The existing scope from your directive’s DOM element.

2. A new scope you create that inherits from your enclosing controller’s scope. Here, you’ll have the ability to read all the values in the scopes above this one in the tree. This scope will be shared with any other directives on your DOM element that request this kind of scope and can be used to communicate with them.

3. An isolate scope that inherits no model properties from its parent. You’ll want to use this option when you need to isolate the operation of this directive from the parent scope when creating reusable components.

You can create these scope configurations with the following syntax:
Scope Type –> Syntax

  • existing scope –> scope: false (this is the default if unspecified)
  • new scope –> scope: true
  • isolate scope –> scope: { /* attribute names and binding style */ }

When you create an isolate scope, you don’t have access to anything in the parent scope’s model by default. You can, however, specify that you want specific attributes passed into your directive. You can think of these attribute names as parameters to the function.

Note that while isolate scopes don’t inherit model properties, they are still children of their parent scope. Like all other scopes, they have a $parent property that references their parent.

You can pass specific attributes from the parent scope to the isolate scope by passing a map of directive attribute names. There are three possible ways to pass data to and from the parent scope. We call these different ways of passing data “binding strategies.” You can also, optionally, specify a local alias for the attribute name. The syntax without aliases is in the following form:

scope: { attributeName1: 'BINDING_STRATEGY',
         attributeName2: 'BINDING_STRATEGY', … }

With aliases, the form is:

scope: { attributeAlias: 'BINDING_STRATEGY' + 'templateAttributeName', … }

The binding strategies are defined by symbols in Table 6-4:
Table 6-4. Binding strategies
Symbol –> Meaning

  • @ –> Pass this attribute as a string. You can also data bind values from enclosing scopes by using interpolation {{}} in the attribute value.
  • = –> Data bind this property with a property in your directive’s parent scope.
  • & –> Pass in a function from the parent scope to be called later.

These are fairly abstract concepts, so let’s look at some variations on a concrete example to illustrate. Let’s say that we want to create an expander directive that shows a title bar that expands to display extra content when clicked. It would look like Figure 6-2 when closed.



We would write it as follows:

<div ng-controller='SomeController'>
  <expander class='expander' expander-title='title'>
    {{text}}
  </expander>
</div>

The values for title (Click me to expand) and text (Hi there folks…), come from the enclosing scope. We could set this up with a controller like so:

function SomeController($scope) {
  $scope.title = 'Click me to expand';
  $scope.text = 'Hi there folks, I am the content ' +
    'that was hidden but is now shown.';
}

We can then write this directive as:

angular.module('expanderModule', [])
  .directive('expander', function() {
    return {
      restrict: 'EA',
      replace: true,
      transclude: true,
      scope: { title: '=expanderTitle' },
      template: '<div>' +
        '<div class="title" ng-click="toggle()">{{title}}</div>' +
        '<div class="body" ng-show="showMe" ng-transclude></div>' +
        '</div>',
      link: function(scope, element, attrs) {
        scope.showMe = false;
        scope.toggle = function toggle() {
          scope.showMe = !scope.showMe;
        };
      }
    };
  });

And for styling, we’d do something like this:

.expander {
  border: 1px solid black;
  width: 250px;
}
.expander > .title {
  background-color: black;
  color: white;
  padding: .1em .3em;
  cursor: pointer;
}
.expander > .body {
  padding: .1em .3em;
}

Let’s look at what each option in the directive is doing for us, in Table 6-5.
Table 6-5. Functions of elements
Option –> Description

  • restrict: 'EA' –> Invoke this directive as either an element or attribute. That is, <expander…>…</expander> and <div expander…>…</div> are equivalent.
  • replace: true –> Replace the original element with the template we provide.
  • transclude: true –> Move the original element’s content to another location in the provided template.
  • scope: { title: '=expanderTitle' } –> Create a local scope property called title that is data bound to a parent-scope property declared in the expander-title attribute. Here, we’re renaming expanderTitle as title for convenience. We could have written scope: { expanderTitle: '=' } and referred to it as expanderTitle within our template instead. But in case other directives also have a title attribute, it makes sense to disambiguate our title in the API and just rename it for local use. Also notice here that the naming uses the same camel-case expansion as the directive names themselves do.
  • template: '<div>' + … –> Declare the template to be inserted for this directive. Note that we’re using ng-click and ng-show to show/hide ourselves and ng-transclude to declare where the original content will go. Also note that transcluded content gets access to the parent scope, not the scope of the directive enclosing it.
  • link: … –> Set up the showMe model to track the expander’s open/closed state and define the toggle() function to be called when users click on the title div.

If we think it would make more sense to define the expander title in the template rather than in the model, we can use the string-style attribute passing denoted by an @ symbol in the scope declaration, like this:
scope: { title: '@expanderTitle' },

In the template we can achieve the same effect with:

<expander class='expander' expander-title='Click me to expand'>

Note that with this @ strategy we could still data bind the title to our controller’s scope by using interpolation:

<expander class='expander' expander-title='{{title}}'>

Manipulating DOM Elements

The iElement and tElement arguments passed to the directive’s link and compile functions are wrapped references to the native DOM element. If you have loaded the jQuery library, these are the jQuery elements you’re already used to working with.

If you’re not using jQuery, the elements are inside an Angular-native wrapper called jqLite. This API has the subset of jQuery that we need to create everything in Angular. For many applications, you can do everything you need with this API alone.

If you need direct access to the raw DOM element, you can get it by accessing the first element of the object with element[0].

You can see the full list of supported APIs in the Angular docs for angular.element() —the function you’d use to create jqLite-wrapped DOM elements yourself. It includes functions like addClass(), bind(), find(), toggleClass(), and so on. Again, these are all the most useful core functions from jQuery, but with a much smaller code footprint.

In addition to the jQuery APIs, elements also have Angular-specific functions. These exist whether or not you’re using the full jQuery library.

Table 6-6. Angular-specific functions on an element
Function –> Description

  • controller(name) –> When you need to communicate directly with a controller, this function returns the controller attached to the element. If none exists for this element, it walks up the DOM and finds the nearest parent controller instead. The name parameter is optional and is used to specify the name of another directive on this same element. If provided, it will return the controller from that directive. The name should be in the camel-case format as with all directives. That is, ngModel instead of ng-model.
  • injector() –> Gets the injector for the current element or its parent. This allows you to ask for dependencies defined for the modules in these elements.
  • scope() –> Returns the scope of the current element or its nearest parent.
  • inheritedData() –> As with the jQuery function data(), inheritedData() sets and gets data on an element in a leak-proof way. In addition to getting data from the current element, it will also walk up the DOM to find a value.

As an example, let’s re-implement the previous expander example without the help of ng-show and ng-click. It would look like the following:

angular.module('expanderModule', [])
  .directive('expander2', function() {
    return {
      restrict: 'EA',
      replace: true,
      transclude: true,
      scope: {
        title: '=expanderTitle'
      },
      template: '<div>'
          + '<div class="title">{{title}}</div>'
          + '<div class="body closed" ng-transclude></div>'
          + '</div>',
      link: function(scope, element, attrs) {
        var titleElement = angular.element(element.children().eq(0));
        var bodyElement = angular.element(element.children().eq(1));
        titleElement.bind('click', toggle);
        function toggle() {
          bodyElement.toggleClass('closed');
        }
      }
    };
  });

We would use it in our markup as before:

<div ng-controller='SomeController'>
  <expander2 class='expander' expander-title='title'>{{text}}</expander2>
</div>

We’ve removed the ng-click and ng-show directives from the template. Instead, to perform the desired action when users click on the expander title, we’ll create a jqLite element from the title element and bind the click event to it with a toggle() function as its callback. In toggle(), we’ll call toggleClass() on the expander body element to add or remove a class called closed, where we’d set the element to display: none with a class like this:

.closed {
  display: none;
}

When you have nested directives that need to communicate with each other, the way to do this is through a controller. A <menu> may need to know about the <menu-item> elements inside it so it can show and hide them appropriately. The same would be true for a <tab-set> knowing about its <tab> elements, or a <grid-view> knowing about its <grid-element> elements.

As previously shown, to create an API to communicate between directives, you can declare a controller as part of a directive with the controller property syntax:

controller: function controllerConstructor($scope, $element, $attrs, $transclude)

This controller function is dependency injected, so the parameters listed here, while potentially useful, are all optional—they can be listed in any order. They’re also only a subset of the services available.

Other directives can have this controller passed to them with the require property syntax. The full form of require looks like:

require: '^?directiveName'

Explanations of the require string can be found in Table 6-7.

Option –> Usage

  • directiveName –> This camel-cased name specifies which directive the controller should come from. So if our <my-menu-item> directive needs to find a controller on its parent <my-menu>, we’d write it as myMenu.
  • ^ –> By default, Angular gets the controller from the named directive on the same element. Adding this optional ^ symbol says to also walk up the DOM tree to find the directive. For the <my-menu> example, we’d need to add this symbol; the final string would be ^myMenu.
  • ? –> If the required controller is not found, Angular will throw an exception to tell you about the problem. Adding a ? symbol to the string says that this controller is optional and that an exception shouldn’t be thrown if not found. Though it sounds unlikely, if we wanted to let <my-menu-item>s be used without a <my-menu> container, we could add this for a final require string of ?^myMenu.
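To summarize Table 6-7, the require string is just a directive name plus optional ^ and ? flags. A toy parser makes the format concrete (illustrative only; this is not Angular’s internal code):

```javascript
// Hypothetical parser for require strings like '?^myMenu'.
function parseRequire(str) {
  return {
    name: str.replace(/[?^]/g, ''),         // camel-cased directive name
    optional: str.indexOf('?') !== -1,      // don't throw if the controller is missing
    searchParents: str.indexOf('^') !== -1  // walk up the DOM tree to find it
  };
}

console.log(parseRequire('?^myMenu'));
// { name: 'myMenu', optional: true, searchParents: true }
```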

As an example, let’s rewrite our expander directive to be used in a set called “accordion,” which ensures that when you open one expander, the others in the set automatically close.

First, let’s write the accordion directive that will do the coordination. We’ll add our controller constructor here with methods to do the coordination:

var appModule = angular.module('app', []);

appModule.directive('accordion', function() {
	return {
		restrict : 'EA',
		replace : true,
		transclude : true,
		template : '<div ng-transclude></div>',
		controller : function() {
			var expanders = [];
			this.gotOpened = function(selectedExpander) {
				angular.forEach(expanders, function(expander) {
					if (selectedExpander != expander) {
						expander.showMe = false;
					}
				});
			};
			this.addExpander = function(expander) {
				expanders.push(expander);
			};
		}
	};
});

appModule.directive('expander', function() {
	return {
		restrict : 'EA',
		replace : true,
		transclude : true,
		require : '^?accordion',
		scope : {
			title : '=expanderTitle'
		},
		template : '<div>'
				+ '<div class="title" ng-click="toggle()">{{title}}</div>'
				+ '<div class="body" ng-show="showMe" ng-transclude></div>'
				+ '</div>',
		link : function(scope, element, attrs, accordionController) {
			scope.showMe = false;
			if (accordionController) {
				accordionController.addExpander(scope);
			}
			scope.toggle = function toggle() {
				scope.showMe = !scope.showMe;
				if (accordionController) {
					accordionController.gotOpened(scope);
				}
			};
		}
	};
});

with an appropriate controller, of course:

function SomeController($scope) {
	$scope.expanders = [{
				title : 'Click me to expand',
				text : 'Hi there folks, I am the content that was hidden but is now shown.'
			}, {
				title : 'Click this',
				text : 'I am even better text than you have seen previously'
			}, {
				title : 'No, click me!',
				text : 'I am text that should be seen before seeing other texts'
			}];
}

Moving On

As we’ve seen, directives let us extend HTML’s syntax and turn many application tasks into a do-what-I-mean declaration. Directives make reuse a breeze—from configuring your app, like with ng-model and ng-controller, to doing template tasks like ng-repeat and ng-view, to sky’s-the-limit reusable components such as data grids, bubble charts, tool-tips, and tabs.

AngularJS Book: Chapter 4: Analyzing an AngularJS App

We talked about some of the commonly used features of AngularJS in Chapter 2, and then dived into how your development should be structured in Chapter 3. Rather than continuing with similarly deep dives into individual features, Chapter 4 will look at a small, real-life application. We will get a feel for how all the pieces that we have been talking about (with toy examples) actually come together to form a real, working application.

Rather than putting the entire application front and center, we will introduce one portion of it at a time, then talk about the interesting and relevant parts, slowly building up to the entire application by the end of this chapter.

The Application

GutHub is a simple recipe management application, which we designed both to store our super tasty recipes and to show off various pieces of an AngularJS application. The application:
• has a two-column layout.
• has a navigation bar on the left.
• allows you to create a new recipe.
• allows you to browse the list of existing recipes.

The main view is on the right, which gets changed—depending on the URL—to either the list of recipes, the details of a single recipe, or an editable form to add to or edit existing recipes. We can see a screenshot of the application in Figure 4-1.


Figure 4-1. GutHub: A simple recipe management application
This entire application is available on our GitHub repo in chapter4/guthub.

Relationship Between Model, Controller, and Template

Before we dive into the application, let us spend a paragraph or two talking about how the three pieces of our application work together, and how to think about each of them.

The model is the truth. Just repeat that sentence a few times. Your entire application is driven off the model—what views are displayed, what to display in the views, what gets saved, everything! So spend some extra time thinking about your model, what the attributes of your object are going to be, and how you are going to retrieve it from the
server and save it. The view will get updated automatically through the use of data bindings, so the focus should always be on the model.

The controller holds the business logic: how you retrieve your model, what kinds of operations you perform on it, what kind of information your view needs from the model, and how you transform the model to get what you want. The responsibility of validation, making server calls, bootstrapping your view with the right data, and mostly everything
in between belongs on your controller.

Finally, the template represents how your model will be displayed, and how the user will interact with your application. It should mostly be restricted to the following:

  • Displaying your model
  • Defining the ways the user can interact with your application (clicks, input fields, and so on)
  • Styling the app, and figuring out how and when some elements are displayed (show or hide, hover, and so on)
  • Filtering and formatting your data (both input and output)

Realize that the template in Angular is not necessarily the view part of the Model View Controller design paradigm. Instead, the view is the compiled version of the template that gets executed. It is a combination of the template and the model. What should not go into the template is any kind of business logic or behavior; this information should be restricted to the controller. Keeping the template simple allows a proper separation of concerns, and also ensures that you can get the most code under test using only unit tests. Templates will have to be tested with scenario tests.

But, you might ask, where does DOM manipulation go? DOM manipulation doesn’t really go into the controllers or the template. It goes into AngularJS directives (but can sometimes be used via services, which house DOM manipulation to avoid duplication of code). We’ll cover an example of that in our GutHub example as well.

Without further ado, let’s dive right in.

The Model

We are going to keep the model dead simple for this application. There are recipes. They’re about the only model object in this entire application. Everything else builds off of it.

Each recipe has the following properties:
• An ID if it is persisted to our server
• A name
• A short description
• Cooking instructions
• Whether it is a featured recipe or not
• An array of ingredients, each with an amount, a unit, and a name
That’s it. Dead simple. Everything in the app is based around this simple model. Here’s a sample recipe for you to devour (the same one referenced in Figure 4-1):

    "id": "1",
    "title": "Cookies",
    "description": "Delicious, crisp on the outside, chewy on the outside, oozing with chocolatey goodness cookies. The best kind",
    "ingredients": [
            "amount": "1",
            "amountUnits": "packet",
            "ingredientName": "Chips Ahoy"
    "instructions": "1. Go buy a packet of Chips Ahoy\n2. Heat it up in an oven\n3. Enjoy warm cookies\n4. Learn how to bake cookies from somewhere else"

We will go on to see how more complicated UI features can be built around this simple model.

Controllers, Directives, and Services, Oh My!

Now we finally get to sink our teeth into the meat of this delicious application. First, we will look at the directives and services code and talk a little bit about what it is doing, then we’ll take a look at the multiple controllers needed for this application.


'use strict';

var services = angular.module('', ['ngResource']);

services.factory('Recipe', ['$resource',
    function($resource) {
  return $resource('/recipes/:id', {id: '@id'});
}]);

services.factory('MultiRecipeLoader', ['Recipe', '$q',
    function(Recipe, $q) {
  return function() {
    var delay = $q.defer();
    Recipe.query(function(recipes) {
      delay.resolve(recipes);
    }, function() {
      delay.reject('Unable to fetch recipes');
    });
    return delay.promise;
  };
}]);

services.factory('RecipeLoader', ['Recipe', '$route', '$q',
    function(Recipe, $route, $q) {
  return function() {
    var delay = $q.defer();
    Recipe.get({id: $route.current.params.recipeId}, function(recipe) {
      delay.resolve(recipe);
    }, function() {
      delay.reject('Unable to fetch recipe ' + $route.current.params.recipeId);
    });
    return delay.promise;
  };
}]);

Let’s take a look at our services first. We touched upon services in “Organizing Dependencies with Modules” on page 33. Here, we’ll dig a little bit deeper.

In this file, we instantiate three AngularJS services.

There is a Recipe service, which returns what we call an Angular Resource. These are RESTful resources, which point at a RESTful server. The Angular Resource encapsulates the lower-level $http service, so that you can just deal with objects in your code.

With just that single line of code—return $resource—(and of course, a dependency on the module), we can now put Recipe as an argument in any of our controllers, and it will be injected into the controller. Furthermore, the Recipe service has the following methods built in:

• Recipe.get()
• Recipe.query()
• Recipe.remove()
• Recipe.delete()

NOTE: If you are going to use Recipe.delete, and want your application to
work in IE, you will have to call it like so: Recipe['delete'](). This is
because delete is a keyword in IE.

Of the previous methods, all but query work with a single recipe; query() returns an array of recipes by default.

The line of code that declares the resource—return $resource—also does a few more nice things for us:
1. Notice the :id in the URL specified for the RESTful resource. It basically says that when you make any query (say, Recipe.get()), if you pass in an object with an id field, then the value of that field will be added to the end of the URL. That is, calling Recipe.get({id: 15}) will make a call to /recipes/15.

2. What about that second object? The {id: '@id'}? Well, as they say, a line of code is worth a thousand explanations, so let’s take a simple example.
Say we have a recipe object, which has the necessary information already stored within it, including an id.

Then, we can save it by simply doing the following:

// Assuming existingRecipeObj has all the necessary fields,
// including id (say 13)
var recipe = new Recipe(existingRecipeObj);

This will make a POST request to /recipes/13. The '@id' tells it to pick the id field from its object and use that as the id parameter. It’s an added convenience that can save a few lines of code.
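The URL templating itself is simple string substitution. A toy version of the idea (invented for illustration; this is not Angular’s actual $resource implementation) might look like:

```javascript
// Hypothetical sketch of :param substitution in a URL template.
function buildUrl(template, params) {
  return template.replace(/:(\w+)/g, function(match, key) {
    return params && params[key] != null ? encodeURIComponent(params[key]) : '';
  });
}

console.log(buildUrl('/recipes/:id', { id: 15 })); // "/recipes/15"
```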

There are two other services in apps/scripts/services/services.js. Both of them are Loaders; one loads a single recipe (RecipeLoader), and the other loads all recipes (MultiRecipeLoader). These are used when we hook up our routes. At their cores, both of them behave very similarly. The flow of both these services is as follows:
1. Create a $q deferred object (these are AngularJS promises, used for chaining asynchronous functions).
2. Make a call to the server.
3. Resolve the deferred object when the server returns the value.
4. Return the promise that will be used by the routing mechanism of AngularJS.
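The four steps above can be sketched without Angular at all. Below is a minimal, synchronous toy deferred (invented for illustration; $q’s real promises are asynchronous and far more complete):

```javascript
// Toy deferred: resolve()/reject() settle it, promise.then() registers callbacks.
function defer() {
  var handlers = [];
  var outcome = null;

  function flush() {
    if (!outcome) return;
    while (handlers.length) {
      var h = handlers.shift();
      if (outcome.ok && h.onSuccess) h.onSuccess(outcome.value);
      if (!outcome.ok && h.onError) h.onError(outcome.value);
    }
  }

  return {
    resolve: function(value) { outcome = { ok: true, value: value }; flush(); },
    reject: function(reason) { outcome = { ok: false, value: reason }; flush(); },
    promise: {
      then: function(onSuccess, onError) {
        handlers.push({ onSuccess: onSuccess, onError: onError });
        flush();
      }
    }
  };
}

// Same shape as the loaders: create, register a consumer, resolve, done.
var delay = defer();
var result = null;
delay.promise.then(function(recipes) { result = recipes; });
delay.resolve(['cookies']); // as if the server just responded
console.log(result); // ['cookies']
```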

Promises in an AngularJS land

A promise is an interface that deals with objects that are returned or get filled in at a future point in time (basically, asynchronous actions). At its core, a promise is an object with a then() function.

To showcase the advantages, let us take an example where we need to fetch the current profile of a user:

var currentProfile = null;
var username = 'something';

fetchServerConfig(function(serverConfig) {
  fetchUserProfiles(serverConfig.USER_PROFILES, username,
                   function(profiles) {
                     currentProfile = profiles.currentProfile;
                   });
});

There are a few problems with this approach:
1. The resultant code is an indentation nightmare, especially if you have to chain multiple calls.
2. Errors reported in between callbacks and functions have a tendency to be lost, unless you handle them manually at each step.
3. You have to encapsulate the logic of what you want to do with currentProfile in the innermost callback, either directly, or through a separate function.

Promises solve these issues. Before we go into the how, let’s take a look at the same problem implemented with promises:

var currentProfile =
fetchServerConfig().then(function(serverConfig) {
  return fetchUserProfiles(serverConfig.USER_PROFILES, username);
}).then(function(profiles) {
  return profiles.currentProfile;
}, function(error) {
  // Handle errors in either fetchServerConfig or
  // fetchUserProfiles here
});

Notice the advantages:
1. You can chain function calls, so you don’t get into an indentation nightmare.
2. You are assured that the previous function call is finished before the next function in the chain is called.
3. Each then() call takes two arguments (both functions). The first one is the success callback and the second one is the error handler.
4. In case of errors in the chain, the error will get propagated through to the rest of the error handlers. So any error in any of the callbacks can be handled at the end.

What about resolve and reject, you ask? Well, deferred in AngularJS is a way of creating promises. Calling resolve on it fulfills the promise (calls the success handler), while calling reject on it calls the error handler of the promise.

We’ll come back to this again when we hook up our routes.


We can now move to the directives we will be using in our application. There will be two directives in the app:

butterbar

This directive will be shown and hidden when the routes change and while the page is still loading information. It hooks into the route-changing mechanism, and automatically hides and shows whatever is within its tag, based on the state of the page.

focus

The focus directive is used to ensure that specific input fields (or elements) have the focus.

Let’s look at the code:

'use strict';

var directives = angular.module('guthub.directives', []);

directives.directive('butterbar', ['$rootScope',
    function($rootScope) {
  return {
    link: function(scope, element, attrs) {
      element.addClass('hide');

      $rootScope.$on('$routeChangeStart', function() {
        element.removeClass('hide');
      });

      $rootScope.$on('$routeChangeSuccess', function() {
        element.addClass('hide');
      });
    }
  };
}]);

directives.directive('focus',
    function() {
  return {
    link: function(scope, element, attrs) {
      element[0].focus();
    }
  };
});

The preceding directive returns an object with a single property, link. We will dive deeper into how you can create your own directives in Chapter 6, but for now, all you need to know is the following:

1. Directives go through a two-step process. In the first step (the compile phase), all directives attached to a DOM element are found, and then processed. Any DOM manipulation also happens during the compile step. At the end of this phase, a linking function is produced.

2. In the second step, the link phase (the phase we used previously), the preceding DOM template produced is linked to the scope. Also, any watchers or listeners are added as needed, resulting in a live binding between the scope and the element.

Thus, anything related to the scope happens in the linking phase.
So what’s happening in our directive? Let’s take a look, shall we?
The butterbar directive can be used as follows:

<div butterbar>My loading text...</div>

It basically hides the element right up front, then adds two watches on the root scope. Every time a route change begins, it shows the element (by changing its class), and every time the route has successfully finished changing, it hides the butterbar again.
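Stripped of Angular, the butterbar logic is just event subscription plus class toggling. A framework-free sketch of that flow (the $on/$broadcast names here mimic, but are not, Angular’s scope API):

```javascript
// Toy pub/sub standing in for $rootScope.$on and route-change events.
var listeners = {};
function $on(event, fn) { (listeners[event] = listeners[event] || []).push(fn); }
function $broadcast(event) { (listeners[event] || []).forEach(function(fn) { fn(); }); }

// The element's class list; it starts hidden, like the real directive.
var classes = ['hide'];

$on('$routeChangeStart', function() {    // route starts changing: show it
  classes = classes.filter(function(c) { return c !== 'hide'; });
});
$on('$routeChangeSuccess', function() {  // route finished: hide it again
  if (classes.indexOf('hide') === -1) classes.push('hide');
});

$broadcast('$routeChangeStart');
console.log(classes);   // visible while loading: []
$broadcast('$routeChangeSuccess');
console.log(classes);   // hidden again: ['hide']
```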

Another interesting thing to note is how we inject the $rootScope into the directive. All directives directly hook into the AngularJS dependency injection system, so you can inject your services and whatever else you need into them.

The final thing of note is the API for working with the element. jQuery veterans will be glad to know that it follows a jQuery-like syntax (addClass, removeClass). AngularJS implements a subset of the calls of jQuery so that jQuery is an optional dependency for any AngularJS project. In case you do end up using the full jQuery library in your project, you should know that AngularJS uses that instead of the jqLite implementation it has built in.

The second directive (focus) is much simpler. It just calls the focus() method on the current element. You can call it by adding the focus attribute on any input element, like so:

<input type="text" focus>

When the page loads, that element immediately gets the focus.


With directives and services covered, we can finally get into the controllers, of which we have five. All these controllers are located in a single file (app/scripts/controllers/controllers.js), but we’ll go over them one at a time. Let’s go over the first controller, which is the List Controller, responsible for displaying the list of all recipes in the system.

app.controller('ListCtrl', ['$scope', 'recipes',
    function($scope, recipes) {
  $ = recipes;
}]);

Notice one very important thing with the List Controller: in the constructor, it does no work of going to the server and fetching the recipes. Instead, it is handed a list of recipes already fetched. You might wonder how that’s done. We’ll answer that in the routing section of the chapter, but it has to do with the MultiRecipeLoader service we saw
previously. Just keep that in the back of your mind.

With the List Controller under our belts, the other controllers are pretty similar in nature, but we will still cover them one by one to point out the interesting aspects:

app.controller('ViewCtrl', ['$scope', '$location', 'recipe',
    function($scope, $location, recipe) {
  $scope.recipe = recipe;

  $scope.edit = function() {
    $location.path('/edit/' +;
  };
}]);

The interesting aspect about the View Controller is the edit function it exposes on the scope. Instead of showing and hiding fields or something similar, this controller relies on AngularJS to do the heavy lifting (as should you!). The edit function simply changes the URL to the edit equivalent for the recipe, and lo and behold, AngularJS does the rest. AngularJS recognizes that the URL has changed and loads the corresponding view (which is the same recipe in edit mode). Voila!

Next, let’s take a look at the Edit Controller:

app.controller('EditCtrl', ['$scope', '$location', 'recipe',
    function($scope, $location, recipe) {
  $scope.recipe = recipe;

  $ = function() {
    $scope.recipe.$save(function(recipe) {
      $location.path('/view/' +;
    });
  };

  $scope.remove = function() {
    delete $scope.recipe;
    $location.path('/');
  };
}]);

What’s new here are the save and remove methods that the Edit Controller exposes on the scope. The save function on the scope does what you would expect it to. It saves the current recipe, and once it is done saving, redirects the user to the view screen with the same recipe. The callback function is useful in these scenarios to execute or perform some action once you are done.

There are two ways we could have saved the recipe here. One is to do it as shown in the code, by executing $scope.recipe.$save(). This is only possible because recipe is a resource object that was returned by the RecipeLoader in the first place.

Otherwise, the way you would save the recipe would be:$scope.recipe);

The remove function is also straightforward, in that it removes the recipe from the scope, and redirects users to the main landing page. Note that it doesn’t actually remove it from our server, though it shouldn’t be very hard to make that additional call.

Next, we have the New Controller:

app.controller('NewCtrl', ['$scope', '$location', 'Recipe',
    function($scope, $location, Recipe) {
  $scope.recipe = new Recipe({
    ingredients: [{}]

  $ = function() {
    $scope.recipe.$save(function(recipe) {
      $location.path('/view/' +;
    });
  };
}]);

The New Controller is almost exactly the same as the Edit Controller. In fact, you could look at combining the two into a single controller as an exercise. The only major difference is that the New Controller creates a new recipe (which is a resource, so that it has the save function) as the first step. Everything else remains unchanged.

Finally, we have the Ingredients Controller. This is a special controller, but before we get into why or how, let’s take a look:

app.controller('IngredientsCtrl', ['$scope',
    function($scope) {
  $scope.addIngredient = function() {
    var ingredients = $scope.recipe.ingredients;
    ingredients[ingredients.length] = {};
  };

  $scope.removeIngredient = function(index) {
    $scope.recipe.ingredients.splice(index, 1);
  };
}]);

All the other controllers that we saw so far are linked to particular views on the UI. But the Ingredients Controller is special. It’s a child controller that is used on the edit pages to encapsulate certain functionality that is not needed at the higher level. The interesting thing to note is that since it is a child controller, it inherits the scope from the parent controller (the Edit/New controllers in this case). Thus, it has access to the $scope.recipe from the parent.
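The scope inheritance at work here is prototypal, just like plain JavaScript objects. A minimal sketch of why the child controller can read the parent’s recipe (plain JS, no Angular; the object shapes are invented for illustration):

```javascript
// Parent scope (as set up by the Edit/New controller).
var parentScope = { recipe: { title: 'Cookies', ingredients: [] } };

// Child scope: Angular creates it roughly like this, via prototypal inheritance.
var childScope = Object.create(parentScope);

// Reads walk up the prototype chain, so the child sees the parent's recipe.
console.log(childScope.recipe.title); // "Cookies"

// Mutating the shared recipe object is visible from both scopes.
childScope.recipe.ingredients.push({ ingredientName: 'Chips Ahoy' });
console.log(parentScope.recipe.ingredients.length); // 1
```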

The controller itself does nothing too interesting or unique. It just adds a new ingredient to the array of ingredients present on the recipe, or removes a specific ingredient from the list of ingredients on the recipe.

With that, we finish the last of the controllers. The only JavaScript piece that remains is how the routing is set up:

var app = angular.module('guthub',
    ['guthub.directives', '']);

app.config(['$routeProvider', function($routeProvider) {
  $routeProvider.
      when('/', {
        controller: 'ListCtrl',
        resolve: {
          recipes: ['MultiRecipeLoader', function(MultiRecipeLoader) {
            return MultiRecipeLoader();
          }]
        },
        templateUrl: '/views/list.html'
      }).when('/edit/:recipeId', {
        controller: 'EditCtrl',
        resolve: {
          recipe: ['RecipeLoader', function(RecipeLoader) {
            return RecipeLoader();
          }]
        },
        templateUrl: '/views/recipeForm.html'
      }).when('/view/:recipeId', {
        controller: 'ViewCtrl',
        resolve: {
          recipe: ['RecipeLoader', function(RecipeLoader) {
            return RecipeLoader();
          }]
        },
        templateUrl: '/views/viewForm.html'
      }).when('/new', {
        controller: 'NewCtrl',
        templateUrl: '/views/recipeForm.html'
      }).otherwise({redirectTo: '/'});
}]);

As promised, we finally reached the point where the resolve functions are used. The previous piece of code sets up the GutHub AngularJS module, as well as the routes and templates involved in the application.

It hooks up the directives and the services that we created, and then specifies the various routes we will have in our application.

For each route, we specify the URL, the controller that backs it up, the template to load, and finally (optionally), a resolve object.

This resolve object tells AngularJS that each of these resolve keys needs to be satisfied before the route can be displayed to the user. For us, we want to load all the recipes, or an individual recipe, and make sure we have the server response before we display the page. So we tell the route provider that we have recipes (or a recipe), and then tell it how
to fetch it.

This links back to the two services we defined in the first section, the MultiRecipeLoader and the RecipeLoader. If the resolve function returns an AngularJS promise, then AngularJS is smart enough to wait for the promise to get resolved before it proceeds. That means that it will wait until the server responds.

The results are then passed into the constructor as arguments (with the names of the parameters being the object’s fields).

Finally, the otherwise function denotes the default URL redirect that needs to happen when no routes are matched.

You might notice that both the Edit and the New controller routes lead
to the same template URL, views/recipeForm.html. What’s happening
here? We reused the edit template. Depending on which controller is
associated, different elements are shown in the edit recipe template.

With this done, we can now move on to the templates, how these controllers hook up to them, and manage what is shown to the end user.

The Templates

Let us start by taking a look at the outermost, main template, which is the index.html. This is the base of our single-page application, and all the other views are loaded within the context of this template:

<!DOCTYPE html>
<html lang="en" ng-app="guthub">
<head>
  <title>GutHub - Create and Share</title>
  <script src="scripts/vendor/angular.min.js"></script>
  <script src="scripts/vendor/angular-resource.min.js"></script>
  <script src="scripts/directives/directives.js"></script>
  <script src="scripts/services/services.js"></script>
  <script src="scripts/controllers/controllers.js"></script>
  <link href="styles/bootstrap.css" rel="stylesheet">
  <link href="styles/guthub.css" rel="stylesheet">
</head>
<body>
  <div butterbar>Loading...</div>

  <div class="container-fluid">
    <div class="row-fluid">
      <div class="span2">
        <div id="focus"><a href="/#/new">New Recipe</a></div>
        <div><a href="/#/">Recipe List</a></div>
      </div>

      <div class="span10">
        <div ng-view></div>
      </div>
    </div>
  </div>
</body>
</html>

There are five interesting elements to note in the preceding template, most of which you already encountered in Chapter 2. Let’s go over them one by one:


ng-app

We set the ng-app module to be guthub. This is the same module name we gave in our angular.module function. This is how AngularJS knows to hook the two together.

script tag

This is where AngularJS is loaded for the application. It has to be done before all your JS files that use AngularJS are loaded. Ideally, this should be done at the bottom of the body.


butterbar

Aha! Our first usage of a custom directive. When we defined our butterbar directive before, we wanted to use it on an element so that it would be shown when the routes were changing, and hidden on success. The highlighted element’s text is shown (a very boring “Loading…” in this case) as needed.

Link href Values

The hrefs link to the various pages of our single-page application. Notice how they use the # character to ensure that the page doesn’t reload, and are relative to the current page. AngularJS watches the URL (as long as the page isn’t reloaded), and works it magic (or actually, the very boring route management we defined as part of our routes) when needed.


ng-view

This is where the last piece of magic happens. In our controllers section, we defined our routes. As part of that definition, we denoted the URL for each route, the controller associated with the route, and a template. When AngularJS detects a route change, it loads the template, attaches the controller to it, and replaces the ng-view with the contents of the template.

One thing that is conspicuous in its absence is the ng-controller tag. Most applications would have some sort of a MainController associated with the outer template. Its most common location would be on the body tag. In this case, we didn’t use it, because the entire outer template has no AngularJS content that needs to refer to a scope.

Now let’s look at the individual templates associated with each controller, starting with the “list of recipes” template:


<h3>Recipe List</h3>
<ul class="recipes">
  <li ng-repeat="recipe in recipes">
    <div><a ng-href="/#/view/{{}}">{{recipe.title}}</a></div>
  </li>
</ul>

Really, it’s a very boring template. There are only two points of interest here. The first one is a very standard usage of the ng-repeat tag. It picks up all the recipes from the scope, and repeats over them.

The second is the usage of the ng-href tag instead of href. This is purely to avoid having a bad link during the time that AngularJS is loading up. The ng-href ensures that at no time is a malformed link presented to the user. Always use this whenever your URLs are dynamic instead of static.

Of course you might wonder: where is the controller? There is no ng-controller defined, and there really was no Main Controller defined. This is where route mapping comes into play. If you remember (or peek back a few pages), the / route redirected to the list template and had the List Controller associated with it. Thus, when any references are made to variables and the like, it is within the scope of the List Controller.

Now we move on to something with a little bit more meat: the view form.



<h2 ng-class="{featured: recipe.featured}">{{recipe.title}}
  <i class="icon-star" ng-show="recipe.featured"></i>
</h2>

<span ng-show="recipe.ingredients.length == 0">No Ingredients</span>
<ul class="unstyled" ng-hide="recipe.ingredients.length == 0">
  <li ng-repeat="ingredient in recipe.ingredients">
    {{ingredient.amount}} {{ingredient.amountUnits}} {{ingredient.ingredientName}}
  </li>
</ul>

<form ng-submit="edit()" class="form-horizontal">
  <div class="form-actions">
    <button class="btn btn-primary">Edit</button>
  </div>
</form>

Another nice, small, contained template. We’ll draw your attention to three things, though not necessarily in the order they are shown!

The first is the pretty standard ng-repeat. The recipes are again in the scope of the View Controller, which is loaded by the resolve function before this page is displayed to the user. This ensures that the page is not in a broken, unloaded state when the user sees it.

The next interesting usage is that of ng-show and ng-class to style the template. The ng-show tag has been added to the <i> tag, which is used to display a starred icon. Now, the starred icon is shown only when the recipe is a featured recipe (as denoted by the recipe.featured boolean value). Ideally, to ensure proper spacing, you would have another empty spacer icon, with an ng-hide directive on it, with the exact same AngularJS expression as shown in the ng-show. That is a very common usage, to display one thing and hide another on a given condition.

The ng-class is used to add a class to the <h2> tag (“featured” in this case) when the recipe is a featured recipe. That adds some special highlighting to make the title stand out even more.

The final thing to note is the ng-submit directive on the form. The directive states that the edit() function on the scope is called in case the form is submitted. The form submission happens when any button without an explicit function attached (in this case, the Edit button) is clicked. Again, AngularJS is smart enough to figure out the scope
that is being referred to (from the module, the route, and the controller) and call the right method at the right time.

Now we can move on to our final template (and possibly the most complicated one yet), the recipe form template.

<h2>Edit Recipe</h2>
<form name="recipeForm" ng-submit="save()" class="form-horizontal">
  <div class="control-group">
    <label class="control-label" for="title">Title:</label>
    <div class="controls">
      <input ng-model="recipe.title" class="input-xlarge" id="title" focus required>
    </div>
  </div>

  <div class="control-group">
    <label class="control-label" for="description">Description:</label>
    <div class="controls">
      <textarea ng-model="recipe.description" class="input-xlarge" id="description"></textarea>
    </div>
  </div>

  <div class="control-group">
    <label class="control-label" for="ingredients">Ingredients:</label>
    <div class="controls">
      <ul id="ingredients" class="unstyled" ng-controller="IngredientsCtrl">
        <li ng-repeat="ingredient in recipe.ingredients">
          <input ng-model="ingredient.amount" class="input-mini">
          <input ng-model="ingredient.amountUnits" class="input-small">
          <input ng-model="ingredient.ingredientName">
          <button type="button" class="btn btn-mini" ng-click="removeIngredient($index)"><i class="icon-minus-sign"></i> Delete</button>
          <button type="button" class="btn btn-mini" ng-show="$last" ng-click="addIngredient()"><i class="icon-plus-sign"></i> Add</button>
        </li>
      </ul>
    </div>
  </div>

  <div class="control-group">
    <label class="control-label" for="instructions">Instructions:</label>
    <div class="controls">
      <textarea ng-model="recipe.instructions" class="input-xxlarge" id="instructions"></textarea>
    </div>
  </div>

  <div class="form-actions">
    <button class="btn btn-primary" ng-disabled="recipeForm.$invalid">Save</button>
    <button type="button" ng-click="remove()" ng-show="!recipe.id" class="btn">Delete</button>
  </div>
</form>

Don’t panic. It looks like a lot of code, and it is a lot of code, but if you actually dig into it, it’s not very complicated. In fact, a lot of it is simple, repetitive boilerplate to show editable input fields for editing recipes:

• The focus directive is added on the very first input field (the title input field). This ensures that when the user navigates to this page, the title field has focus so the user can immediately start typing in the title.

• The ng-submit directive is used very similarly to the previous example, so we won’t dive into it much, other than to say that it saves the state of the recipe and signals the end of the editing process. It hooks up to the save() function in the Edit Controller.

• The ng-model directive is used to bind the various input boxes and text areas on the field to the model.

• One of the more interesting aspects on this page, and one we recommend you spend some time trying to understand, is the ng-controller tag on the ingredients list portion. Let’s take a minute to understand what is happening here.

We see a list of ingredients being displayed, and the container tag is associated with an ng-controller. That means that the whole <ul> tag is scoped to the Ingredients Controller. But what about the actual controller of this template, the Edit Controller?

As it turns out, the Ingredients Controller is created as a child controller of the Edit Controller, thereby inheriting the scope of Edit Controller. That is why it has access to the recipe object from the Edit Controller.

In addition, it adds the addIngredient() method, which is used by the highlighted ng-click and is accessible only within the scope of the <ul> tag. Why would you want to do this? It is the best way to separate your concerns. Why should the Edit Controller have an addIngredient() method, when 99% of the template doesn’t care about it? Child and nested controllers are awesome for such precise, contained tasks, and allow you to separate your business logic into more manageable chunks.
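To make the parent–child scope relationship concrete, here is a minimal sketch of what such an Ingredients Controller could look like. The method bodies are assumptions, not the book's exact code; the point is that in Angular 1.0.x a controller is just a function taking $scope, so we can even exercise it with a plain object standing in for the inherited scope:

```javascript
// Hypothetical child controller for the ingredients <ul>.
// 'recipe' is inherited from the parent (Edit Controller) scope.
function IngredientsCtrl($scope) {
  $scope.addIngredient = function() {
    // Append a blank row for the user to fill in
    $scope.recipe.ingredients.push({amount: '', amountUnits: '', ingredientName: ''});
  };
  $scope.removeIngredient = function(index) {
    // Delete the row the clicked button belongs to ($index in the repeater)
    $scope.recipe.ingredients.splice(index, 1);
  };
}

// Exercise it with a plain object standing in for the inherited scope:
var scope = {recipe: {ingredients: [{ingredientName: 'flour'}]}};
IngredientsCtrl(scope);
scope.addIngredient();     // appends a blank ingredient row
scope.removeIngredient(0); // deletes the 'flour' row
```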

• The other directive that we want to cover in some depth here is the form validation machinery. It is easy enough in the AngularJS world to mark a particular form field as required: simply add the required attribute to the input (as is the case in the preceding code). But now what do you do with it?
For that, we jump down to the Save button. Notice the ng-disabled directive on it, which evaluates recipeForm.$invalid. The recipeForm is the name of the form that we have declared. AngularJS adds some special variables to it ($valid and $invalid being just two) that allow you to control the form elements. AngularJS looks at all the required elements and updates these special variables accordingly. So if our recipe title is empty, recipeForm.$invalid gets set to true (and $valid to false), and our Save button is instantly disabled.

We can also set the max and min length of an input, as well as a Regex pattern against which an input field will be validated. Furthermore, there are advanced usages that can be applied to show certain error messages only when specific conditions are met. Let us diverge for a bit with a small example:

<form name="myForm">
  User name:
  <input type="text" name="userName" ng-model="user.name" ng-minlength="3">
  <span class="error"
        ng-show="myForm.userName.$error.minlength">Too Short!</span>
</form>

In the preceding example, we add a requirement that the username be at least three characters (through the use of the ng-minlength directive). Now, the form gets populated with each named input in its scope—we have only userName in this example— each of which will have an $error object (which will further include what kind of error it has or doesn’t have: required, minlength, maxlength, or pattern) and a $valid tag to signal whether the input itself is valid or not.

We can use this to selectively show error messages to the user, depending on the type of input error he is making, as we do in the previous example.

Jumping back to our original template—Recipe form template—there is another nice usage of the ng-show highlighted within the ingredients repeater scope. The Add Ingredient button is shown only beside the last ingredient. This is accomplished by calling an ng-show and using the special $last variable that is accessible inside a repeater element scope.

Finally, we have the last ng-click, which is attached to the second button and used for deleting the recipe. Notice how the button shows only if the recipe is not saved yet. While usually it would make more sense to write ng-hide="recipe.id", sometimes it makes more semantic sense to say ng-show="!recipe.id". That is, show if the recipe doesn’t have an id, rather than hide if the recipe has an id.

The Tests

We have been holding off on showing you the tests that go along with the controller, but you knew they were coming, didn’t you? In this section, we’ll go over what kinds of  tests you would write for which parts of the code, and how you would actually write them.

Unit Tests

The first and most important kind of test is the unit test. This tests that the controllers (and directives, and services) that you have developed are correctly structured and written, and that they do what you would expect them to.

Before we dive into the individual unit tests, let us take a look at the test harness that surrounds all of our controller unit tests:

describe('Controllers', function() {
	var $scope, ctrl;
	//you need to indicate your module in a test
	beforeEach(module('guthub'));
	beforeEach(function() {
		this.addMatchers({
			toEqualData: function(expected) {
				return angular.equals(this.actual, expected);
			}
		});
	});

	describe('ListCtrl', function() {....});
	// Other controller describes here as well
});

The harness (we are still using Jasmine to write these tests in a behavioral manner) does a few things:
1. Creates a globally (at least for the purpose of this test spec) accessible scope and controller, so we don’t worry about creating a new variable for each controller.
2. Initializes the module that our app uses (GutHub in this case).
3. Adds a special matcher that we call toEqualData. This basically allows us to perform assertions on resource objects (like recipes) that are returned through the $resource service or RESTful calls.

* Remember to add the special matcher called toEqualData any time we want to do assertions on ngResource-returned objects. This is because ngResource-returned objects have additional methods on them that will fail normal expect equal calls.

With that harness in place, let’s take a look at the unit tests for the List Controller:

  describe('ListCtrl', function() {
    var mockBackend, recipe;
    // The _$httpBackend_ is the same as $httpBackend. Only written this way to
    // differentiate between injected variables and local variables
    beforeEach(inject(function($rootScope, $controller, _$httpBackend_, Recipe) {
      recipe = Recipe;
      mockBackend = _$httpBackend_;
      $scope = $rootScope.$new();
      ctrl = $controller('ListCtrl', {
        $scope: $scope,
        recipes: [1, 2, 3]
      });
    }));

    it('should have list of recipes', function() {
      expect($scope.recipes).toEqual([1, 2, 3]);
    });
  });

Remember that the List Controller is one of the simplest controllers we have. The constructor of the controller just takes in a list of recipes and saves it to the scope. You could write a test for it, but it seems kind of silly (we still did it, because tests are awesome!).

Instead, the more interesting aspect is the MultiRecipeLoader service. This is responsible for fetching the list of recipes from the server and passing it in as an argument (when hooked up correctly via the $route service):

  describe('MultiRecipeLoader', function() {
    var mockBackend, recipe, loader;
    // The _$httpBackend_ is the same as $httpBackend. Only written this way to
    // differentiate between injected variables and local variables
    beforeEach(inject(function(_$httpBackend_, Recipe, MultiRecipeLoader) {
      recipe = Recipe;
      mockBackend = _$httpBackend_;
      loader = MultiRecipeLoader;
    }));

    it('should load list of recipes', function() {
      mockBackend.expectGET('/recipes').respond([{id: 1}, {id: 2}]);

      var recipes;

      var promise = loader();
      promise.then(function(rec) {
        recipes = rec;
      });

      mockBackend.flush();

      expect(recipes).toEqualData([{id: 1}, {id: 2}]);
    });
  });

We test the MultiRecipeLoader by hooking up a mock HttpBackend in our test. This comes from the angular-mocks.js file that is included when these tests are run. Just injecting it into your beforeEach method is enough for you to start setting expectations on it.

In our second, more meaningful test, we set an expectation for a server GET call to recipes, which will return a simple array of objects. We then use our new custom matcher to ensure that this is exactly what was returned. Note the call to flush() on the mock backend, which tells the mock backend to now return response from the server. You can use this mechanism to test control flow and see how your application handles before and after the server returns a response.

We will skip View Controller, as it is almost exactly like the List Controller except for the addition of an edit() method on the scope. This is pretty simple to test, as you can inject the $location into your test and check its value.

Let us now jump to the Edit Controller, which has two points of interest that we should be unit testing. The resolve function is similar to the one we saw before, and can be tested the same way. Instead, we now want to see how we can test the save() and the remove() methods. Let’s take a look at the tests for those (assuming our harnesses from the previous example):

  describe('EditController', function() {
    var mockBackend, location;
    beforeEach(inject(function($rootScope, $controller, _$httpBackend_, $location, Recipe) {
      mockBackend = _$httpBackend_;
      location = $location;
      $scope = $rootScope.$new();

      ctrl = $controller('EditCtrl', {
        $scope: $scope,
        $location: $location,
        recipe: new Recipe({id: 1, title: 'Recipe'})
      });
    }));

    it('should save the recipe', function() {
      mockBackend.expectPOST('/recipes/1', {id: 1, title: 'Recipe'}).respond({id: 2});

      // Set it to something else to ensure it is changed during the test
      location.path('test');
      $scope.save();
      mockBackend.flush();

      expect(location.path()).toEqual('/view/2');
    });

    it('should remove the recipe', function() {
      expect($scope.recipe).toBeTruthy();
      location.path('test');

      $scope.remove();

      expect($scope.recipe).toBeFalsy();
      expect(location.path()).toEqual('/');
    });
  });
In the first test, we test the save() function. In particular, we ensure that saving first makes a POST request to the server with our object, and then, once the server responds, the location is changed to the newly persisted object’s view recipe page.

The second test is even simpler. We simply check to ensure that calling remove() on the scope removes the current recipe, then redirects the user to the main landing page. This can be easily done by injecting the $location service into our test, and working with it.

The rest of the unit tests for the controllers follow very similar patterns, so we can skip over them. At their base, such unit tests rely on a few things:
• Ensuring that the controller (or more likely, the scope) reaches the correct state at the end of the initialization
• Confirming that the correct server calls are made, and that the right state is achieved by the scope during the server call and after it is completed (by using our mocked out backend in the unit tests)
• Leveraging the AngularJS dependency injection framework to get a handle on the elements and objects that the controller works with to ensure that the controller is getting set to the correct state

Scenario Tests

Once we are happy with our unit tests, we might be tempted to just lean back, smoke a cigar, and call it a day. But the work of an AngularJS developer isn’t done until he has run his scenario tests. While unit tests assure us that every small piece of JS code is working as intended, we also want to ensure that the template loads, that it is hooked up correctly to the controllers, and that clicking around in the template does the right thing.

This is exactly what a scenario test in AngularJS does for you. It allows you to:
• Load your application
• Browse to a certain page
• Click around and enter text willy-nilly
• Ensure that the right things happen

So how would a scenario test for our “list of recipes” page work? Well, first of all, before we get started on the actual test, we need to do some groundwork.

For the scenario test to work, we will need a working web server that is ready to accept requests from the GutHub application, and will allow storing and getting a list of recipes from it. Feel free to change the code to use an in-memory list of recipes (removing the recipe $resource and changing it to just a JSON object dump), or to reuse and modify the web server we showed you in the previous chapter, or to use Yeoman!

Once we have a server up and running, and serving our application, we can then write and run the following test:

describe('GutHub App', function() {
  it('should show a list of recipes', function() {
    // Our Default GutHub recipes list has two recipes
    expect(repeater('.recipes li').count()).toEqual(2);
  });
});

HTML5 Canvas

Welcome to Basic Tutorials! In these tutorials we’ll focus on the fundamental drawing capabilities of the HTML5 Canvas, including line and curve drawing, path drawing, shape drawing, gradients, patterns, images, and text.

HTML5 Canvas Basic Tutorials Prerequisites

All you need to get started with Basic Tutorials is a modern browser such as Google Chrome, Firefox, Safari, Opera, or IE9, a good working knowledge of JavaScript, and a simple text editor like Notepad.

The HTML5 Canvas element is an HTML tag similar to the <div>, <a>, or <table> tag, with the exception that its contents are rendered with JavaScript. In order to leverage the HTML5 Canvas, we’ll need to place the canvas tag somewhere inside the HTML document, access the canvas tag with JavaScript, create a context, and then utilize the HTML5 Canvas API to draw visualizations.

When using canvas, it’s important to understand the difference between the canvas element and the canvas context, as often times people get these confused. The canvas element is the actual DOM node that’s embedded in the HTML page. The canvas context is an object with properties and methods that you can use to render graphics inside the canvas element. The context can be 2d or webgl (3d).

Each canvas element can only have one context. If we use the getContext() method multiple times, it will return a reference to the same context object.

HTML5 Canvas Template

  <canvas id="myCanvas" width="578" height="200"></canvas>
  <script>
    var canvas = document.getElementById('myCanvas');
    var context = canvas.getContext('2d');

    // do stuff here
  </script>

To draw a line using HTML5 Canvas, we can use the beginPath(), moveTo(), lineTo(), and stroke() methods. First, we can use the beginPath() method to declare that we are about to draw a new path. Next, we can use the moveTo() method to position the context point (i.e. drawing cursor), and then use the lineTo() method to draw a straight line from the starting position to a new position. Finally, to make the line visible, we can apply a stroke to the line using stroke(). Unless otherwise specified, the stroke color defaults to black.

    <style>
      body {
        margin: 0px;
        padding: 0px;
      }
    </style>
    <canvas id="myCanvas" width="578" height="200"></canvas>
    <script>
      var canvas = document.getElementById('myCanvas');
      var context = canvas.getContext('2d');

      context.beginPath();
      context.moveTo(100, 150);
      context.lineTo(450, 50);
      context.stroke();
    </script>

To draw an image using HTML5 Canvas, we can use the drawImage() method which requires an image object and a destination point. The destination point defines the top left corner of the image relative to the top left corner of the canvas. Since the drawImage() method requires an image object, we must first create an image and wait for it to load before instantiating drawImage(). We can accomplish this by using the onload property of the image object.

    <style>
      body {
        margin: 0px;
        padding: 0px;
      }
    </style>
    <canvas id="myCanvas" width="578" height="400"></canvas>
    <script>
      var canvas = document.getElementById('myCanvas');
      var context = canvas.getContext('2d');
      var imageObj = new Image();

      imageObj.onload = function() {
        context.drawImage(imageObj, 69, 50);
      };
      imageObj.src = ''; // set to the URL of the image to load
    </script>

Data URI scheme


The data URI scheme is a URI scheme (Uniform Resource Identifier scheme) that provides a way to include data in-line in web pages as if they were external resources. This technique lets you, for example, fetch images and stylesheets in a single HTTP request rather than making multiple HTTP requests, which can be inefficient.

Data URIs tend to be simpler than other inclusion methods, such as MIME with cid or mid URIs. Data URIs are sometimes called URLs (Uniform Resource Locators), even though they do not reference anything remote. The data URI scheme is defined in RFC 2397 of the Internet Engineering Task Force (IETF).

In browsers that fully support data URIs for “navigation”, JavaScript-generated content can be offered to the user as a file download simply by assigning the data URI to window.location.href. One example is converting an HTML table to CSV for download, using a data URI like this:

'data:text/csv;charset=UTF-8,' + encodeURIComponent(csv)

where csv has been generated by JavaScript.
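As an illustration, here is a minimal sketch of that table-to-CSV download trick. The helper names are ours, and the quoting is a simplified RFC 4180 style:

```javascript
// Build a CSV string from an array of rows, then wrap it in a data URI.
// In a browser you would assign the result to window.location.href to
// trigger the download; here we only construct the string.
function toCsv(rows) {
  return rows.map(function(row) {
    return row.map(function(cell) {
      var s = String(cell);
      // Quote cells containing commas, quotes, or newlines
      return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
    }).join(',');
  }).join('\n');
}

function csvDataUri(rows) {
  return 'data:text/csv;charset=UTF-8,' + encodeURIComponent(toCsv(rows));
}

csvDataUri([['name', 'qty'], ['apples', 3]]);
// "data:text/csv;charset=UTF-8,name%2Cqty%0Aapples%2C3"
```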

The IETF published the data URI specification in 1998, and it has not changed since. The HTML 4.01 specification refers to the data URI scheme, and the scheme has been implemented in most browsers.

Web browser support

As of March 2012, Data URIs are supported by the following web browsers:

  • Gecko-based, such as Firefox, SeaMonkey, XeroBank, Camino, Fennec and K-Meleon
  • Konqueror, via KDE‘s KIO slaves input/output system
  • Opera (including devices such as the Nintendo DSi or Wii)
  • WebKit-based, such as Safari (including iOS), Android‘s browser, Kindle 4’s browser, Epiphany and Midori (WebKit is a derivative of Konqueror’s KHTML engine, but Mac OS X does not share the KIO architecture so the implementations are different), and Webkit/Chromium-based, such as Chrome
  • Trident
    • Internet Explorer 8: Microsoft has limited its support to certain “non-navigable” content for security reasons, including concerns that JavaScript embedded in a data URI may not be interpretable by script filters such as those used by web-based email clients. Data URIs must be smaller than 32 KB in Version 8.[3] Data URIs are supported only for the following elements and/or attributes:[4]
      • object (images only)
      • img
      • input type=image
      • link
      • CSS declarations that accept a URL, such as background-image, background, list-style-type, list-style and similar.
    • Internet Explorer 9: does not have the 32 KB limitation and supports more elements.

Email Client support

The following clients support data URIs for images:[3]



The encoding is indicated by ;base64. If it’s present the data is encoded as base64. Without it the data (as a sequence of octets) is represented using ASCII encoding for octets inside the range of safe URL characters and using the standard %xx hex encoding of URLs for octets outside that range. If <MIME-type> is omitted, it defaults to text/plain;charset=US-ASCII. (As a shorthand, the type can be omitted but the charset parameter supplied.)
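The two flavors can be sketched in a few lines of JavaScript (Node's Buffer supplies the base64 step here; in a browser you would use btoa() instead):

```javascript
// Percent-encoded form: octets outside the URL-safe range become %xx escapes.
function percentDataUri(mime, text) {
  return 'data:' + mime + ',' + encodeURIComponent(text);
}

// Base64 form: signalled by the ';base64' token before the comma.
function base64DataUri(mime, text) {
  return 'data:' + mime + ';base64,' + Buffer.from(text, 'utf8').toString('base64');
}

percentDataUri('text/plain;charset=US-ASCII', 'Hello');
// "data:text/plain;charset=US-ASCII,Hello"
base64DataUri('text/plain', 'Hello');
// "data:text/plain;base64,SGVsbG8="
```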

Some browsers (Chrome, Opera, Safari, Firefox) accept a non-standard ordering if both ;base64 and ;charset are supplied, while Internet Explorer requires that the charset’s specification must precede the base64 token.

Advantages and disadvantages

Advantages
  • HTTP request and header traffic is not required for embedded data, so data URIs consume less bandwidth whenever the overhead of encoding the inline content as a data URI is smaller than the HTTP overhead. For example, the required base64 encoding for an image 600 bytes long would be 800 bytes, so if an HTTP request required more than 200 bytes of overhead, the data URI would be more efficient.
  • For transferring many small files (less than a few kilobytes each), this can be faster. TCP transfers tend to start slowly, and if each file requires a new TCP connection, the transfer speed is limited by the round-trip time rather than the available bandwidth. Using HTTP keep-alive improves the situation, but may not entirely alleviate the bottleneck.
  • While web browsers will not cache inline-loaded data as a separate resource, external CSS files using data URIs are cached, so that an external CSS file with 10 background-images embedded as data URIs requires only one initial request instead of eleven, and subsequent requests require only retrieving one cached CSS file, instead of one CSS file plus ten cached images.
  • When browsing a secure HTTPS web site, web browsers commonly require that all elements of a web page be downloaded over secure connections, or the user will be notified of reduced security due to a mixture of secure and insecure elements. On badly configured servers, HTTPS requests have significant overhead over common HTTP requests, so embedding data in data URIs may improve speed in this case.
  • Web browsers are usually configured to make only a certain number of (often two) concurrent HTTP connections to a domain,[5] so inline data frees up a download connection for other content.
  • Environments with limited or restricted access to external resources may embed content when it is disallowed or impractical to reference it externally. For example, an advanced HTML editing field could accept a pasted or inserted image and convert it to a data URI to hide the complexity of external resources from the user. Alternatively, a browser can convert (encode) image based data from the clipboard to a data URI and paste it in a HTML editing field. Mozilla Firefox 4 supports this functionality.
  • It is possible to manage a multimedia page as a single file.
  • Email message templates can contain images (for backgrounds or signatures) without the image appearing to be an “attachment”.
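The bandwidth arithmetic in the first advantage is easy to verify: base64 emits 4 output characters for every 3 input bytes, rounding the last group up. A quick sketch:

```javascript
// Size of the base64 encoding of n bytes: every 3-byte group becomes
// 4 characters, and a partial final group is padded up to 4.
function base64Length(numBytes) {
  return Math.ceil(numBytes / 3) * 4;
}

base64Length(600); // 800, matching the 600-byte image example above
```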

Disadvantages
  • Data URIs are not separately cached from their containing documents (e.g. CSS or HTML files), therefore the encoded data is downloaded every time the containing documents are re-downloaded.
  • Content must be re-encoded and re-embedded every time a change is made.
  • Internet Explorer through version 7 (approximately 5% of web traffic as of September 2011), lacks support. However this can be overcome by serving browser-specific content.[6]
  • Internet Explorer 8 limits data URIs to a maximum length of 32 KB. (Internet Explorer 9 does not have this limitation)[3][4]
  • In IE 8 and 9, data URIs can only be used for images, but not for navigation or JavaScript generated file downloads.[7]
  • Data URIs are included as a simple stream, and many processing environments (such as web browsers) may not support using containers (such as multipart/alternative or message/rfc822) to provide greater complexity such as metadata, data compression, or content negotiation.
  • Base64-encoded data URIs are roughly one-third larger than their binary equivalent. (However, this overhead is reduced to 2–3% if the HTTP server compresses the response using gzip)[8]
  • Data URIs do not carry a file name as a normal linked file would. When saving, a default file name for the specified MIME type is generally used.
  • Referencing the same resource (such as an embedded small image) more than once from the same document results in multiple copies of the embedded resource. In comparison, an external resource can be referenced arbitrarily many times, yet downloaded and decoded only once.
  • Data URIs make it more difficult for security software to filter content.[9]



An HTML fragment embedding a picture of a small red dot (Red-dot-5px.png):

<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUA
9TXL0Y4OHwAAAABJRU5ErkJggg==" alt="Red dot" />

As demonstrated above, data URIs encoded with base64 may contain whitespace for readability.


A CSS rule that includes a background image:

ul.checklist li.complete {
    padding-left: 20px;
    background: white url('data:image/png;base64,iVBORw0KGgoAA
P9/AFGGFyjOXZtQAAAAAElFTkSuQmCC') no-repeat scroll left top;
}

In Mozilla Firefox 5, Google Chrome 17, and IE 9 (released June, 2011), encoded data must not contain newlines.


A JavaScript statement that opens an embedded subwindow, as for a footnote link:

window.open('data:text/html;charset=utf-8,' +
    encodeURIComponent( // Escape for URL formatting
        '<!DOCTYPE html>'+
        '<html lang="en">'+
        '<head><title>Embedded Window</title></head>'+
        '<body><h1>42</h1></body>'+
        '</html>'
    )
);
This example does not work with Internet Explorer 8 due to its security restrictions that prevent navigable file types from being used.[4]

How to Create a Simple WordPress Plugin


When giving support or offering various snippets we frequently advise people to create a WordPress plugin to handle all of their modifications. Here is how you create a very simple plugin to house all your little snippets and modifications.

Basic Structure

A WordPress plugin can consist of as little as a single PHP file; however, for consistency I always create a folder with at least that single PHP file inside it. The folder and the file should have the same name, with the exception of the file extension. We’re going to call our plugin “My Custom Functions”, so the directory and file would be named accordingly.

  • Folder: my-custom-functions
  • File: my-custom-functions.php

When we’re done we’re going to upload this newly created folder with the file inside it to our wp-content/plugins directory on our server.

Basics of a WordPress Plugin file

There are no doubt endless conversations that could be had about all the things that can go into your plugin file but I’m going to cover the absolute necessities to get you started quickly.

WordPress Plugin Header

Here is all that needs to be in your plugin file for WordPress to recognize it once uploaded.

<?php
/*
Plugin Name: My Custom Functions
*/

See how easy that is? At this point you have a plugin that you can activate in your WordPress plugins area. Of course our plugin doesn’t actually do anything yet but the point is you have laid the foundation of your very own plugin and it was super easy.

Now there are other elements that you can include in this header. Things like a description, version, author, etc. You can read more about those here: /Writing_a_Plugin #File_Headers

The rest of your plugin

As I said earlier, there is really no end to what you can place inside your plugin, but at this point, at a very basic level, you can think of it like your theme’s functions.php file. By that I mean that if you did nothing else, you could place all those little code snippets that you love so much into this file instead of your functions.php file, and they would work exactly the same way.

As an example consider this little snippet that I sometimes use when I want to very quickly be able to redirect a page completely to a different page whether on the same site or another site entirely.

function my_custom_redirect() {
    global $post;
    if ( is_page() || is_object( $post ) ) {
        if ( $redirect = get_post_meta( $post->ID, 'redirect', true ) ) {
            wp_redirect( $redirect );
        }
    }
}
add_action( 'get_header', 'my_custom_redirect' );

This snippet is not very glamorous, but what it essentially does is allow me to add a piece of custom post meta called “redirect”, containing a URL, to any page. When the page is requested and has this custom post meta, it is automatically redirected to the new URL. Let’s take a look at it all together now.

<?php
/*
Plugin Name: My Custom Functions
*/

function my_custom_redirect() {
    global $post;
    if ( is_page() || is_object( $post ) ) {
        if ( $redirect = get_post_meta( $post->ID, 'redirect', true ) ) {
            wp_redirect( $redirect );
        }
    }
}
add_action( 'get_header', 'my_custom_redirect' );

As you can see, this plugin is not complicated at all, and the best part is that if something fishy seems to be going on with my redirects, I don’t have to dig through a huge file of miscellaneous functions to find it. In fact, I could have named this plugin My Custom Redirect, and then if anything happened I could just deactivate this one plugin without having an adverse effect on my entire site.

In Summary

You can no longer say that it’s too difficult to create a plugin. It’s just as easy as adding code to your functions.php file (which you should almost never do), and so much cooler too.

Bonus Round

If you are interested in digging into creating WordPress plugins more complicated than what I describe above consider checking out the following resources.

Enterprise Integration Using REST


Most REST APIs are built with a single purpose in mind: to integrate with a single system. In this article, we discuss the constraints and flexibility you have with nonpublic APIs, and the lessons learned from doing large-scale RESTful integration across multiple teams.

18 November 2013

Why use REST in enterprise systems?

Making changes to a large legacy system is very hard in the IT industry.

We often embark on such changes imagining the immaculate architecture the new system will have, and make the mistake of underestimating the difficulty of the task. Certain aspects only come to light when viewed against the backdrop of a complete, custom-built, and massively entangled legacy system. The first attempt is usually to buy a software package, hoping to reduce the integration effort with the promise of never again being tied to an unsupported system. The second option is an integration-oriented architecture, aiming to replace only some parts of the system at a time and thereby reduce the effort of changing the whole project.

ThoughtWorks has been involved in several large legacy-replacement projects. REST over HTTP is a very appealing option for many such projects, and it is the one we usually bet on. It is simple to use and understand, and requires no exotic framework to get started. Architecturally speaking, REST has proven scalability and fits well with domain modeling.

Define logical environments – one for each need

Many large IT organizations inherit a legacy of expensive environments from mainframes or large vendor installations and try to shoe-horn services into a predefined list of inflexible environments. Unfortunately, managing an enterprise-wide set of environments that all developers must use gives up one of the principal advantages of RESTful services: lightweight environments. While services may be a facade in front of applications that require substantial horsepower, the services themselves tend to be simple to deploy and host, and testable through a browser and a command line. Furthermore, techniques such as using ATOM-like event feeds avoid the need for extensive middleware infrastructure in spinning up a new environment. The key insight is to understand the concept of a logical environment:

A logical environment is an appropriately isolated set of interrelated applications, services, and infrastructural components needed to satisfy a business or development need.

The components needed to satisfy a development need may be quite different for the various teams and roles than the components needed to satisfy a business need. Few developers in large organizations expect to run an isolated full-stack environment, and isolation should go only as far as needed to make developers productive. For example, in a retail project, a developer in the order entry team may require services for the product catalog and customer management, but perhaps not for warehouse management. In production, each of those may have a load-balanced cluster supporting them, but developers and QAs value isolation over performance and availability. In the extreme case, different developers may have different logical environments on the same VM. In this case, the isolation can be accomplished by making ports and database names part of the environment configuration.

Figure 1

Figure 1: Environmental isolation is independent of the hardware hosting the environment

The other problem with shared environments is that everybody gets upgraded at the same time, which is often not appropriate in the chaotic world of development. Much better is to put the release schedule in the hands of the individuals affected by it – this is equally true for production releases as it is for developers upgrading a service they depend on in their sandbox environment. This can be particularly important for QAs, a role that has a strong need for managing the release cadence within their logical environment. Testing requires a known and stable set of versions for the services involved, and developers find fixing bugs considerably easier when the version is known.

In one large engagement, we defined a declarative description of environments using YAML. A top level key would define an environment name, sub-keys defined the required services for that environment, and the next level of keys defined environment-specific configuration for each service:

    dev:
      product:
        webservers: [localhost]
        port: 8080
        logPath: /var/log/product
        dbserver: localhost
        dbname: product
      customer:
        webservers: [localhost]
        port: 9080
        logPath: /var/log/customer
        dbserver: localhost
        dbname: customer

Ruthless attention to deployment automation and appropriate investment in infrastructure meant that some services existed in over 50 logical environments, a mind-blowing number for a company accustomed to mainframes. Tools like Ansible help declaratively describe environments without a heavy up-front investment. To allow for the kind of lightweight ad-hoc environments that developers use, it’s often helpful to define a single environment with localhost as the server name, which can be spun up on a local virtual machine using something like Vagrant. This allows environmental elasticity by using the same environment configuration but different VMs.
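To make the idea concrete, here is a minimal sketch of turning such a parsed environment definition into per-service base URLs. The `ENVIRONMENTS` dict, the `dev` environment name, and the service names mirror the example above but are illustrative assumptions, not part of any real tooling.

```python
# Sketch: deriving service endpoints from a parsed environment definition.
# In practice this dict would come from loading the YAML file shown above.
ENVIRONMENTS = {
    "dev": {
        "product": {"webservers": ["localhost"], "port": 8080,
                    "dbserver": "localhost", "dbname": "product"},
        "customer": {"webservers": ["localhost"], "port": 9080,
                     "dbserver": "localhost", "dbname": "customer"},
    },
}

def base_urls(environment: str) -> dict:
    """Return a service-name -> base-URL map for one logical environment."""
    services = ENVIRONMENTS[environment]
    return {
        name: f"http://{cfg['webservers'][0]}:{cfg['port']}"
        for name, cfg in services.items()
    }
```

Because ports and database names are part of the configuration, two developers can run distinct logical environments on the same VM simply by using different values.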

What about packages?

Vendor packages complicate environment creation, as they are rarely built to support easy deployment and environmental elasticity. Many of them ship with an installation document that changes with every upgrade and no reliable mechanism to replay changes in multiple environments. Licensing also adds a hurdle, although most vendors will allow low cost development licenses.

If you find yourself burdened with a vendor package that is hard to deploy, there are a couple of remediation strategies. If the package does not require complicated licensing during installation, you may be able to do the vendor’s work of automating the installation and upgrade. Alternatively you can set up a cloneable VM, which gives you elasticity but complicates upgrades. Essentially, this is the bake vs. fry distinction in configuration management discussions.

When neither option is available, there are other ways of achieving some level of isolation, although none will be comparable to actual environmental isolation. There may be a way of using natural data boundaries within the application to allow some measure of developer isolation. Different user accounts tend to be an easy data boundary, although users tend to share global state. Better still is to provide different tenants to individual developers, as multi-tenant applications are designed to prevent cross-tenant traffic. This approach is clearly a work-around, has scaling challenges, and does not provide release scheduling independence.

Ease of deployment and environment management should be one of the criteria by which packages are selected.

The best solution, of course, is to vet such operational considerations during the vendor selection process. Ease of deployment and environment management should be one of the criteria by which packages are selected. During the selection process, we have to consider not only feature set and fit-to-purpose, but ease of integration and the productivity of the integration developers.

Use versioning only as a last resort

An important corollary to the definition of a logical environment is the notion of cohesion – each environment should have only one instance of a given service. Unfortunately, in large projects where each team moves at a different pace, it is all too easy to run into the classic diamond dependency problem usually associated with compile time dependencies:

Figure 2

Figure 2: Incompatible version requirements

In my experience, one of the first solutions RESTful architects reach for is versioning. I take a more controversial view. To borrow Jamie Zawinski’s famous dig on regular expressions:

Some people, when confronted with a problem, think “I know, I’ll use versioning.” Now they have 2.1.0 problems.

The problem is that versioning can significantly complicate understanding, testing, and troubleshooting a system. As soon as you have multiple incompatible versions of the same service used by multiple consumers, developers have to maintain bug fixes in all supported versions. If they are maintained in a single codebase, developers risk breaking an old version by adding a new feature to a new version simply because of shared code pathways. If the versions are independently deployed, the operational footprint becomes more complex to monitor and support. This extra complexity is either overlooked or justified by simplifying the release process of interdependent services. However, the release complexity can be mitigated significantly with a disciplined use of consumer-based testing (discussed in the next section), an intriguing option available to enterprise APIs that is not available to public APIs.

For many types of changes, versioning can be avoided by other techniques. Postel’s Law states that you should be liberal in what you accept and conservative in what you send. This is sage advice for service development. Unfortunately, the default behavior of some deserializers breaks this advice by throwing an exception when an unexpected field is provided by a consumer. This is unfortunate, as it could simply be the consumer passing additional diagnostics over the wire with no expectation of consumption, or it could be the consumer preparing for a future update of the producer, passing a new field before the producer is prepared to deal with it. It could be the producer adding a new field to a response body which the consumer is free to ignore. None of these situations warrants an exception.
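A tolerant deserializer in the spirit of Postel’s Law can be sketched as follows: pick out only the fields this consumer cares about and silently ignore anything extra, instead of raising on unexpected input. The field names are illustrative, borrowed from the product example used later in the article.

```python
# Fields this particular consumer actually understands (illustrative).
EXPECTED_FIELDS = {"productCode", "description", "monthlyCharge"}

def deserialize_product(payload: dict) -> dict:
    # Take what we understand; unknown fields (extra diagnostics, fields
    # added ahead of a producer upgrade) are ignored rather than treated
    # as errors.
    return {k: payload[k] for k in EXPECTED_FIELDS if k in payload}
```

The key design choice is that an unknown field is simply dropped, so the producer can evolve its representation without forcing a lock-step consumer release.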

Automatic deserialization usually falls into the pitfall of coupling consumers and producers.

It’s usually possible to configure the deserializer to be more tolerant. Though it’s not mainstream advice, I prefer to avoid automatic deserialization altogether. Automatic deserialization usually falls into the WSDL pitfall of coupling consumers and producers by duplicating a static class structure in both. Hand-coded deserialization also allows for fewer assumptions to be made in the incoming data. As Martin Fowler describes in Tolerant Reader, using XPath expressions like //order allows nesting changes above the order element without breaking deserialization.
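The XPath-based Tolerant Reader idea can be sketched with the standard library, which supports a limited XPath subset including descendant searches. The document shapes and the `id` attribute are assumptions for illustration.

```python
import xml.etree.ElementTree as ET

def read_order_id(xml_text: str) -> str:
    """Locate the order element with a descendant search (the ElementTree
    analogue of the //order XPath), so nesting changes above <order> do not
    break this consumer."""
    root = ET.fromstring(xml_text)
    order = root.find(".//order")  # matches <order> at any depth below root
    return order.get("id")
```

Whether the producer wraps the order in one envelope element or three, the reader keeps working, which is exactly the decoupling the hand-coded approach buys you.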

When it comes to service contract design, a little up-front design can pay big benefits. In one project, a contract had been developed with inconsistent casing of the attributes – firstName and LastName for example. Developers on consumer teams no doubt swore under their breath when they developed against the contract, but they swore quite loudly when the contract was subsequently “fixed” without notice.

In large SOA projects, I prefer writing many stories at service boundaries. This does lead to the obvious challenge of making sure the end-to-end functionality aligns with business goals (a problem I discuss later), but has many advantages. First, they naturally tend to involve the tech lead or architect in the analysis, giving them time to think about the granularity of the concepts and mock out the contract to form a cohesive description of the resource. Writing the acceptance criteria requires thinking through the various error conditions and response codes. Having QA review at service boundaries gives another opportunity to catch the obvious mistakes, like the casing issue just mentioned. Finally, in much the same way that test-driven development encourages loose coupling by making sure each class has at least two consumers – the consumer it was written for and the tests – service boundary stories help ensure that the service endpoint is reusable rather than overly specific to the end-to-end functionality it is initially developed for. Being able to test a service endpoint in isolation prevents coupling it too tightly to the end-to-end workflow it supports.

Producers can also signal when they need to make a breaking change using semantic versioning. This is a simple scheme to add well known meanings to the MAJOR.MINOR.PATCH portions of a version, including incrementing the MAJOR version for breaking changes. If you have a disciplined set of consumer-driven tests (described shortly), you may be able to upgrade all consumers on the same release. Of course, that isn’t always possible, and at some point the complexity of supporting multiple versions of the same service may be justified because coordinating a release of its dependencies is even more complex. Once you reach the point where you have to use versioning, there are two principal techniques to choose between: URL versioning and HTTP header versioning. In your choice it is important to understand that the versioning scheme you select is first and foremost a release management strategy.
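The semantic-versioning rule for breaking changes is mechanical enough to sketch: only a bump in the MAJOR component signals incompatibility for consumers. This helper is an illustration of the convention, not part of any particular tool.

```python
def is_breaking_change(current: str, proposed: str) -> bool:
    """Per semantic versioning, only a MAJOR bump signals a breaking change
    that existing consumers cannot safely absorb."""
    current_major = int(current.split(".")[0])
    proposed_major = int(proposed.split(".")[0])
    return proposed_major > current_major
```

A release pipeline could use a check like this to decide whether a producer upgrade requires coordinating consumer releases or can ship independently.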

URL versioning involves including a version number in the URL (such as /customers/v1/… – the MAJOR version in semantic versioning is sufficient). For this the consumer will have to wait until the producer has been released. URL versioning has the advantage of being very visible, and testable through a browser. Nevertheless, URL versioning suffers an important flaw when one service provides links to another service with the expectation that the consumer will follow the link (this is most common when trying to use hypermedia to drive workflow). If the hyperlinked service upgrades to a new version, coordinating upgrades across such dependencies can get tricky. For example, the customer service links to the product service, and the UI follows that link blindly, unaware of product versioning since the version is embedded in the provided link. When we make a non-backwards compatible upgrade to the product service, we ultimately want to upgrade the customer service as well to update the link, but we have to be careful to first upgrade the UI. In this situation, the simplest solution is often to upgrade all three components at the same time, which is effectively the same release management strategy as not versioning at all.

Duncan Beaumont Cragg suggests simply extending the URL space rather than versioning it. When you need to make an incompatible change, simply create a new resource rather than versioning the existing one. On the surface, there is only a small difference between /customers/v2/profile and /customers/extendedProfile. They may even be implemented the same way. However, from a communication standpoint, there is a world of difference between the two options. Versioning is a much broader topic, and in large organizations, versioning can often require coordination with multiple outside teams, including architecture and release management, whereas teams tend to have autonomy to add new resources.

HTTP header versioning puts information into the HTTP header indicating which version the consumer will accept. This is most commonly associated with the Content-Type, for example, application/vnd.acme.customer-v1+json, which allows content negotiation to manage the version. A client can send a list of supported versions in the Accept header, and the server can respond with the version used in the Content-Type header, or send a 415 HTTP status code for an unsupported version request. This appeals to purist RESTafarians, and is immune to the flaw mentioned above with URL versioning, as the ultimate consumer gets to decide which version to request. Of course, it becomes harder to test through a browser and easier to overlook when developing. It’s helpful to augment header versioning by also putting the version number in request and response bodies, when present, to provide additional visibility. Header versioning also introduces challenges with caching. The Vary header was designed to enable the same URL to be cached in different ways, but it adds complexity to your network configuration, and you risk running into a misconfigured network cache that ignores the Vary header.
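The producer side of that negotiation can be sketched as follows. The vendor media type pattern matches the example above; the set of supported versions and the function itself are assumptions for illustration.

```python
import re

# Versions this producer can serve (illustrative).
SUPPORTED_VERSIONS = {1, 2}

def negotiate_version(accept_header: str):
    """Parse version numbers out of vendor media types in the Accept header
    (e.g. application/vnd.acme.customer-v1+json) and pick the highest one
    both sides support. None means the server should respond 415."""
    requested = {int(m) for m in re.findall(
        r"application/vnd\.acme\.customer-v(\d+)\+json", accept_header)}
    acceptable = requested & SUPPORTED_VERSIONS
    return max(acceptable) if acceptable else None
```

On a hit, the server would echo the negotiated version back in the Content-Type header; on None, it returns 415 Unsupported Media Type.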

Catch integration problems with consumer-based testing

Consumer-based testing is one of the most valuable practices I’ve seen that makes REST scale within an enterprise, but before we dive in, we need to understand the concept of a deployment pipeline.

Deployment Pipelines

In their groundbreaking book on Continuous Delivery, Jez Humble and Dave Farley portray the deployment pipeline as the path code takes from checkin to production. If we follow a checkin to a production release in a large organization, we might find the following steps:

  • A developer checks in new code.
  • The continuous integration tool compiles, packages, and runs unit tests against the source code (often called the commit stage).
  • The continuous integration tool deploys to a sandbox environment to run a set of automated tests against the deployed service in isolation.
  • The application team deploys to a showcase environment where internal user acceptance occurs by the business stakeholder.
  • The central QA team deploys to a systems integration test (SIT) environment, where they test alongside other applications and services.
  • Release management deploys into pre-production, where the application team, security, and operations perform some manual validation of the quality of the release.
  • Release management deploys into production.

Modeling that workflow as a series of stages gives us our deployment pipeline, which lets us visualize the version of our service in each of the pipeline stages:

Figure 3

Figure 3: Simple deployment pipeline

The pipeline depicted above describes the flow for a single service in isolation. The reality of large-scale SOA projects is considerably more complicated as we add integration:

Figure 4

Figure 4: Integrated deployment pipeline

Note that in the integrated pipeline, I’ve left out a lot of detail in the early stages. Different teams often have different stages for the team-facing portions of the pipeline, which may be isolated from true external dependencies. Some teams, for example, may add manual QA or performance test stages. At the organizational stages – SIT, pre-prod, and production – it is fairly common for all code to progress the same way, and for those stages to test the services integrated together. The further you go down the pipeline, the more integrated it is and the more production-like are the environments.

An investment in stubbing can pay off large dividends in the early stages of a pipeline. It’s comforting to know that when your build goes red it’s because of broken code within your team and not a change in the environment you happen to be testing in. Using a test double helps eradicate a leading cause of non-determinism in your tests. Libraries like VCR allow you to record real service calls and replay the responses later during automated test runs. Using test doubles does leave you exposed to integration problems, though, and most of the complexity in enterprises involves integration. Fortunately, Ian Robinson describes a solution for untangling the integration complexity, which fits in well with our deployment pipeline.

Consumer-based testing

Consumer-based testing is counter-intuitive, as it relies on the consumers writing tests for the producer. When writing contract tests, a consumer writes a test against a service it uses to confirm that the service contract satisfies the consumer’s needs. For example, the order entry team may depend on the code and description of the product service, and that the monthly charge is a number, and so they write a test like this:

    public void ValidateProductAttributes()
    {
        var url = UrlForTestProduct();
        var response = new HttpResource(url).Get();

        Assert.That(response.StatusCode, Is.EqualTo(200));
        AssertHasXPath(response.Body, "//productCode");
        AssertHasXPath(response.Body, "//description");
        AssertHasXPath(response.Body, "//monthlyCharge");
        AssertNumeric(ValueFor(response.Body, "//monthlyCharge"));
    }

This enables a neat trick in the deployment pipeline. After an individual service passes its internal pipeline, all services and consumers go through a unified contract test stage. It could be triggered by the consumer changing tests, or the producer committing changes to the service. Each consumer runs its tests against the new version of the changed service, and any failure prevents the new version from progressing in the pipeline.

In our example, let’s say a new build of the order entry service progresses to the contract test stage:

Figure 5

Figure 5: The contract test stage

It depends on the product and billing services, and since the new code could have changed its contract tests, it runs its tests against the last version of the product and billing services to make it into the contract test stage. The UI depends on the order entry service, so the last version of the UI code in the contract test stage runs its tests against order entry. This means that both the services and their consumer tests are required in the contract test stage. Since the product service has no dependencies, it has no consumer tests. Let’s look at our diamond again; this time note that there is only one version of each service depended on.

Figure 6

Figure 6: Sample contract test run

Triggering only the tests associated with a particular change can get tricky, but you can go a long way simply by running all contract tests each time a new service is deployed to the contract test stage of the pipeline. This would include the grey lines in Figure 6, which aren’t relevant to the change that was introduced. It is a tradeoff between speed of the test run and how complex you’re willing to make the dependency management.
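If you do take on the dependency management, the selection logic amounts to a reverse lookup in the dependency graph. The sketch below uses the service names from the article’s example; the exact dependency map is an illustrative assumption.

```python
# Consumer -> producers it depends on (mirrors the article's example:
# the UI consumes order entry, which consumes product and billing).
DEPENDENCIES = {
    "ui": ["order-entry"],
    "order-entry": ["product", "billing"],
    "billing": ["product"],
}

def consumers_of(changed_service: str) -> set:
    """Consumers whose contract tests must run when this service changes."""
    return {consumer for consumer, producers in DEPENDENCIES.items()
            if changed_service in producers}
```

Running only `consumers_of(changed)` trades pipeline speed against the complexity of maintaining the map; running everything on every deployment is the simpler starting point.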

Assuming all the tests pass, we now have a set of services that have been shown to work together. We can record the set of them together, and their associated versions.

Figure 7

Figure 7: Successful contract test run

At this point, all versions of the services involved are captured in a single deployable artifact set, or DAS. This DAS can become the single deployable artifact for higher stages of the deployment pipeline. Alternatively, it can provide a compatibility reference if supporting independent releases is required. Either way, it represents a set of components that have been proven to speak the same language.
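Recording a DAS after a green run can be as simple as snapshotting the version of every participating service. The helper and the version numbers below are illustrative, not a prescribed format.

```python
def record_das(versions: dict) -> str:
    """Produce a stable, human-readable identifier for a set of service
    versions proven compatible by a contract test run. Sorting makes the
    identifier deterministic regardless of input order."""
    return ";".join(f"{service}={version}"
                    for service, version in sorted(versions.items()))
```

Such an identifier can either name the single artifact promoted to later pipeline stages, or serve as the compatibility reference when services release independently.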

If the new order entry code broke the UI consumer tests, the combined artifact does not progress. This doesn’t necessarily indicate a problem in the service; it could be a misunderstanding of the contract by the consumer. Or, it could be an intentional breaking change from the producer, although according to semantic versioning etiquette, they should have incremented their MAJOR number if it was intentional. In any case, failing contract tests trigger a conversation between the two teams early, rather than days or weeks later when the consumer updates their environment.

Figure 8

Figure 8: Breaking contract change

What about data?

One of the harder challenges with getting comprehensive consumer-based testing is generating valuable test data. The contract test above assumes a known test product. I hid this assumption away with a UrlForTestProduct method, which presumably requires having a URL for a specific product. Hard-coding assumptions about data available at the time of the test can be a fragile approach, as there are no guarantees that product will continue to exist in the future. Furthermore, some endpoints may require data consistency across multiple services to work. For instance, order entry may submit an order to billing with a set of associated products. The billing endpoint will need to have a consistent set of product data.

One of the more robust strategies is to create the test data during the test run, as you guarantee the data exists before using it. This presupposes that each service allows creating new resources, which isn’t always the case, although at one client we added test-only endpoints to facilitate test data creation. These endpoints were not exposed in production. This strategy can still require complicated test setup in the billing example above. After creating a test product, you would have to force synchronization with the order entry and billing services, an operation that is often asynchronous in nature. An alternate strategy is to have each service publish a cohesive set of golden test data that it guarantees to be stable. This is usually best encapsulated in some higher level data boundary. For example, you may create a test marketing brand or line of business that crosses all the services for the sole purpose of providing fake data that won’t have production impacts. Either way, wrangling test data should be a first class concern for enabling a robust service deployment pipeline.

Do not let systems monopolize resources

Defining data boundaries poorly is one of the most expensive mistakes an architect can make. A common anti-pattern is to attempt to store all information about an entity in a single data store, and export it to dependent systems as needed, a strategy encouraged by a superficial misunderstanding of master data management (MDM). The problem with such a strategy is that it violates Conway’s Law, which states that software architectures are bound to reflect the structure of the organization that built them [1].

Let’s look at an example of a product catalog. In the legacy system, one team entered new product codes and their associated rates. A provisioning team used another tool to enter appropriate configuration, such as the codes downstream phone provisioning systems needed, and the service codes to turn on TV channels in another application. The finance team entered general ledger information about the product in their financial tool, and the invoicing team added special invoicing rules in yet a different application. The communication between these teams was managed by the business, not by technology.

Transitioning to a single application for all product data can be a disastrous exercise, primarily because those different business teams all have a different definition of what a product is. What a customer service representative thinks of as a single product may have to split in two to support proper accounting. The invoicing team, highly concerned with reducing call rates by simplifying the invoice, often needs to merge two product codes into a single line on the bill. And of course there’s always marketing, who redefines products with reckless abandon. Attempting to rationalize the entire enterprise view of a product into a single catalog simply makes that catalog both fragile and inflexible. Effectively, the entire enterprise must now descend into the product catalog to make a change. The surface area of change is significantly increased, and the ripple effects from a change become hard to reason about. Conway’s Law is violated, as the communication paths of the system no longer represent the communication paths of the organization.

Figure 9

Figure 9: A data modeling disaster

I am more than a little skeptical of universal data models that try to standardize the canonical representation of something as important as a product across an enterprise and its integration partners. The telecommunications industry made just such a data model, called the TM Forum Shared Information/Data Model (TMF SID). The claim is that, by standardizing on the SID, different companies, or departments within a company, can use the same terms to describe the same logical entities. I suspect such a massive undertaking would not be sustained without some successes, but I’ve not seen them.

My recommended solution, borrowing from Eric Evans’ Domain Driven Design , is to wrap each team’s definition of a product behind a bounded context. A bounded context is a linguistic boundary, within which a term is guaranteed to mean the same thing wherever it is used. Outside the bounded context, no such guarantees exist, and a combination of integration and business process must manage the translation. That a financial product is different from a provisionable product can now be modeled in a way that abides by Conway’s Law.

Providing well-defined bounded contexts around packages is a great use of facade services. A natural consequence of vendors evolving their package to support multiple businesses is that the feature set of the package likely extends beyond your enterprise needs. Wrapping all communication with the package behind a facade service allows you to limit and shape the package data into the bounded context that your business process defines, and provide a clean API that transforms the package-specific vernacular into the language defined by your enterprise.

Use a small subset of data for centralized entities

The way we do this technically is to shrink the set of master data that resides in the product catalog, and replicate that data to other systems, where it is augmented. This technique is sometimes associated with “swivel chair integration,” but it’s not the same. In true swivel chair integration, the same information is entered in multiple applications that do not directly integrate. With bounded contexts, master data is replicated, and is contextually augmented and transformed in dependent applications.

The key to defining a central resource entity like a product is to understand what the business team that creates new products thinks a product is. In our hypothetical example, they decide that a product defines some basic rates and common descriptive information used by all departments. That gives us our definition of a product in the product service: a product is something we can sell to a customer that has a single rate. In the legacy world, after creating a product, the product team sends out an email to other departments. This event is likely worth automating, which we can do by publishing on a queue or a change feed exposed on our service. However, we should not pretend that we can automate away the business process that this event triggers, nor should we move that business process inside the central catalog.

When finance receives it, they have to decide how to decompose the product. Let’s say it was a bundled package of TV sports channels at a discounted price, a concept that neatly fits the product team’s definition of a product. However, finance needs to make sure that each sports station within the bundle gets their royalty fees, and so they need to send the single rate to different general ledger buckets. Now we have a financial definition: a product is an association of revenue to a general ledger account. In the use case just described, we can imagine the finance application receiving the NewProduct event, and providing an interface for the user to assign portions of the revenue to ledger accounts.

Each business unit has a different model for common entities with explicit translation between their bounded contexts

When billing receives the event, they need to decide whether or not to prorate the package. Perhaps, concerned with too many customers ordering a prorated sports channel only to cancel the next day after watching the Big Game, they decide that this particular sports package requires paying for a full month up front. We start with the definition that a product is a recurring charge to a customer, and augment with a set of potentially complex configuration beyond the simple monthly rate.

Invoicing defines a product as a line on a bill. They may decide, perhaps driven by a marketing strategy, that customers who purchased two separate sports bundles should see only one line on the bill called “Super Sports Package,” with the summed amount. Again, we can imagine an application facilitating receiving the NewProduct event and allowing such combining, or we can imagine developers coding new rules upon such product introductions.

Figure 10

Figure 10: Example integration using bounded contexts

This example shows four different bounded contexts, and the NewProduct event being propagated using a notification feed, which is a common RESTful approach. When a new product is created, the product service records the new product as an event which it exposes on an endpoint. Consumers poll an endpoint that may look like /notifications?since=2013-10-01 to receive all events published since the last time they polled.
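The producer side of such a feed can be sketched with an in-memory event list standing in for the HTTP endpoint. The events, dates, and field names are illustrative assumptions.

```python
# In-memory stand-in for the event store behind /notifications (illustrative).
EVENTS = [
    {"id": 1, "occurred": "2013-09-15", "type": "NewProduct", "productCode": "TV1"},
    {"id": 2, "occurred": "2013-10-03", "type": "NewProduct", "productCode": "TV2"},
    {"id": 3, "occurred": "2013-10-20", "type": "NewProduct", "productCode": "NET1"},
]

def events_since(since: str) -> list:
    """What a GET /notifications?since=<date> would return: every event
    published after the consumer's cursor. ISO dates compare correctly
    as strings, so no date parsing is needed here."""
    return [e for e in EVENTS if e["occurred"] > since]
```

Each consumer keeps its own cursor (the date of its last poll), which is what lets the finance, billing, and invoicing contexts process the same NewProduct event on their own schedules.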

Use epics to coordinate business features

Earlier, I recommended considering service endpoints as story boundaries, with the caveat that we may lose the benefit of traditional agile stories – namely, that they are aligned with business functionality. Since teams work with different priorities and at different speeds, this problem is exacerbated in large scale SOA, with the risk that we lose the forest for the trees. Imagine a business feature to bill for the first month of a product. A true business flow may require a series of service calls prior to that point, involving customer creation, product lookup, order creation, and field technician approval. At scale, those service endpoints will be implemented by different teams.

The agile toolbox has always included epics for coordinating groups of stories along a single high-level feature. I recommend treating them as first-class citizens of program management for large scale SOA. In fact, I believe much of the ceremony we attach to user stories deserves to be at the epic level for such projects, since the epics often stand in for our business-friendly accounts of requirements.

For example, a Create Customer epic may involve the order entry and billing teams in addition to the customer management team, each of which has individual service-oriented or application-specific stories for tracking its work. In our hypothetical example where the services are used by a user interface developed by a separate team, we may not be able to showcase the fruits of our labor to the business until we’ve completed the full system flow.

Some of the care and management of epics can be lifted from story practices on smaller engagements. Getting architectural review of an epic to define cross-functional requirements, and business analysts to define acceptance requirements can help keep the whole picture in mind. Cross-team showcases should be managed at the epic level, and these may be the first showcases where the actual business user flow is shown.

The important consideration is that program-level metrics use epics as the principal unit for tracking velocity, as team user-story velocity can give a false sense of progress. The symptom to watch for is when velocity burnups show the program delivering on time, but nothing seems to work. I was on one project where this was the case based on individual story velocity tracked by the individual teams. Some teams were on time, others slightly behind schedule, but we were unable to showcase anything to the business after months of development. Simply changing the program-level burnups to show the number of epics completed instead of the number of stories completed was illuminating. Despite individual teams showing significant progress, we had completed only one epic. Worse yet, at least one story from fully two-thirds of the epics needed for release was in play at the same time. Traditional software kanban approaches attempt to limit work in progress at the story level. When we realized the scope of the problem, we were able to course correct by resequencing story development to limit the number of epics in progress at the same time.

Wrapping up

Regardless of the technology or architecture, scaling software development is tricky business. We often fool ourselves into thinking otherwise by pretending that it’s “just integration.” Eric Evans once said that, in any large system, some of it will be poorly designed. My experience – even with highly skilled teams – has led me to believe that he’s right. Our principal goal of integration, therefore, is to ensure that we insulate ourselves from another subsystem’s design.

I am an advocate for a RESTful service integration strategy. I believe REST makes for simpler development and, because RESTful messages tend to be self-descriptive, simpler testing and troubleshooting. However, it is far from the silver bullet some imagine it to be, and RESTful integration at scale requires paying attention to the lessons described above. My own experience has led me to believe that:

  • Environmental isolation matters. Paying attention to deployment automation is critical. Selecting packages that disregard good deployment practices will slow down everyone who needs to integrate with those packages.
  • Versioning prematurely adds unnecessary complexity to the system. Practices such as tolerant deserialization and endpoint-based story analysis help you delay versioning, and are useful practices even if you subsequently add semantic versioning support.
  • Using consumer tests greatly reduces the release management complexity of upgrading a set of inter-dependent services, and further helps delay versioning.
  • Attempting to let a service control all data about an entity is disastrous. Do not ignore business process and the different definitions the business has for an entity.
  • Coordinating business functionality is unlikely to happen at the user story level at scale. Epics are required to sequence business releases.
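The tolerant deserialization mentioned above can be sketched as a “tolerant reader” that extracts only the fields a consumer actually uses and ignores everything else. The payload shape and field names below are hypothetical, chosen only to illustrate the idea:

```python
def read_customer(payload):
    """Tolerant reader: pull out only the fields this consumer cares about,
    ignore any fields it does not recognize, and supply a default when the
    producer omits an expected optional field."""
    return {
        "id": payload.get("id"),
        "name": payload.get("name", ""),
    }

# A producer that later adds a loyaltyTier field does not break this consumer,
# so no new version of the endpoint is needed:
v1 = {"id": 42, "name": "Ada"}
v2 = {"id": 42, "name": "Ada", "loyaltyTier": "gold"}
assert read_customer(v1) == read_customer(v2)
```

The design choice here is asymmetry: producers may add fields freely, while consumers promise to break only when a field they depend on is removed or changes meaning, which is the case that genuinely warrants a new version.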

Hypermedia, content negotiation, and a uniform interface get all the attention in RESTful discussions, and they are valuable techniques, but making integration solutions scale requires us to step away from the mechanics of REST and look at social and organizational issues. Navigating such issues successfully does not mean that each individual service, or component, will be well designed. It does mean that you will have solid practices in place to incrementally deliver business value and ensure appropriate robustness at the integration layer, and those practices may spell the difference between a successful delivery and a failed one.



Special thanks to Martin Fowler, Damien Del Russo, Danilo Sato, Duncan Beaumont Cragg, and Jennifer Smith for early feedback on this article. Many of the ideas came from using my ThoughtWorks colleagues as sounding boards on different projects. While there are too many to name here, I would like to call out Ryan Murray, Mike Mason, and Manoj Mahalingam for their particular influence on some of the ideas presented here.


1: Conway’s Law

There are several different viewpoints on Conway’s Law, and some see it as a purely descriptive tautology. I’m using it more in line with the variation that Wikipedia associates with James Coplien and Neil Harrison. It may indeed be a descriptive law of software within an organization, but I believe that’s only because software that attempts to work against Conway’s Law is doomed to fail.

Significant Revisions

18 November 2013: added option of extending the URL space to avoid versioning, and a footnote to clarify Conway’s Law usage

12 November 2013: added section on using epics, thus completing initial publication

08 November 2013: added section against letting systems monopolize resources

31 October 2013: added section on consumer-based testing

24 October 2013: added section on versioning

21 October 2013: released first edition with logical environments section

Like a Good Scotch Whisky, Programmers Improve with Age


Older programmers have always suffered from age discrimination. But according to a study by researchers at North Carolina State University, companies should think twice before hiring a young hacker over a senior programmer.

Emerson Murphy-Hill, assistant professor of computer science at North Carolina State University and co-author of the study, says veteran programmers have more to offer companies than those companies may realize. “We know that certain things get worse, like your visual acuity,” he says. “But it’s not all bad. You get better at some things, like emotional and social intelligence.”

He says we tend to think of programming as something you only do while young: you spend your twenties working 80-hour weeks, and then you give it up to become a manager or project lead. But that may not be the best way to run a programmer’s career.

To determine whether programmers get better or worse with age, the researchers looked at the top-ranked programmers on StackOverflow, a site where programmers can ask and answer questions about programming. Users rate the answers of their fellow programmers, and the site uses these ratings to generate a reputation score for each programmer. Comparing these rankings with the programmers’ ages, the study found that ratings kept rising as programmers approached their fifties.

The study also tried to gauge each programmer’s breadth of knowledge by tracking how many different topics they had written about. It found that younger programmers answer questions in a small number of areas, and that the range widens as programmers get older.

Finally, the study looked at how many questions programmers answered about technologies less than 10 years old, and found that older programmers knew more than younger ones about new mobile platforms such as iOS and Windows Phone. For other technologies, there was no significant gap between older and younger programmers.

The researchers concluded that any prejudice against older programmers is not supported by the StackOverflow data. But the study has limits. Many StackOverflow users do not report their age, and, judging by Bureau of Labor Statistics data, it appears that only some older programmers are represented in the study.

Older programmers who use the site may be making a conscious effort to keep their technical skills current and to promote themselves. Or they may use the site because they know they are knowledgeable, while their less-knowledgeable peers stay away, which would skew the results. And, of course, StackOverflow reputation scores do not necessarily correlate with technical skill.

The paper detailing the study, Is Programming Knowledge Related To Age?, will be presented at the Working Conference on Mining Software Repositories in San Francisco on May 18. But it’s just a start. In an effort to draw better conclusions, Murphy-Hill says his team hopes to look at a wider variety of programmer populations. He says they’re also interested in finding out why younger developers contribute more to open source than older developers.