
GIS: CartoDB.js from the ground up

[Source: http://academy.cartodb.com/courses/beginners-course/]

This introduction to CartoDB.js from the ground up will take you through the common uses of the library. You’ll start out by pulling your maps from CartoDB with only a few lines of code. Later you’ll customize your maps with JavaScript, interact with the data, add custom SQL queries, integrate other features, and much more.

createVis vs. createLayer

Since this course explains CartoDB.js, it relies heavily on the JavaScript programming language. If you are not familiar with the language, check out some of the great free resources available on the web by looking at this StackExchange post. We also recommend Codecademy and Eloquent JavaScript to get started.

The CartoDB.js API provides powerful tools to build dynamic web apps. Along with CartoCSS, other JS libraries, and our SQL API, the sky’s the limit. This course, CartoDB.js from the Ground Up, will show you how to build amazing apps in a small amount of time.

In CartoDB, there are two main methods to bring your maps into custom webpages: createVis and createLayer.

The first method, createVis, allows for quick and easy maps with a large degree of customization. It gives you two map layers in an array: layer 0 is the base map; layer 1 is the CartoDB data layer.

The second method, createLayer, allows for much more customization, including the combining of layers from separate maps, each with its own levels of customization. createLayer also allows client-side control over basemaps.

Both methods allow custom CartoCSS styling, SQL queries, and overlay options (zoom controls, a search box, a share button, etc.). Before demonstrating these methods, though, we need to be introduced to their main source of information.

viz.json, nice to meet you

Up to this point, all of the methods for displaying maps to the world have involved the first two sharing options you’ve seen in the sharing panel (see below). The first, “Get the link,” creates a shortened URL that points to a map in your account on CartoDB’s website. The second, “Embed it,” gives you an iframe that you can drop into your custom web page. The third option, “CartoDB.js,” will be our jumping off point for this course because you’ll easily be able to see how the API’s methods line up with the data hierarchy of your map’s metadata.

Share panel

A viz.json is a file that contains all the data needed to reproduce the visualization you created in CartoDB. An analogy one can make is that CartoDB.js is like a DVD player, the viz.json is like the DVD disc, and CartoDB represents all the parts needed to create a film (cameras, actors, director, producers, etc.).

Download the viz.json used in this lesson here. You can download a viz.json from any visualization you’ve created and inspect it with your favorite text editor, or view it in your browser if you have a JSON viewer. For this lesson, we will be using the viz.json for a multi-layer map similar to the one created at the end of Course 1. If you’re unfamiliar with the JSON file format, check out the official site or Wikipedia for a lot more information.
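As a quick taste of the format, JSON text maps directly onto JavaScript objects. A minimal sketch (the fragment below is made up and heavily trimmed, not a real viz.json):

```javascript
// A made-up, heavily trimmed viz.json-style fragment; real files hold far more.
var raw = '{"title": "my_map", "zoom": 7, "layers": []}';

// JSON.parse turns the text into a plain JavaScript object.
var viz = JSON.parse(raw);

viz.title;         // "my_map"
viz.zoom;          // 7
viz.layers.length; // 0
```

This is why inspecting a viz.json in a text editor or JSON viewer feels so natural from JavaScript: what you read in the file is what the parsed object looks like.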

There’s a lot of metadata in this file. Browsing through all the possibilities shows you how much power you have to customize your maps in the CartoDB Editor. Review the documentation for CartoDB Editor to explore what some of these JSON entries allow you to do in your maps.

Screenshot of viz.json

Looking at your viz.json, find the top-most level called layers. You can see that it’s an array of two objects. The first object’s options have type “Tiled” and a name of “CartoDB Flat Blue.” This layer, layers[0], corresponds to the base layer map of our visualization. If you try changing the base map in CartoDB Editor and reload the viz.json, you will see the information in this layer change accordingly. Make note of other properties included in this options object as they will come up again later.

The next object down, layers[1], contains information about the data that was loaded into the map and visualized. The first entry, type, tells you that this is a group of layers. Under options, you can see some of the information that’s used by the CartoDB.js API to retrieve information from the servers. In contrast to layers[0], the majority of this second object in the layers array is taken up by layer_definition. In our case, we have two sublayers in layers[1] because there are two objects in the layers array that’s under layer_definition. In future lessons, we will retrieve these layers by calling

sublayer1 = layers[1].getSubLayer(0);
sublayer2 = layers[1].getSubLayer(1);
...

Looking back at our viz.json, we can see that the zeroth sublayer, buried under options, has a layer_name of “us_counties” and comes from the us_counties dataset from the Beginner’s Course. The next comes from another familiar dataset on tornados in the United States. Other important info to pick out:

  • sql: the SQL statement applied to each dataset (defaults to select * from dataset)
  • visible: whether the layer is displayed (defaults to true)
  • cartocss: the styles applied to your map
  • interactivity: the columns that are click/hover enabled

sql: '...'
visible: true
cartocss: '...'
interactivity: 'column1, column2'
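To make the hierarchy concrete, here is a sketch that walks a viz.json-shaped object and collects each sublayer's options. The object below is hypothetical and heavily trimmed; a real viz.json holds far more metadata:

```javascript
// Hypothetical, heavily trimmed stand-in for a parsed viz.json; a real file
// holds far more metadata than this.
var viz = {
  layers: [
    { options: { type: 'Tiled', name: 'CartoDB Flat Blue' } }, // base layer
    {
      type: 'layergroup',
      options: {
        layer_definition: {
          layers: [
            { options: { layer_name: 'us_counties', sql: 'select * from us_counties' } },
            { options: { layer_name: 'tornados', sql: 'select * from tornados' } }
          ]
        }
      }
    }
  ]
};

// Collect the name and SQL statement of every sublayer under layers[1].
var sublayers = viz.layers[1].options.layer_definition.layers.map(function (sub) {
  return { name: sub.options.layer_name, sql: sub.options.sql };
});

sublayers.length;  // 2
sublayers[0].name; // "us_counties"
```

The two objects under layer_definition are exactly what getSubLayer(0) and getSubLayer(1) hand back once the map is loaded.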

In summary, the viz.json is CartoDB.js’s conduit to the data, queries, basemaps, styles, etc. that you set when you created a visualization from the data you uploaded to your CartoDB account. Now that we’ve thoroughly met our viz.json, let’s look at the two most important JavaScript methods that interact with it.

Check out the documentation for viz.json here.

CreateVis

The most basic way to display your map from CartoDB.js involves a call to

cartodb.createVis(div_id, viz_json_url)

Couched between the <script> ... </script> tags, createVis puts a map and CartoDB data layers into the DOM element you specify. In the snippet below we assume that <div id='map'></div> is placed earlier in the HTML file.

window.onload = function() {
  var vizjson = 'link from share panel';
  cartodb.createVis('map', vizjson);
}

And that’s it! All you need is that snippet of code, a script block that sources CartoDB.js, and inclusion of the CartoDB.js CSS file. It’s really one of the easiest ways to create a custom map on your webpage.

createVis also accepts options that you specify outside of the CartoDB Editor. They take the form of a JS object and can be passed as a third optional argument.

var options = {
  center: [40.4000, -3.6833], // Madrid
  zoom: 7,
  scrollwheel: true
};

cartodb.createVis('map', vizjson, options);

To see createVis out in the wild, check out an awesome example in our Map of the Week series on our blog.

Documentation for cartodb.createVis.

CreateLayer

If you want to exercise more control over the layers and base map, createLayer may be the best option for you. You specify the base map yourself and load the layers from one or multiple viz.json files. Unlike createVis, createLayer needs a map object, such as one created by Google Maps or Leaflet. This difference allows more control over the basemap from the JavaScript/HTML you’re writing.

A basic Leaflet map without your data can be created as follows:

window.onload = function() {
  // Choose center and zoom level
  var options = {
    center: [41.8369, -87.6847], // Chicago
    zoom: 7
  }

  // Instantiate map on specified DOM element
  var map_object = new L.Map(dom_id, options);

  // Add a basemap to the map object just created
  L.tileLayer('http://tile.stamen.com/toner/{z}/{x}/{y}.png', {
    attribution: 'Stamen'
  }).addTo(map_object);
}

Here we pulled the base map tiles from Stamen. There are many other basemap options; learn more about them in this great tutorial.

The map we just created doesn’t have any CartoDB data layers yet. If you’re just adding a single layer, you can put your data on top of the basemap from above. If you want to add more, you just repeat the process. We’ll be doing much more with this later.

This is the basic snippet to put your data on top of the map you just created. Drop it in below the L.tileLayer section.

var vizjson = 'link from share panel';
cartodb.createLayer(map_object, vizjson).addTo(map_object);
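The chaining above works because createLayer returns an object whose addTo attaches the resulting layer to the given map. A toy stub can illustrate the pattern; note these are made-up stand-ins that only mimic the API shape, not the real library, which fetches the viz.json over the network and is asynchronous:

```javascript
// Toy stand-ins to illustrate the createLayer(...).addTo(...) call pattern.
// The real cartodb.createLayer talks to CartoDB's servers; this does not.
function FakeMap() { this.layers = []; }

function fakeCreateLayer(map, vizjsonUrl) {
  var layer = { url: vizjsonUrl };
  return {
    addTo: function (targetMap) {
      targetMap.layers.push(layer); // attach the data layer to the map
      return layer;
    }
  };
}

var map_object = new FakeMap();
fakeCreateLayer(map_object, 'https://example.com/viz.json').addTo(map_object);

map_object.layers.length; // 1
```

Repeating the call with a second viz.json URL is how you would stack layers from separate maps onto one basemap.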

Check out this Map of the Week entry to see createLayer at work.

The documentation for createLayer.

Summing it up. And finally making something!

Now that we’re done with our crash course on the basics, let’s finally dive into making our first map with CartoDB.js.

Use this template, the URL for the viz.json linked above, and the code snippets for createVis or createLayer to make your first map using CartoDB.js. There are a couple of new things to notice about the template. Besides the normal HTML skeleton, the template includes the CartoDB.js library

<script src="http://libs.cartocdn.com/cartodb.js/v3/3.15/cartodb.js"></script>

between the <body> tags AND the map styling sheet

<link rel="stylesheet" href="http://libs.cartocdn.com/cartodb.js/v3/3.15/themes/css/cartodb.css" />

between the <head> tags. You need them both to get your maps going.

After you get it working, swap out the viz.json we provided with some of the viz.jsons from your own visualizations. Try putting in the createVis examples introduced before. Check out stellar examples in the Map Gallery, look at some of the examples in the official CartoDB.js repository, and hack away! If you prefer JS Fiddle, run the demo here.

Example of simple map created with CartoDB.js

By the way, CartoDB.js is open source. Fork it and contribute.

GIS: What is CartoDB

[Source: https://cartodb.com/attributions]

It’s a location intelligence tool for creating amazing visualizations of geospatial data in the cloud. Upload your data, create your visualizations without a line of code, and share them with the world.

  • Editor

    CartoDB is the web’s easiest tool to create, share, and publish your interactive maps. Use our powerful in-browser Editor to transform your data into beautiful visualizations.

    Learn more

  •  Platform

    CartoDB hosts a multitude of high performance APIs to help leverage location intelligence and transform your data into actionable deep insights and develop masterful visualizations.

    Learn more

  • Extras

    Build tools using CartoDB: layering maps with interactive functionality, and leveraging location-based data to clarify your message and amplify the global impact of your maps.

    Learn more

    Data Attribution

In CartoDB you can find two types of layers: basemaps and user data layers. The basemap is the layer you see in the background of the map, and the user data layer is your data displayed on top of it. The data behind these two layers comes from different places.

    • Basemaps

      We use a variety of different basemaps by default. We recommend you read their attribution information and respect their licenses when using them.

    • Stamen Tiles

      Stamen produces Toner, a tile set created using OpenStreetMap. Map tiles by Stamen Design, under CC BY 3.0. Data by OpenStreetMap, under CC BY SA.

    •  User tileset

      In the CartoDB interface, users can bring in external tile sets. Attribution for those tile sets should be stated by the user on the interface where the map is published, or by using the JS library to do so.

    • User data

      On CartoDB, most data is uploaded by the users. When users upload their data, they acknowledge they have the right to do so and should provide attribution information together with the map, or as complementary text visible alongside the map.

      Technology

      CartoDB is an Open Source project (under a BSD license) and it makes use of many different Open Source projects. It is very complicated to describe every single library/project used on the whole stack, so do not consider this a comprehensive list and please check the GitHub page for more information.

    Geocoding

Icons

In CartoDB you can use several icon sets to style markers or patterns. Those icons or images come from different places:

  • Pin of maps

    A set of map icons created by Freepik.

  • Maki Icons

    An Open Source icon set from Mapbox.

  • SimpleCircle Places

    A set of map icons created by SimpleIcon.

    CARTODB EDITOR

    The Power of Location Data at Your Fingertips

    INTRODUCTION

    The CartoDB Editor

    The CartoDB Editor is a self-service mapping and analysis tool that combines an intuitive interface with powerful discovery features. Mix and combine your datasets to get fresh insights into your visualizations. You don’t need to be an expert to start mapping your data today. Point and click interfaces let you do everything from design, to analysis, to publishing APIs.

    Create rich, dynamic maps in moments

    Whether you’re starting from a spreadsheet, connecting your favorite business software, or drawing from vast sensor networks, CartoDB brings your location data to life. CartoDB looks through your data and suggests map types that highlight key trends. Our intuitive tools mean you can make your visualizations as simple – or sophisticated – as you like.

    Learn more

    UNCOVER INSIGHTS

    Insights at your fingertips

    The CartoDB Editor helps you ask ‘where’ and ‘why.’ Filter, cluster, and explore location-based trends. Test your hunches and gain new perspective by incorporating our public, market-specific, and specialty data. Do advanced analysis on the fly, and see the results in real time.

    Discover location insights across industries

    Polish and Publish

    From the boardroom to major media outlets, CartoDB maps tell data stories with clarity and power. Share your insights securely with your team, or broadcast them to the world.

    • Seamless Embedding

      Embed your map in your article or blog, Reddit, WordPress or nearly anywhere. Every CartoDB plan includes unlimited map views — just watch it go viral.

    •  Pixel-Perfect Styling

      Adjust the style, annotate, and animate your maps to convey just the right message. Play around with style variations to get just the right visual.

    • Public and Private Sharing

      Publish and share your data visualizations safely and securely. Share your data-driven insights with just your colleagues or the world.

    FEATURES

    The CartoDB Advantage

    •  Born on the Web

      Never any software to install, so you can access the latest features any time, anywhere.

    •  Easy to Learn

      Great tutorials and documentation, and a gallery of amazing example visualizations.

    • Sync Your Data

      CartoDB connects to the places where your data already lives, so your analysis is always up-to-date.

    •  Amazing Support

      Our customer success experts are here to help. Receive the care that a cartographer needs.

    • Secure

      Our editor is built with leading, secure cloud-based providers. Your Location Intelligence stays your business – safe and sound.

    •  Performance

      We’re designed to answer big questions. Even with millions of data points and a huge audience, CartoDB stays speedy.

    •  Powerful APIs

      The perfect engine for building location-intelligent applications. Get more out of your data with our APIs or build your own.

    •  Enterprise-Ready

      Smart, flexible controls to share and manage datasets, connections and visualizations.

    INTEGRATION

    Plays well with others

    CartoDB supports the formats and tools you use every day.

    File formats

    • CSV
    • GeoJSON
    • SQL
    • KML
    • XLS
    • ESRI Shapefile
    • GeoTIFF
    • GPX
    • GTFS


CASE STUDY

Location intelligence for your industry

Organizations from around the world use CartoDB to leverage location intelligence for actionable insights, from behavior analysis to data-driven Location Intelligence visualizations.

Tricks for using your Mac like a professional developer

Improving your use of the Terminal

[Source: http://carlosazaustre.es/blog/configura-tu-mac-como-un-desarrollador-profesionalque-2/#at_pco=smlwn-1.0&at_si=563b05fb4c123c9e&at_ab=per-13&at_pos=0&at_tot=1]

Do you want a workflow that lets you stay agile, save time, and be more productive when developing your projects? Below is a series of applications and settings that will make the working day on your Mac more pleasant. Today we'll talk about the terminal.

Installing iTerm2 and Oh My ZSH worked well for me.

Running a web server quickly

Navigate to the folder containing your HTML files and run:

python -m SimpleHTTPServer 8000

(This is the Python 2 module; on Python 3 the equivalent is python -m http.server 8000.)

VDSL vs ADSL

[Source: http://blogthinkbig.com/en-que-se-diferencia-el-vdsl-del-adsl/]

Did you know that VDSL supports digital TV broadcasting, VoD and HDTV, and can also carry high-quality video-conference calls?

ADSL is the most widespread and generalized data-transmission technology in our country. Despite the many options available when signing up, you have surely considered at some point contracting VDSL directly, or swapping your ADSL, ADSL2 or ADSL2+ for this more advanced Internet-access technology. To clear up any doubts about what Very High bit-rate Digital Subscriber Line is, let's explain what it consists of and what the differences between VDSL and ADSL are.

In principle, it is a broadband Internet-access technology belonging to the xDSL family. It can be supplied either symmetrically (26 Mbps both upstream and downstream) or asymmetrically (52 Mbps downstream and 16 Mbps upstream) under ideal conditions, that is, at zero distance from the exchange and with no resistance in the copper pairs.

The VDSL standard uses up to four different channels or frequency bands for data transmission, two for upstream (from the customer to the provider) and two for downstream, significantly increasing transmission capacity and speed compared with ADSL, ADSL2 and ADSL2+. And although the most common modulation technique in this case is DMT (Discrete MultiTone modulation), QAM/CAP (Carrierless Amplitude/Phase) can also be used, both with similar performance.

That said, bear in mind that the actual transmission speed depends on many factors, such as the condition of the line and the distance between the user and the nearest telephone exchange. Nevertheless, the evolution from VDSL to VDSL2 promises even higher speeds, reaching up to 100 Mbps downstream.

From ADSL to VDSL

The main difference lies in the number of channels available for high-speed data transmission: ADSL has only two, compared with VDSL's four. ADSL uses one upstream channel (user to network) and one downstream channel (network to user), with a transfer rate of 8 Mbps downstream and 1 Mbps upstream.
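To put those figures in perspective, here is a back-of-the-envelope calculation (illustrative only: it assumes ideal line rates and a hypothetical 700 MB file):

```javascript
// Seconds needed to transfer a file at a given line rate.
// fileMB is in megabytes (8 bits per byte), lineMbps in megabits per second.
function transferSeconds(fileMB, lineMbps) {
  return (fileMB * 8) / lineMbps;
}

transferSeconds(700, 8);  // ADSL at 8 Mbps: 700 seconds (~11.7 minutes)
transferSeconds(700, 52); // VDSL at 52 Mbps: ~108 seconds
```

Real-world rates degrade with distance from the exchange, so actual times will be longer on both technologies.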

ADSL2 improves the quality of the ADSL service with a noticeably higher transfer rate of 12 Mbps downstream and 2 Mbps upstream, by addressing problems of line power and signal disturbance. To achieve this, it introduces substantial improvements such as a more efficient modulator/encoder, as well as algorithms for signal processing.

Differences between VDSL and ADSL

The logical evolution of ADSL and ADSL2 materializes in ADSL2+. Its main difference from its predecessors lies in the copper pairs' ability to support twice the spectrum, providing greater bandwidth. This improves the service with a maximum speed of 24 Mbps, as long as the user is no more than 5 km from the exchange. Bear in mind that, according to the experts, to reach speeds close to the maximum, the nearest exchange should be no more than 1 to 1.5 km from the user.

As for VDSL, besides transmitting video and other kinds of traffic at 5 to 10 times the speed of ADSL, it can support the broadcasting of digital TV, VoD and HDTV over the standard copper pair, together with Internet traffic and ordinary voice calls. It also meets the demands of business and office environments with much faster data access and the ability to make high-quality video-conference calls.

Images | via pixabay and wikipedia

CSS: Dive into OOCSS principles

[Fuente: http://www.nicoespeon.com/en/2013/05/dive-into-oocss/]

OOCSS – what it is, what it does, what it isn’t.

I can haz OOCSS?

Consider a new web project: a start-up dreams of developing THE indispensable web application of tomorrow’s web. They wisely choose the technologies, frameworks and environments they’ll use… However, the front-end rendering will still be the same for the browser: HTML, CSS and JavaScript.

But here’s the thing: even if the whole application will probably take advantage of some frameworks, have a look at the stylesheet and you’ll probably find something like this, with or without a preprocessor:

#form-generator {
    width: 760px;
    margin: auto;
}

#form-generator label {
    font-weight: bold;
    font-size: 12px;
    /* ... */
}

#form-generator label.title {
    font-weight: normal;
    font-size: 11px;
    /* ... */
}

#form-generator textarea {
  width: 350px;
}

#form-generator .big textarea {
  width: 500px;
}

/* a bunch of CSS */

This kind of code will cause a lot of hassle over time:

  • Lack of flexibility: it can only apply to elements contained in the form with the #form-generator identifier. That may sound relevant at first, but what if you want to build other pages with similar forms in them?
  • Too much specificity: you have to over-specify selectors again and again when you want to create particular cases. Just look at label.title or .big textarea for instance.
  • Too complicated: you need to be an expert and know the architecture of every single page to dissect and maintain all of this mess. Plus, it will probably grow as the project develops.

If you add some messy organisation of the CSS files, you get spaghetti code that leads to a lot of trouble, errors and code duplication as new developers enter the ring over time.

This extreme dependency on the HTML structure makes the code especially fragile: even if it’s clean, a simple mistake by a non-expert can ruin it completely.

In short, these immature practices will make things harder for the start-up and promise long sleepless nights of debugging and refactoring.

Finally, the problem here comes down to the 1:1 relation between the CSS and the number of blocks, pages and modules of the website. As the site grows, the stylesheet grows as well. That implies development time, and thus money… We could do better in terms of ROI.

Your CSS is a MESS
Freely inspired by Jonathan Snook =)

The Object Oriented CSS (OOCSS) approach brings its own solution to these problems. It’s neither a new preprocessor nor a new language, but a code philosophy: a set of best practices, rules and advice to help your CSS become scalable.

OOCSS has 2 principles:

  1. Separate structure from skin
  2. Separate container from content

1. Separate structure from skin

Usually, elements of a website have a visual aspect that is repeated in different contexts: colors, fonts, borders… the graphic charter as a whole. This is the skin.

At the same time, a bunch of “non-visible” properties are repeated: width/height, overflow, etc. This is the structure.

If you separate them, you create re-usable components that can be shared among elements which have the same properties. And so we speak about objects, which may sound familiar to back-end developers, by the way.

2. Separate container from content

The deal here is not to stupidly constrain the style of an element (the content) to its context (its container).

If you are designing titles, why limit yourself to the ones inside the <header>, just because that is the only place titles appear for now? Plus, it’s very likely that all your titles have a consistent design across your whole website for ergonomic reasons.

It’s even more true for modules.

Why is OOCSS good?

The 10 best practices

Here are the 10 best practices that are part of the OOCSS spirit:

  1. Create a library per component
    Each component – button, table, link, image, clearfix, etc. – should be a piece of Lego: combinable and re-usable as you wish.
  2. Use semantic and consistent styles
    The style of a new HTML element should be predictable.
  3. Design transparent modules
    The module is the container that can be used with any content.
  4. Be flexible
    Heights and widths should be extensible and adapt themselves (responsive web design inside).
  5. Learn to love grids
    Grids allow you to control width. Height is generally defined by the content.
  6. Minimize selectors
    Keep a low specificity [0-0-1-0] to have better control over selectors.
  7. Separate structure from skin
    You must make the distinction, it’s a fundamental principle of OOCSS. Create abstract objects for block structure and use classes to dress these blocks, regardless of their nature.
  8. Separate container from content
    You must make the distinction, it’s a fundamental principle of OOCSS. Create a 1:n relationship by separating the container from its content.
  9. Extend objects with multiple classes
    Classes/objects are just like Legos you put together to build the expected result.
  10. Use resets and YUI fonts
    This is a specific choice of the OOCSS framework.
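The [0-0-1-0] notation in practice 6 counts, from left to right: inline styles, IDs, classes/attributes/pseudo-classes, and element names. As a rough sketch (illustrative only; it handles simple selectors, not the full CSS grammar), specificity can be computed like this:

```javascript
// Rough CSS specificity calculator returning [inline, ids, classes, elements].
// Illustrative only: covers simple selectors, not the entire CSS grammar.
function specificity(selector) {
  var ids = (selector.match(/#[\w-]+/g) || []).length;
  // Classes, attribute selectors and pseudo-classes all count at the same level.
  var classes = (selector.match(/\.[\w-]+|\[[^\]]*\]|:(?!:)[\w-]+(\([^)]*\))?/g) || []).length;
  // Strip everything counted above, then count the remaining element names.
  var cleaned = selector.replace(/#[\w-]+|\.[\w-]+|\[[^\]]*\]|::?[\w-]+(\([^)]*\))?/g, ' ');
  var elements = (cleaned.match(/[a-zA-Z][\w-]*/g) || []).length;
  return [0, ids, classes, elements];
}

specificity('.my-class');                            // [0, 0, 1, 0]
specificity('#form-generator label.title');          // [0, 1, 1, 1]
specificity('article > p:nth-child(2) > span.plop'); // [0, 0, 2, 3]
```

A single class stays at [0-0-1-0], while the ID-based selectors from the start of the article jump a whole level, which is why they are so hard to override.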

Legos first – A best practice is to design individual pages after having created basic components for the whole website.

The 9 pitfalls to avoid

Here are the 9 pitfalls to ban, according to the OOCSS spirit:

  1. Create styles that depend on context
    There is nothing less flexible than an article > p:nth-child(2) > span.plop selector.
  2. Overspecify selectors
    div.my-class is uselessly over-specific, as .my-class is sufficient. However, it becomes relevant if you want to override a class’s styling for some specific elements (strong.error will override the default rules of .error for this special use).
  3. Use IDs
    What an ID can do, a class can do better. Furthermore, IDs contribute to creating an unexpected specificity mess.
  4. Use shadows and border-radius on irregular backgrounds
    It can have unexpected results.
  5. Create a sprite containing every image, unless you only have a few pages
    It’s not optimal in certain cases, as you need to handle image rendering exclusively in CSS.
  6. Precisely adjust the height
    An element’s height is controlled by its content. Separating the container from the content will make your life easier.
  7. Use images as text
    You can reach a better accessibility level with real text.
  8. Be redundant
    Two components that look too similar to be differentiated on the same page are too similar to both be used on the website: choose one!
  9. Do early optimisation
    Developers tend to waste time on optimisation when it’s not critical, rather than focusing on what is essential to move forward.

OOCSS benefits

As the project goes on, you will be able to create whole new pages by combining existing elements without adding any CSS, even for a completely new architecture.

Re-using elements is also a totally free performance gain! You’ll create new elements with 0 lines of CSS code; what could be better?

OOCSS forces you to think of the website as a whole instead of focusing on single pages placed one beside the other. You have to anticipate the future.

Finally, the main advantage of this philosophy, IMHO, is that it makes your CSS modular and easier to maintain. This modularity makes your CSS robust: a new developer will be less likely to break the design when working on it.

OOCSS, how does that work?

Gimme concrete examples!

Remember the 2 main principles:

  1. Separate structure from skin
  2. Separate container from content

With these good practices in mind, let’s see how to apply the philosophy on a daily code.

1. Separate structure from skin

Let’s take the favorite example of blog posts everywhere: buttons. A “classic” CSS would look like this, I guess:

#button {
    display: inline-block;
    padding: 4px 12px;
    margin-bottom: 0;

    font-size: 14px;
    line-height: 20px;
    color: #333333;
    text-align: center;

    vertical-align: middle;
    cursor: pointer;

    background-color: #d5d5d5;
}

#button-primary {
    display: inline-block;
    padding: 4px 12px;
    margin-bottom: 0;

    font-size: 14px;
    line-height: 20px;
    color: #ffffff;
    text-align: center;
    text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25);

    vertical-align: middle;
    cursor: pointer;

    background-color: #006dcc;
}

#button-large {
    display: inline-block;
    padding: 8px 16px;
    margin-bottom: 0;

    font-size: 18px;
    line-height: 28px;
    color: #333333;
    text-align: center;

    vertical-align: middle;
    cursor: pointer;

    background-color: #d5d5d5;
}

#button-activation {
    display: inline-block;
    padding: 8px 16px;
    margin-bottom: 0;

    font-size: 18px;
    line-height: 28px;
    color: #ffffff;
    text-align: center;
    text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25);

    vertical-align: middle;
    cursor: pointer;

    background-color: #006dcc;
}
<!-- The corresponding HTML -->
<a href="#" id="button">Default</a>
<a href="#" id="button-primary">Primary</a>
<a href="#" id="button-large">Large</a>
<a href="#" id="button-activation">Large Primary</a>

In the end, you end up with a lot of redundancy and a #button-activation which is nothing but a large version of #button-primary.

If you refactor this CSS with OOCSS principles, you’ll get something more flexible:

.button {
    display: inline-block;
    padding: 4px 12px;
    margin-bottom: 0;

    font-size: 14px;
    line-height: 20px;
    color: #333333;
    text-align: center;

    vertical-align: middle;
    cursor: pointer;

    background-color: #d5d5d5;
}

.button-primary {
    color: #ffffff;
    text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25);

    background-color: #006dcc;
}

.button-large {
    padding: 8px 16px;

    font-size: 18px;
    line-height: 28px;
}
<!-- The corresponding HTML -->
<a href="#" class="button">Default</a>
<a href="#" class="button button-primary">Primary</a>
<a href="#" class="button button-large">Large</a>
<a href="#" class="button button-large button-primary">Large Primary</a>

.button defines the generic properties (the “default” ones) for our buttons. If you override it, you come up with new classes that you can combine to get the expected result, just like Legos!
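To show how these Lego classes combine, here is a tiny hypothetical helper (the function name is made up, not part of any library) that assembles a button's class attribute:

```javascript
// Hypothetical helper: build a button's class attribute from optional modifiers.
function buttonClasses() {
  var classes = ['button']; // the base object carries the shared structure
  for (var i = 0; i < arguments.length; i++) {
    classes.push('button-' + arguments[i]); // each modifier extends the base
  }
  return classes.join(' ');
}

buttonClasses();                   // "button"
buttonClasses('large', 'primary'); // "button button-large button-primary"
```

Each modifier only carries its own diff against .button, which is exactly why the refactored CSS above is so much shorter.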

Here was the trick.

2. Separate container from content

Sometimes, when developing the main design, my code ended up looking like this:

header h1 {
    font-family: 'Roboto', Helvetica, sans-serif;
    font-size: 2em;

    color: #F44;
}

/* a bunch of CSS */

footer h1 {
    font-family: 'Roboto', Helvetica, sans-serif;
    font-size: 1.5em;

    color: #F44;
    opacity: 0.5;
    filter: alpha(opacity = 50);
}
<!-- The corresponding HTML -->
<header>
    <h1>Header Title</h1>
</header>
<footer>
    <h1>Small title in the footer</h1>
</footer>

Here again, it’s a fail: I am duplicating code, which is useless and I didn’t even notice that!

The elements I defined here are simply not re-usable. They directly rely on a specific container. But it’s clear enough that some properties are not specific to this container.

So I should have considered the following alternative:

h1 {
    font-family: 'Roboto', Helvetica, sans-serif;

    color: #F44;
}

/* ... */

h1, .h1-size { font-size: 2em;   }
h2, .h2-size { font-size: 1.8em; }
h3, .h3-size { font-size: 1.5em; }

/* ... */

.muted {
    opacity: 0.5;
    filter: alpha(opacity = 50);
}
<!-- The corresponding HTML -->
<header>
    <h1>Header Title</h1>
</header>
<footer>
    <h1 class="h3-size muted">Small title in the footer</h1>
</footer>

You may think, “Why should I care? We don’t really reduce the amount of code here!” And sure, that’s true… if you only focus on the title.

But OOCSS makes you think further about the future of your website. Here we haven't just improved the titles' flexibility; we've also gained a few new things:

  • Each heading level has a standard, consistent font size. Each of these font sizes is also exposed as an independent class that can be re-used, whatever the context.

    This way, you'll notice that I no longer choose the heading level based on the default size it renders on screen: that's better for semantics, accessibility and SEO \o/

  • I created a .muted class that lets me tone down the visibility/opacity of an element. I suspect this little utility will be useful later, and I'm sure I won't have to duplicate that kind of code all over my CSS.

The media object

The most famous example illustrating OOCSS is the media object created by Nicole Sullivan. It has saved hundreds of lines of code.

The Media Object

It's simply an abstraction representing a media element – a picture or a video – placed next to a media body – typically text – either to its left or to its right.

It's a typical pattern that is repeated everywhere. Think about how Facebook displays posts.

Here is basically the module content:

.media, .media-body { overflow:hidden; }
.media-img          { float:left; margin-right:20px; }
.media-img-rev      { float:right; margin-left:20px; }
.media-img img      { display:block; }
<!-- Base structure for the media object -->
<div class="media">
    <a href="#" class="media-img">
        <img src="#" alt="#">
    </a>
    <div class="media-body">
        <p>
            Lorem ipsum dolor sit amet, consectetur adipisicing elit.
        </p>
    </div>
</div>
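For the mirrored layout, the same structure works with the reversed image class (a sketch reusing only the classes defined above):

```html
<!-- Media object with the image floated to the right -->
<div class="media">
    <a href="#" class="media-img-rev">
        <img src="#" alt="#">
    </a>
    <div class="media-body">
        <p>Lorem ipsum dolor sit amet.</p>
    </div>
</div>
```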

These few re-usable lines of CSS will save a lot of time and help to improve performance over a whole project.

You’ll note that Bootstrap has implemented it, just like inuit.css, which is not that surprising when you consider these two frameworks use OOCSS concepts.

Harry Roberts recently proposed the flag object, which more or less combines the media object with vertical centering.

Implement the OOCSS in your project

First of all, you need to understand and adhere to the philosophy. There are no absolute rules, and best practices change as the web develops and evolves. Plus, you should always consider the context of your project and define what is good for you.

That said, we can sum up an OOCSS mindset with some concrete practices:

  • Don’t use IDs for CSS
  • Don’t over-specify your selectors (.error, not p.error), except for overriding purposes
  • Avoid the use of !important
  • Identify the components of your project and turn them into modules
  • If relevant, adopt a CSS framework built on OOCSS principles. I’d advise you to use Bootstrap if you need a predefined design; inuit.css if you just need the structure and want to handle the design yourself. Note that the oocss framework from Nicole Sullivan exists as a reference.
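As a tiny before/after sketch of the over-specification point (the selector names are illustrative):

```css
/* Over-specified: tied to <p>, harder to re-use on other elements
   and needlessly hard to override */
p.error { color: #b94a48; }

/* Preferred: any element can carry the class */
.error { color: #b94a48; }
```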

A few comments

Good remarks and poor ideas

Needless to say, this philosophy is a non-absolute vision of CSS that is open to debate.

However, we can point out the following things:

  • It’s perfectly fine to use IDs in HTML for JavaScript purposes, for instance. Moreover, identifiers can be useful as namespaces for a particular module that you want to share without breaking anything.
  • One of the arguments against OOCSS is that you are “polluting” the HTML with a bunch of classes, which some HTML evangelists don’t appreciate, and that’s understandable. However, OOCSS doesn’t affect the HTML’s semantics.
  • OOCSS principles can work miracles on big projects, where they are worth implementing. For small ones, it’s up to you! Still, the philosophy is relevant regardless of your project’s size, and once you get used to it, it doesn’t take much longer than ordinary development. Better yet, modules you’ve already created for another project can be re-used in a new one; you just have to tweak their design a bit and there you go!

Even though OOCSS is now a widespread vision thanks to the benefits it brings, I must say I was suspicious when I first read the assertion: “avoid the use of IDs in CSS, ever”.
But since I came to understand the benefits of this practice, I warmly recommend it.

That said, at the risk of getting smacked for my code organisation, I now use IDs in my CSS in a single place: the layout.

It’s just a matter of adaptation to your project needs and philosophy.

Another philosophy

Another vision of CSS gives importance to the context of elements.

It's another, interface-oriented philosophy, which considers that CSS exists to style the existing HTML interface. Polluting the HTML with classes is nonsense when selectors and good code organisation can handle the design.

Even though I don't adhere to this practice (at least not for a project that is meant to evolve), it's an interesting philosophy that can be relevant for small projects, for instance.

To go further

Here is a list of links you may want to look at to go deeper into OOCSS:

Conclusion

Our little dive into OOCSS is now over.


Plop!

What is Git Flow

[Source: http://aprendegit.com/que-es-git-flow/]

What is git-flow?

If you want to follow this series, you'll need a machine with git installed:

  • Windows: msysgit, which you can download from this link
  • Mac: via homebrew or macports
  • Linux: via your distribution's package manager

Workflows

A few days ago I took part in the Software Quality Open Space held in Madrid this February. The meeting covered several topics, ranging from questions such as: what is software quality? how much do functional tests cost? or how do you test mobile development? through others as exotic as the Dread Pirate Roberts, even going so far as to propose wiping quality managers off the face of the earth.

In almost every conversation I had the chance to join, there was a common denominator: branches. People talked about branches for urgent hot-fixes, branches for developing new versions separate from the master branches holding the production version. Branches for trying out new versions, branches and repositories for working with external providers, branches for pre-production testing, branches so that QA departments can run their tests before releasing new versions. With git we can churn out branches like hot cakes, and that weekend I had the chance to share with several colleagues how to use branches for good. However, this ease of branching can also be used to do evil and sow terror. More than once I have seen branches created without any criteria, without any underlying flow of information to support them. That situation usually drives the repository into utter chaos.

To avoid ending up in chaos, we must establish some "rules of the game" that the whole team must respect. Although broadly speaking almost all projects can start from a common set of base rules, the rules must be flexible enough to adapt to changes that may arise on the game board; after all, the needs and particularities of every team, company or project are not the same.

So what are these common base rules? In January 2010 Vincent Driessen published an article on his blog sharing a workflow that was working for him: "A successful Git branching model". As he explains in the article (I strongly recommend you read it), Vincent proposes a series of "rules" for organizing the team's work.

The master and develop branches

Work is organized into two main branches:

  • master branch: any commit we put on this branch must be ready to go into production
  • develop branch: the branch holding the code that will make up the next planned version of the project

Every time code is merged into master, we have a new release.

In addition to these two branches, the following auxiliary branches are proposed:

  • Feature
  • Release
  • Hotfix

Each branch type has its own rules, which we summarize below.

Feature or topic branches


  • They branch off from develop.
  • They are always merged back into develop.
  • Naming: anything that isn't master, develop, hotfix-* or release-*

These branches are used to develop new features of the application which, once finished, are merged into the develop branch.

Release branches

  • They branch off from develop
  • They are merged into master and develop
  • Naming: release-*

These branches are used to prepare the next code to go into production. In these branches the final adjustments are made and the last bugs are fixed before moving the code to production by merging it into the master branch.

Hotfix branches

  • They branch off from the master branch
  • They are merged into master and develop
  • Naming: hotfix-*

(source: http://nvie.com/posts/a-successful-git-branching-model/)

These branches are used to fix errors and bugs in production code. They work much like release branches, the main difference being that hotfixes are not planned.

What is git-flow?

If we want to implement this workflow, every time we want to do something in the code we'll have to create the corresponding branch, work on the code, merge the code where it belongs, and close the branch. Over a working day we'll need to run git merge, push and pull several times, as well as check out different branches, delete them, and so on. git-flow is a set of extensions that save us quite a bit of work when running all these commands, simplifying the management of our repository's branches.
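As a sketch of what git-flow automates, here is the life cycle of a feature branch under this model, written as plain git commands (a minimal example in a throwaway repository; the feature name, identity and commit messages are made up):

```shell
# Create a throwaway repository with the two long-lived branches.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"   # illustrative identity for commits
git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"   # production branch
git branch develop                                # long-lived integration branch

# Roughly what "git flow feature start login-form" does:
git checkout -q -b feature/login-form develop

git commit -q --allow-empty -m "work on the login form"

# Roughly what "git flow feature finish login-form" does:
git checkout -q develop
git merge -q --no-ff -m "merge feature login-form" feature/login-form
git branch -d feature/login-form
```

The `--no-ff` merge keeps a merge commit on develop, so the feature's history stays visible as a unit even after the branch is deleted.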

Git's flexibility… and common sense

The "rules" Vincent lays out on his blog are one example of how git lets us implement a workflow for our team. These are not absolute rules; while it's true that they can work for a great many projects, that won't always be the case. For example, what happens if we have to maintain two or three different versions of the same application? Say we have to maintain versions 1.X, 2.X and 3.X. The game board is different, so we'll need to extend and adapt these rules to keep playing.

git is a tool that lets us modify these rules and, more importantly, keep changing and adapting them as the project moves forward and the team matures. Once again, a good dose of common sense will be our best ally in answering the questions that come up along the way.


AngularJS Unit Tests with Sinon.JS

[Source: http://sett.ociweb.com/sett/settNov2014.html]

AngularJS Unit Tests with Sinon.JS

by
Jason Schindler, Software Engineer
Object Computing, Inc. (OCI)

Introduction

AngularJS is an open-source framework for building single page web applications. It utilizes two-way data binding to dynamically keep information synchronized between the model and view layers of the application and directives as a way to extend HTML to allow developers to express views in a declarative fashion. It is primarily developed and maintained by Google. For the remainder of this article, a general working knowledge of AngularJS is helpful, but not required.

The AngularJS development team considers testing an extremely important part of the development process, and it shows. AngularJS applications are easy to unit-test due to built-in dependency injection and clear separation of roles between the view, controller, service, filter, and directive layers.

Most AngularJS projects use the Jasmine BDD-friendly testing framework along with the Karma test runner for unit testing and Protractor for end-to-end or acceptance level testing. Karma was initially developed by the AngularJS team and is capable of running tests in most any framework with community plugins. Because of their easy integration with AngularJS, I’ll be focusing on Jasmine and Karma for this article.

Sinon.JS is also an open-source framework. It provides spies, stubs, and mocks for use in JavaScript unit tests. It works with any unit testing framework and has no external dependencies. This article is only going to brush the surface of Sinon.JS capabilities. If you have not had an opportunity to use Sinon.JS yet, hopefully this will interest you enough to get started.

So what exactly are spies, stubs, and mocks?

When writing a unit test, you should only be concerned with the logic of the unit under test. However, most code interacts with other modules, and their implementations can sometimes get in the way of your tests. Spies, stubs, and mocks give us a way to describe how the unit under test interacts with other modules. In Sinon.JS:

  • Spies: Spies can wrap an existing function or be anonymous. A spy records every interaction with a function, including the input arguments, the function's return value, and whether the function threw an exception. Spies let the original function run; all they do is observe the calls.
  • Stubs: Like spies, stubs can wrap existing functions or be created anonymously. Also like spies, they record all of their interactions. When you use a stub, however, the original function is not called. Instead, you tell the stub what you would like it to do whenever it is invoked. This lets us effectively control the flow of execution of the unit under test by controlling the output of the functions the code interacts with.
  • Mocks: Mocks are a bit different from stubs and spies. Mocks create expectations. If you tell a stub to always return '42' when passed '12', it won't complain if it is never invoked with '12', or never invoked at all; the stub essentially just knows how to respond when it receives '12'. Expectations differ in that they throw an exception if they are not met. That is, if you have an expectation that a function will be called with the argument '12' and it never is, an exception is thrown during the verification step, causing the test to fail.

I mostly use stubs and anonymous spies for my tests. Anonymous spies are good for situations where the unit under test does not require a function to return any value, but I still need to assert that the call was made. Stubs are useful when I want to control the flow of execution in my test. In the examples below, I will use both anonymous spies and stubs to separate the unit under test from its dependencies.
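To make the distinction concrete, here is a minimal hand-rolled spy in plain JavaScript, a sketch of the concept that a Sinon.JS spy implements (the recorded fields are simplified and are not Sinon's actual API):

```javascript
// Minimal hand-rolled spy: records every call's arguments, return
// value, and any thrown exception, while letting the wrapped function run.
function makeSpy(fn) {
  const spy = function (...args) {
    const record = { args: args };
    try {
      record.returnValue = fn ? fn.apply(this, args) : undefined;
      return record.returnValue;
    } catch (err) {
      record.threw = err;
      throw err;
    } finally {
      spy.calls.push(record); // one record per invocation
    }
  };
  spy.calls = [];
  return spy;
}

const doubler = makeSpy(function (x) { return x * 2; });
doubler(21);
console.log(doubler.calls.length);          // 1
console.log(doubler.calls[0].args);         // [ 21 ]
console.log(doubler.calls[0].returnValue);  // 42
```

A stub differs only in that it would never call `fn`, returning a canned value instead; a mock would additionally fail verification if expected calls never happen.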

But… Jasmine already has spies built in!

Yes it does. I have no issue with the Jasmine spies already available. I just happen to like Sinon.JS better. 🙂

Adding Sinon.JS to your AngularJS project

Let’s start by adding Karma and Sinon.JS to your project.

Add Karma dependencies to your project.

Note: If you are already using Karma with Jasmine, you can skip this section, but please make sure that the version of your karma-jasmine package is 0.2.x. At the time of this writing, an npm install karma-jasmine command installs the 0.1.x version of karma-jasmine by default, which is not compatible with the Sinon.JS Jasmine matchers below.

To get Karma in your project, we will be adding the karma package, one or more launchers, and karma-jasmine.

First, let’s install Karma and PhantomJS globally so that we can run them from the command line. Karma is a unit test runner that integrates nicely with AngularJS, and PhantomJS is a headless browser that is used for web application testing.

npm install -g karma phantomjs

Now we can install the karma-jasmine package, and one or more launchers.

npm install --save-dev karma-phantomjs-launcher karma-jasmine@0.2.x

This only installs the PhantomJS launcher. You may wish to install additional launchers for other browsers on your machine. Other options include: karma-firefox-launcher, karma-chrome-launcher, and karma-ie-launcher. For a more complete list, visit npmjs.org. I tend to run unit tests almost exclusively in PhantomJS to take advantage of the speedy execution time. If you start encountering odd errors (For example: not being able to use Function.prototype.bind) it helps to run your tests using another browser to verify that PhantomJS is behaving correctly.

Note: The PhantomJS bug listed above can be overcome by using es5-shim.

Create a Karma configuration file

The easiest way to create a Karma configuration file is to run karma init from your project folder. This will ask a number of questions about your project and generate a configuration file based on your answers. After the command has completed, a file named karma.conf.js should be available in your project. Mine looks something like this:

module.exports = function(config){
  config.set({
    basePath: '',
    frameworks: ['jasmine'],
    files: [
      'bower_components/angular/angular.js',
      'bower_components/angular-mocks/angular-mocks.js',
      'bower_components/sinonjs/sinon.js',
      'bower_components/jasmine-sinon/lib/jasmine-sinon.js',
      'app/app.js',
      'app/**/*.js',
      'app/**/*.test.js'
    ],
    exclude: [],
    preprocessors: {},
    reporters: ['progress'],
    port: 9876,
    colors: true,
    autoWatch: false,
    //browsers: ['Firefox','PhantomJS'],
    browsers: ['PhantomJS'],
    singleRun: true
  });
};

There are a number of options available here. As long as your files array includes the appropriate source files, dependencies, and test files, and you have installed launchers for the items in your browsers array, you should be good to go.

Note: The bower_components folder in the example above is where Bower places your dependencies by default.

Getting Sinon.JS

In addition to Sinon.JS, we will also be using jasmine-sinon which adds a number of Sinon.JS matchers to Jasmine for us.

If you are using Bower, getting Sinon.JS in your project is as simple as:

bower install --save-dev sinonjs jasmine-sinon

If you are not, please visit the links above and pull down the JavaScript files needed and place them within your project.

Once Sinon.JS and jasmine-sinon are available, make sure they are loaded in the files array of your karma.conf.js. An example of this is provided above.

Incorporating into your build

Karma plugins are available for Grunt and Gulp. If you aren’t using a build system, you can run your unit tests by executing karma start in your project folder.

Useful AngularJS/Sinon.JS recipes

Great! You should now have Sinon.JS available to your AngularJS project. Let’s go through a few ways to Sinon-ify our tests.

Use anonymous spies (or stubs) instead of NOOP or short functions.

Consider the following AngularJS controller test:

describe('MainController', function(){
  var testController,
      testScope;
  beforeEach(function(){
    module('SinonExample');
    inject(function($rootScope, $controller){
      testScope = $rootScope.$new();
      testController = $controller('MainController', {
        $scope: testScope,
        SomeService: {
          refreshDefaults: function(){},
          registerItem: function(){},
          unRegisterItem: function(){}
        }
      });
    });
  });
  it('has default messages', function(){
    expect(testScope.helloMsg).toBe('World!');
    expect(testScope.errorMsg).toBe('');
  });
});

In this example, MainController receives $scope and SomeService dependencies. In order to properly isolate the operations of SomeService from the controller under test, I have assigned empty (or NOOP) functions to the properties in SomeService that the controller code is using. This is a sensible starting point, and correctly detaches any code in SomeService from my item under test.

So let’s complicate things a small bit. If I don’t call SomeService.refreshDefaults() before using the rest of the service, things may break. Also, I want to know that I have correctly registered MAIN with the service. Starting from the point above, the next logical step would look something like this:

var testController,
    testScope,
    fakeSomeService;
beforeEach(function(){
  module('SinonExample');
  fakeSomeService = {
    refreshDefaultsCalled: false,
    lastRegisteredItem: 'NONE',
    refreshDefaults: function(){
      this.refreshDefaultsCalled = true;
    },
    registerItem: function(item){
      this.lastRegisteredItem = item;
    },
    unRegisterItem: function(){}
  };
  inject(function($rootScope, $controller){
    testScope = $rootScope.$new();
    testController = $controller('MainController', {
      $scope: testScope,
      SomeService: fakeSomeService
    });
  });
});
it('refreshes defaults on load', function(){
  expect(fakeSomeService.refreshDefaultsCalled).toBe(true);
});
it('registers MAIN on load', function(){
  expect(fakeSomeService.lastRegisteredItem).toBe('MAIN');
});

While completely usable, this test is starting to smell a little funny. We have created a fake version of SomeService that tracks if refreshDefaults was called and the final argument passed to registerItem. It isn’t difficult to imagine additional scenarios that muddy the water further. For example, tracking the number of times that refreshDefaults is called or the value of the 3rd item that was registered.

This is an excellent use case for anonymous spies. Sinon.JS spies will record when they are called, as well as the inputs and outputs of each call. In our case, we are using anonymous spies so tracking outputs isn’t needed.

Here are the same tests using Sinon.JS:

var testController,
    testScope,
    fakeSomeService;
beforeEach(function(){
  module('SinonExample');
  fakeSomeService = {
    refreshDefaults: sinon.spy(),
    registerItem: sinon.spy(),
    unRegisterItem: sinon.spy()
  };
  inject(function($rootScope, $controller){
    testScope = $rootScope.$new();
    testController = $controller('MainController', {
      $scope: testScope,
      SomeService: fakeSomeService
    });
  });
});
it('refreshes defaults on load', function(){
  expect(fakeSomeService.refreshDefaults).toHaveBeenCalled();
});
it('registers MAIN on load', function(){
  expect(fakeSomeService.registerItem).toHaveBeenCalledWith('MAIN');
});

Isn’t that better? By replacing our NOOP functions with anonymous Sinon.JS spies, we have gained the ability to glance into the calls that have occurred without writing additional code just to do so. Additionally, we can now inspect specific calls or even the order of the calls if needed:

expect(fakeSomeService.registerItem).toHaveBeenCalledAfter(fakeSomeService.refreshDefaults);

If you want to switch out all current (and future) functions on an AngularJS service, there is an additional step you can take. You can use sinon.stub(serviceInstance) to replace all service functions with stubs. Because stubs do not call through to the original function, and because they are also spies, we can get the same functionality as the anonymous spies above by stubbing the entire service. For example:

var testController,
    testScope,
    stubbedSomeService;
beforeEach(function(){
  module('SinonExample');
  inject(function($rootScope, $controller, SomeService){
    testScope = $rootScope.$new();
    stubbedSomeService = sinon.stub(SomeService);
    testController = $controller('MainController', {
      $scope: testScope,
      SomeService: stubbedSomeService
    });
  });
});
it('refreshes defaults on load', function(){
  expect(stubbedSomeService.refreshDefaults).toHaveBeenCalled();
});
it('registers MAIN on load', function(){
  expect(stubbedSomeService.registerItem).toHaveBeenCalledWith('MAIN');
  expect(stubbedSomeService.registerItem).toHaveBeenCalledAfter(stubbedSomeService.refreshDefaults);
});

By injecting an instance of SomeService in our beforeEach function, we were able to stub all of the functions available to service consumers with one call. Because stubs do not call through to the original methods and are also spies, the functionality of our test doesn’t change.

Warning: Stubbing an entire service this way should only be done when you have a very good understanding of what functionality the service provides. It is usually best to do this only with services that you have written as part of your application. Also, please remember that stubs only work with functions. If you are storing strings or other non-function values as properties on your service, those will remain unchanged.

An Introduction To Unit Testing In AngularJS Applications

[Source: An Introduction To Unit Testing In AngularJS Applications]

An Introduction To Unit Testing In AngularJS Applications

AngularJS has grown to become one of the most popular single-page application frameworks. Developed by a dedicated team at Google, the outcome is substantial and widely used in both community and industry projects.

One of the reasons for AngularJS’ success is its outstanding ability to be tested. It’s strongly supported by Karma (the spectacular test runner written by Vojta Jína) and its multiple plugins. Karma, combined with its fellows Mocha, Chai and Sinon, offers a complete toolset to produce quality code that is easy to maintain, bug-free and well documented.

The main factor that made me switch from “Well, I just launch the app and see if everything works” to “I’ve got unit tests!” was that, for the first time, I could focus on what matters and on what I enjoy in programming: creating smart algorithms and nice UIs.

I remember a component that was supposed to manage the right-click menu in an application. Trust me, it was a complex component. Depending on dozens of mixed conditions, it could show or hide buttons, submenus, etc. One day, we updated the application in production. I can remember how I felt when I launched the app, opened something, right-clicked and saw no contextual menu — just an empty ugly box that was definitive proof that something had gone really wrong. After having fixed it, re-updated the application and apologized to customer service, I decided to entirely rewrite this component in test-driven development style. The test file ended up being twice as long as the component file. It has been improved a lot since, especially its poor performance, but it never failed again in production. Rock-solid code.

A Word About Unit Testing

Unit testing has become a standard in most software companies. Customer expectations have reached a new high, and no one accepts getting two free regressions for the price of one update anymore.

If you are familiar with unit testing, then you’ll already know how confident a developer feels when refactoring tested code. If you are not familiar, then imagine getting rid of deployment stress, a “code-and-pray” coding style and never-ending feature development. The best part? It’s automatic.

Unit testing improves code’s orthogonality. Fundamentally, code is called “orthogonal” when it’s easy to change. Fixing a bug or adding a feature entails nothing but changing the code’s behavior, as explained in The Pragmatic Programmer: From Journeyman to Master. Unit tests greatly improve code’s orthogonality by forcing you to write modular logic units, instead of large code chunks.

Unit testing also provides you with documentation that is always up to date and that informs you about the code’s intentions and functional behavior. Even if a method has a cryptic name — which is bad, but we won’t get into that here — you’ll instantly know what it does by reading its test.

Unit testing has another major advantage. It forces you to actually use your code and detect design flaws and bad smells. Take functions. What better way to make sure that functions are uncoupled from the rest of your code than by being able to test them without any boilerplate code?

Furthermore, unit testing opens the door to test-driven development. While it’s not this article’s topic, I can’t stress enough that test-driven development is a wonderful and productive way to write code.

WHAT AND WHAT NOT TO TEST

Tests must define the code’s API. This is the one principle that will guide us through this journey. An AngularJS application is, by definition, composed of modules. The elementary bricks are materialized by different concepts related to the granularity at which you look at them. At the application level, these bricks are AngularJS’ modules. At the module level, they are directives, controllers, services, filters and factories. Each one of them is able to communicate with another through its external interface.

Everything is bricks, regardless of the level you are at

All of these bricks share a common attribute. They behave as black boxes, which means that they have an inner behavior and an outer interface materialized by inputs and outputs. This is precisely what unit tests are for: to test bricks’ outer interfaces.

Ignoring the internals as much as possible is considered good practice. Unit testing — and testing in general — is a mix of stimuli and reactions.

Bootstrapping A Test Environment For AngularJS

To set up a decent testing environment for your AngularJS application, you will need several npm modules. Let’s take a quick glance at them.

KARMA: THE SPECTACULAR TEST RUNNER

Karma is an engine that runs tests against code. Although it has been written for AngularJS, it’s not specifically tied to it and can be used for any JavaScript application. It’s highly configurable through a JSON file and the use of various plugins.

All of the examples in this article can be found in the dedicated GitHub project, along with the following configuration file for Karma.

// Karma configuration
// Generated on Mon Jul 21 2014 11:48:34 GMT+0200 (CEST)
module.exports = function(config) {
  config.set({

    // base path used to resolve all patterns (e.g. files, exclude)
    basePath: '',

    // frameworks to use
    frameworks: ['mocha', 'sinon-chai'],

    // list of files / patterns to load in the browser
    files: [
      'bower_components/angular/angular.js',
      'bower_components/angular-mocks/angular-mocks.js',
      'src/*.js',
      'test/*.mocha.js'
    ],

    // list of files to exclude
    exclude: [],

    // preprocess matching files before serving them to the browser
    preprocessors: {
      'src/*.js': ['coverage']
    },

    coverageReporter: {
      type: 'text-summary',
      dir: 'coverage/'
    },

    // test results reporter to use
    reporters: ['progress', 'coverage'],

    // web server port
    port: 9876,

    // enable / disable colors in the output (reporters and logs)
    colors: true,

    // level of logging
    logLevel: config.LOG_INFO,

    // enable / disable watching file and executing tests on file changes
    autoWatch: true,

    // start these browsers
    browsers: ['PhantomJS'],

    // Continuous Integration mode
    // if true, Karma captures browsers, runs the tests and exits
    singleRun: false
  });
};

This file can be automagically generated by typing karma init in a terminal window. The available keys are described in Karma’s documentation.

Notice how sources and test files are declared. There is also a newcomer: ngMock (i.e. angular-mocks.js). ngMock is an AngularJS module that provides several testing utilities (more on that at the end of this article).

MOCHA

Mocha is a testing framework for JavaScript. It handles test suites and test cases, and it offers nice reporting features. It uses a declarative syntax to nest expectations into cases and suites. Let’s look at the following example (shamelessly stolen from Mocha’s home page):

describe('Array', function() {
  describe('#indexOf()', function() {
    it('should return -1 when the value is not present', function() {
      assert.equal(-1, [1,2,3].indexOf(5));
      assert.equal(-1, [1,2,3].indexOf(0));
    });
  });
});

You can see that the whole test is contained in a describe call. What is interesting about nesting function calls in this way is that the tests follow the code's structure. Here, the Array suite is composed of only one subsuite, #indexOf. Others could be added, of course. This subsuite is composed of one case, which itself contains two assertions, or expectations. Organizing test suites into a coherent whole is essential. It ensures that test errors will be reported with meaningful messages, thus easing the debugging process.

CHAI

We have seen how Mocha provides test-suite and test-case capabilities for JavaScript. Chai, for its part, offers various ways of checking things in test cases. These checks are performed through what are called “assertions” and basically mark a test case as failed or passed. Chai’s documentation has more on the different assertions styles.
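To demystify the expect style, here is a toy version of a chainable assertion. This is an illustration of the pattern only, not Chai's actual implementation:

```javascript
// Minimal expect-style assertion, in the spirit of Chai's expect(x).to.equal(y).
function expect(actual) {
  var api = {
    equal: function(expected) {
      if (actual !== expected) {
        throw new Error('expected ' + expected + ' but got ' + actual);
      }
      return api;
    }
  };
  api.to = api; // `to` is pure syntactic sugar: it chains back to the same object
  return api;
}

expect(1 + 1).to.equal(2); // passes silently; a mismatch would throw
```

A failed assertion throws, which is exactly how a test runner marks a case as failed.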

SINON

Sinon describes itself as “standalone test spies, stubs and mocks for JavaScript.” Spies, stubs and mocks all answer the same question: How do you efficiently replace one thing with another when running a test? Suppose you have a function that takes another one in a parameter and calls it. Sinon provides a smart and concise way to monitor whether the function is called and much more (with which arguments, how many times, etc.).
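Conceptually, a Sinon spy is just a wrapper function that records how it was called. This hand-rolled sketch (not Sinon's API) shows the idea behind `callCount` and `args`:

```javascript
// A bare-bones spy: records call count and arguments, like sinon.spy().
function makeSpy() {
  function spy() {
    spy.callCount += 1;
    spy.args.push(Array.prototype.slice.call(arguments));
  }
  spy.callCount = 0;
  spy.args = [];
  return spy;
}

// A function under test that takes a callback and calls it.
function greet(name, callback) {
  callback('Hello, ' + name + '!');
}

var spy = makeSpy();
greet('chimp', spy);

console.log(spy.callCount);  // 1
console.log(spy.args[0][0]); // Hello, chimp!
```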

Unit Testing At The Application Level

The external interface of a module in an AngularJS application is its ability to be injected into another module. So the test, at this level, is simply that the module exists and has a valid definition.

beforeEach(module('myAwesomeModule'));

This is enough and will throw an error if myAwesomeModule is nowhere to be found.

Unit Testing At The Module Level

An AngularJS module can declare several types of objects. Some are services, while others are more specialized. We will go over each of them to see how they can be bootstrapped in a controlled environment and then tested.

FILTERS, SERVICES AND FACTORIES: A STORY OF DEPENDENCY INJECTION

Filters, services and factories (we will refer to these as services in general) can be compared to static objects or singletons in a traditional object-oriented framework. They are easy to test because they need very few things to be ready, and these things are usually other services.

AngularJS links services to other services or objects using a very expressive dependency-injection model, which basically means asking for something in a method’s arguments.
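The "asking for something in a method's arguments" trick can be pictured with a toy injector that reads a function's parameter names and resolves them from a registry. This is a simplified sketch of what AngularJS does with unannotated functions (real code should use the array annotation, as in the examples below, to survive minification); the registry contents are made up for illustration:

```javascript
var registry = {
  $log: { warn: function(msg) { console.log('WARN: ' + msg); } },
  config: { lang: 'fr' }
};

// Parse parameter names out of fn.toString(), then look each one up.
function inject(fn) {
  var params = fn.toString()
    .match(/\(([^)]*)\)/)[1]  // text between the parentheses
    .split(',')
    .map(function(s) { return s.trim(); })
    .filter(Boolean);
  return fn.apply(null, params.map(function(name) { return registry[name]; }));
}

inject(function($log, config) {
  $log.warn('lang is ' + config.lang); // WARN: lang is fr
});
```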

What is great about AngularJS’ way of injecting dependencies is that mocking a piece of code’s dependencies and injecting things into test cases are super-easy. In fact, I am not even sure it could be any simpler. Let’s consider this quite useful factory:

angular.module('factories', [])
.factory('chimp', ['$log', function($log) {
  return {
    ook: function() {
      $log.warn('Ook.');
    }
  };
}]);

See how $log is injected, instead of the standard console.warn being called? While AngularJS will not print $log statements in Karma's console, it is good practice to avoid side effects in unit tests as much as possible. I once halved the duration of an application's unit tests by mocking the tracking HTTP requests — which were all silently failing in the local environment, obviously.

describe('factories', function() {

  beforeEach(module('factories'));

  var chimp;
  var $log;

  beforeEach(inject(function(_chimp_, _$log_) {
    chimp = _chimp_;
    $log = _$log_;
    sinon.stub($log, 'warn', function() {});
  }));

  describe('when invoked', function() {

    beforeEach(function() {
      chimp.ook();
    });

    it('should say Ook', function() {
      expect($log.warn.callCount).to.equal(1);
      expect($log.warn.args[0][0]).to.equal('Ook.');
    });
  });
});

The pattern for testing filters, services or other injectables is the same. Controllers can be a bit trickier to test, though, as we will see now.

CONTROLLERS

Testing a controller could lead to some confusion. What do we test? Let’s focus on what a controller is supposed to do. You should be used to considering any tested element as a black box by now. Remember that AngularJS is a model-view-whatever (MVW) framework, which is kind of ironic because one of the few ways to define something in an AngularJS application is to use the keyword controller. Still, any kind of decent controller usually acts as a proxy between the model and the view, through objects in one way and callbacks in the other.

The controller usually configures the view using some state objects, such as the following (for a hypothetical text-editing application):

angular.module('textEditor', [])

.controller('EditionCtrl', ['$scope', function($scope) {
  $scope.state = {toolbarVisible: true, documentSaved: true};
  $scope.document = {text: 'Some text'};

  $scope.$watch('document.text', function(value) {
    $scope.state.documentSaved = false;
  }, true);

  $scope.saveDocument = function() {
    $scope.sendHTTP($scope.document.text);
    $scope.state.documentSaved = true;
  };

  $scope.sendHTTP = function(content) {
    // payload creation, HTTP request, etc.
  };
}]);

Chances are that the state will be modified by both the view and the controller. The toolbarVisible attribute will be toggled by, say, a button and a keyboard shortcut. Unit tests are not supposed to test interactions between the view and the rest of the universe; that is what end-to-end tests are for.

The documentSaved value will be mostly handled by the controller, though. Let’s test it.

describe('saving a document', function() {

  var scope;
  var ctrl;

  beforeEach(module('textEditor'));

  beforeEach(inject(function($rootScope, $controller) {
    scope = $rootScope.$new();
    ctrl = $controller('EditionCtrl', {$scope: scope});
  }));

  it('should have an initial documentSaved state', function(){
    expect(scope.state.documentSaved).to.equal(true);
  });

  describe('documentSaved property', function() {
    beforeEach(function() {
      // We don't want extra HTTP requests to be sent
      // and that's not what we're testing here.
      sinon.stub(scope, 'sendHTTP', function() {});

      // A call to $apply() must be performed, otherwise the
      // scope's watchers won't be run through.
      scope.$apply(function () {
        scope.document.text += ' And some more text';
      });
    });

    it('should watch for document.text changes', function() {
      expect(scope.state.documentSaved).to.equal(false);
    });

    describe('when calling the saveDocument function', function() {
      beforeEach(function() {
        scope.saveDocument();
      });

      it('should be set to true again', function() {
        expect(scope.state.documentSaved).to.equal(true);
      });

      afterEach(function() {
        expect(scope.sendHTTP.callCount).to.equal(1);
        expect(scope.sendHTTP.args[0][0]).to.equal(scope.document.text);
      });
    });
  });
});

An interesting side effect of this code chunk is that it not only tests changes to the documentSaved property, but also checks that the sendHTTP method actually gets called, and with the proper arguments (we will see later how to test HTTP requests). This is why it's a separate method published on the controller's scope. Decoupling and avoiding pseudo-global states (i.e. passing the text to the method, instead of letting it read the text from the scope) always eases the process of writing tests.

DIRECTIVES

A directive is AngularJS’ way of teaching HTML new tricks and of encapsulating the logic behind those tricks. This encapsulation has several contact points with the outside that are defined in the returned object’s scope attribute. The main difference with unit testing a controller is that directives usually have an isolated scope, but they both act as a black box and, therefore, will be tested in roughly the same manner. The test’s configuration is a bit different, though.

Let’s imagine a directive that displays a div with some string inside of it and a button next to it. It could be implemented as follows:

angular.module('myDirectives', [])
.directive('superButton', function() {
  return {
    scope: {label: '=', callback: '&onClick'},
    replace: true,
    restrict: 'E',
    link: function(scope, element, attrs) {

    },
    template: '<div>' +
      '<div>{{label}}</div>' +
      '<button ng-click="callback()">Click me!</button>' +
      '</div>'
  };
});

We want to test two things here. The first thing to test is that the label gets properly passed to the first div’s content, and the second is that something happens when the button gets clicked. It’s worth saying that the actual rendering of the directive belongs slightly more to end-to-end and functional testing, but we want to include it as much as possible in our unit tests simply for the sake of failing fast. Besides, working with test-driven development is easier with unit tests than with higher-level tests, such as functional, integration and end-to-end tests.

describe('directives', function() {

  beforeEach(module('myDirectives'));

  var element;
  var outerScope;
  var innerScope;

  beforeEach(inject(function($rootScope, $compile) {
    element = angular.element('<super-button label="myLabel" on-click="myCallback()"></super-button>');

    outerScope = $rootScope;
    $compile(element)(outerScope);

    innerScope = element.isolateScope();

    outerScope.$digest();
  }));

  describe('label', function() {
    beforeEach(function() {
      outerScope.$apply(function() {
        outerScope.myLabel = "Hello world.";
      });
    });

    it('should be rendered', function() {
      expect(element[0].children[0].innerHTML).to.equal('Hello world.');
    });
  });

  describe('click callback', function() {
    var mySpy;

    beforeEach(function() {
      mySpy = sinon.spy();
      outerScope.$apply(function() {
        outerScope.myCallback = mySpy;
      });
    });

    describe('when the directive is clicked', function() {
      beforeEach(function() {
        var event = document.createEvent("MouseEvent");
        event.initMouseEvent("click", true, true);
        element[0].children[1].dispatchEvent(event);
      });

      it('should be called', function() {
        expect(mySpy.callCount).to.equal(1);
      });
    });
  });
});

There is something important in this example. We saw that unit tests make refactoring easy as pie, but we didn't see exactly how. Here, we are testing that, when a click happens on the button, the function passed as the on-click attribute is called. If we take a closer look at the directive's code, we will see that this function gets locally renamed to callback: it's published under this name on the directive's isolated scope. We could, then, write the following test:

describe('click callback', function() {
  var mySpy;

  beforeEach(function() {
    mySpy = sinon.spy();
    innerScope.callback = mySpy;
  });

  describe('when the directive is clicked', function() {
    beforeEach(function() {
      var event = document.createEvent("MouseEvent");
      event.initMouseEvent("click", true, true);
      element[0].children[1].dispatchEvent(event);
    });

    it('should be called', function() {
      expect(mySpy.callCount).to.equal(1);
    });
  });
});

And it would work, too. But then we wouldn't be testing the external aspect of our directive. If we were to forget to add the proper key to the directive's scope definition, then no test would stop us. Besides, we actually don't care whether the directive renames the callback or calls it through another method (and if we did, it would have to be tested elsewhere anyway).

PROVIDERS

This is the toughest of our little series. What is a provider exactly? It’s AngularJS’ own way of wiring things together before the application starts. A provider also has a factory facet — in fact, you probably know the $routeProvider and its little brother, the $route factory. Let’s write our own provider and its factory and then test them!

angular.module('myProviders', [])

.provider('coffeeMaker', function() {
  var useFrenchPress = false;
  this.useFrenchPress = function(value) {
    if (value !== undefined) {
      useFrenchPress = !!value;
    }

    return useFrenchPress;
  };

  this.$get = function () {
    return {
      brew: function() {
        return useFrenchPress ? 'Le café.' : 'A coffee.';
      }
    };
  };
});

There’s nothing fancy in this super-useful provider, which defines a flag and its accessor method. We can see the config part and the factory part (which is returned by the $get method). I won’t go over the provider’s whole implementation and use cases, but I encourage you to look at AngularJS’ official documentation about providers.

To test this provider, we could test the config part on the one hand and the factory part on the other. This wouldn’t be representative of the way a provider is generally used, though. Let’s think about the way that we use providers. First, we do some configuration; then, we use the provider’s factory in some other objects or services. We can see in our coffeeMaker that its behavior depends on the useFrenchPress flag. This is how we will proceed. First, we will set this flag, and then we’ll play with the factory to see whether it behaves accordingly.

describe('coffee maker provider', function() {
  var coffeeProvider = undefined;

  beforeEach(function() {
    // Here we create a fake module just to intercept and store the provider
    // when it's injected, i.e. during the config phase.
    angular.module('dummyModule', [])
      .config(['coffeeMakerProvider', function(coffeeMakerProvider) {
        coffeeProvider = coffeeMakerProvider;
      }]);

    module('myProviders', 'dummyModule');

    // This actually triggers the injection into dummyModule
    inject(function(){});
  });

  describe('with french press', function() {
    beforeEach(function() {
      coffeeProvider.useFrenchPress(true);
    });

    it('should remember the value', function() {
      expect(coffeeProvider.useFrenchPress()).to.equal(true);
    });

    it('should make some coffee', inject(function(coffeeMaker) {
      expect(coffeeMaker.brew()).to.equal('Le café.');
    }));
  });

  describe('without french press', function() {
    beforeEach(function() {
      coffeeProvider.useFrenchPress(false);
    });

    it('should remember the value', function() {
      expect(coffeeProvider.useFrenchPress()).to.equal(false);
    });

    it('should make some coffee', inject(function(coffeeMaker) {
      expect(coffeeMaker.brew()).to.equal('A coffee.');
    }));
  });
});

HTTP REQUESTS

HTTP requests are not exactly on the same level as providers or controllers. They are still an essential part of unit testing, though. If you do not have a single HTTP request in your entire app, then you can skip this section, you lucky fellow.

Roughly, HTTP requests act like inputs and outputs at any of your application’s level. In a RESTfully designed system, GET requests give data to the app, and PUT, POST and DELETE methods take some. That is what we want to test, and luckily AngularJS makes that easy.

Let’s take our factory example and add a POST request to it:

angular.module('factories_2', [])
.factory('chimp', ['$http', function($http) {
  return {
    sendMessage: function() {
      $http.post('http://chimps.org/messages', {message: 'Ook.'});
    }
  };
}]);

We obviously do not want to test this on the actual server, nor do we want to monkey-patch the XMLHttpRequest constructor. That is where $httpBackend enters the game.

describe('http', function() {

  beforeEach(module('factories_2'));

  var chimp;
  var $httpBackend;

  beforeEach(inject(function(_chimp_, _$httpBackend_) {
    chimp = _chimp_;
    $httpBackend = _$httpBackend_;
  }));

  describe('when sending a message', function() {
    beforeEach(function() {
      $httpBackend.expectPOST('http://chimps.org/messages', {message: 'Ook.'})
      .respond(200, {message: 'Ook.', id: 0});

      chimp.sendMessage();
      $httpBackend.flush();
    });

    it('should send an HTTP POST request', function() {
      $httpBackend.verifyNoOutstandingExpectation();
      $httpBackend.verifyNoOutstandingRequest();
    });
  });
});

You can see that we’ve defined which calls should be issued to the fake server and how to respond to them before doing anything else. This is useful and enables us to test our app’s response to different requests’ responses (for example, how does the application behave when the login request returns a 404?). This particular example simulates a standard POST response.

The two other lines of the beforeEach block are the function call and a newcomer, $httpBackend.flush(). The fake server does not immediately answer each request; instead, it lets you check any intermediary state that you may have configured. It waits for you to explicitly tell it to respond to any pending request it might have received.

The test itself makes two method calls on the fake server (verifyNoOutstandingExpectation and verifyNoOutstandingRequest). AngularJS' $httpBackend does not enforce strict equality between what it expects and what it actually receives unless you tell it to. You can regard these lines as two expectations: one on the number of pending requests and the other on the number of pending expectations.

ngMock Module

The ngMock module contains various utilities to help you smooth over JavaScript and AngularJS’ specifics.

$TIMEOUT, $LOG AND THE OTHERS

Using AngularJS’ injectable dependencies is better than accessing global objects such as console or window. Let’s consider console calls. They are outputs just like HTTP requests and might actually matter if you are implementing an API for which some errors must be logged. To test them, you can either monkey-patch a global object — yikes! — or use AngularJS’ nice injectable.

The $timeout dependency also provides a very convenient flush() method, just like $httpBackend's. If we create a factory that provides a way to briefly set a flag to true and then restore it to its original value, the proper way to test it is to use $timeout.

angular.module('timeouts', [])

.factory('waiter', ['$timeout', function($timeout) {
  return {
    brieflySetSomethingToTrue: function(target, property) {
      var oldValue = target[property];

      target[property] = true;

      $timeout(function() {
        target[property] = oldValue;
      }, 100);
    }
  };
}]);

And the test will look like this:

describe('timeouts', function() {

  beforeEach(module('timeouts'));

  var waiter;
  var $timeout;

  beforeEach(inject(function(_waiter_, _$timeout_) {
    waiter = _waiter_;
    $timeout = _$timeout_;
  }));

  describe('brieflySetSomethingToTrue method', function() {
    var anyObject;

    beforeEach(function() {
      anyObject = {foo: 42};
      waiter.brieflySetSomethingToTrue(anyObject, 'foo');
    });

    it('should briefly set something to true', function() {
      expect(anyObject.foo).to.equal(true);
      $timeout.flush();
      expect(anyObject.foo).to.equal(42);
    });
  });
});

Notice how we’re checking the intermediary state and then flush()’ing the timeout.

MODULE() AND INJECT()

The module() and inject() functions help to retrieve modules and dependencies during tests. The former enables you to retrieve a module, while the latter creates an instance of $injector, which will resolve references.

it('should say Ook.', inject(function($log) {
  sinon.stub($log, 'warn', function() {});

  chimp.ook();

  expect($log.warn.callCount).to.equal(1);
  expect($log.warn.args[0][0]).to.equal('Ook.');
}));

Here, we wrap the test function in an inject call. This call creates an $injector instance and resolves any dependencies declared in the test function's arguments.

DEPENDENCY INJECTION MADE EASY

One last trick is to ask for dependencies with underscores wrapped around their names. The point of this is to be able to assign a local variable with the same name as the dependency: the $injector used in our tests strips surrounding underscores if any are found. StackOverflow has a comment on this.
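The underscore-stripping can be pictured as a one-line normalization. This is an illustration of the idea, not AngularJS' actual code, although the injector uses a similar pattern internally:

```javascript
// Strip one leading and one trailing underscore, if present:
// '_chimp_' becomes 'chimp', so the local variable can keep the bare name.
function stripUnderscores(name) {
  return name.replace(/^_?(.*?)_?$/, '$1');
}

console.log(stripUnderscores('_chimp_'));    // chimp
console.log(stripUnderscores('$log'));       // $log
console.log(stripUnderscores('_$timeout_')); // $timeout
```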

Conclusion

Unit testing in AngularJS applications follows a fractal design. It tests units of code. It freezes a unit’s behavior by providing a way to automatically check its response to a given input. Note that unit tests do not replace good coding. AngularJS’ documentation is pretty clear on this point: “Angular is written with testability in mind, but it still requires that you do the right thing.”

Getting started with writing unit tests — and coding in test-driven development — is hard. However, the benefits will soon show up if you’re willing to fully test your application, especially during refactoring operations.

Tests also work well with agile methods. User stories are almost tests; they’re just not actual code (although some approaches, such as “design by contract,” minimize this difference).
