
Android: Implementing the Dashboard design pattern

[Source: http://www.javacodegeeks.com/2012/06/android-dashboard-design-pattern.html]


In short, a Dashboard is a page containing large, clear symbols that give access to the application's main functionality, plus an optional area for relevant news or status information.

The main goal of this article is to implement a Dashboard design pattern like the one described below.

Step 1: Create the title bar layout

We will define the title bar (header) layout only once, even though it is required on several screens, and show or hide the Home button and other buttons whenever they are needed. Once the title bar layout is done, we can reuse it in other layouts through a ViewStub.

Here is an example of the header XML layout:

header.xml

<?xml version="1.0" encoding="utf-8"?>
	<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
	    android:layout_width="fill_parent"
	    android:layout_height="wrap_content"
	    android:background="@color/title_background" >

	    <LinearLayout
	        android:id="@+id/panelIconLeft"
	        android:layout_width="wrap_content"
	        android:layout_height="wrap_content"
	        android:layout_alignParentLeft="true"
	        android:layout_centerVertical="true"
	        android:layout_margin="5dp" >

	        <Button
	            android:id="@+id/btnHome"
	            android:layout_width="wrap_content"
	            android:layout_height="wrap_content"
	            android:background="@drawable/ic_home"
	            android:onClick="btnHomeClick" />
	    </LinearLayout>

	    <TextView
	        android:id="@+id/txtHeading"
	        style="@style/heading_text"
	        android:layout_width="fill_parent"
	        android:layout_height="wrap_content"
	        android:layout_centerInParent="true"
	        android:layout_marginLeft="5dp"
	        android:layout_marginRight="5dp"
	        android:layout_toLeftOf="@+id/panelIconRight"
	        android:layout_toRightOf="@id/panelIconLeft"
	        android:ellipsize="marquee"
	        android:focusable="true"
	        android:focusableInTouchMode="true"
	        android:gravity="center"
	        android:marqueeRepeatLimit="marquee_forever"
	        android:singleLine="true"
	        android:text=""
	        android:textColor="@android:color/white" />

	    <LinearLayout
	        android:id="@+id/panelIconRight"
	        android:layout_width="wrap_content"
	        android:layout_height="wrap_content"
	        android:layout_alignParentRight="true"
	        android:layout_centerVertical="true"
	        android:layout_margin="5dp" >

	        <Button
	            android:id="@+id/btnFeedback"
	            android:layout_width="wrap_content"
	            android:layout_height="wrap_content"
	            android:background="@drawable/ic_feedback"
	            android:onClick="btnFeedbackClick" />
	    </LinearLayout>

	</RelativeLayout>

The code above references styles from styles.xml and dimensions from dimen.xml:

styles.xml

<?xml version="1.0" encoding="utf-8"?>
	<resources>
	<style name="heading_text">
	        <item name="android:textColor">#ff000000</item>
	        <item name="android:textStyle">bold</item>
	        <item name="android:textSize">16sp</item>
	        <item name="android:padding">5dp</item>
	    </style>
	<style name="HomeButton">
	        <item name="android:layout_gravity">center_vertical</item>
	        <item name="android:layout_width">fill_parent</item>
	        <item name="android:layout_height">wrap_content</item>
	        <item name="android:layout_weight">1</item>
	        <item name="android:gravity">center_horizontal</item>
	        <item name="android:textSize">@dimen/text_size_medium</item>
	        <item name="android:textStyle">normal</item>
	        <item name="android:textColor">@color/foreground1</item>
	        <item name="android:background">@null</item>
	    </style>

	</resources>

dimen.xml

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <dimen name="title_height">45dip</dimen>
    <dimen name="text_size_small">14sp</dimen>
    <dimen name="text_size_medium">18sp</dimen>
    <dimen name="text_size_large">22sp</dimen>
</resources>

Step 2: Create an abstract super class

In this abstract super class we will define:

1) Event handlers for the two buttons: Home and Feedback

2) Other shared methods

The Home and Feedback buttons will be visible in almost every activity and require the same actions to be performed (for example, taking the user to the Home activity). So instead of writing the same code in every activity, we write the event handler only once, in an abstract class that will be the superclass of all the activities.

You may have noticed where the Home and Feedback buttons are wired up in the header.xml file:

android:onClick="btnHomeClick" (Home button)

android:onClick="btnFeedbackClick" (Feedback button)

so we define these methods once in the abstract super class.

Please look up a ViewStub example if you have never used one; the dashboard layout in Step 3 shows how the header is included through a <ViewStub> element.

Now, here is the code of the abstract class; we will call it DashBoardActivity.java

package com.technotalkative.viewstubdemo;

	import android.app.Activity;
	import android.content.Intent;
	import android.os.Bundle;
	import android.view.View;
	import android.view.ViewStub;
	import android.widget.Button;
	import android.widget.TextView;

	public abstract class DashBoardActivity extends Activity {
	    /** Called when the activity is first created. */
	    @Override
	    public void onCreate(Bundle savedInstanceState) {
	        super.onCreate(savedInstanceState);
	    }

	    public void setHeader(String title, boolean btnHomeVisible, boolean btnFeedbackVisible)
	    {
	      ViewStub stub = (ViewStub) findViewById(R.id.vsHeader);
	      View inflated = stub.inflate();

	      TextView txtTitle = (TextView) inflated.findViewById(R.id.txtHeading);
	      txtTitle.setText(title);

	      Button btnHome = (Button) inflated.findViewById(R.id.btnHome);
	      if(!btnHomeVisible)
	       btnHome.setVisibility(View.INVISIBLE);

	      Button btnFeedback = (Button) inflated.findViewById(R.id.btnFeedback);
	      if(!btnFeedbackVisible)
	       btnFeedback.setVisibility(View.INVISIBLE);

	    }

	    /**
	     * Home button click handler
	     * @param v
	     */
	    public void btnHomeClick(View v)
	    {
	     Intent intent = new Intent(getApplicationContext(), HomeActivity.class);
	     intent.setFlags (Intent.FLAG_ACTIVITY_CLEAR_TOP);
	     startActivity(intent);

	    }

	    /**
	     * Feedback button click handler
	     * @param v
	     */
	    public void btnFeedbackClick(View v)
	    {
	     Intent intent = new Intent(getApplicationContext(), FeedbackActivity.class);
	     startActivity(intent);
	    }
	}

Step 3: Define Dashboard layout

main.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical" >

    <ViewStub
        android:id="@+id/vsHeader"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:inflatedId="@+id/header"
        android:layout="@layout/header" />

    <LinearLayout
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:layout_weight="1"
        android:orientation="vertical"
        android:padding="6dip" >

        <LinearLayout
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:orientation="horizontal" >

            <Button
                android:id="@+id/main_btn_eclair"
                style="@style/HomeButton"
                android:drawableTop="@drawable/android_eclair_logo"
                android:onClick="onButtonClicker"
                android:text="@string/EclairActivityTitle" />

            <Button
                android:id="@+id/main_btn_froyo"
                style="@style/HomeButton"
                android:drawableTop="@drawable/android__logo_froyo"
                android:onClick="onButtonClicker"
                android:text="@string/FroyoActivityTitle" />
        </LinearLayout>

        <LinearLayout
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:orientation="horizontal" >

            <Button
                android:id="@+id/main_btn_gingerbread"
                style="@style/HomeButton"
                android:drawableTop="@drawable/android_gingerbread_logo"
                android:onClick="onButtonClicker"
                android:text="@string/GingerbreadActivityTitle" />

            <Button
                android:id="@+id/main_btn_honeycomb"
                style="@style/HomeButton"
                android:drawableTop="@drawable/android_honeycomb_logo"
                android:onClick="onButtonClicker"
                android:text="@string/HoneycombActivityTitle" />
        </LinearLayout>

        <LinearLayout
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:orientation="horizontal" >

            <Button
                android:id="@+id/main_btn_ics"
                style="@style/HomeButton"
                android:drawableTop="@drawable/android_ics_logo"
                android:onClick="onButtonClicker"
                android:text="@string/ICSActivityTitle" />

            <Button
                android:id="@+id/main_btn_jellybean"
                style="@style/HomeButton"
                android:drawableTop="@drawable/android_jellybean_logo"
                android:onClick="onButtonClicker"
                android:text="@string/JellyBeanActivityTitle" />
        </LinearLayout>
    </LinearLayout>
</LinearLayout>

Step 4: Define the activity that handles the dashboard layout's button click events

In this activity you will find the setHeader() call that sets the header for the home activity. Here "false" is passed for the Home button because this is already the home activity, while "true" is passed for the Feedback button because feedback needs to stay visible. The rest is the usual work of defining button click handlers.

package com.technotalkative.viewstubdemo;

import android.content.Intent;
import android.os.Bundle;
import android.view.View;

public class HomeActivity extends DashBoardActivity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        setHeader(getString(R.string.HomeActivityTitle), false, true);
    }

    /**
     * Button click handler on Main activity
     * @param v
     */
    public void onButtonClicker(View v)
    {
        Intent intent;

        switch (v.getId()) {
        case R.id.main_btn_eclair:
            intent = new Intent(this, Activity_Eclair.class);
            startActivity(intent);
            break;

        case R.id.main_btn_froyo:
            intent = new Intent(this, Activity_Froyo.class);
            startActivity(intent);
            break;

        case R.id.main_btn_gingerbread:
            intent = new Intent(this, Activity_Gingerbread.class);
            startActivity(intent);
            break;

        case R.id.main_btn_honeycomb:
            intent = new Intent(this, Activity_Honeycomb.class);
            startActivity(intent);
            break;

        case R.id.main_btn_ics:
            intent = new Intent(this, Activity_ICS.class);
            startActivity(intent);
            break;

        case R.id.main_btn_jellybean:
            intent = new Intent(this, Activity_JellyBean.class);
            startActivity(intent);
            break;

        default:
            break;
        }
    }
}

Step 5: Define other activities and their UI layouts

Now it's time to define the activities we want to display for each particular button click on the dashboard. So define every activity and its layout, and don't forget to call the setHeader() method wherever necessary.

Here is one example of such an activity – Activity_Eclair.java

package com.technotalkative.viewstubdemo;

import android.os.Bundle;

public class Activity_Eclair extends DashBoardActivity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_eclair);
        setHeader(getString(R.string.EclairActivityTitle), true, true);
    }
}

activity_eclair.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical" >

    <ViewStub
        android:id="@+id/vsHeader"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:inflatedId="@+id/header"
        android:layout="@layout/header" />

    <TextView
        android:id="@+id/textView1"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:gravity="center"
        android:text="@string/EclairActivityTitle" />

</LinearLayout>

Step 6: Declare the activities inside the AndroidManifest.xml file
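
The exact entries depend on your project, but as a hedged sketch the manifest for this demo might declare the activities like this (the launcher icon and label resources are assumptions; the package name and activity classes come from the code above):

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.technotalkative.viewstubdemo"
    android:versionCode="1"
    android:versionName="1.0" >

    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name" >

        <!-- HomeActivity is the dashboard / launcher activity -->
        <activity
            android:name=".HomeActivity"
            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <!-- Every screen reachable from the dashboard must also be declared -->
        <activity android:name=".Activity_Eclair" />
        <activity android:name=".Activity_Froyo" />
        <activity android:name=".Activity_Gingerbread" />
        <activity android:name=".Activity_Honeycomb" />
        <activity android:name=".Activity_ICS" />
        <activity android:name=".Activity_JellyBean" />
        <activity android:name=".FeedbackActivity" />
    </application>
</manifest>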

Now you are DONE :)

Output:

(Screenshots in the original post: the home screen in landscape and two inner screens.)

You can download source code from here: Android – Dashboard pattern implementation

Feedback/review are always welcome :)

Reference: Android – Dashboard design pattern implementation from our JCG partner Paresh N. Mayani at the TechnoTalkative blog.

PHP: Using Google Calendar

[Source: http://framework.zend.com/manual/1.12/en/zend.gdata.calendar.html]

You can use the Zend_Gdata_Calendar class to view, create, update, and delete events in the online Google Calendar service.

See » http://code.google.com/apis/calendar/overview.html for more information about the Google Calendar API.

Connecting To The Calendar Service

The Google Calendar API, like all GData APIs, is based on the Atom Publishing Protocol (APP), an XML-based format for managing web resources. Traffic between a client and the Google Calendar servers takes place over HTTP and allows both authenticated and unauthenticated connections.

Before any transactions can occur, this connection needs to be established. Creating a connection to the calendar servers involves two steps:

  • Creating an HTTP client
  • Binding a Zend_Gdata_Calendar service instance to that client

Authentication

The Google Calendar API allows access to both public and private calendar feeds:

  • Public feeds do not require authentication, but they are read-only and offer reduced functionality.
  • Private feeds offer more complete functionality but require an authenticated connection to the calendar servers. There are three authentication schemes that are supported by Google Calendar:
    • ClientAuth provides direct username/password authentication to the calendar servers. Since this scheme requires that users provide your application with their password, this authentication scheme is only recommended when other schemes are insufficient.
    • AuthSub allows authentication to the calendar servers via a Google proxy server. This provides the same level of convenience as ClientAuth but without the security risk, making this an ideal choice for web-based applications.
    • MagicCookie allows authentication based on a semi-random URL available from within the Google Calendar interface. This is the simplest authentication scheme to implement, but requires that users manually retrieve their secure URL before they can authenticate, doesn’t provide access to calendar lists, and is limited to read-only access.

The Zend_Gdata library provides support for all three authentication schemes. The rest of this chapter will assume that you are familiar with the authentication schemes available and how to create an appropriate authenticated connection. For more information, please see the Authentication section of this manual or the » Authentication Overview in the Google Data API Developer’s Guide.

Creating A Service Instance

In order to interact with Google Calendar, this library provides the Zend_Gdata_Calendar service class. This class provides a common interface to the Google Data and Atom Publishing Protocol models and assists in marshaling requests to and from the calendar servers.

Once you have decided on an authentication scheme, the next step is to create an instance of Zend_Gdata_Calendar. The class constructor takes an instance of Zend_Http_Client as a single argument. This provides an interface for AuthSub and ClientAuth authentication, as both of these require creation of a special authenticated HTTP client. If no arguments are provided, an unauthenticated instance of Zend_Http_Client will be automatically created.

The example below shows how to create a Calendar service class using ClientAuth authentication:

// Parameters for ClientAuth authentication
$service = Zend_Gdata_Calendar::AUTH_SERVICE_NAME;
$user = 'sample.user@gmail.com';
$pass = 'pa$$w0rd';

// Create an authenticated HTTP client
$client = Zend_Gdata_ClientLogin::getHttpClient($user, $pass, $service);

// Create an instance of the Calendar service
$service = new Zend_Gdata_Calendar($client);

A Calendar service using AuthSub can be created in a similar, though slightly more lengthy fashion:

/*
 * Retrieve the current URL so that the AuthSub server knows where to
 * redirect the user after authentication is complete.
 */
function getCurrentUrl()
{
    global $_SERVER;

    // Filter php_self to avoid a security vulnerability.
    $php_request_uri =
        htmlentities(substr($_SERVER['REQUEST_URI'],
                            0,
                            strcspn($_SERVER['REQUEST_URI'], "\n\r")),
                            ENT_QUOTES);

    if (isset($_SERVER['HTTPS']) &&
        strtolower($_SERVER['HTTPS']) == 'on') {
        $protocol = 'https://';
    } else {
        $protocol = 'http://';
    }
    $host = $_SERVER['HTTP_HOST'];
    if ($_SERVER['HTTP_PORT'] != '' &&
        (($protocol == 'http://' && $_SERVER['HTTP_PORT'] != '80') ||
        ($protocol == 'https://' && $_SERVER['HTTP_PORT'] != '443'))) {
        $port = ':' . $_SERVER['HTTP_PORT'];
    } else {
        $port = '';
    }

    return $protocol . $host . $port . $php_request_uri;
}

/**
 * Obtain an AuthSub authenticated HTTP client, redirecting the user
 * to the AuthSub server to login if necessary.
 */
function getAuthSubHttpClient()
{
    global $_SESSION, $_GET;

    // If there is no AuthSub session or one-time token waiting for us,
    // redirect the user to the AuthSub server to get one.
    if (!isset($_SESSION['sessionToken']) && !isset($_GET['token'])) {
        // Parameters to give to AuthSub server
        $next = getCurrentUrl();
        $scope = "http://www.google.com/calendar/feeds/";
        $secure = false;
        $session = true;

        // Redirect the user to the AuthSub server to sign in
        $authSubUrl = Zend_Gdata_AuthSub::getAuthSubTokenUri($next,
                                                             $scope,
                                                             $secure,
                                                             $session);
        header("HTTP/1.0 307 Temporary redirect");
        header("Location: " . $authSubUrl);
        exit();
    }

    // Convert an AuthSub one-time token into a session token if needed
    if (!isset($_SESSION['sessionToken']) && isset($_GET['token'])) {
        $_SESSION['sessionToken'] =
            Zend_Gdata_AuthSub::getAuthSubSessionToken($_GET['token']);
    }

    // At this point we are authenticated via AuthSub and can obtain an
    // authenticated HTTP client instance.
    $client = Zend_Gdata_AuthSub::getHttpClient($_SESSION['sessionToken']);
    return $client;
}

// -> Script execution begins here <-

// Make sure that the user has a valid session, so we can record the
// AuthSub session token once it is available.
session_start();

// Create an instance of the Calendar service, redirecting the user
// to the AuthSub server if necessary.
$service = new Zend_Gdata_Calendar(getAuthSubHttpClient());

Finally, an unauthenticated server can be created for use with either public feeds or MagicCookie authentication:

// Create an instance of the Calendar service using an unauthenticated
// HTTP client
$service = new Zend_Gdata_Calendar();

Note that MagicCookie authentication is not supplied with the HTTP connection, but is instead specified along with the desired visibility when submitting queries. See the section on retrieving events below for an example.

Retrieving A Calendar List

The calendar service supports retrieving a list of calendars for the authenticated user. This is the same list of calendars which are displayed in the Google Calendar UI, except those marked as “hidden” are also available.

The calendar list is always private and must be accessed over an authenticated connection. It is not possible to retrieve another user’s calendar list and it cannot be accessed using MagicCookie authentication. Attempting to access a calendar list without holding appropriate credentials will fail and result in a 401 (Authentication Required) status code.

$service = Zend_Gdata_Calendar::AUTH_SERVICE_NAME;
$client = Zend_Gdata_ClientLogin::getHttpClient($user, $pass, $service);
$service = new Zend_Gdata_Calendar($client);
try {
    $listFeed = $service->getCalendarListFeed();
} catch (Zend_Gdata_App_Exception $e) {
    echo "Error: " . $e->getMessage();
}

Calling getCalendarListFeed() creates a new instance of Zend_Gdata_Calendar_ListFeed containing each available calendar as an instance of Zend_Gdata_Calendar_ListEntry. After retrieving the feed, you can use the iterator and accessors contained within the feed to inspect the enclosed calendars.

echo "<h1>Calendar List Feed</h1>";
echo "<ul>";
foreach ($listFeed as $calendar) {
    echo "<li>" . $calendar->title .
         " (Event Feed: " . $calendar->id . ")</li>";
}
echo "</ul>";

Retrieving Events

Like the list of calendars, events are also retrieved using the Zend_Gdata_Calendar service class. The event list returned is of type Zend_Gdata_Calendar_EventFeed and contains each event as an instance of Zend_Gdata_Calendar_EventEntry. As before, the iterator and accessors contained within the event feed instance allow inspection of individual events.

Queries

When retrieving events using the Calendar API, specially constructed query URLs are used to describe what events should be returned. The Zend_Gdata_Calendar_EventQuery class simplifies this task by automatically constructing a query URL based on provided parameters. A full list of these parameters is available at the » Queries section of the Google Data APIs Protocol Reference. However, there are three parameters that are worth special attention:

  • User is used to specify the user whose calendar is being searched for, and is specified as an email address. If no user is provided, “default” will be used instead to indicate the currently authenticated user (if authenticated).
  • Visibility specifies whether a user's public or private calendar should be searched. If using an unauthenticated session and no MagicCookie is available, only the public feed will be available.
  • Projection specifies how much data should be returned by the server and in what format. In most cases you will want to use the “full” projection. Also available is the “basic” projection, which places most meta-data into each event’s content field as human readable text, and the “composite” projection which includes complete text for any comments alongside each event. The “composite” view is often much larger than the “full” view.

Retrieving Events In Order Of Start Time

The example below illustrates the use of the Zend_Gdata_Query class and specifies the private visibility feed, which requires that an authenticated connection is available to the calendar servers. If a MagicCookie is being used for authentication, the visibility should be instead set to “private-magicCookieValue“, where magicCookieValue is the random string obtained when viewing the private XML address in the Google Calendar UI. Events are requested chronologically by start time and only events occurring in the future are returned.

$query = $service->newEventQuery();
$query->setUser('default');
// Set to $query->setVisibility('private-magicCookieValue') if using
// MagicCookie auth
$query->setVisibility('private');
$query->setProjection('full');
$query->setOrderby('starttime');
$query->setFutureevents('true');

// Retrieve the event list from the calendar server
try {
    $eventFeed = $service->getCalendarEventFeed($query);
} catch (Zend_Gdata_App_Exception $e) {
    echo "Error: " . $e->getMessage();
}

// Iterate through the list of events, outputting them as an HTML list
echo "<ul>";
foreach ($eventFeed as $event) {
    echo "<li>" . $event->title . " (Event ID: " . $event->id . ")</li>";
}
echo "</ul>";

Additional properties such as ID, author, when, event status, visibility, web content, and content, among others, are available within Zend_Gdata_Calendar_EventEntry. Refer to the » Zend Framework API Documentation and the » Calendar Protocol Reference for a complete list.

Retrieving Events In A Specified Date Range

To print out all events within a certain range, for example from December 1, 2006 through December 15, 2006, add the following two lines to the previous sample. Take care to remove "$query->setFutureevents('true')", since futureevents will override startMin and startMax.

$query->setStartMin('2006-12-01');
$query->setStartMax('2006-12-16');

Note that startMin is inclusive whereas startMax is exclusive. As a result, only events through 2006-12-15 23:59:59 will be returned.

Retrieving Events By Fulltext Query

To print out all events which contain a specific word, for example “dogfood”, use the setQuery() method when creating the query.

$query->setQuery("dogfood");

Retrieving Individual Events

Individual events can be retrieved by specifying their event ID as part of the query. Instead of calling getCalendarEventFeed(), getCalendarEventEntry() should be called.

$query = $service->newEventQuery();
$query->setUser('default');
$query->setVisibility('private');
$query->setProjection('full');
$query->setEvent($eventId);
try {
    $event = $service->getCalendarEventEntry($query);
} catch (Zend_Gdata_App_Exception $e) {
    echo "Error: " . $e->getMessage();
}

In a similar fashion, if the event URL is known, it can be passed directly into getCalendarEntry() to retrieve a specific event. In this case, no query object is required since the event URL contains all the necessary information to retrieve the event.

$eventURL = "http://www.google.com/calendar/feeds/default/private"
          . "/full/g829on5sq4ag12se91d10uumko";
try {
    $event = $service->getCalendarEventEntry($eventURL);
} catch (Zend_Gdata_App_Exception $e) {
    echo "Error: " . $e->getMessage();
}

Creating Events

Creating Single-Occurrence Events

Events are added to a calendar by creating an instance of Zend_Gdata_EventEntry and populating it with the appropriate data. The calendar service instance (Zend_Gdata_Calendar) is then used to transparently convert the event into XML and POST it to the calendar server. Creating events requires either an AuthSub or ClientAuth authenticated connection to the calendar server.

At a minimum, the following attributes should be set:

  • Title provides the headline that will appear above the event within the Google Calendar UI.
  • When indicates the duration of the event and, optionally, any reminders that are associated with it. See the next section for more information on this attribute.

Other useful attributes that may optionally be set include:

  • Author provides information about the user who created the event.
  • Content provides additional information about the event which appears when the event details are requested from within Google Calendar.
  • EventStatus indicates whether the event is confirmed, tentative, or canceled.
  • Transparency indicates whether the event should consume time on the user’s free/busy list.
  • WebContent allows links to external content to be provided within an event.
  • Where indicates the location of the event.
  • Visibility allows the event to be hidden from the public event lists.

For a complete list of event attributes, refer to the » Zend Framework API Documentation and the » Calendar Protocol Reference. Attributes that can contain multiple values, such as where, are implemented as arrays and need to be created accordingly. Be aware that all of these attributes require objects as parameters. Trying instead to populate them using strings or primitives will result in errors during conversion to XML.

Once the event has been populated, it can be uploaded to the calendar server by passing it as an argument to the calendar service's insertEvent() function.

// Create a new entry using the calendar service's magic factory method
$event = $service->newEventEntry();

// Populate the event with the desired information
// Note that each attribute is created as an instance of a matching class
$event->title = $service->newTitle("My Event");
$event->where = array($service->newWhere("Mountain View, California"));
$event->content =
    $service->newContent("This is my awesome event. RSVP required.");

// Set the date using RFC 3339 format.
$startDate = "2008-01-20";
$startTime = "14:00";
$endDate = "2008-01-20";
$endTime = "16:00";
$tzOffset = "-08";

$when = $service->newWhen();
$when->startTime = "{$startDate}T{$startTime}:00.000{$tzOffset}:00";
$when->endTime = "{$endDate}T{$endTime}:00.000{$tzOffset}:00";
$event->when = array($when);

// Upload the event to the calendar server
// A copy of the event as it is recorded on the server is returned
$newEvent = $service->insertEvent($event);

Event Schedules and Reminders

An event’s starting time and duration are determined by the value of its when property, which contains the properties startTime, endTime, and valueString. StartTime and EndTime control the duration of the event, while the valueString property is currently unused.

All-day events can be scheduled by specifying only the date and omitting the time when setting startTime and endTime. Likewise, zero-duration events can be specified by omitting the endTime. In all cases, date and time values should be provided in » RFC 3339 format.

// Schedule the event to occur on December 05, 2007 at 2 PM PST (UTC-8)
// with a duration of one hour.
$when = $service->newWhen();
$when->startTime = "2007-12-05T14:00:00-08:00";
$when->endTime = "2007-12-05T15:00:00-08:00";

// Apply the when property to an event
$event->when = array($when);
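
By contrast, a minimal sketch of an all-day event uses date-only values (assuming the usual Calendar convention that the end date is exclusive):

// All-day event: only dates are given, the time portion is omitted.
$when = $service->newWhen();
$when->startTime = "2007-12-05";
$when->endTime = "2007-12-06"; // exclusive, so this is a single-day event
$event->when = array($when);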

The when attribute also controls when reminders are sent to a user. Reminders are stored in an array, and each event may have up to five reminders associated with it.

For a reminder to be valid, it needs to have two attributes set: method and a time. Method can accept one of the following strings: “alert”, “email”, or “sms”. The time should be entered as an integer and can be set with either the property minutes, hours, days, or absoluteTime. However, a valid request may only have one of these attributes set. If a mixed time is desired, convert to the most precise unit available. For example, 1 hour and 30 minutes should be entered as 90 minutes.

// Create a new reminder object. It should be set to send an email
// to the user 10 minutes beforehand.
$reminder = $service->newReminder();
$reminder->method = "email";
$reminder->minutes = "10";

// Apply the reminder to an existing event's when property
$when = $event->when[0];
$when->reminders = array($reminder);

Creating Recurring Events

Recurring events are created the same way as single-occurrence events, except a recurrence attribute should be provided instead of a where attribute. The recurrence attribute should hold a string describing the event’s recurrence pattern using properties defined in the iCalendar standard (» RFC 2445).

Exceptions to the recurrence pattern will usually be specified by a distinct recurrenceException attribute. However, the iCalendar standard provides a secondary format for defining recurrences, and the possibility that either may be used must be accounted for.

Due to the complexity of parsing recurrence patterns, further information on this topic is outside the scope of this document. However, more information can be found in the » Common Elements section of the Google Data APIs Developer Guide, as well as in RFC 2445.

// Create a new entry using the calendar service's magic factory method
$event = $service->newEventEntry();

// Populate the event with the desired information
// Note that each attribute is created as an instance of a matching class
$event->title = $service->newTitle("My Recurring Event");
$event->where = array($service->newWhere("Palo Alto, California"));
$event->content =
    $service->newContent('This is my other awesome event, ' .
                         'occurring all-day every Tuesday from ' .
                         '2007-05-01 until 2007-09-04. No RSVP required.');

// Set the duration and frequency by specifying a recurrence pattern.
$recurrence = "DTSTART;VALUE=DATE:20070501\r\n" .
        "DTEND;VALUE=DATE:20070502\r\n" .
        "RRULE:FREQ=WEEKLY;BYDAY=Tu;UNTIL=20070904\r\n";
$event->recurrence = $service->newRecurrence($recurrence);

// Upload the event to the calendar server
// A copy of the event as it is recorded on the server is returned
$newEvent = $service->insertEvent($event);

Using QuickAdd

QuickAdd is a feature which allows events to be created using free-form text entry. For example, the string “Dinner at Joe’s Diner on Thursday” would create an event with the title “Dinner”, location “Joe’s Diner”, and date “Thursday”. To take advantage of QuickAdd, create a new QuickAdd property set to TRUE and store the freeform text as a content property.

// Create a new entry using the calendar service's magic factory method
$event = $service->newEventEntry();

// Populate the event with the desired information
$event->content = $service->newContent("Dinner at Joe's Diner on Thursday");
$event->quickAdd = $service->newQuickAdd("true");

// Upload the event to the calendar server
// A copy of the event as it is recorded on the server is returned
$newEvent = $service->insertEvent($event);

Modifying Events

Once an instance of an event has been obtained, the event’s attributes can be locally modified in the same way as when creating an event. Once all modifications are complete, calling the event’s save() method will upload the changes to the calendar server and return a copy of the event as it was created on the server.

In the event another user has modified the event since the local copy was retrieved, save() will fail and the server will return a 409 (Conflict) status code. To resolve this a fresh copy of the event must be retrieved from the server before attempting to resubmit any modifications.

// Get the first event in the user's event list
$event = $eventFeed[0];

// Change the title to a new value
$event->title = $service->newTitle("Woof!");

// Upload the changes to the server
try {
    $event->save();
} catch (Zend_Gdata_App_Exception $e) {
    echo "Error: " . $e->getMessage();
}

Deleting Events

Calendar events can be deleted either by calling the calendar service’s delete() method and providing the edit URL of an event or by calling an existing event’s own delete() method.

In either case, the deleted event will still show up on a user’s private event feed if an updateMin query parameter is provided. Deleted events can be distinguished from regular events because they will have their eventStatus property set to “http://schemas.google.com/g/2005#event.canceled”.

// Option 1: Events can be deleted directly
$event->delete();

// Option 2: Events can be deleted by supplying the edit URL of the event
// to the calendar service, if known
$service->delete($event->getEditLink()->href);
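
To see those canceled entries, a hedged sketch of an event query that sets the updated-min parameter via setUpdatedMin() (the date value is an arbitrary example):

// Ask for events updated since the given date; deleted events then appear
// in the feed with eventStatus set to ".../g/2005#event.canceled".
$query = $service->newEventQuery();
$query->setUser('default');
$query->setVisibility('private');
$query->setProjection('full');
$query->setUpdatedMin('2007-01-01');
$eventFeed = $service->getCalendarEventFeed($query);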

Accessing Event Comments

When using the full event view, comments are not directly stored within an entry. Instead, each event contains a URL to its associated comment feed which must be manually requested.

Working with comments is fundamentally similar to working with events, with the only significant difference being that a different feed and event class should be used and that the additional meta-data for events such as where and when does not exist for comments. Specifically, the comment’s author is stored in the author property, and the comment text is stored in the content property.

// Extract the comment URL from the first event in a user's feed list
$event = $eventFeed[0];
$commentUrl = $event->comments->feedLink->url;

// Retrieve the comment list for the event
try {
    $commentFeed = $service->getFeed($commentUrl);
} catch (Zend_Gdata_App_Exception $e) {
    echo "Error: " . $e->getMessage();
}

// Output each comment as an HTML list
echo "<ul>";
foreach ($commentFeed as $comment) {
    echo "<li><em>Comment By: " . $comment->author->name . "</em><br/>" .
         $comment->content . "</li>";
}
echo "</ul>";

Google APIs: How to use OAuth 2.0 for access

[Source: https://developers.google.com/accounts/docs/OAuth2?hl=es#scenarios]

Google APIs use the OAuth 2.0 protocol for authentication and authorization. Google supports several OAuth 2.0 flows that cover the typical scenarios: web server, JavaScript, device, installed application, and server-to-server.

OAuth 2.0 is a relatively simple protocol, and a developer can integrate with Google's OAuth 2.0 endpoints without much effort. In brief: register your application with Google, redirect the browser to a URL, parse the token from the response, and send the token to the Google API service you want to use.

This article is an introduction to the OAuth 2.0 scenarios that Google supports and provides links to more detailed content.

Given the security implications of getting the implementation right, we strongly recommend that developers use OAuth 2.0 libraries when interacting with Google's OAuth 2.0 endpoints (see Client libraries for more information). More features will be added to these libraries over time.

Contents

  1. Basic Steps
  2. Simple Example
  3. Scenarios
    1. Login
    2. Web Server Applications
    3. Client-side Applications
    4. Installed Applications
    5. Devices
    6. Service Accounts
  4. Client Libraries

Basic Steps

Applications follow the same basic pattern when accessing any Google API using OAuth 2.0. At a high level, the following four steps are always involved:

1. Register the application

All applications that access a Google API must be registered through the APIs Console. The result of this registration process is a set of values that are known to both Google and your application (i.e. the client ID, the client secret, the JavaScript origins, the redirect URI, etc.). The set of values depends on the type of application; for example, a JavaScript application does not require a secret, but a web server application does.

2. Obtain an access token from the Google Authorization Server

Before your application can access a Google API, it must obtain an access token that grants access to that API. A single access token can grant varying degrees of access to multiple APIs. The set of resources and operations permitted by an access token is controlled during the access token request via a parameter called 'scope'. Several scopes can be requested in a single request.

There are several ways to make this request, and they vary based on the type of application being built. For example, a JavaScript application may request an access token using a browser redirect to Google, while an application installed on a device with no browser uses web service requests.

The request requires the user to log in to Google. After logging in, the user is shown the permissions requested by the application and is asked whether to grant those permissions to the application. This process is called "user consent".

If the user grants the permissions, your application is sent an access token or an authorization code (which is used to obtain an access token). If the user does not grant the permissions, the Google Authorization Server returns an error.

3. Send the access token to an API

After an application has obtained an access token, it can send the token in a request to any Google API. Access tokens are valid only for the set of operations and resources described in the token request. For example, if an access token is issued for the Google+ API, it does not grant access to the Google Contacts API. It may, however, be sent to the Google+ API multiple times for similar operations.

Access tokens are sent to a Google API in the HTTP Authorization header, or as a query string parameter (if HTTP header operations are not available).
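
As an illustrative, hedged sketch (reusing the Zend_Http_Client class from the Zend Framework section above; $accessToken is assumed to have been obtained in step 2):

// Send the access token in the HTTP Authorization header when calling an API.
$client = new Zend_Http_Client('https://www.googleapis.com/oauth2/v1/userinfo');
$client->setHeaders('Authorization', 'Bearer ' . $accessToken);
$response = $client->request('GET');
echo $response->getBody(); // JSON document with the basic account information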

4. Refresh the access token (optional)

Access tokens have a limited lifetime. In some cases an application needs access to a Google API beyond the lifetime of a single access token. When this is the case, your application can obtain what is called a refresh token. A refresh token allows your application to obtain new access tokens.

Note that there are limits on the number of refresh tokens that will be issued; one limit per client/user combination, and another per user across all clients. You should save refresh tokens in long-term storage and continue to use them as long as they remain valid. If your application requests too many refresh tokens, it may run into these limits, in which case older refresh tokens will stop working.
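
A hedged sketch of that exchange in the same style (the token endpoint and parameter names follow the OAuth 2.0 specification; the client values come from your APIs Console registration, and $refreshToken is assumed to have been stored earlier):

// Exchange a stored refresh token for a fresh access token.
$client = new Zend_Http_Client('https://accounts.google.com/o/oauth2/token');
$client->setParameterPost(array(
    'client_id'     => $clientId,
    'client_secret' => $clientSecret,
    'refresh_token' => $refreshToken,
    'grant_type'    => 'refresh_token',
));
$response = $client->request('POST');
$data = Zend_Json::decode($response->getBody()); // contains a new 'access_token'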

Simple Example

Below is a trivial example of how to use Google's OAuth 2.0 endpoint to obtain access to a Google API. It's a Python web application running on App Engine. The flow of the example is fairly straightforward:

  1. When the application loads, it shows the user a "Login" link
  2. When the user clicks Login, they are asked to log in to Google and to authorize the application's access to basic account information (user consent)
  3. If the user grants the permissions, the application receives an access token
  4. Once it has the access token, the application presents the access token to the Google API that provides basic account information (https://www.googleapis.com/oauth2/v1/userinfo)
  5. The application renders the basic account information in a simple table

Try it out for yourself!

Scenarios

Login

User login is an essential part of accessing most Google APIs. Your application can use Google's authentication system to delegate authentication and profile retrieval to Google.

The login sequence starts by redirecting the browser (in a popup, or a normal web page if needed) to a Google URL with a set of query string parameters. Google takes care of selecting the correct session (the user may have previously logged in with multiple identities), accepting and validating the user's credentials and one-time password (if the account requires it), obtaining consent to share basic profile information, and returning an OAuth 2.0 access token to your application.

The result of the user authentication sequence is an OAuth 2.0 access token, with which you can obtain the user's profile information by calling the UserInfo Google API.

The information returned by the UserInfo service can be used during user registration, minimizing the data the user has to enter when signing up.

There is also the added benefit that your site does not have to keep usernames and passwords in secure storage.

For more information, see the Login documentation.

Web Server Applications

Google's OAuth 2.0 Authorization Server supports server-side web applications (e.g. PHP, Java, Python, Ruby, ASP.NET, etc.). This sequence starts by redirecting the browser (in a popup, or a full page if needed) to a Google URL with a set of query string parameters that indicate the API and the type of access the application is requesting. As in the other scenarios, Google handles user authentication, selection of any sessions the user already has, and consent, but the result of the sequence in this case is an authorization code. After receiving the authorization code, the application can exchange the code for an access token and a refresh token.

The application can access the Google API once the access token has been received.

For more information, see the Web Server documentation.

Client-side Applications

The Google OAuth 2.0 Authorization Server supports JavaScript applications (JavaScript running in a browser). Like the other scenarios, this one begins by redirecting a browser (popup, or full-page if needed) to a Google URL with a set of query string parameters that indicate the type of Google API access the application requires. Google handles the user authentication, session selection, and user consent. The result is an access token. The client should then validate the token. After validation, the client includes the access token in a Google API request.

For more information, see the Client-side documentation.

Installed Applications

The Google OAuth 2.0 Authorization Server supports desktop and mobile applications (e.g. Android, Windows, Mac OS, iOS, Blackberry, etc.). These applications, in general, cannot keep secrets.

The sequence for installed applications is similar to the one shown in the Web Server section, but there are three exceptions:

  1. When registering the application, you specify that the application is an Installed application. This results in a different value for the redirect_uri parameter.
  2. The client_id and client_secret obtained during registration are embedded in the source code of your application. In this context, the client_secret is obviously not treated as a secret.
  3. The authorization code is returned to your application differently.

This sequence begins by redirecting a browser (either a browser embedded in the application or the system browser) to a Google URL with a set of query parameters that indicate the type of Google API access the application requires. Like other scenarios, Google handles the user authentication, session selection, and user consent. The result of the sequence is an authorization code. Your application can choose to have the authorization code returned in the title of the web page or to a http://localhost port. Once the application receives the authorization code, it can exchange the code for an access token and a refresh token.

After the application has received the access and refresh tokens, it may store the refresh token for future use, and use the access token to access a Google API. Once the access token expires, the application obtains a new one with the refresh token.

For more information, see the Installed Application documentation.

Devices

The Google OAuth 2.0 Authorization Server supports applications that run on devices with limited input capabilities (e.g. game consoles, video cameras, printers). In these cases, the user must have separate access to a computer or device with richer input capabilities. The user will first interact with application on the limited device, obtain an URL and a code from the device, then switch to a device or computer with richer input capabilities and launch a browser. Once in a browser, the user will navigate to the URL specified on the device, authenticate, and enter the code.

The sequence begins with the application making a request to a Google URL for a new code. The response contains several parameters, including the URL and code that should be shown to the user. The application should present these values to the user, and begin polling a Google URL at a specified interval. The response to a message in this polling sequence indicates whether or not the user has approved access. After the user approves access (via another computer or device), the response contains an access and refresh token.

After the application has received the access and refresh tokens, it may store the refresh token for future use, and use the access token to access a Google API. Once the access token expires, the application obtains a new one with the refresh token.

For more information, see the Device documentation.

Service Accounts

Several Google APIs act on behalf of an application rather than accessing user information. Examples of these APIs include the Prediction API and Google Cloud Storage. When an application accesses Google Cloud Storage, it needs to prove its own identity before performing operations in the cloud, but it does not need to obtain the user's approval.

There is also an option for an application to request delegated access to a resource in enterprise environments. Google's OAuth 2.0 Authorization Server supports these kinds of applications, and this section describes how an application can prove its identity before accessing a compatible Google API.

The mechanics of this interaction require applications to cryptographically create JSON Web Tokens (JWTs). Here too, developers are encouraged to use a library. Writing this code without a library that abstracts token creation and signing is prone to errors that can have a severe impact on the security of your application. For a list of libraries that support this flow, see the OAuth 2.0 Service Accounts documentation.

This sequence starts with the creation of a Service Account. You can create one in the Google APIs Console, or, if you are using Google App Engine, one is created automatically when you start your GAE application. While creating the Service Account in the Google APIs Console you will be asked to download a private key. Be sure to keep this private key in a safe place. After the Service Account has been created, you will also have access to the client ID associated with the private key. You will need both when coding your application.

After obtaining the client ID and private key from the Google APIs Console, create a JWT and sign it with the private key, and construct an access token request in the appropriate format. Your application then sends the token request to the Google OAuth 2.0 Authorization Server and an access token will be returned. The application can access the API only after receiving the access token. When the access token expires, the application repeats the process.

For more information, see the Service Account documentation.

Client libraries

The following client libraries make implementing OAuth 2.0 even simpler by integrating with popular frameworks:

GAE: Images API

[Source: https://developers.google.com/appengine/docs/java/images/]

Images Java API Overview

App Engine provides the ability to manipulate image data using a dedicated Images service. The Images service can resize, rotate, flip, and crop images; it can composite multiple images into a single image; and it can convert image data between several formats. It can also enhance photographs using a predefined algorithm. The API can also provide information about an image, such as its format, width, height, and a histogram of color values.

The Images service can accept image data directly from the app, or it can use a Blobstore value or a Google Cloud Storage value. When the source is the Blobstore or Google Cloud Storage, the size of the image to transform can be up to the maximum size of a Blobstore value or Google Cloud Storage value. However, the transformed image is returned directly to the app, and so must be no larger than 32 megabytes. This is potentially useful for making thumbnail images of photographs uploaded to the Blobstore or Google Cloud Storage by users.

  1. Transforming Images in Java
  2. Available Image Transformations
  3. Image Formats
  4. Transforming Images from the Blobstore
  5. Images and the Development Server
  6. Quotas and Limits

Transforming Images in Java

The Image service Java API lets you apply transformations to images, using a service instead of performing image processing on the application server. The app prepares an Image object with the image data to transform, and a Transform object with instructions on how to transform the image. The app gets an ImagesService object, then calls its applyTransform() method with the Image and the Transform objects. The method returns an Image object of the transformed image.

The app gets ImagesService, Image and Transform instances using the ImagesServiceFactory.

import com.google.appengine.api.images.Image;
import com.google.appengine.api.images.ImagesService;
import com.google.appengine.api.images.ImagesServiceFactory;
import com.google.appengine.api.images.Transform;

// ...
        byte[] oldImageData;  // ...

        ImagesService imagesService = ImagesServiceFactory.getImagesService();

        Image oldImage = ImagesServiceFactory.makeImage(oldImageData);
        Transform resize = ImagesServiceFactory.makeResize(200, 300);

        Image newImage = imagesService.applyTransform(resize, oldImage);

        byte[] newImageData = newImage.getImageData();

Multiple transforms can be combined into a single action using a CompositeTransform instance. See the images API reference.
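
For instance, a hedged sketch that chains a resize and a horizontal flip into one transform (continuing from the snippet above, so imagesService and oldImage already exist):

import com.google.appengine.api.images.CompositeTransform;

// ...
        // Chain several transforms; they are applied in the order added.
        CompositeTransform chain = ImagesServiceFactory.makeCompositeTransform()
                .concatenate(ImagesServiceFactory.makeResize(200, 300))
                .concatenate(ImagesServiceFactory.makeHorizontalFlip());

        Image newImage = imagesService.applyTransform(chain, oldImage);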

Available Image Transformations

The Images service can resize, rotate, flip, and crop images, and enhance photographs. It can also composite multiple images into a single image.

Resize

You can resize the image while maintaining the same aspect ratio. Neither the width nor the height of the resized image can exceed 4000 pixels.

Rotate

You can rotate the image in 90 degree increments.

Flip Horizontally

You can flip the image horizontally.

Flip Vertically

You can flip the image vertically.

Crop

You can crop the image with a given bounding box.

I’m Feeling Lucky

The “I’m Feeling Lucky” transform enhances dark and bright colors in an image and adjusts both color and contrast to optimal levels.

Image Formats

The service accepts image data in the JPEG, PNG, WEBP, GIF (including animated GIF), BMP, TIFF and ICO formats.

It can return transformed images in the JPEG, WEBP and PNG formats. If the input format and the output format are different, the service converts the input data to the output format before performing the transformation.

Transforming Images from the Blobstore

The Images service can use a value from the Blobstore as the source for a transformation. You have two ways to transform images from the Blobstore:

  1. Using the ImagesServiceFactory class allows you to perform simple image transformations, such as crop, flip, and rotate.
  2. Using getServingUrl() allows you to dynamically resize and crop images, so you don’t need to store different image sizes on the server. This method returns a URL that serves the image, and transformations to the image are encoded in this URL.

Using the ImagesServiceFactory Class

You can transform images from the Blobstore as long as the image size is smaller than the maximum Blobstore value size. Note, however, that the result of the transformation is returned directly to the app, and must therefore not exceed the API response limit of 32 megabytes. You can use this to make thumbnail images of photographs uploaded by users.

To transform an image from the Blobstore in Java, you create the Image object by calling the static method ImagesServiceFactory.makeImageFromBlob(), passing it a blobstore.BlobKey value. The rest of the API behaves as expected. The applyTransform() method returns the result of the transforms, or throws an ImagesServiceFailureException if the result is larger than the maximum size of 32 megabytes.

import com.google.appengine.api.images.Image;
import com.google.appengine.api.images.ImagesService;
import com.google.appengine.api.images.ImagesServiceFactory;
import com.google.appengine.api.images.Transform;

// ...
        BlobKey blobKey;  // ...

        ImagesService imagesService = ImagesServiceFactory.getImagesService();

        Image oldImage = ImagesServiceFactory.makeImageFromBlob(blobKey);
        Transform resize = ImagesServiceFactory.makeResize(200, 300);

        Image newImage = imagesService.applyTransform(resize, oldImage);

        byte[] newImageData = newImage.getImageData();

Using getServingUrl()

The getServingUrl() method allows you to generate a stable, dedicated URL for serving web-suitable image thumbnails. You simply store a single copy of your original image in Blobstore, and then request a high-performance per-image URL. This special URL can serve that image resized and/or cropped automatically, and serving from this URL does not incur any CPU or dynamic serving load on your application (though bandwidth is still charged as usual). Images are served with low latency from a highly optimized, cookieless infrastructure.

The URL returned by this method is always public, but not guessable; private URLs are not currently supported. If you wish to stop serving the URL, delete it using the deleteServingUrl method.

If you supply the arguments, this method returns a URL encoded with the arguments specified. If you do not supply any arguments, this method returns the default URL for the image, for example:

http://your_app_id.appspot.com/randomStringImageId

You can then add arguments to this URL to get the desired size and crop parameters. The available arguments are:

  • =sxx where xx is an integer from 0–1600 representing the length, in pixels, of the image’s longest side. For example, adding =s32 resizes the image so its longest dimension is 32 pixels.
  • =sxx-c where xx is an integer from 0–1600 representing the cropped image size in pixels, and -c tells the system to crop the image.
// Resize the image to 32 pixels (aspect-ratio preserved)
http://your_app_id.appspot.com/randomStringImageId=s32

// Crop the image to 32 pixels
http://your_app_id.appspot.com/randomStringImageId=s32-c
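On the Java side, the serving URL is obtained from the ImagesService. A minimal sketch, following the Blobstore example above (the deleteServingUrl call is only needed when you want to stop serving the image):

import com.google.appengine.api.blobstore.BlobKey;
import com.google.appengine.api.images.ImagesService;
import com.google.appengine.api.images.ImagesServiceFactory;

// ...
        BlobKey blobKey;  // ...

        ImagesService imagesService = ImagesServiceFactory.getImagesService();

        // Returns the stable serving URL; append =s... or =s...-c arguments as needed.
        String servingUrl = imagesService.getServingUrl(blobKey);

        // Stop serving the image when it is no longer needed.
        imagesService.deleteServingUrl(blobKey);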

Images and the Development Server

The development server uses your local machine to perform the capabilities of the Images service.

The Java development server uses the ImageIO framework to simulate the Image service. The “I’m Feeling Lucky” photo enhancement feature is not supported. The WEBP image format is only supported if a suitable decoder plugin has been installed. The Java VP8 decoder plugin can be used, for example.

Quotas and Limits

Each Images service request counts toward the Image Manipulation API Calls quota. An app can perform multiple transformations of an image in a single API call.

Data sent to the Images service counts toward the Data Sent to (Images) API quota. Data received from the Images service counts toward the Data Received from (Images) API quota.

Each transformation of an image counts toward the Transformations Executed quota.

For more information on quotas, see Quotas, and the “Quota Details” section of the Admin Console.

In addition to quotas, the following limits apply to the use of the Images service:

  • Maximum data size of image sent to the service: 32 megabytes
  • Maximum data size of image received from the service: 32 megabytes
  • Maximum size of image sent to or received from the service: 50 megapixels

EGit User Guide

[http://wiki.eclipse.org/EGit/User_Guide]

Getting Started

Overview

If you’re new to Git or distributed version control systems generally, then you might want to read Git for Eclipse Users first. More background and details can be found in the on-line book Pro Git.

If you are coming from CVS, you can find common CVS workflows mapped to Git in Platform-releng/Git Workflows.

Basic Tutorial: Adding a project to version control

Configuration

Identifying yourself

Whenever the history of the repository is changed (technically, whenever a commit is created), Git keeps track of the user who created that commit. The identification consists of a name (typically a person’s name) and an e-mail address. This information is stored in the file ~/.gitconfig under dedicated keys.
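Outside of Eclipse, the same two keys can be set with plain Git on the command line, which writes them to ~/.gitconfig (the name and e-mail address below are placeholders):

git config --global user.name "Tom Example"
git config --global user.email "tom@example.com"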

EGit will ask you for this information when you create your first commit. By default, this dialog is shown only once until you create a new workspace or tick the checkbox “Show initial configuration dialog” on the Git Preference page:

Image:Egit-0.11-initialConfigurationDialog.png

You can also untick “Don’t show this dialog again” if you want to see it again later.

Instead of using this dialog, you can always change this information using the Git configuration:

  • Click Preferences > Team > Git > Configuration
  • Click New Entry and enter the key value pairs user.email and user.name

Image:Egit-0.9-getstarted-email.png

Image:Egit-0.9-getstarted-name.png

Setting up the Home Directory on Windows

Add the environment variable HOME to your environment variables.

  1. In Windows 7, type “environment” at the start menu
  2. Select “Edit environment variables for your account”
  3. Click the “New” button.
  4. Enter “HOME” in the name field
  5. Enter “%USERPROFILE%” or some other path in the value field.
  6. Click OK, and OK again. You have just added the Home directory on Windows.

EGit needs this path for looking up the user configuration (.gitconfig). HOME should point to your home directory, e.g. C:\Users\Tom. Ensure correct case! E.g. C:\users instead of C:\Users may cause problems!

If the HOME variable is not defined the home directory will be calculated by concatenating HOMEDRIVE and HOMEPATH.

If both HOME and HOMEDRIVE are not defined HOMESHARE will be used.

EGit shows a warning if HOME is not defined explicitly. Keep in mind that if you set the HOME environment variable while Eclipse is running, you will still see following warning. You will have to restart Eclipse for it to recognize the HOME value.

Image:Egit no home.png

Pointing out the System wide configuration

If you use Git for Windows as a companion to EGit, make sure EGit knows where Git is installed so it can find the “system wide settings”, e.g. how core.autocrlf is set. Go to the settings and look under Team>Git>Configuration and then the System Settings tab.

If you selected one of the options to use Git from the Command Line Prompt when you installed Git for Windows, then the location of the system wide settings is filled in with a path and everything is fine. If not, use the Browse button to locate where Git is installed, e.g. C:\Program Files (x86)\Git.

This advice also applies to users of other Git packagings, e.g. Git under Cygwin or TortoiseGit.

Non-Windows users should in theory check this setting, but the system wide settings are usually not used on non-Windows platforms.

Create Repository

  • Create a new Java project HelloWorld. (In this case, the project was built outside of your Eclipse Workspace.)

Image:Egit-0.9-getstarted-project.png

  • Select the project, click File > Team > Share Project
  • Select repository type Git and click Next

Image:Egit-0.9-getstarted-share.png

  • To configure the Git repository select the new project HelloWorld

Image:Egit-0.9-getstarted-create-project.png

  • Click Create Repository to initialize a new Git repository for the HelloWorld project. If your project already resides in the working tree of an existing Git repository the repository is chosen automatically.

Image:Egit-0.9-getstarted-project-created.png

  • Click Finish to close the wizard.
  • The decorator text “[master]” behind the project shows that this project is tracked in a repository on the master branch, and the question mark decorators show that the .classpath, .project, and .settings files are not yet under version control.

Image:Egit-0.9-getstarted-shared-project.png

Track Changes

  • Click Team > Add on the project node. (This menu item may read Add to Index on recent versions of Egit)
  • The + decorators show that now the project’s files have been added to version control
  • Mark the “bin” folder as “ignored by Git”, either by right-clicking on it and selecting Team > Ignore or by creating a file .gitignore in the project folder with the following content
/bin
  • This excludes the bin folder from Git’s list of tracked files.
  • Add .gitignore to version control (Team > Add):

Image:Egit-0.11-getstarted-ignore-added.png

  • You may have to set your Package Explorer filters in order to see .gitignore displayed in the Package Explorer. To access the filters, select the down arrow at the right of the Package Explorer tab to display the View Menu.

Image:Pe downarrow1.png

  • Select Filters… from the View Menu and you will be presented with the Java Element Filters dialog. Unselect the top entry to display files that begin with . (period) such as .gitignore.

Image:Filters.png

  • Click Team > Commit in the project context menu
  • Enter a commit message explaining your change; the first line (followed by an empty line) will become the short log for this commit. By default the author and committer are taken from the .gitconfig file in your home directory.
  • You may click Add Signed-off-by to add a Signed-off-by: tag.
  • If you are committing the change of another author you may alter the author field to give the name and email address of the author.
  • Click Commit to commit your first change.

Image:Egit-0.9-getstarted-commit.png

  • Note that the decorators of the committed files changed.

Image:Egit-0.9-getstarted-commited.png

Inspect History

  • Click Team > Show in History from the context menu to inspect the history of a resource

Image:Egit-0.11-getstarted-history1.png

  • Create a new Java class Hello.java and implement it
  • Add it to version control and commit your change
  • Improve your implementation and commit the improved class
  • The resource history should now show 2 commits for this class

Image:Egit-0.9-getstarted-application.png

Image:Egit-0.11-getstarted-history2.png

  • Click the Compare Mode toggle button in the History View
  • Double click src/Hello.java in the Resource list of the History View to open your last committed change in the Compare View

Image:Egit-0.11-getstarted-compare.png
Congratulations, you have just mastered your first project using Git!

GitHub Tutorial

Create Local Repository

Create Repository at GitHub

  • create a new repository at GitHub

Image:Egit-0.10-github-create-repo.png

On the next screen you can see the URLs you may use to access your fresh new repository:

  • click SSH to choose the SSH protocol. It can be used for read and write access
  • click HTTP to choose the HTTP protocol. It can also be used for read and write access.
  • click Git Read-Only to choose the anonymous git protocol for cloning. It’s the most efficient protocol git supports. Since the git protocol doesn’t support authentication it’s usually used to provide efficient read-only access to public repositories.

Image:Egit-0.10-github-cloneurl.png

Eclipse SSH Configuration

  • Open the Eclipse Preferences and ensure that your SSH2 home is configured correctly (usually this is ~/.ssh) and contains your SSH2 keys

Image:Egit-0.10-ssh-preferences.png

Push Upstream

  • Click Team > Remote > Push… and copy and paste the SSH URL of your new GitHub repository
  • If you are behind a firewall which doesn’t allow SSH traffic use the GitHub HTTPS URL instead and provide your GitHub user and password instead of using the uploaded public SSH key. To store your credentials into the Eclipse secure store click Store in Secure Store.
  • Note: many HTTP proxies are configured to block HTTP URLs containing a user name, since disclosing a user name in an HTTP URL is considered a security risk. In that case remove the username from the HTTP URL and only provide it in the user field. It will be sent as an HTTP header.

Image:Egit-0.10-github-pushurl.png

  • Click Next and on first connection accept GitHub’s host key.
  • Enter your SSH key’s passphrase and click OK.
  • On the next wizard page click Add all branches spec to map your local branch names 1:1 to the same branch names in the destination repository.

Image:Egit-0.10-github-push-refspec.png

  • Click Next. The push confirmation dialog will show a preview of the changes that will be pushed to the destination repository.

Image:Egit-0.10-github-push-preview.png

  • Click Finish to confirm that you want to push these changes.
  • The next dialog reports the result of the push operation.

Image:Egit-0.10-github-pushresult.png

  • Point your browser at your GitHub repository to see that your new repository content has arrived

Image:Egit-0.10-github-pushed-repo.png

EGit/Git For Eclipse Users

[Source: http://wiki.eclipse.org/EGit/Git_For_Eclipse_Users]

This post is aimed at those who have been using Eclipse and, until now, CVS or SVN. This article is about Git and what it means for an Eclipse user, and specifically, how it affects working on projects from Eclipse.org.

This article is not about the advantages of Git over CVS/SVN, or about Git versus other distributed version control systems such as Mercurial.

Once you have understood the conceptual differences between CVS/SVN and Git, and start using Git, you may find it hard to go back. Using Git is like watching colour TV: once you have discovered it, it is hard to go back to black and white.


Centralised version control systems

So, what do you need to know about Git? Well, both CVS and SVN are known as centralised version control systems (CVCS). That is, there is one Master repository where people share code; everyone checks out their code (or branch) from that repository, and checks changes back in. For code that needs to be sent person-to-person (for example, for review, or as a way of contributing fixes), it is possible to create a patch, which is a diff of your code against the given Master repository version (often HEAD, but sometimes a branch like Eclipse_35).

Two problems surface with a centralised version control system, although they aren’t immediately obvious:

  • You need to be ‘online’ to perform actions, like diff or patch.*
  • Patches generated against a particular branch can become outdated fairly quickly as development of the snapshot-in-time branch moves on (e.g. when it is time to apply the patch, HEAD is different than it was when the patch was generated).

The first problem is rarely apparent for those working with Eclipse in a location at (or near) the repository itself. Those in the same continent will rarely experience delays due to global network variation; in addition, they tend to be employed in an organisation and sit at a desktop connected to wired networking for most of the day. Road warriors (those with laptops and who code from the local coffee shop) tend to operate in a more frequently disconnected mode, which limits repository functionality to when they are connected. (*A note on SVN: since SVN keeps the last-known checkout, it’s possible to do a limited set of operations while disconnected from SVN, like diff from the last-known checkout. However, in general, you are prevented from doing many of the operations that are possible while connected.)

The second problem is simply an artifact of the way in which patches work. These are generally performed against HEAD (a snapshot in time) and then applied later (sometimes months or even eight years later). Although they record the version of the file they were patched against, the patch itself is sensitive to big changes in the file, sometimes leading to the patch’s being inapplicable. Even relatively simple operations, like a file rename, can throw a well-formed CVCS patch out of the window.

Distributed Version Control Systems

Distributed Version Control Systems (DVCS) are a family of version control systems unlike those with which many are familiar. Two of the most popular are Git and Hg, although others (Darcs, Bazaar, Bitkeeper, etc.) exist. In a DVCS each user has a complete copy of the repository, including its entire history. A user may potentially push changes to or pull changes from any other repository. Although policy may confer special status on one or more repositories, in principle every repository is a first-class citizen in the DVCS model. This stands in contrast to a centralised version control system, where every individual checks files into and out of an authoritative repository.

☞ Each user has a full copy of the repository

This initially sounds impossible, especially if you’re used to centralised version control systems, and even more so if they involve pessimistic file-based locking. (If you do firmly want pessimistic locking, please stop reading here. Thanks.) Questions arise, like:

  1. If everyone has a copy of the repository, don’t all the forks diverge?
  2. Where is the master repository kept?
  3. Isn’t the repository, like, really big?
  4. No really, I like pessimistic locking.

Let’s answer each one of these questions in turn. (If I missed your favourite question, then please feel free to add one in the comments.)

  1. Yes, the forks can diverge. But after all, open-source can diverge anyway. There’s nothing stopping me from forking the dev.eclipse.org codebase, and publishing my own version of it called Maclipse. The key thing here is that whilst forks are possible, forking is not a bad thing in itself. After all, look at Linux and Android; originally, they shared a history, but are now different. XFree86 and X.Org split over licensing issues. MySQL was forked to create MariaDB, and so on. The key thing about forks is that the best survive. X.Org is now the default X client, whereas XFree86 was the default beforehand. The jury is still out on MySQL versus MariaDB. And although Maclipse has been downloaded literally tens of times, it hasn’t caused a dent in Eclipse’s growth.
    ☞ Forks happen
  2. Do not try to bend the master repository – that’s impossible. Instead, try only to realise the truth; there is no master repository. In fact, there’s a veritable matrix of master repositories possible. Each repository can be considered a node in a graph; nodes in the graph can be connected to each other in any way. However, rather than an n-n set of links, the graph usually self-organises into a tree-like structure, logically associating with one point that acts as a funnel for everything else. In a sense, that’s a master repository – everyone has already made the choice; now you have to understand it. Should an oracle intervene, a neo-master can be chosen.
    ☞ There is no master repository
  3. Given that there is no master repository, it becomes clear that the repository must live in its entirety on each of the nodes in the DVCS. This usually leads to fears about the size of the repository, even taking into account that storage is cheap. A key point here is that DVCS repositories are usually far smaller than their counterpart CVCS repositories, not least of the reasons for which being that everyone has to have a full repository in order to do any work. It’s a natural consequence that they’re smaller. However, they’re smaller also because each repository contains far less scope than a CVCS repository. For example, most organisations will have one mammoth CVCS repository with several thousand top-level ‘modules’ (or ‘projects’) underneath. Because of the administrative overhead of ‘creating a new repository’, it is often easier to reuse the same one for everything. (SVN put some limits on how wide it could grow, which CVS tended not to have; but even so, the main Apache SVN is over 900k revisions.) By contrast, a DVCS is usually nothing more than a directory with a few administrative files inside. It doesn’t require administrator privileges or specific ports; in fact, since there’s no central server to speak of, it doesn’t even need to be shared by network protocols. As a result, a DVCS repository is much more granular – and easy to create – than a conventional CVCS repository. Firstly, it’s always on your machine (there’s no centralised server to configure) and secondly, all you need access to is a file system. So typically, a DVCS “repository” will often be at the level of an Eclipse project or project working set. For example, although the CVS RT repository is shared by Equinox and ECF, a DVCS-based solution would almost certainly see the Equinox and ECF projects in their own repositories; perhaps, even breaking down further into (say) ECF-Doc and ECF-Bundles. Think of a DVCS repository as one or a few Eclipse projects instead of hundreds of projects together.
    ☞ DVCS repositories are much smaller, typically because they contain only a small number of highly-related projects
  4. That’s not a question. Look, if you want the benefits of a centralised DVCS with pessimistic locking and pessimistic users, then go look at ClearCase.
    ☞ Friends don’t let friends use ClearCase

How does it work?

There are two pieces of information that identify elements in a CVCS; a file’s name, and its version (sometimes called revision). In the case of CVS, each file has its own version stream (1.1, 1.2, 1.3), whilst in SVN, each changeset has a ‘repository revision’ number. Tags (or branches) are symbolic identifiers which may be attached to any specific set of files or repository revision, and are mostly for human consumption (e.g. HEAD, trunk, ECLIPSE_35).

This doesn’t work in a DVCS. Because there is no central repository, there is no central repository version number (either for the repository as a whole, or for individual files).

Instead, a DVCS operates at the level of a changeset. Logically, a repository is made up of an initial (empty) state, followed by many changesets. (A changeset is merely a change to a set of files; if you think ‘patch’ from CVS or SVN, you’re not far off.)

Identifying a changeset is much harder. We can’t use a (global) revision number, because that concept isn’t used. Instead, a changeset is represented as a hash of its contents. For example, given the changeset:

--- a/README.txt
+++ b/README.txt
@@ -1 +1 @@
-SVN is great
+Git is great

we can create a ‘hash’ using (for example) md5, to generate the string 0878a8189e6a3ae1ded86d9e9c7cbe3f. When referring to our change with others, we can use this hash to identify the change in question.

☞ Changesets are identified by a hash of their contents

Clearly, though, this doesn’t work on its own. What happens if we do the same change later on? It would have the same change, and we don’t want the same hash value.

What happens is that a changeset contains two things; the change itself, and a back-pointer to the previous changeset. In other words, we end up with something like:

previous: 48b2179994d494485b79504e8b5a6b23ce24a026
--- a/README.txt
+++ b/README.txt
@@ -1 +1 @@
-SVN is great
+Git is great
☞ Changesets (recursively) contain pointers to the previous changeset

Now, if we were to have the same change again, the previous value would be different, so we’d get a different hash value. We could set up an argument:

previous: 48b2179994d494485b79504e8b5a6b23ce24a026
--- a/README.txt
+++ b/README.txt
@@ -1 +1 @@
-SVN is great
+Git is great

previous: 8cafc7ecd01d86977d2af254fc400cee
--- a/README.txt
+++ b/README.txt
@@ -1 +1 @@
-Git is great
+SVN is great

previous: cba3ef5b2d1101c2ac44846dc4cdc6f4
--- a/README.txt
+++ b/README.txt
@@ -1 +1 @@
-Git is great
+SVN is great

Each time, the value of the changeset includes a pointer to what comes before, so the hash is continually changing.

Note: Rather than using md5, as shown here, most DVCSs (including Git) use an SHA-1 hash instead. Also, the exact way that the prior elements in the tree are stored, and their relationships, isn’t accurately portrayed above; however, it gives a good enough idea of how they are organised.

☞ Git changesets are identified by an SHA-1 hash

Changesets and branches

Given that a changeset is a long value like 48b2179994d494485b79504e8b5a6b23ce24a026, it can be unfriendly to use. Fortunately, there are a couple of ways around this. Git, like other DVCSs, allows you to use an abbreviated form of the changeset, provided that it’s unique in the repository. For small repositories, this means that you can refer to changesets by really short values, like 48b21 or even 48. Conventionally, developers often use 6 digits of the hash – but large projects (like the Linux kernel) tend to have to use slightly larger references in order to have uniqueness.

☞ Git hashes can be shortened to any unique prefix
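For example, assuming the prefix is unique in the repository, both of these commands refer to the same commit:

git show 48b2179994d494485b79504e8b5a6b23ce24a026   # full hash
git show 48b217                                     # any unique prefix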

The current version of your repository is simply a pointer to the end of the tree. For this reason, it’s often referred to as the tip, and HEAD is used as the symbolic identifier for what the current repository is pointing to. Similarly, any branch can be referred to by its changeset id, which includes that change and all prior changes. The default branch is usually called master.

☞ The default ‘trunk’ is called ‘master’ in Git
☞ The tip of the current branch is referred to as ‘HEAD’

As a direct corollary to this, creating branches in a DVCS is fast. All that happens is that the repository on disk is updated to point to a different element in the (already physically present) tree, and you’re done. Furthermore, it’s trivial to ping-pong between different branches on the same repository that may contain different states and evolve independently.

☞ Creating, and switching between, branches is fast

Because branching is so fast, branches get used for things where a user of a CVCS wouldn’t normally use branching. For example, each bug in Bugzilla could have a new branch associated with it; if a couple of independent features are being worked on concurrently, they’d get their own branch; if you needed to drop back to do maintenance work on an ECLIPSE_35 branch, then you’d switch to a branch for that as well. Branches get created at least as frequently as changesets might be in CVS, if not more so.

☞ Create a new branch for each Bugzilla or feature item that you work on
☞ Think of branches as throwaway changesets

Merging

With great power comes great flexibility, but ultimately, you want to get your changes into some kind of merged stream (like HEAD). One of the fears of unconstrained branching is that of unconstrained merge pains later on. SVN makes this slightly less difficult than CVS, but unless you merge to HEAD frequently, you can easily get lost – particularly when refactorings start happening.

☞ It’s painful to merge in a CVCS; therefore branches tend not to happen

Fortunately, DVCSs are all about merging. Given that each node in the changeset tree contains a pointer to its previous node (and transitively, to the beginning of time), it’s much more powerful than the standard flat CVCS diff. In other words, not only do you know what changes need to be made, but also what point in history they need to be made. So, if you have a changeset that renames a file, and then merge in a changeset that points to the file as it was before it was renamed, a CVCS will just fall over; but a DVCS will be able to apply the change before the rename occurred, and then play forward the changes.

Merges are just the weaving together of two (or more) local branches into one. The git merge documentation has some graphical examples of this; but basically, it’s just like any other merge you’ve seen. However, unlike CVCS, you don’t have to specify anything about where you’re merging from and to; the trees automatically know what their split point was in the past, and can work it out from there.

☞ Merging in a DVCS like Git is trivial

Pulling and pushing

So far, we’ve not talked much about the distributed nature of DVCS. Implicitly, though, the changes and ideas above are all to support distribution.

Given that a DVCS tree is merely a pointer to a branch (which transitively contains a long list of previous branches), and that each one of these nodes is identified by its hash, then you and I can share the same revision identifiers for common parts of our tree. There are three cases to consider for comparing our two trees:

  • Your tip is an ancestor of my tip
  • My tip is an ancestor of your tip
  • Neither of our tips are direct ancestors; however, we both share a common ancestor

The first two cases are trivial; if we synchronise trees, they just become a fast-forward merge. In fact, if that occurs, chances are you won’t know who is ahead of the other; it will just happen.

The last case is only slightly more tricky; a common ancestor must be found; say, 746d6c. Then I send changes between my tip and 746d6c, and you send changes between your tip and 746d6c. That way, we both end up with the same contents on our repositories.
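For instance, Git can compute that common ancestor for you. Assuming two branches named mine and yours (illustrative names), the following prints their best common ancestor, which is the commit both sides then exchange changes against:

git merge-base mine yours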

Changes flow between repositories by push and pull operations. In essence, it doesn’t matter whether I push my changes to you, or you pull my changes from me; the net result is the same. However, in the case of Eclipse.org infrastructure, it’s likely that a central Git repository will be writable only by Eclipse committers. Thus, if I contribute a fix, I can ask a committer to pull the fix from my repository, and then they (after reviewing, and optionally rebasing) can push the fix to the Eclipse.org repository.

The best part of a DVCS is that it takes care of all the paperwork for you. You don’t need to use SVN-like 314:321 tags to remind you where you branched from; you don’t even have to worry if you haven’t updated recently. It all just works.

☞ Pulling and pushing in a DVCS like Git is trivial

Cloning and remotes

Where you can push (or pull) to is configured on a per (local) repository basis. Typically, if you clone an existing project, then a remote called origin is automatically set up for you. For example, if you wanted to get hold of org.eclipse.babel.server.git, then you could do:

git clone git://git.eclipse.org/gitroot/babel/org.eclipse.babel.server.git

We can then keep up-to-date with what’s happening on the remote server by executing a pull from the remote:

git pull origin

…but we’re not limited to one repository. Let’s say we wanted to create a separate copy on GitHub for easy forking; we can do that by adding another remote Git URL and then pushing to that:

git remote add github http://github.com/alblue/babel.git
git push github

We can now use git push and git pull to move items between the two git repositories. By default, they both refer to the special-named origin, but you can specify whatever remote to talk to on the command line.

☞ Origin is the name of the default remote, but you can have many remotes per repository.

Initialising, committing and branching

To create a new Git repository, the git init command is used. This creates an empty repository in the current directory. They can, but often don’t, end with .git – typically it’s only repositories pushed to remote servers that use the .git extension. As noted above, a Git repository should ideally hold only one or a few highly related/coupled projects.

☞ ‘git init’ creates a fresh repository in the current directory

Git allows you to commit files, much like any other VCS. Each commit may be a single file, or many files; and a message goes along with it. Unlike other VCS, Git has a separate concept of an index, which is a set of files that would be committed. You can think of it as an active changeset; as you’re working on multiple files, you want only some changes to be committed as a unit. These files get git added to the index first, then git committed subsequently. (If you don’t like this behaviour, there’s a git commit -a option, which performs as CVS or SVN would.)

☞ ‘git add’ is used to add files and track changes to files
☞ ‘git commit’ is used to commit tracked files

To create branches, you can use git branch (which creates, but does not switch to, the new branch) and git checkout (which switches to the new branch). A shorthand for new branches is git checkout -b, which creates-and-switches to a branch. At any point, git branch shows you a list of branches and marks the current one with a * next to the name.

☞ ‘git branch’ is used to create and list branches
☞ ‘git checkout’ is used to switch branches
☞ ‘git checkout -b’ is used to create and then switch branches

Worked example

Here’s a transcript of working with setting up an initial repository, then copying data to and from a ‘remote’ repository, albeit in a different directory on the same system. The instructions are for a Unix-like environment (e.g. Cygwin on Windows).

$ mkdir /tmp/example
$ cd /tmp/example
$ git init
Initialized empty Git repository in /tmp/example/.git/
$ echo "Hello, world" > README.txt
$ git commit # Won't commit files by default
# On branch master
#
# Initial commit
#
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#	README.txt
nothing added to commit but untracked files present (use "git add" to track)
$ git add README.txt # Similar to Team -> Add to Version Control
$ # git commit # Would prompt for message
$ git commit -m "Added README.txt"
[master (root-commit) 0dd1f35] Added README.txt
 1 files changed, 1 insertions(+), 0 deletions(-)
 create mode 100644 README.txt
$ echo "Hello, solar system" > README.txt
$ git commit
# On branch master
# Changed but not updated:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#	modified:   README.txt
#
no changes added to commit (use "git add" and/or "git commit -a")
$ git commit -a -m "Updated README.txt"
[master 9b1939a] Updated README.txt
 1 files changed, 1 insertions(+), 1 deletions(-)
$ git log --graph --oneline # Shows graph nodes (not much here) and change info
* 9b1939a Updated README.txt
* 0dd1f35 Added README.txt
$ git checkout -b french 0dd1f35 # create and switch to a new branch 'french'
Switched to a new branch 'french'
$ cat README.txt
Hello, world
$ echo "Bonjour, tout le monde" > README.txt
$ git add README.txt # or commit -a
$ git commit -m "Ajouté README.txt"
[french 66a644c] Ajouté README.txt
 1 files changed, 1 insertions(+), 1 deletions(-)
$ git log --graph --oneline
* 66a644c Ajouté README.txt
* 0dd1f35 Added README.txt
$ git checkout -b web 0dd1f35 # Create and checkout a branch 'web' from initial commit
$ echo '<a href="http://git.eclipse.org">git.eclipse.org</a>' > index.html
$ git add index.html
$ git commit -m "Added homepage"
[web d47e30c] Added homepage
 1 files changed, 1 insertions(+), 0 deletions(-)
 create mode 100644 index.html
$ git checkout master
$ git branch # See what branches we've got
  french
* master
  web
$ git merge web # pull 'web' into current branch 'master'
Merge made by recursive.
 index.html |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
 create mode 100644 index.html
$ git checkout french # Switch to 'french' branch
Switched to branch 'french'
$ git merge web # And merge in the same
Merge made by recursive.
 index.html |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
 create mode 100644 index.html
$ git log --graph --oneline
*   e974231 Merge branch 'web' into french
|\
| * d47e30c Added homepage
* | 66a644c Ajouté README.txt
|/
* 0dd1f35 Added README.txt
$ git checkout master
$ git log --graph --oneline
*   e3de4de Merge branch 'web'
|\
| * d47e30c Added homepage
* | 9b1939a Updated README.txt
|/
* 0dd1f35 Added README.txt
$ (mkdir /tmp/other;cd /tmp/other;git init) # Could do this in other process
$ (cd /tmp/other;git config --bool core.bare true) # Need to tell git that /tmp/other is a bare repository so we can "push" to it
Initialized empty Git repository in /tmp/other/.git/
$ git remote add other /tmp/other # could be a URL over http/git
$ git push other master # push branch 'master' to remote repository 'other'
Counting objects: 11, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (11/11), 981 bytes, done.
Total 11 (delta 1), reused 0 (delta 0)
Unpacking objects: 100% (11/11), done.
To /tmp/other
 * [new branch]      master -> master
$ git push --all other # Push all branches to 'other'
Counting objects: 8, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (5/5), 567 bytes, done.
Total 5 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (5/5), done.
To /tmp/other
 * [new branch]      french -> french
 * [new branch]      web -> web
$ cd /tmp/other # Switch to 'other' repository
$ git config --bool core.bare false # need to allow this repository to have checked out files
$ ls # Nothing to be seen, but it's there
$ git branch
  french
* master
  web
$ git checkout web # Get the contents of the 'web' branch in other
$ ls
README.txt index.html
$ echo '<h1>Git rocks!</h1>' >> index.html
$ git commit -a -m "Added Git Rocks!"
[web 510621a] Added Git Rocks
 1 files changed, 1 insertions(+), 0 deletions(-)
$ cd /tmp/example # Back to first repo
$ git pull other web # Pull changes from 'other' repo 'web' branch
remote: Counting objects: 5, done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
From /tmp/other
 * branch            web        -> FETCH_HEAD
Merge made by recursive.
 index.html |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
$ git log --graph --oneline
*   146932f Merge branch 'web' of /tmp/other
|
| * 510621a Added Git Rocks
* |   e3de4de Merge branch 'web'
|\ \
| |/
| * d47e30c Added homepage
* | 9b1939a Updated README.txt
|/
* 0dd1f35 Added README.txt

Rebasing and fast-forwarding

Often, you’ll work on a branch for a while and then want to commit it to the repository. You can do this at any point, but it’s considered good practice to rebase your local branch before doing so. For example, you can end up with multiple branches in the log (with git log --graph --oneline):

*   f0fde4e Merge change I11dc6200
|\
| * 86dfb92 Mark the next version as 0.6
* |   0c8c04d Merge change I908e4c77
|\ \
| |/
|/|
| * 843dc8f Add support for logAllRefUpdates configuration parameter
* | 74ba6fc Remove TODO file and move to bugzilla
* | ba7c6e8 Fix SUBMITTING_PATCHES to follow the Eclipse IP process
* | c5e8589 Fix tabs-to-spaces in SUBMITTING_PATCHES
* | 677ca7b Update SUBMITTING_PATCHES to point to Contributor Guide
* | 8847865 Document protected members of RevObjectList
* | a0a0ce8 Make it possible to clear a PlotCommitList
* | 4a3870f Include description for missing bundle prereqs
|/
* 144b16d Cleanup MANIFEST.MF in JGit

What happened here was that two branches split off from change 144b16d, ultimately driving another branch at 74ba6fc and a few merges (at 0c8c04d and f0fde4e). (You can see a similar effect in Google Code’s Hg view of Wave Protocol.) Ultimately, whilst the DVCS can handle these long-running branches and subsequent merges, humans tend to prefer to see fewer branches in the final repository.

A fast-forward merge (in Git terms) is one which doesn’t need any kind of merge operation. This usually happens when you are moving from an older branch to a newer branch on the same timeline, such as when updating to a newer version from a remote repository. These are essentially just moving the HEAD pointer further down the branch.

A rebase uproots the branch from its original commit and re-writes history as if it had been done from the current point in time. For example, in the above Git trace, the path 144b16d to 843dc8f to 0c8c04d was only one commit off the main tree. Had the change been rebased on 74ba6fc, then we would have only seen a single timeline across those commits. It’s generally considered good practice to rebase changes prior to pushing to a remote tree to avoid these kinds of fan-outs, but it’s not necessary to do so. Furthermore, the rebase operation changes the SHA-1 hashes of your tree, which can affect those who have forked your repository. Best practice is to rebase your changes frequently in your own local repository, but to avoid rebasing further once they’ve been made public (by pushing to a shared repository).

☞ Rebasing replants your tree; but do it on local branches only
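As a concrete sketch (branch names are illustrative), a local feature branch can be replanted onto the tip of master before it is published:

git checkout feature    # switch to the local feature branch
git rebase master       # replay the feature commits on top of master's tip
git checkout master
git merge feature       # now a simple fast-forward merge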

Chrome Extensions: background environment and message passing

Caution: Consider using event pages instead.

A common need for extensions is to have a single long-running script to manage some task or state. Background pages to the rescue.

As the architecture overview explains, the background page is an HTML page that runs in the extension process. It exists for the lifetime of your extension, and only one instance of it at a time is active.

In a typical extension with a background page, the UI — for example, the browser action or page action and any options page — is implemented by dumb views. When the view needs some state, it requests the state from the background page. When the background page notices a state change, the background page tells the views to update.

Manifest

Register your background page in the extension manifest. In the common case, a background page does not require any HTML markup. Such background pages can be implemented using JavaScript files alone, like this:

{
  "name": "My extension",
  ...
  "background": {
    "scripts": ["background.js"]
  },
  ...
}

The extension system will generate a background page that includes each of the files listed in the scripts property.

If you need to specify HTML in your background page, you can do that using the page property instead:

{
  "name": "My extension",
  ...
  "background": {
    "page": "background.html"
  },
  ...
}

If you need the browser to start up early—so you can display notifications, for example—then you might also want to specify the “background” permission.
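For example, the permission is declared in the manifest next to the background entry (a sketch, abbreviated in the same way as the manifests above):

{
  "name": "My extension",
  ...
  "permissions": ["background"],
  "background": {
    "scripts": ["background.js"]
  },
  ...
}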

Details

You can communicate between your various pages using direct script calls, similar to how frames can communicate. The extension.getViews method returns a list of window objects for every active page belonging to your extension, and the extension.getBackgroundPage method returns the background page.

Example

The following code snippet demonstrates how the background page can interact with other pages in the extension. It also shows how you can use the background page to handle events such as user clicks.

The extension in this example has a background page and multiple pages created (with tabs.create) from a file named image.html.

//In background.js:
// React when a browser action's icon is clicked.
chrome.browserAction.onClicked.addListener(function(tab) {
  var viewTabUrl = chrome.extension.getURL('image.html');
  var imageUrl = /* an image's URL */;

  // Look through all the pages in this extension to find one we can use.
  var views = chrome.extension.getViews();
  for (var i = 0; i < views.length; i++) {
    var view = views[i];

    // If this view has the right URL and hasn't been used yet...
    if (view.location.href == viewTabUrl && !view.imageAlreadySet) {

      // ...call one of its functions and set a property.
      view.setImageUrl(imageUrl);
      view.imageAlreadySet = true;
      break; // we're done
    }
  }
});

//In image.html:
<html>
  <script>
    function setImageUrl(url) {
      document.getElementById('target').src = url;
    }
  </script>

  <body>
    <p>
    Image here:
    </p>

    <img id="target" src="white.png" width="640" height="480">

  </body>
</html>

Message Passing

Since content scripts run in the context of a web page and not the extension, they often need some way of communicating with the rest of the extension. For example, an RSS reader extension might use content scripts to detect the presence of an RSS feed on a page, then notify the background page in order to display a page action icon for that page.

Communication between extensions and their content scripts works by using message passing. Either side can listen for messages sent from the other end, and respond on the same channel. A message can contain any valid JSON object (null, boolean, number, string, array, or object). There is a simple API for one-time requests and a more complex API that allows you to have long-lived connections for exchanging multiple messages with a shared context. It is also possible to send a message to another extension if you know its ID, which is covered in the cross-extension messages section.

Simple one-time requests

If you only need to send a single message to another part of your extension (and optionally get a response back), you should use the simplified runtime.sendMessage or tabs.sendMessage methods. This lets you send a one-time JSON-serializable message from a content script to the extension, or vice versa, respectively. An optional callback parameter allows you to handle the response from the other side, if there is one.

Sending a request from a content script looks like this:

contentscript.js
================
chrome.runtime.sendMessage({greeting: "hello"}, function(response) {
  console.log(response.farewell);
});

Sending a request from the extension to a content script looks very similar, except that you need to specify which tab to send it to. This example demonstrates sending a message to the content script in the selected tab.

background.html
===============
chrome.tabs.getSelected(null, function(tab) {
  chrome.tabs.sendMessage(tab.id, {greeting: "hello"}, function(response) {
    console.log(response.farewell);
  });
});

On the receiving end, you need to set up a runtime.onMessage event listener to handle the message. This looks the same from a content script or extension page.

chrome.runtime.onMessage.addListener(
  function(request, sender, sendResponse) {
    console.log(sender.tab ?
                "from a content script:" + sender.tab.url :
                "from the extension");
    if (request.greeting == "hello")
      sendResponse({farewell: "goodbye"});
  });

Note: If multiple pages are listening for onMessage events, only the first to call sendResponse() for a particular event will succeed in sending the response. All other responses to that event will be ignored.

Long-lived connections

Sometimes it’s useful to have a conversation that lasts longer than a single request and response. In this case, you can open a long-lived channel from your content script to an extension page, or vice versa, using runtime.connect or tabs.connect respectively. The channel can optionally have a name, allowing you to distinguish between different types of connections.

One use case might be an automatic form fill extension. The content script could open a channel to the extension page for a particular login, and send a message to the extension for each input element on the page to request the form data to fill in. The shared connection allows the extension to keep shared state linking the several messages coming from the content script.

When establishing a connection, each end is given a runtime.Port object which is used for sending and receiving messages through that connection.

Here is how you open a channel from a content script, and send and listen for messages:

contentscript.js
================
var port = chrome.runtime.connect({name: "knockknock"});
port.postMessage({joke: "Knock knock"});
port.onMessage.addListener(function(msg) {
  if (msg.question == "Who's there?")
    port.postMessage({answer: "Madame"});
  else if (msg.question == "Madame who?")
    port.postMessage({answer: "Madame... Bovary"});
});

Sending a request from the extension to a content script looks very similar, except that you need to specify which tab to connect to. Simply replace the call to connect in the above example with tabs.connect.

In order to handle incoming connections, you need to set up a runtime.onConnect event listener. This looks the same from a content script or an extension page. When another part of your extension calls “connect()”, this event is fired, along with the runtime.Port object you can use to send and receive messages through the connection. Here’s what it looks like to respond to incoming connections:

chrome.runtime.onConnect.addListener(function(port) {
  console.assert(port.name == "knockknock");
  port.onMessage.addListener(function(msg) {
    if (msg.joke == "Knock knock")
      port.postMessage({question: "Who's there?"});
    else if (msg.answer == "Madame")
      port.postMessage({question: "Madame who?"});
    else if (msg.answer == "Madame... Bovary")
      port.postMessage({question: "I don't get it."});
  });
});

You may want to find out when a connection is closed, for example if you are maintaining separate state for each open port. For this you can listen to the runtime.Port.onDisconnect event. This event is fired either when the other side of the channel manually calls runtime.Port.disconnect(), or when the page containing the port is unloaded (for example if the tab is navigated). onDisconnect is guaranteed to be fired only once for any given port.
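For example, a background page could drop its per-port state when a port goes away. This is a minimal sketch in which the portState object, and the use of port.name as a key, are purely illustrative:

var portState = {};

chrome.runtime.onConnect.addListener(function(port) {
  portState[port.name] = {messages: 0};

  port.onMessage.addListener(function(msg) {
    portState[port.name].messages++;
  });

  port.onDisconnect.addListener(function() {
    // The other side called disconnect() or its page was unloaded.
    delete portState[port.name];
  });
});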

Cross-extension messaging

In addition to sending messages between different components in your extension, you can use the messaging API to communicate with other extensions. This lets you expose a public API that other extensions can take advantage of.

Listening for incoming requests and connections is similar to the internal case, except that you use the runtime.onMessageExternal or runtime.onConnectExternal events. Here’s an example of each:

// For simple requests:
chrome.runtime.onMessageExternal.addListener(
  function(request, sender, sendResponse) {
    if (sender.id == blacklistedExtension)
      return;  // don't allow this extension access
    else if (request.getTargetData)
      sendResponse({targetData: targetData});
    else if (request.activateLasers) {
      var success = activateLasers();
      sendResponse({activateLasers: success});
    }
  });

// For long-lived connections:
chrome.runtime.onConnectExternal.addListener(function(port) {
  port.onMessage.addListener(function(msg) {
    // See other examples for sample onMessage handlers.
  });
});

Likewise, sending a message to another extension is similar to sending one within your extension. The only difference is that you must pass the ID of the extension you want to communicate with. For example:

// The ID of the extension we want to talk to.
var laserExtensionId = "abcdefghijklmnoabcdefhijklmnoabc";

// Make a simple request:
chrome.runtime.sendMessage(laserExtensionId, {getTargetData: true},
  function(response) {
    if (targetInRange(response.targetData))
      chrome.runtime.sendMessage(laserExtensionId, {activateLasers: true});
  });

// Start a long-running conversation:
var port = chrome.runtime.connect(laserExtensionId);
port.postMessage(...);

Security considerations

When receiving a message from a content script or another extension, your background page should be careful not to fall victim to cross-site scripting. Specifically, avoid using dangerous APIs such as those below:

background.html
===============
chrome.tabs.sendMessage(tab.id, {greeting: "hello"}, function(response) {
  // WARNING! Might be evaluating an evil script!
  var resp = eval("(" + response.farewell + ")");
});

background.html
===============
chrome.tabs.sendMessage(tab.id, {greeting: "hello"}, function(response) {
  // WARNING! Might be injecting a malicious script!
  document.getElementById("resp").innerHTML = response.farewell;
});

Instead, prefer safer APIs that do not run scripts:

background.html
===============
chrome.tabs.sendMessage(tab.id, {greeting: "hello"}, function(response) {
  // JSON.parse does not evaluate the attacker's scripts.
  var resp = JSON.parse(response.farewell);
});

background.html
===============
chrome.tabs.sendMessage(tab.id, {greeting: "hello"}, function(response) {
  // innerText does not let the attacker inject HTML elements.
  document.getElementById("resp").innerText = response.farewell;
});

Examples

You can find simple examples of communication via messages in the examples/api/messaging directory. Also see the contentscript_xhr example, in which a content script and its parent extension exchange messages, so that the parent extension can perform cross-site requests on behalf of the content script. For more examples and for help in viewing the source code, see Samples.

GAS: Programming your first script

This is a tutorial that will walk you through the process of building your first simple script. After completing this tutorial, you should be comfortable with the basics of how to create and execute a script from the Script Editor.

  1. Requirements
  2. What is a Script?
  3. Writing a Script
  4. Executing a Script
  5. Learn More

Requirements

Before you begin, you need a Google Account or a Google Apps Account (see this FAQ to understand the difference), a supported browser, and a basic understanding of JavaScript. If you’re new to JavaScript, MDN’s JavaScript wiki has a lot of information, including a Reference and a Guide. Note that these materials were neither developed by nor associated with Google.

What is a Script?

A script is a series of instructions you write in a programming language or scripting language to accomplish a particular task. You type in the instructions and save them as a script. The script runs only under circumstances you define.

Google Apps Script is based upon JavaScript. In addition to providing much of what’s built into JavaScript, Google Apps Script also provides a set of classes that make up the Google Apps Script API. You can use these classes and their associated methods to access Google products, make requests to third-party services and APIs, and access helpful utilities. You can learn more about these topics in the sections on Using Built-in Services and Using External APIs.

Writing a Script

In this tutorial, we will create a standalone script, which is a script that is accessible from Google Drive.

To create a standalone script, go to https://script.google.com. If you see a window that asks what type of script you’d like to create, either select Blank Project or click Close.

Your newly created script will look something like this:

A new script

The script you will create is a very simple one. It will create a new Google Document and email you a link to the newly created document. Delete the placeholder code in your new script, and then copy and paste the code below into the Script Editor.

function createAndSendDocument() {
  // Create a new document with the title 'Hello World'
  var doc = DocumentApp.create('Hello World');

  // Add a paragraph to the document
  doc.appendParagraph('This document was created by my first Google Apps Script.');

  // Save and close the document
  doc.saveAndClose();

  // Get the URL of the document
  var url = doc.getUrl();

  // Get the email address of the active user - that's you
  var emailAddress = Session.getActiveUser().getEmail();

  // Send yourself an email with a link to the document
  GmailApp.sendEmail(emailAddress,
                     'Hello from my first Google Apps Script!',
                     'Here is a link to a document created by my ' +
                     'first Google Apps Script: ' + url);
}

After you paste the code into the Script Editor, click the Save icon. You’ll be prompted to rename your project. Enter the name My First Script and then click OK. Now that you’ve created the script, move on to the next section to learn how to execute the script.

Executing a Script

For the purposes of this tutorial, we’ll run the script directly from the Script Editor. To learn about the other ways that scripts can be executed, see the section on Execution Methods for Scripts.

To execute the script, either click the Run icon or choose Run > createAndSendDocument from the menu. You will see an authorization dialog appear.

authorization dialog

This dialog tells you which services the script needs to access. Click Authorize. Next, you’ll see another dialog, prompting you to grant access to your Gmail account. This is needed so that the script can send email from your Gmail account. Click Grant access.

OAuth dialog

Now you’ll see an Authorization Status page, which tells you that you can now run the script. Click Close. Note that you will only have to go through this process the first time you run a script, or when a script has been modified to require more permissions than you’d previously granted.

Authorization status

Now that the script has been authorized, run it once more. This time, the script will execute. Next go to your Gmail inbox and you should see an email from your script that looks similar to this.

Email from the script

If you click the link in the email, it will open a Google Document that looks like the one below.

Document

Learn More

Now that you understand the basics of using the Script Editor and creating and running a script, it’s time to learn more. The Tutorials are a good place to find step-by-step examples of different types of scripts. For the API reference for the default services in Google Apps Script, see the Default Services section of the documentation.

If you find that you need additional help, see the support page for information about how to ask questions or raise issues with the Google Apps Script team.

Extensiones de Chrome: introducción a la programación

[fuente: http://developer.chrome.com/extensions/getstarted.html]

Extensions allow you to add functionality to Chrome without diving deeply into native code. You can create new extensions for Chrome with those core technologies that you’re already familiar with from web development: HTML, CSS, and JavaScript. If you’ve ever built a web page, you should feel right at home with extensions pretty quickly; we’ll put that to the test right now by walking through the construction of a simple extension that will give you one-click access to pictures of kittens. Kittens!

We’ll do so by implementing a UI element we call a browser action, which allows us to place a clickable icon right next to Chrome’s Omnibox for easy access. Clicking that icon will open a popup window filled with kittenish goodness, which will look something like this:

Chrome, with an extension's popup open and displaying many kittens.

If you’d like to follow along at home (and you should!), create a shiny new directory on your computer, and pop open your favourite text editor. Let’s get going!

Something to Declare

The very first thing we’ll need to create is a manifest file named manifest.json. The manifest is nothing more than a JSON-formatted table of contents, containing properties like your extension’s name and description, its version number, and so on. At a high level, we’ll use it to declare to Chrome what the extension is going to do, and what permissions it requires in order to do those things.

In order to display kittens, we’ll want to tell Chrome that we’d like to create a browser action, and that we’d like free rein to access kittens from a particular source on the net. A manifest file containing those instructions looks like this:

{
  "manifest_version": 2,

  "name": "One-click Kittens",
  "description": "This extension demonstrates a browser action with kittens.",
  "version": "1.0",

  "permissions": [
    "https://secure.flickr.com/"
  ],
  "browser_action": {
    "default_icon": "icon.png",
    "default_popup": "popup.html"
  }
}

Go ahead and save that data to a file named manifest.json in the directory you created, or download a copy of manifest.json from our sample repository.

What does it mean?

The attribute names are fairly self-descriptive, but let’s walk through the manifest line-by-line to make sure we’re all on the same page.

The first line, which declares that we’re using version 2 of the manifest file format, is mandatory (version 1 is old, deprecated, and generally not awesome).

The next block defines the extension’s name, description, and version. These will be used both inside of Chrome to show a user which extensions you have installed, and also on the Chrome Web Store to display your extension to potentially new users. The name should be short and snappy, and the description no longer than a sentence or so (you’ll have more room for a detailed description later).

The final block first requests permission to work with data on https://secure.flickr.com/, and declares that this extension implements a browser action, assigning it a default icon and popup in the process.

Resources

You probably noticed that manifest.json pointed at two resource files when defining the browser action: icon.png and popup.html. Both resources must exist inside the extension package, so let’s create them now:

  • icon.png will be displayed right next to the Omnibox, waiting for user interaction. Download a copy of icon.png from our sample repository and save it into the directory you’re working in. You could also create your own if you’re so inclined; it’s just a 19px-square PNG file.
  • popup.html will be rendered inside the popup window that’s created in response to a user’s click on the browser action. It’s a standard HTML file, just like you’re used to from web development, giving you more or less free rein over what the popup displays. Download a copy of popup.html from our sample repository and save it into the directory you’re working in. popup.html requires an additional JavaScript file in order to do the work of grabbing kitten images from the web and loading them into the popup. To save you some effort, just download a copy of popup.js from our sample repository and save it into the directory you’re working in.

You should now have four files in your working directory: icon.png, manifest.json, popup.html, and popup.js. The next step is to load those files into Chrome.

Load the extension

Extensions that you download from the Chrome Web Store are packaged up as .crx files, which is great for distribution, but not so great for development. Recognizing this, Chrome gives you a quick way of loading up your working directory for testing. Let’s do that now.

  1. Visit chrome://extensions in your browser (or open the Chrome menu by clicking the icon with three horizontal bars to the far right of the Omnibox and select Extensions under the Tools menu to get to the same place).
  2. Ensure that the Developer Mode checkbox in the top right-hand corner is checked.
  3. Click Load unpacked extension… to pop up a file-selection dialog.
  4. Navigate to the directory in which your extension files live, and select it.

If the extension is valid, it’ll be loaded up and active right away! If it’s invalid, an error message will be displayed at the top of the page. Correct the error, and try again.

Fiddle with Code

Now that you’ve got your first extension up and running, let’s fiddle with things so that you have an idea what your development process might look like. As a trivial example, let’s change the data source to search for pictures of puppies instead of kittens.

Hop into popup.js, and edit line 11 from var QUERY = 'kittens'; to read var QUERY = 'puppies';, and save your changes.

If you click on your extension’s browser action again, you’ll note that your change hasn’t yet had an effect. You’ll need to let Chrome know that something has happened, either explicitly by going back to the extension page (chrome://extensions, or Tools > Extensions under the Chrome menu), and clicking Reload under your extension, or by reloading the extensions page itself (either via the reload button to the left of the Omnibox, or by hitting F5 or Ctrl-R).

Once you’ve reloaded the extension, click the browser action icon again. Puppies galore!

What next?

You now know about the manifest file’s central role in bringing things together, and you’ve mastered the basics of declaring a browser action, and rendering some kittens (or puppies!) in response to a user’s click. That’s a great start, and has hopefully gotten you interested enough to explore further. There’s a lot more out there to play around with.

  • The Chrome Extension Overview backs up a bit, and fills in a lot of detail about extensions’ architecture in general, and some specific concepts you’ll want to be familiar with going forward. It’s the best next step on your journey towards extension mastery.
  • No one writes perfect code on the first try, which means that you’ll need to learn about the options available for debugging your creations. Our debugging tutorial is perfect for that, and is well worth carefully reading.
  • Chrome extensions have access to powerful APIs above and beyond what’s available on the open web: browser actions are just the tip of the iceberg. Our chrome.* APIs documentation will walk you through each API in turn.
  • Finally, the developer’s guide has dozens of additional links to pieces of documentation you might be interested in.

Overview

Once you’ve finished this page and the Getting Started tutorial, you’ll be all set to start writing extensions.

The basics

An extension is a zipped bundle of files—HTML, CSS, JavaScript, images, and anything else you need—that adds functionality to the Google Chrome browser. Extensions are essentially web pages, and they can use all the APIs that the browser provides to web pages, from XMLHttpRequest to JSON to HTML5.

Extensions can interact with web pages or servers using content scripts or cross-origin XMLHttpRequests. Extensions can also interact programmatically with browser features such as bookmarks and tabs.
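
For example, a background page that has declared the matching host permission in its manifest could make a cross-origin request roughly like the following sketch (the URL and the permission are placeholders):

background.html
===============
// Assumes "permissions": ["https://api.example.com/"] in manifest.json.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.example.com/data.json', true);
xhr.onreadystatechange = function() {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // Parse the response safely instead of using eval().
    var data = JSON.parse(xhr.responseText);
    console.log(data);
  }
};
xhr.send();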

Extension UIs

Many extensions—but not packaged apps—add UI to Google Chrome in the form of browser actions or page actions. Each extension can have at most one browser action or page action. Choose a browser action when the extension is relevant to most pages. Choose a page action when the extension’s icon should appear or disappear, depending on the page.

This mail extension uses a browser action (icon in the toolbar). This map extension uses a page action (icon in the address bar) and a content script (code injected into a web page). This news extension features a browser action that, when clicked, shows a popup.
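
In manifest.json that choice comes down to which of two keys you declare. A rough sketch of the two alternatives (fragments only, and an extension may declare at most one of them):

"browser_action": {
  "default_icon": "icon_19.png",
  "default_popup": "popup.html"
}

"page_action": {
  "default_icon": "icon_19.png",
  "default_title": "Shown only on relevant pages"
}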

Extensions (and packaged apps) can also present a UI in other ways, such as adding to the Chrome context menu, providing an options page, or using a content script that changes how pages look. See the Developer’s Guide for a complete list of extension features, with links to implementation details for each one.

Files

Each extension has the following files:

  • manifest file
  • One or more HTML files (unless the extension is a theme)
  • Optional: One or more JavaScript files
  • Optional: Any other files your extension needs—for example, image files

While you’re working on your extension, you put all these files into a single folder. When you distribute your extension, the contents of the folder are packaged into a special ZIP file that has a .crx suffix. If you upload your extension using the Chrome Developer Dashboard, the .crx file is created for you. For details on distributing extensions, see Hosting.

Referring to files

You can put any file you like into an extension, but how do you use it? Usually, you can refer to the file using a relative URL, just as you would in an ordinary HTML page. Here’s an example of referring to a file named myimage.png that’s in a subfolder named images.

<img src="images/myimage.png">

As you might notice while you use the Google Chrome debugger, every file in an extension is also accessible by an absolute URL like this:

chrome-extension://<extensionID>/<pathToFile>

In that URL, the <extensionID> is a unique identifier that the extension system generates for each extension. You can see the IDs for all your loaded extensions by going to the URL chrome://extensions. The <pathToFile> is the location of the file under the extension’s top folder; it’s the same as the relative URL.

While you’re working on an extension (before it’s packaged), the extension ID can change. Specifically, the ID of an unpacked extension will change if you load the extension from a different directory; the ID will change again when you package the extension. If your extension’s code needs to specify the full path to a file within the extension, you can use the @@extension_id predefined message to avoid hardcoding the ID during development.

When you package an extension (typically, by uploading it with the dashboard), the extension gets a permanent ID, which remains the same even after you update the extension. Once the extension ID is permanent, you can change all occurrences of @@extension_id to use the real ID.
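
From script, one common way to avoid hardcoding the ID is to let Chrome build the absolute URL for you. A minimal sketch, reusing the images/myimage.png path from the earlier example:

// Resolves to chrome-extension://<extensionID>/images/myimage.png,
// whichever temporary or permanent ID the extension currently has.
var imageUrl = chrome.runtime.getURL('images/myimage.png');
console.log(imageUrl);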

The manifest file

The manifest file, called manifest.json, gives information about the extension, such as the most important files and the capabilities that the extension might use. Here’s a typical manifest file for a browser action that uses information from google.com:

{
  "name": "My Extension",
  "version": "2.1",
  "description": "Gets information from Google.",
  "icons": { "128": "icon_128.png" },
  "background": {
    "persistent": false,
    "scripts": ["bg.js"]
  },
  "permissions": ["http://*.google.com/", "https://*.google.com/"],
  "browser_action": {
    "default_title": "",
    "default_icon": "icon_19.png",
    "default_popup": "popup.html"
  }
}

For details, see Manifest Files.

Architecture

Many extensions have a background page, an invisible page that holds the main logic of the extension. An extension can also contain other pages that present the extension’s UI. If an extension needs to interact with web pages that the user loads (as opposed to pages that are included in the extension), then the extension must use a content script.

The background page

The following figure shows a browser that has at least two extensions installed: a browser action (yellow icon) and a page action (blue icon). Both the browser action and the page action have background pages. This figure shows the browser action’s background page, which is defined by background.html and has JavaScript code that controls the behavior of the browser action in both windows.

Two windows and a box representing a background page (background.html). One window has a yellow icon; the other has both a yellow icon and a blue icon. The yellow icons are connected to the background page.

There are two types of background pages: persistent background pages, and event pages. Persistent background pages, as the name suggests, are always open. Event pages are opened and closed as needed. Unless you absolutely need your background page to run all the time, prefer to use an event page.

See Event Pages and Background Pages for more details.
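
For example, with the "persistent": false declaration from the sample manifest shown earlier, bg.js behaves as an event page. A minimal sketch of such a script (the listener body is only illustrative):

bg.js
=====
// With "persistent": false in the manifest, this script is loaded only
// when one of its events fires and is unloaded again when it goes idle.
chrome.runtime.onInstalled.addListener(function() {
  console.log('Extension installed or updated');
});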

UI pages

Extensions can contain ordinary HTML pages that display the extension’s UI. For example, a browser action can have a popup, which is implemented by an HTML file. Any extension can have an options page, which lets users customize how the extension works. Another type of special page is the override page. And finally, you can use tabs.create or window.open() to display any other HTML files that are in the extension.

The HTML pages inside an extension have complete access to each other’s DOMs, and they can invoke functions on each other.

The following figure shows the architecture of a browser action’s popup. The popup’s contents are a web page defined by an HTML file (popup.html). This extension also happens to have a background page (background.html). The popup doesn’t need to duplicate code that’s in the background page because the popup can invoke functions on the background page.

A browser window containing a browser action that's displaying a popup. The popup's HTML file (popup.html) can communicate with the extension's background page (background.html).

See Browser Actions, Options, Override Pages, and the Communication between pages section for more details.

Content scripts

If your extension needs to interact with web pages, then it needs a content script. A content script is some JavaScript that executes in the context of a page that’s been loaded into the browser. Think of a content script as part of that loaded page, not as part of the extension it was packaged with (its parent extension).

Content scripts can read details of the web pages the browser visits, and they can make changes to the pages. In the following figure, the content script can read and modify the DOM for the displayed web page. It cannot, however, modify the DOM of its parent extension’s background page.

A browser window with a browser action (controlled by background.html) and a content script (controlled by contentscript.js).

Content scripts aren’t completely cut off from their parent extensions. A content script can exchange messages with its parent extension, as the arrows in the following figure show. For example, a content script might send a message whenever it finds an RSS feed in a browser page. Or a background page might send a message asking a content script to change the appearance of its browser page.

Like the previous figure, but showing more of the parent extension's files, as well as a communication path between the content script and the parent extension.

For more information, see Content Scripts.
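
As a rough sketch of that exchange (the RSS check is only a placeholder condition), a content script could notify its parent extension like this:

contentscript.js
================
// Tell the parent extension when the page advertises an RSS feed.
if (document.querySelector('link[type="application/rss+xml"]')) {
  chrome.runtime.sendMessage({feedFound: true, pageUrl: location.href});
}

background.html
===============
// Listen for messages coming from content scripts.
chrome.runtime.onMessage.addListener(function(msg, sender) {
  if (msg.feedFound) {
    console.log('RSS feed found on ' + msg.pageUrl);
  }
});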

Using the chrome.* APIs

In addition to having access to all the APIs that web pages and apps can use, extensions can also use Chrome-only APIs (often called chrome.* APIs) that allow tight integration with the browser. For example, any extension or web app can use the standard window.open() method to open a URL. But if you want to specify which window that URL should be displayed in, your extension can use the Chrome-only tabs.create method instead.
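
For instance (a sketch; the URL is a placeholder and someWindowId stands for a window ID obtained elsewhere), both calls below open the same page, but only the chrome.* version lets you pick the window:

// Standard web API: opens the URL, but you cannot choose the window.
window.open('https://example.com/');

// Chrome-only API: open the URL as an inactive tab of a specific window.
chrome.tabs.create({
  windowId: someWindowId,
  url: 'https://example.com/',
  active: false
});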

Asynchronous vs. synchronous methods

Most methods in the chrome.* APIs are asynchronous: they return immediately, without waiting for the operation to finish. If you need to know the outcome of that operation, then you pass a callback function into the method. That callback is executed later (potentially much later), sometime after the method returns. Here’s an example of the signature for an asynchronous method:

chrome.tabs.create(object createProperties, function callback)

Other chrome.* methods are synchronous. Synchronous methods never have a callback because they don’t return until they’ve completed all their work. Often, synchronous methods have a return type. Consider the runtime.getURL method:

string chrome.runtime.getURL()

This method has no callback and a return type of string because it synchronously returns the URL and performs no asynchronous work.

Example: Using a callback

Say you want to navigate the user’s currently selected tab to a new URL. To do this, you need to get the current tab’s ID (using tabs.query) and then make that tab go to the new URL (using tabs.update).

If query() were synchronous, you might write code like this:

   //THIS CODE DOESN'T WORK
1: var tab = chrome.tabs.query({'active': true}); //WRONG!!!
2: chrome.tabs.update(tab.id, {url:newUrl});
3: someOtherFunction();

That approach fails because query() is asynchronous. It returns without waiting for its work to complete, and it doesn’t even return a value (although some asynchronous methods do). You can tell that query() is asynchronous by the callback parameter in its signature:

chrome.tabs.query(object queryInfo, function callback)

To fix the preceding code, you must use that callback parameter. The following code shows how to define a callback function that gets the results from query() (as a parameter named tab) and calls update().

   //THIS CODE WORKS
1: chrome.tabs.query({'active': true}, function(tabs) {
2:   chrome.tabs.update(tabs[0].id, {url: newUrl});
3: });
4: someOtherFunction();

In this example, the lines are executed in the following order: 1, 4, 2. The callback function specified to query() is called (and line 2 executed) only after information about the currently selected tab is available, which is sometime after query() returns. Although update() is asynchronous, this example doesn’t use its callback parameter, since we don’t do anything with the results of the update.

More details

For more information, see the chrome.* API docs.

Communication between pages

The HTML pages within an extension often need to communicate. Because all of an extension’s pages execute in the same process on the same thread, the pages can make direct function calls to each other.

To find pages in the extension, use chrome.extension methods such as getViews() and getBackgroundPage(). Once a page has a reference to other pages within the extension, the first page can invoke functions on the other pages, and it can manipulate their DOMs.
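
A small sketch of that direct-call pattern (addBookmark is a hypothetical function assumed to be defined on the background page):

popup.js
========
// Grab the background page's window object...
var bgPage = chrome.extension.getBackgroundPage();

// ...and call a function defined there directly (hypothetical example).
bgPage.addBookmark('https://example.com/');

// getViews() returns the window objects of all pages in the extension.
var views = chrome.extension.getViews();
console.log('This extension currently has ' + views.length + ' pages open.');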

Saving data and incognito mode

Extensions can save data using the HTML5 web storage API (such as localStorage) or by making server requests that result in saving data. Whenever you want to save something, first consider whether it’s from a window that’s in incognito mode. By default, extensions don’t run in incognito windows. You need to consider what a user expects from your extension when the browser is incognito.

Incognito mode promises that the window will leave no tracks. When dealing with data from incognito windows, do your best to honor this promise. For example, if your extension normally saves browsing history to the cloud, don’t save history from incognito windows. On the other hand, you can store your extension’s settings from any window, incognito or not.

Rule of thumb: If a piece of data might show where a user has been on the web or what the user has done, don’t store it if it’s from an incognito window.

To detect whether a window is in incognito mode, check the incognito property of the relevant tabs.Tab or windows.Window object. For example:

function saveTabData(tab, data) {
  if (tab.incognito) {
    chrome.runtime.getBackgroundPage(function(bgPage) {
      bgPage[tab.url] = data;      // Persist data ONLY in memory
    });
  } else {
    localStorage[tab.url] = data;  // OK to store data
  }
}

Now what?

Now that you’ve been introduced to extensions, you should be ready to write your own.

Google Picker

What is Google Picker?

Google Picker is a “File Open” dialog for the information stored in Google servers.

With Google Picker, your users can select photos, videos, maps, and documents stored in Google servers. The selection is passed back to your web page or web application for further use.

Use Google Picker to let users:

  • Access their files stored across Google services.
  • Upload new files to Google, which they can use in your application.
  • Select any image or video from the Internet, which they can use in your application.

To start using Google Picker, please read the Developer’s Guide!

Google Picker Video Search Example
Google Picker Docs Search Example

Google Picker API Developer’s Guide

Conventional, platform-specific applications often provide File Open dialogs. But for countless web applications, the only choice presented to users is a plain input control. Users must cut-and-paste a URL, typically from another web browser tab or window.

Google Picker aims to change this by providing users a more modern experience:

  1. Familiar — The look-and-feel users will recognize from Google Drive and other Google properties.
  2. Graphical — A dialog experience, with many views showing previews or thumbnails.
  3. Streamlined — An inline, modal window, so users never leave the main application.

Web developers can incorporate Google Picker API by just adding a few lines of JavaScript.

Table of Contents

  1. Audience
  2. Application Requirements
  3. The “Hello World” Application
  4. Showing Different Views
  5. Handling Google Drive Items
  6. Rendering in Other Languages
  7. Supporting Older Browsers

Audience

This documentation is intended for developers who wish to add Google Picker API to their pages. A basic level of JavaScript fluency is required.

Read through this document to see code samples for common scenarios.

Consult the JSON Guide to understand the object format returned by the Google Picker API.

Refer to the Reference Guide for a complete API listing for the Google Picker API.

Application Requirements

Applications that use this interface must abide by all existing Terms of Service. Most importantly, you must correctly identify yourself in your requests.

The “Hello World” Application

Create a Picker object using a PickerBuilder object. The Picker instance represents the Google Picker dialog, and is rendered on the page inside an IFRAME. Here’s an example where a Google Image Search view is shown:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <meta http-equiv="content-type" content="text/html; charset=utf-8"/>
    <title>Google Picker Example</title>

    <!-- The standard Google Loader script. -->
    <script src="http://www.google.com/jsapi"></script>
    <script type="text/javascript">

    // Use the Google Loader script to load the google.picker script.
    google.setOnLoadCallback(createPicker);
    google.load('picker', '1');

    // Create and render a Picker object for searching images.
    function createPicker() {
        var picker = new google.picker.PickerBuilder().
            addView(google.picker.ViewId.IMAGE_SEARCH).
            setCallback(pickerCallback).
            build();
        picker.setVisible(true);
    }

    // A simple callback implementation.
    function pickerCallback(data) {
      var url = 'nothing';
      if (data[google.picker.Response.ACTION] == google.picker.Action.PICKED) {
        var doc = data[google.picker.Response.DOCUMENTS][0];
        url = doc[google.picker.Document.URL];
      }
      var message = 'You picked: ' + url;
      document.getElementById('result').innerHTML = message;
    }
    </script>
  </head>
  <body>
    <div id="result"></div>
  </body>
</html>

Note: If you intend to support older browsers such as Microsoft Internet Explorer 6, modify the example slightly as shown in the Supporting Older Browsers section.

Let’s walk through the relevant sections. First the common Google Loader is invoked to load the Google Picker JavaScript. Here the loader is also instructed which method to call when the loading completes. The second argument to google.​load() is the version, and for Google Picker this value must be ‘1’.

    // Use the Google Loader script to load the google.picker script.
    google.setOnLoadCallback(createPicker);
    google.load('picker', '1');

Picker renders one view at a time. Specify at least one view, either by ID (google.picker.ViewId.*) or by creating an instance of a type (google.picker.*View). Specify the view by type if you want additional control over how that view is rendered. If more than one view is added to the Picker, users switch from one view to another by clicking a tab on the left. Tabs can be logically grouped with ViewGroup objects.

In this simple application, a single view is specified by ID. A method to call when the user selects an item (or cancels the dialog) is also specified. Once the Picker object is constructed, setVisible(true) is called so the user can see it.

    // Create and render a Picker object for searching images.
    function createPicker() {
        var picker = new google.picker.PickerBuilder().
            addView(google.picker.ViewId.IMAGE_SEARCH).
            setCallback(pickerCallback).
            build();
        picker.setVisible(true);
    }

The following code illustrates what you can do once the user selects Select or Cancel in the Google Picker dialog. The data object below is JSON-encoded. The google.picker.Response.ACTION field will always be set. If the user selects an item, the google.picker.Response.DOCUMENTS array is also populated. In this example the google.picker.Document.URL is shown on the main page. Find details about the data fields in the JSON Guide.

    // A simple callback implementation.
    function pickerCallback(data) {
      var url = 'nothing';
      if (data[google.picker.Response.ACTION] == google.picker.Action.PICKED) {
        var doc = data[google.picker.Response.DOCUMENTS][0];
        url = doc[google.picker.Document.URL];
      }
      var message = 'You picked: ' + url;
      document.getElementById('result').innerHTML = message;
    }

Showing Different Views

Specify a view by ViewId, or by an instance of a subclass of google.​picker.​View. Standard views offered by Google Picker API are the following:

Name Description Equivalent Class
google.picker.​ViewId.DOCS All Google Drive items. google.picker.​DocsView
google.picker.​ViewId.DOCS_IMAGES Google Drive photos.
google.picker.​ViewId.DOCS_IMAGES_AND_VIDEOS Google Drive photos and videos.
google.picker.​ViewId.DOCS_VIDEOS Google Drive videos.
google.picker.​ViewId.DOCUMENTS Google Drive Documents.
google.picker.​ViewId.FOLDERS Google Drive Folders.
google.picker.​ViewId.FORMS Google Drive Forms.
google.picker.​ViewId.IMAGE_SEARCH Google Image Search. google.picker.​ImageSearchView
google.picker.​ViewId.MAPS Google Maps. google.picker.​MapsView
google.picker.​ViewId.PDFS Adobe PDF files stored in Google Drive.
google.picker.​ViewId.PHOTO_ALBUMS Picasa Web Albums photo albums. google.picker.​PhotoAlbumsView
google.picker.​ViewId.PHOTO_UPLOAD Upload to Picasa Web Albums.
google.picker.​ViewId.PHOTOS Picasa Web Albums photos. google.picker.​PhotosView
google.picker.​ViewId.PRESENTATIONS Google Drive Presentations.
google.picker.​ViewId.RECENTLY_PICKED A collection of most recently selected items.
google.picker.​ViewId.SPREADSHEETS Google Drive Spreadsheet.
google.picker.​ViewId.VIDEO_SEARCH Video Search. google.picker.​VideoSearchView
google.picker.​ViewId.WEBCAM Webcam photos and videos. google.picker.​WebCamView
google.picker.​ViewId.YOUTUBE Your YouTube videos.

The third column shows the class equivalent for the ViewId, if available. Use a class instance instead of the ViewId when you need type-specific control. For example, use the PhotosView to show Picasa Web Album’s Featured Photos gallery.

    var picker = new google.picker.PickerBuilder().
        addView(new google.picker.PhotosView().
            setType(google.picker.PhotosView.Type.FEATURED)).
        setCallback(pickerCallback).
        build();

For a comprehensive list of methods and classes, see the Reference Guide.

Ordinarily, the set of views provided to the PickerBuilder is listed vertically in a single column on the left of the Google Picker window. You may, however, prefer some of your views to be grouped visually under a common heading. Use view groups to achieve this effect. Note that the common heading must also be a view. For example, you can create a view group of photos views, headed by the Picasa Web Albums view, like the following:

    var picker = new google.picker.PickerBuilder().
        addViewGroup(
            new google.picker.ViewGroup(google.picker.ViewId.PHOTOS).
                addView(new google.picker.PhotosView().
                    setType(google.picker.PhotosView.Type.UPLOADED)).
                addView(new google.picker.PhotosView().
                    setType(google.picker.PhotosView.Type.FEATURED))).
        addView(google.picker.ViewId.RECENTLY_PICKED).
        setCallback(pickerCallback).
        build();

Use view groups as a way of filtering out specific items. In the following example, the “Google Drive” sub-views show only documents and presentations, not other kinds of items.

    var picker = new google.picker.PickerBuilder().
        addViewGroup(
            new google.picker.ViewGroup(google.picker.ViewId.DOCS).
                addView(google.picker.ViewId.DOCUMENTS).
                addView(google.picker.ViewId.PRESENTATIONS)).
        setCallback(pickerCallback).
        build();

Use PickerBuilder.​enableFeature() to fine-tune the appearance of the Google Picker window. For instance, if you only have a single view, you may want to hide the navigation pane to give users more space to see items. Here’s an example of a Google video search picker demonstrating this feature:

    var picker = new google.picker.PickerBuilder().
        addView(google.picker.ViewId.VIDEO_SEARCH).
        enableFeature(google.picker.Feature.NAV_HIDDEN).
        setCallback(pickerCallback).
        build();

Use View.setQuery() to pre-populate search terms for views that include a web search. The following is a video search example:

    picker = new google.picker.PickerBuilder().
        addView(new google.picker.View(google.picker.ViewId.VIDEO_SEARCH).
            setQuery('Hugging Cat')).
        setCallback(pickerCallback).
        build();

Handling Google Drive Items

The picker interface can display a list of the currently authenticated user’s Google Drive files. When a user selects a file from the list, the file ID is returned, and the ID may be used by an app to access the file.

The following picker example illustrates an image selector/uploader page that could be opened from an Open or Upload Drive files button in a web app. This example demonstrates how to set the AppId value, and incorporates some useful picker features such as enabling multi-select, hiding the navigation pane, and choosing the user account with the app’s current OAuth 2.0 token:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <meta http-equiv="content-type" content="text/html; charset=utf-8"/>
    <title>Google Picker Example</title>

    <!-- The standard Google Loader script; use your own key. -->
    <script src="http://www.google.com/jsapi?key=AIzaSyBV6MeANy_ZaLB2f2c-XKCMA7hIu2Fy744"></script>
    <script type="text/javascript">

    // Use the Google Loader script to load the google.picker script.
    google.setOnLoadCallback(createPicker);
    google.load('picker', '1');

    // Create and render a Picker object for searching images.
    function createPicker() {
      var view = new google.picker.View(google.picker.ViewId.DOCS);
      view.setMimeTypes("image/png,image/jpeg,image/jpg");    
      var picker = new google.picker.PickerBuilder()
          .enableFeature(google.picker.Feature.NAV_HIDDEN)
          .enableFeature(google.picker.Feature.MULTISELECT_ENABLED)
          .setAppId(YOUR_APP_ID)
          .setOAuthToken(AUTH_TOKEN) //Optional: The auth token used in the current Drive API session.
          .addView(view)
          .addView(new google.picker.DocsUploadView())
          .setCallback(pickerCallback)
          .build();
       picker.setVisible(true);
    }

    // A simple callback implementation.
    function pickerCallback(data) {
      if (data.action == google.picker.Action.PICKED) {
        var fileId = data.docs[0].id;
        alert('The user selected: ' + fileId);
      }
    }
    </script>
  </head>
  <body>
    <div id="result"></div>
  </body>
</html>

The AppId set here and the client ID used for authorizing access to a user’s files must be contained in the same app. These values are shown in the APIs console for a registered app.

Important: The optional setOAuthToken function allows an app to use the current auth token to determine which Google account the picker uses to display the files. If a user is signed into multiple Google accounts, this allows the picker to display the files of the appropriate authorized account. In cases where no auth token is available, apps can use the setAuthUser function to specify which Google account the picker uses.

After obtaining the file ID from the picker when opening files, an application can then fetch the file metadata and download the file content as described in the reference documentation for files.get.
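
A hedged sketch of that follow-up call, assuming the same AUTH_TOKEN used in the example above and the Drive API v2 files.get endpoint:

// Fetch the metadata of a picked file via Drive API v2 files.get.
function fetchFileMetadata(fileId) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://www.googleapis.com/drive/v2/files/' + fileId, true);
  xhr.setRequestHeader('Authorization', 'Bearer ' + AUTH_TOKEN);
  xhr.onload = function() {
    var file = JSON.parse(xhr.responseText);
    // downloadUrl can then be requested with the same Authorization header.
    console.log(file.title + ' -> ' + file.downloadUrl);
  };
  xhr.send();
}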

Rendering in Other Languages

Specify the UI language by providing an optional third argument when you call google.​load(). The following is a French example:

    google.load('picker', '1', {'language':'fr'});

The following is the list of locales currently supported:

af
am
ar
bg
bn
ca
cs
da
de
el
en
en-GB
es
es-419
et
eu
fa
fi
fil
fr
fr-CA
gl
gu
hi
hr
hu
id
is
it
iw
ja
kn
ko
lt
lv
ml
mr
ms
nl
no
pl
pt-BR
pt-PT
ro
ru
sk
sl
sr
sv
sw
ta
te
th
tr
uk
ur
vi
zh-CN
zh-HK
zh-TW
zu

Supporting Older Browsers

If you intend to support users on older browsers, follow these one-time steps:

  1. Download this file: https://www.google.com/ajax/picker/resources/rpc_relay.html.
  2. Place the file somewhere in the same domain as your application.
  3. Modify the Picker creation code, using the corrected path:
    var picker = new google.picker.PickerBuilder().
        addView(google.picker.ViewId.IMAGE_SEARCH).
        setCallback(pickerCallback).
        setRelayUrl('http://www.yoursite.com/somedir/rpc_relay.html').
        build();

Behind the scenes, the Google Picker API passes messages to your web page from a service hosted at Google. This is cross-domain communication, which is why special logic is necessary. On modern browsers, browser channels are used to relay the messages around. On older browsers, however, the security model requires us to bounce messages off of your server, in the same domain as your application.

 

JSON Guide

When a user selects one or more items, the Google Picker API returns a JSON-formatted object in the callback. Depending on the view from which the selection was made, different fields are present in this data object.

{
 Response.ACTION: action,
 Response.VIEW: [
   view_id,
   undefined,
   view_options {
     query: user_query,
     parent: parent_ID,
     ...
   }
 ],
 Response.DOCUMENTS: [
   {
     Document.ADDRESS_LINES: [
       address_line,
       ...
     ],
     Document.AUDIENCE: audience,
     Document.DESCRIPTION: description,
     Document.DURATION: duration,
     Document.EMBEDDABLE_URL: embed_URL,
     Document.ICON_URL: icon_URL,
     Document.ID: item_id,
     Document.IS_NEW: is_new,
     Document.LAST_EDITED_UTC: timestamp,
     Document.LATITUDE: latitude_value,
     Document.LONGITUDE: longitude_value,
     Document.MIME_TYPE: MIME_type,
     Document.NAME: item_name,
     Document.PARENT_ID: parent_ID,
     Document.PHONE_NUMBERS: [
       {
         type: phone_type,
         number: phone_number,
       }
       ...
     ],
     Document.SERVICE_ID: service_id,
     Document.THUMBNAILS: [
       {
         Thumbnail.URL: thumbnail_URL,
         Thumbnail.WIDTH: thumbnail_width,
         Thumbnail.HEIGHT: thumbnail_height
       }
       ...
     ],
     Document.TYPE: type,
     Document.URL: item_URL
   },
   ...
 ],
 Response.PARENTS: [
   {
     Document.AUDIENCE: audience,
     Document.DESCRIPTION: description,
     Document.LAST_EDITED_UTC: timestamp,
     Document.MIME_TYPE: MIME_type,
     Document.NAME: item_name,
     Document.ICON_URL: icon_URL,
     Document.ID: item_ID,
     Document.IS_NEW: is_new,
     Document.SERVICE_ID: service_id,
     Document.THUMBNAILS: [
       {
         Thumbnail.URL: thumbnail_URL,
         Thumbnail.WIDTH: thumbnail_width,
         Thumbnail.HEIGHT: thumbnail_height
       }
       ...
     ],
     Document.TYPE: type,
     Document.URL: item_URL,
   },
   ...
 ]
}
action The Action taken by the user to close the picker dialog.
address_line The address of a picked location.
audience The Audience of a Picasa Web Albums photo album.
description A description of the item, if provided.
duration The duration of a picked video.
embed_URL A URL for an embeddable version of the item.
icon_URL A URL for a publicly accessible version for an icon, if available.
is_new True if the picked item was uploaded then immediately picked.
item_URL A URL linking directly to the item.
item_id ID of the picked item.
item_name Name of the picked item.
latitude_value Latitude of a picked location (or of where the photo was taken if it has geo data), in degrees.
longitude_value Longitude of a picked location (or of where the photo was taken if it has geo data), in degrees.
MIME_type The MIME type of the picked item (not valid for maps).
parent_ID ID of parent item, if applicable.
phone_number The phone number of a picked location.
phone_type The type of phone number for a picked location.
service_id ServiceId that describes the service this file was picked from.
thumbnail_height The height of the publicly accessible thumbnail.
thumbnail_URL A URL for the publicly accessible thumbnail.
thumbnail_width The width of the publicly accessible thumbnail.
timestamp The number of milliseconds since January 1, 1970, 00:00:00 GMT.
type The Type of the picked item.
user_query Query string, if one was set in View.setQuery().
view_ID The ViewId of the View the item was picked from.
view_options Additional information, if known. Otherwise undefined.
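
As a short illustration of consuming this object (a sketch reusing the result element from the earlier examples), a callback can list the names of everything the user picked:

    // A callback that lists the names of all picked items.
    function pickerCallback(data) {
      if (data[google.picker.Response.ACTION] != google.picker.Action.PICKED) {
        return;  // the dialog was cancelled
      }
      var docs = data[google.picker.Response.DOCUMENTS];
      var names = [];
      for (var i = 0; i < docs.length; i++) {
        names.push(docs[i][google.picker.Document.NAME]);
      }
      // innerText avoids injecting HTML contained in item names.
      document.getElementById('result').innerText = 'You picked: ' + names.join(', ');
    }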