Sunday, October 25, 2009

Adventures into web UI land

To explore how easy or hard it would be to write a UI component using web technologies, and then integrate it as a first-class citizen in Eclipse, I reimplemented the PDE "site.xml" editor for the 0.9 release of e4, using HTML, JavaScript, Dojo, and CSS.

I thought it would be interesting to document the design decisions that I made for this. Since my experience with web UIs is somewhat limited, you should take the following with a grain of salt. I did, however, work on something that could be characterized as an "IDE in a browser" for over three years (2001-2004), so I don't consider myself a complete rookie. :-)

Note that I won't talk about integrating the web UI component into Eclipse; this post is just about the actual web UI component itself. The Eclipse integration is a related but different story that deserves its own post.

Let's start with a screenshot of the original PDE editor. It has two panes, a tree on the left showing features and categories, and a bunch of fields on the right with which you can edit the currently selected object in the tree. Depending on the type of object, the right side shows different fields, for example feature environments if the selected object is a feature, or category ID, name, and description if the selected object is a category. To add new categories or features, there are some push buttons on the right side of the tree.



We should be able to achieve a very similar look and feel with a piece of web UI, after all, the "flat look" of the PDE editors was an attempt to mimic the look of a web page.

The first important decision I had to make was whether to generate the HTML, pre-filled with the data, on the server side. This has the advantage that the resulting HTML can be made to (somewhat) work even if the web browser displaying it has JavaScript disabled. You would probably use a server-side template engine for this. If you follow the link, you can see one of the disadvantages of this approach - there are a bajillion template engines to choose from. Good luck with deciding which one! ;-)

But seriously, I had other reasons for deciding to let client-side JavaScript fill in the data instead. For example, this approach scales better because the "filling in data" work is done on the client, with the server doing less work because it only serves raw data. Also, it makes it easier to handle dynamic changes to the data on the client side because there is only one way of turning data into populated widgets. Development and testing is easier because you avoid a complete layer of the cake - you don't need to restart the template engine or reload the template every time you make a change, instead you change the .html or .js file and reload the page in the browser. Finally, this approach quite naturally leads to an interface between client and server that could be used by other kinds of clients as well (you get an API if you decide to publish and maintain it).

Oh, and I forgot: server-side generation wouldn't be buzzword compliant - to claim you're doing Ajax means that you have to use JavaScript and XMLHttpRequest, and to be RESTful means that the server should serve raw data.

I started with the HTML and CSS first, leaving the client-server communication for later. My first goal was to get the right kind of "look". Many things are easy to do in plain HTML, for example input fields and buttons, and the header area that says "Update Site Map". There were two things that were hard (or maybe impossible) with plain HTML: making the two panes resizable by the user, and displaying a tree of objects. I decided to use Dojo for these two things, because I had a little previous experience with Dojo, and because it had been IP-approved for use by Eclipse projects. Note that I didn't use Dojo widgets for buttons and text fields, mostly because I didn't like their Dojo-styled look and wanted the "plain" look that you get from plain buttons and plain text fields.

For the resizable panes with a draggable splitter between them, I was able to use a Dojo BorderContainer, which is configured like this:
<div dojoType="dijit.layout.BorderContainer" design="sidebar"
     liveSplitters="false" gutters="false"
     style="width: 100%; height: 90%;">
The tree is created dynamically from JavaScript as follows:
myTree = new dijit.Tree({
  id: "myTree",
  model: model,
  showRoot: false,
  getLabel: function(item) { /* code to return the label... */ },
  getIconClass: function(item, opened) {
    /* code returning a CSS class for the icon */
  },
  onClick: function(item, treeNode) { /* ... */ }
});
myTree.startup();
dojo.byId("tree").appendChild(myTree.domNode);
This creates a tree widget based on the model you are passing in, and puts it under the DOM node with the ID "tree". Of course, there is more code than what I am showing here, but I hope that the few snippets I am showing give you enough context to understand the rest.
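To give one concrete example of what such a callback might do, getIconClass can simply map the item's type to a CSS class. This is a sketch, not the original code - the item shape and the class names are my own invention:

```javascript
// Map a tree item to the CSS class that supplies its icon.
// The "type" field and the class names are hypothetical.
function iconClassFor(item, opened) {
  if (!item || !item.type) {
    return "dijitLeaf";  // fall back to Dojo's default leaf icon
  }
  if (item.type === "category") {
    // Categories can contain features, so reflect the open/closed state.
    return opened ? "categoryIconOpen" : "categoryIcon";
  }
  return "featureIcon";
}
```

The CSS classes would then reference the actual icon images as background images.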

I initially ignored the server part completely and just hard-coded some example data right in the JavaScript. This allowed me to rapidly test and develop the web UI in a browser. Actually, make that "in all the browsers I cared about". During development, I kept Firefox, IE, and Safari open to make sure it worked in all of them. Some of the things you'd like to do in CSS, in particular with respect to layout, don't work quite the same in all the browsers. :-)
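For illustration, the hard-coded data might have looked something like this - the field names and values are my guesses at the shape of the data, not the actual code:

```javascript
// Hypothetical stand-in for the data the server would later provide.
var exampleSiteData = {
  features: [
    { id: "org.example.feature", version: "1.0.0", os: "", ws: "", arch: "" }
  ],
  categories: [
    { name: "examples", label: "Example Features",
      description: "Features used while developing the editor." }
  ]
};

// Turn the raw data into the flat list of items a tree model can consume.
function toTreeItems(data) {
  var items = [];
  data.categories.forEach(function (c) {
    items.push({ type: "category", label: c.label, name: c.name });
  });
  data.features.forEach(function (f) {
    items.push({ type: "feature", label: f.id + " (" + f.version + ")", id: f.id });
  });
  return items;
}
```

Swapping this stub for real server data later then only means replacing where `exampleSiteData` comes from.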

Here is the result, opened in a standalone web browser:



My next decision was about the client-server communication. I decided to use JSON as the data transfer format, because, as you all probably know already, "JSON has become the X in Ajax" (quote: Douglas Crockford). There was a more pragmatic reason, too: the version of Dojo that I used was a bit older and didn't support trees backed by XML data. With a little help from Dojo, you can make HTTP requests without having to worry about browser differences.
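With Dojo 1.x, fetching the JSON might look roughly like the sketch below. The URL prefix is a hypothetical servlet mapping and the function names are mine; the guard around the Dojo call just lets the file load outside a browser:

```javascript
// Build the data URL for a given resource path; the "/pde-webui/data"
// prefix is a hypothetical servlet mapping, not the actual one.
function dataUrlFor(resourcePath) {
  return "/pde-webui/data" + resourcePath;
}

// Sketch of fetching the editor's input with Dojo 1.x.
function loadSiteData(resourcePath, onLoaded) {
  if (typeof dojo !== "undefined" && dojo.xhrGet) {
    dojo.xhrGet({
      url: dataUrlFor(resourcePath),
      handleAs: "json",  // have Dojo parse the response body as JSON
      load: onLoaded,
      error: function (err) { console.log("GET failed: " + err); }
    });
  }
}
```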

There is only one thing that is not obvious: if the web UI is in a file like "site-editor.html" that references the necessary CSS and JavaScript pieces, then how does it know its "input", i.e. which resource to operate on? Somehow, the web UI needs to know the full path to the site.xml file. One widely used approach for solving this problem is to (mis-)use the "anchor" part of the URL that the browser points at. So for example, if the browser's URL is http://localhost:9234/pde-webui/site-editor.html#/myproject/site.xml then it will request /pde-webui/site-editor.html from localhost:9234, followed by an attempt to find the anchor named /myproject/site.xml in the HTML. Even though it won't find this anchor, the document.location object will contain the full information. This means that our JavaScript code can just get the value of document.location.hash and then make a corresponding XMLHttpRequest to get data from the server, which it then uses to fill the widgets. Once you know about this technique (which by the way has some nice properties with respect to the back button and the browser history), you will start to notice its use in many existing and widely used web-based applications. For example, https://mail.google.com/mail/#inbox, or http://www.facebook.com/home.php#/bokowski?ref=profile :-)
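Extracting the input path from the hash is then a one-liner. As a sketch (in the browser you would pass in document.location.hash, which includes the leading "#"):

```javascript
// Given a location hash like "#/myproject/site.xml", return the resource
// path, or null if no input was supplied in the URL.
function resourceFromHash(hash) {
  return hash && hash.charAt(0) === "#" ? hash.substring(1) : null;
}
```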

By now, if you are still following, we have a client-side web UI consisting of some HTML, CSS and JavaScript, that will access a server over HTTP to get the data that needs to be displayed. Let's now focus on the server part.

Obviously, we need a place to serve our static .html, .css, and .js files. The canonical choice for this is to configure Jetty accordingly. I instantiate my own Jetty instance and instruct it to pick a free port number for me. If I know that the web UI will only be accessed from the local machine, Jetty can be configured to accept connections from localhost only.

For the RESTful interface, I implemented a servlet that
  1. responds to a GET request with the data I need in JSON format, and that
  2. accepts PUT requests when the user wants to save changed data.
In reality, the servlet does a little more (e.g. authentication), but this post is already pretty long and I need to gloss over some of the details to avoid boring everyone to death.
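On the client, saving is then a matching PUT of the serialized model. A sketch with Dojo 1.x - the function names and the URL scheme (mirroring the GET side) are mine:

```javascript
// Serialize the edited model for the PUT body. In the browser, dojo.toJson
// would work too; plain JSON.stringify keeps this sketch self-contained.
function putBodyFor(model) {
  return JSON.stringify(model);
}

// Hypothetical save function; the "/pde-webui/data" prefix is illustrative.
function saveSiteData(resourcePath, model) {
  if (typeof dojo !== "undefined" && dojo.xhrPut) {
    dojo.xhrPut({
      url: "/pde-webui/data" + resourcePath,
      putData: putBodyFor(model),
      headers: { "Content-Type": "application/json" }
    });
  }
}
```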

Let me just add one more thing - how did I implement the part of the servlet that converts the site.xml file into the JSON format I needed?

The most obvious approach would be to parse the XML as a DOM and then generate JSON based on that DOM. Instead, I decided to use EMF objects as an intermediate "format". My idea was that by using EMF, the example code could easily be adapted to any data model that's already EMF based, and XML files (assuming they have an XML schema) could then be considered a special case.

If you have an XML schema, you can let EMF create an .ecore file for you (using the New EMF Project wizard). For my use case, I chose to not generate any Java code from the .ecore file, because the code that generates JSON works on any EMF model, using generic EMF calls. This means that it should be very easy to change the schema, or even to use the same code for completely different models.

Just to be clear, if you know that the format of your XML never changes, or if you want to minimize the external dependencies of your code, or if you enjoy programming against the DOM API ;-), or if adding EMF as another "layer" disturbs you, it's perfectly fine to not use EMF. Maybe it would even make sense to use XML on the client side as well.

Overall, I was pretty happy with the structure that I came up with and think that it can serve as an example of how to approach the task of writing a web UI component. And now I'm looking forward to your comments, especially ones that explain how I could have made better decisions!

Thursday, October 15, 2009

JSR 330 (Dependency Injection for Java) passed!

Two days ago, JSR 330 passed its final vote, and we now have a standard way to annotate Java classes for dependency injection. Thanks to the spec lead Bob Lee, the JSR 330 expert group did its work in record time, in the open, and under one of the most permissive licenses.

Because of the transparency of the expert group work, it is no secret that I am representing IBM in that group. I even implemented the spec in order to understand it better, and made that implementation available as open source in the context of the e4 project, which is the place where Eclipse Platform folks can experiment with new technologies. (Btw, I intend to put javax.inject in Orbit for use by other Eclipse projects.)

I am happy with the current state of JSR 330 and am looking forward to working with the expert group on a planned maintenance draft that will define a portable configuration API. With such an API, it will be possible to reuse application code across different injector implementations.

Note that postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions. Similarly, IBM's votes on JSRs don't necessarily represent my personal positions, strategies, or opinions. ;-)