Archive for September, 2005

* JavaScript Benchmarking IV: JSON Revisited

Posted on September 29th, 2005 by Dave Johnson. Filed under AJAX, JavaScript, Web2.0, XML, XSLT.


My last post [1] about JSON had a helpful comment from Michael Mahemoff, the driving force behind the great AJaX Patterns site (I recommend taking a quick gander at the comments/responses from/to Michael and Dean who are both very knowledgeable in the AJaX realm).

Experimental: Comparing Processing Time of JSON and XML
Michael had commented that there is a JavaScript-based JSON parser that offers a more secure alternative to deserializing with the native JavaScript eval() function. So I put the JSON parser to the test in both IE 6 and Firefox and compared it to using eval() as well as to using XML. The results are shown below:


Figure 1. JSON parse and eval() as well as XML transformation processing time with number of records in Firefox.


Figure 2. JSON parse and eval() as well as XML transformation processing time with number of records in Internet Explorer.

Ok, so it’s pretty clear that the JSON parser is by far the slowest option in both browsers! The result that muddies the water is that in Firefox, using JSON with the eval() function is actually the fastest method. These results also reinforce those I found in an earlier JavaScript benchmarking post [2], which revealed that JavaScript was faster than XSL-T for building an HTML table in Firefox but not in IE 6.
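For reference, a rough sketch of the kind of timing harness used for these comparisons (the record data is made up, and parseJson() is just a placeholder for whatever entry point the JSON parser you load actually exposes):

  // Rough sketch of the timing approach. The record data is made up, and
  // parseJson() is a placeholder for whatever entry point the JSON parser exposes.
  function buildJsonString(numRecords) {
    var records = [];
    for (var i = 0; i < numRecords; i++) {
      records.push('{"id":' + i + ',"name":"record' + i + '","value":' + (i * 2) + '}');
    }
    return '{"records":[' + records.join(',') + ']}';
  }

  function timeIt(label, fn) {
    var start = new Date().getTime();
    fn();
    alert(label + ': ' + (new Date().getTime() - start) + 'ms');
  }

  var json = buildJsonString(500);

  timeIt('eval()', function () {
    var data = eval('(' + json + ')');
  });

  if (window.parseJson) {
    timeIt('JSON parser', function () {
      var data = parseJson(json); // placeholder for the parser's entry point
    });
  }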

Analysis: Implications of Varying DataSet Size
Now, to make it crystal clear that I am not simply saying that one method is better than another, I will look at what this means for choosing the proper technology. If you look at browser usage stats from, say, W3Schools, you can determine the best solution based on the expected number of records you are going to be sending to the browser and inserting into HTML. To do this I weight each set of data by the corresponding browser usage share and then add the weighted JSON Firefox and JSON Internet Explorer processing times, doing the same for the XML data. This gives the expected average processing time, given the expected number of people using each browser. The results are below.

Figure 3. JSON and XML average processing time against record number given 72% and 20% market share of Internet Explorer and Firefox/Mozilla respectively.
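To illustrate the weighting, the calculation for a given record count looks something like this (the usage shares are those from Figure 3; the per-browser timings are made up):

  // Sketch of the weighted average, assuming the 72% / 20% shares from Figure 3.
  // The per-browser timings for a given record count are made up.
  var ieShare = 0.72, ffShare = 0.20;

  var jsonIe = 240, jsonFf = 15;   // hypothetical JSON times (ms)
  var xmlIe  = 60,  xmlFf  = 120;  // hypothetical XML times (ms)

  var jsonExpected = jsonIe * ieShare + jsonFf * ffShare;
  var xmlExpected  = xmlIe * ieShare + xmlFf * ffShare;

  alert('JSON: ' + jsonExpected + 'ms, XML: ' + xmlExpected + 'ms');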

Due to the apparently quadratic (roughly x^2) dependence of JavaScript / JSON processing time on the number of records in IE (see Fig. 2), it is no wonder that as we move to more records the total average JSON processing time increases in the same manner. Therefore, the JSON processing time crosses the more linear XML processing time somewhere in the neighbourhood of 180 records. Of course, this exact cross-over point will change depending on several factors, such as:

  • end user browser usage for your application
  • end user browser usage intensity (maybe either IE or FF users will actually use the application more often due to different roles, etc.)
  • end user computer performance
  • object/data complexity (nested objects)
  • object/data operations (sorting, merging, selecting, etc.)
  • output HTML complexity

Keep in mind that this could all change with the upcoming versions of Internet Explorer and Firefox in terms of the speed of their respective JavaScript and XSL-T processors. Still, there are also other, slightly less quantitative reasons for using XML [1,3] or JSON.

References
[1] JSON and the Golden Fleece - Dave Johnson, Sept 22, 2005
[2] JavaScript Benchmarking - Part I - Dave Johnson, July 10, 2005
[3] JSON vs XML - Jep Castelein, Sept 22, 2005




* JSON and the Golden Fleece

Posted on September 22nd, 2005 by Dave Johnson. Filed under AJAX, Web2.0, XML, XSLT.


JavaScript Object Notation (JSON) is a clever, AJaXian way of representing data for use in a web browser that supports the JavaScript programming language. However, like the golden fleece (and the fair Medea) retrieved by Jason in Greek mythology, I believe that in time it will be forgotten. Gotta love all the AJaX Greek cliches!

People have argued before that JSON is a good alternative to XML for many reasons. Here are my reasons for preferring XML.

Processing
First and foremost, something really irks me about using eval() in JavaScript to create objects. It can be both a security problem and, despite what many people seem to think (I am not sure who started it), relatively slow, particularly as you start having nested objects. Meanwhile, XML can be deserialized into objects in most OO languages and/or formatted using XSL-T (from JavaScript, for example) to create any XML dialect one wishes (such as XHTML for AJaX purposes). Furthermore, in the realm of AJaX you are using XMLHTTP requests to get the data anyway, which can return the data as XML through the responseXML property.
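To make the contrast concrete, here is a rough sketch of the two approaches (the URLs, the 'output' element and the stylesheet file are all hypothetical, and synchronous requests are used just to keep the sketch short):

  // Rough sketch only: the URLs, the 'output' element and the stylesheet
  // are hypothetical, and synchronous requests are used to keep it short.
  function createRequest() {
    return window.XMLHttpRequest
      ? new XMLHttpRequest()
      : new ActiveXObject('Microsoft.XMLHTTP');
  }

  // JSON approach: responseText is eval()ed into JavaScript objects.
  var jsonReq = createRequest();
  jsonReq.open('GET', '/data.json', false);
  jsonReq.send(null);
  var data = eval('(' + jsonReq.responseText + ')');

  // XML approach: responseXML is transformed straight to XHTML with XSL-T.
  var xmlReq = createRequest();
  xmlReq.open('GET', '/data.xml', false);
  xmlReq.send(null);
  var xmlDoc = xmlReq.responseXML;

  var xslReq = createRequest();
  xslReq.open('GET', '/table.xsl', false);
  xslReq.send(null);
  var xslDoc = xslReq.responseXML;

  if (window.XSLTProcessor) {
    // Mozilla / Firefox
    var processor = new XSLTProcessor();
    processor.importStylesheet(xslDoc);
    var fragment = processor.transformToFragment(xmlDoc, document);
    document.getElementById('output').appendChild(fragment);
  } else {
    // Internet Explorer: transformNode() returns the output as a string.
    document.getElementById('output').innerHTML = xmlDoc.transformNode(xslDoc);
  }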

Simplicity
Ok both XML and JSON are pretty simple. I find XML easier to write and read myself.

Extensibility
They don’t put the X in XML for nothing.

Interoperability and Data Exchange
On the server, JSON requires platform / language specific converters. XML has the lovely XSL-T, which is not only widely supported but also really fast on both the client AND the server. The ability to process the same data on either the client or the server with a single XSL-T file, with no re-writing or converting of code, is a big plus for AJaX.

Structure and Data Types
Sure, JSON has something that you could consider structure, but XML has a little something called a schema, which is widely supported and allows the definition of data structure as well as data types.

Data Size
In the extreme, both formats could be encoded to be essentially the same size. We use an encoded format for our AJaX applications which is about as small as you can get without zipping or ignoring the data.

Emerging Technologies
E4X anyone? (thanks for the link Dan)

Acronyms
Yes, you would need to change AJaX to AJaJ if you wanted to use JSON and it doesn’t really roll off the tongue.

One can really see the benefit of XML when you consider dealing with large datasets for something like an AJaX grid control. For example, a common operation in a data grid is sorting - not only is it faster to sort using XSL-T rather than an array of JavaScript objects, but the exact same XSL-T can be used to sort the data on either the server or the client in different situations. To investigate this further I wrote some JavaScript to test the performance of eval()ing JSON data and compared it to the performance of the same data in XML being processed using XSL-T. The script essentially generated data sets in both XML and JSON formats with varying numbers of records and then processed them accordingly into HTML fragments to be inserted into the DOM using innerHTML. Both tests were done in IE6 on Win2K (didn’t get around to Firefox or Opera :( ). The results are illustrated below.

JSON vs XML processing

As is plain to see, the XML data is processed much faster - especially as we move to larger data sets. This makes sense since, once the data is ready, it is transformed using a fast XSL-T stylesheet which outputs XHTML. On the other hand, for JSON one needs to apply the slow eval() function to the data, after which the JavaScript objects have to be looped through and concatenated into a string. Admittedly, if for some reason you actually want to deal with a singular JavaScript object (i.e. not having many records that are being put straight into XHTML) then JSON may be the way to go.
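Coming back to the sorting point above, here is a rough sketch of a sort-by-name stylesheet embedded as a string and applied on the client in IE (the record and field names are made up; the same stylesheet text could equally be applied on the server):

  // Sketch only: the record and field names are made up. The same stylesheet
  // text could be applied on the server, so the sort logic is written once.
  var sortXsl =
    '<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">' +
      '<xsl:template match="/records">' +
        '<table>' +
          '<xsl:for-each select="record">' +
            '<xsl:sort select="name" order="ascending"/>' +
            '<tr><td><xsl:value-of select="name"/></td>' +
                '<td><xsl:value-of select="value"/></td></tr>' +
          '</xsl:for-each>' +
        '</table>' +
      '</xsl:template>' +
    '</xsl:stylesheet>';

  // IE 6: load the stylesheet and data into MSXML documents and transform.
  var xslDoc = new ActiveXObject('Microsoft.XMLDOM');
  xslDoc.loadXML(sortXsl);

  var xmlDoc = new ActiveXObject('Microsoft.XMLDOM');
  xmlDoc.loadXML('<records>' +
                 '<record><name>beta</name><value>2</value></record>' +
                 '<record><name>alpha</name><value>1</value></record>' +
                 '</records>');

  // The sorted XHTML comes back as a string, ready for innerHTML.
  document.getElementById('grid').innerHTML = xmlDoc.transformNode(xslDoc);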

A second interesting thing I noticed here was that using a data-driven XSL-T stylesheet rather than a declarative one resulted in noticeably slower transformations (though still much faster than JSON). I expected this result but did not expect it to be so evident. The reason for this is that a data-driven stylesheet uses many <xsl:apply-templates select="*" /> and <xsl:template match="nodeName" /> elements, whereas a declarative one uses only one <xsl:template match="/" /> for the root node and many nested <xsl:for-each select="nodeName" /> elements.




* JavaScript Benchmarking - Part 3.1

Posted on September 15th, 2005 by Dave Johnson. Filed under AJAX, Web2.0, XML, XSLT.


With the open source release of Google’s GOOG-AJAXSLT JavaScript library I thought that it would be interesting to look at the performance in various browsers. It is of particular importance when building responsive AJaX applications on Opera, which does not support XSL-T at this time. Of course there is no reason for using this library in Firefox or Internet Explorer since they both have support for doing XSL-T transformations.
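For context, driving the library looks roughly like the sketch below; xmlParse() and xsltProcess() are assumed names for the library's parse and transform entry points, and the documents are made up:

  // Sketch only: xmlParse() and xsltProcess() are assumed names for the
  // library's parse and transform entry points; the documents are made up.
  var xml = xmlParse('<rows><row><name>a</name></row><row><name>b</name></row></rows>');
  var xsl = xmlParse(
    '<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">' +
      '<xsl:template match="/rows">' +
        '<table><xsl:for-each select="row">' +
          '<tr><td><xsl:value-of select="name"/></td></tr>' +
        '</xsl:for-each></table>' +
      '</xsl:template>' +
    '</xsl:stylesheet>');

  var start = new Date().getTime();
  var html = xsltProcess(xml, xsl);   // the transform is done entirely in JavaScript
  var elapsed = new Date().getTime() - start;

  document.getElementById('output').innerHTML = html;
  alert('transform took ' + elapsed + 'ms');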

I made a simple XSL-T document that used a variety of XSL-T elements and functions and had it generate some HTML output. I measured the time taken to perform the transform operation with varying numbers of rows in the output HTML. The result is short and sweet and can be seen below.

Luckily, it performs best on Opera which is the browser that does not natively support XSL-T. That being said, in the region of interest here the built-in transformation engines in IE, NS and FF can do the work in the 1ms range.

The question is: can any optimization be done to make the code run faster? Looking through the code I noticed that there are many loops that check values such as node.children.length on every iteration and similarly access node.children[i]. Also, there are many functions that use += to concatenate strings, and we all know that building an array and calling stringArray.join('') can be very fast when dealing with large strings. Depending on the size of the transformation there could be performance gains there.
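For instance, the two patterns in question look something like this (a generic sketch, not the library's actual code):

  // Generic sketch of the two optimizations; this is not the library's code.

  // Hoist the length lookup and cache the child list instead of touching
  // node.children.length and node.children[i] on every iteration.
  function visitChildren(node, visit) {
    var children = node.children;
    for (var i = 0, len = children.length; i < len; i++) {
      visit(children[i]);
    }
  }

  // Collect output in an array and join once rather than using += repeatedly,
  // which can be much faster for large strings.
  function buildRows(records) {
    var parts = [];
    for (var i = 0; i < records.length; i++) {
      parts.push('<tr><td>' + records[i].name + '</td></tr>');
    }
    return parts.join('');
  }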

Tests were done on a 2GHz Celeron running Win2K server using IE6, NS8, FF1 and OP8.




* SOAP + WSDL in Mozilla

Posted on September 12th, 2005 by Dave Johnson. Filed under AJAX, Semantic Web, XML, XSLT.


I sure am behind the times. I just found out about the SOAP and WSDL support in Mozilla / Gecko based browsers. This is very cool and I am not sure why more people are not using this … especially in AJaX circles.

The other interesting thing that I found was that you can extend the DOM in Mozilla to support Microsoft HTML Component files, or HTCs - these are used in Internet Explorer to implement things such as SOAP and WSDL support. So you can in fact have SOAP and WSDL support in Gecko either with the built-in objects or using HTCs.

Ok, so why aren’t more AJaX people using this built-in support for SOAP + WSDL in Mozilla? If you prefer to generate JSON on the server and pass that up you are just crazy, since you could instead pass it up as XML embedded in SOAP and then use XSLT on the client to (very quickly) generate HTML or CSS or whatever from the XML.
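From memory, calling a SOAP service with the Gecko objects looks something like the sketch below; the endpoint, method and parameter names are made up, and the exact signatures may differ from what I remember:

  // Sketch from memory of Mozilla's built-in SOAP objects; the endpoint,
  // method and parameter names are made up and the exact signatures may differ.
  var call = new SOAPCall();
  call.transportURI = 'http://example.com/soap/endpoint';

  var params = [new SOAPParameter('42', 'recordId')];
  // encode(version, methodName, targetNamespace, headerCount, headers, paramCount, params)
  call.encode(0, 'GetRecord', 'urn:example-service', 0, null, params.length, params);

  var response = call.invoke(); // synchronous call
  if (response.fault) {
    alert('SOAP fault: ' + response.fault.faultString);
  } else {
    var count = {};
    var results = response.getParameters(false, count);
    // Each result wraps an element of returned XML, ready for client-side XSLT.
    alert(results.length + ' parameter(s) returned');
  }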




* Service Oriented Architecture: The 4th Dimension of the Rich Web

Posted on September 7th, 2005 by Dave Johnson. Filed under AJAX, Semantic Web, Web2.0.


As usual, Bill Scott has recently shared with us some of his keen insight into what makes Web 2.0 tick. In his latest post he introduces and defines the three (rich) dimensions of Web 2.0 as visual, interaction and data [1].

Before the arrival of so many AJaXified applications, data was the bottleneck through which the other two dimensions had to be squeezed. Now, developers are free to work in any dimension almost completely disjoint from the others using CSS, DOM and XMLHTTP for visual, interaction and data respectively.

I say almost because the choices you make in any dimension can and do influence the others (AJaX string theory). AJaX developers generally insist on minimalist and tightly coupled data communication methods; the reason for this is simple - if you pass SOAP or, worse, some WS-* compliant messages between the server and client, you are going to have lots of extra data passed back and forth and will require more processing, both of which take time and reduce the usability of an application. Take Google, for example: to get the best performance from their AJaX applications they generally return pure JavaScript or JSOR (JavaScript on the rocks). Doing this is great for a one-off customer facing application, but when you want to share and open up data it becomes a lot of work to interoperate between Google, MSN and Amazon maps. In short, by making the data dimension more complicated to allow for, say, SOAP interoperability, we make the job of the DOM / JavaScript dimension that much more difficult due to the increased overhead. This trade-off in performance has to be considered.

So as Web Services and all the standards that come under that umbrella are currently moving towards implementing Service Oriented Architectures (SOA) and (maybe even more importantly) the Semantic Web, where is AJaX going? What CSS and DOM trade-offs are we willing to make for the sake of rich data? Sure, AJaX is young, but let’s face it: everyone and their dog has been using iFrames or XMLHTTP since the 90s. AJaX and Web 2.0 developers should think about looking to SOA for guidance if we truly want to see rich data at its best. Let’s not get hung up on a Google map + housing listing “mashup” (not to say that I wasn’t excited to see it :) ) or worry so much about back-buttons. We need to be driving development of Internet technologies on the client as well as hacking around and pushing the boundaries of today’s Web!

Where do we go from here? In my previous post I discussed the synergies between SOA and AJaX [2], and in light of that discussion I have been thinking about AJaX and how to create a truly data-rich Internet application. Most of my thoughts end up at the sad conclusion that we are at the mercy of the web browser vendors, most of whom don’t have WS-* or even SOAP processing built-in (which Mozilla actually does have now :) ). Alternatively, maybe we should be looking at building a 4th dimension into AJaX applications: lightweight standards based on the SOA tenets (discoverability, reusability, abstract models, policies)?

[1] Richness: The Web in 3D - Bill Scott, August 30, 2005
[2] SOAJaX: Where SOA Stops and AJaX Begins - Dave Johnson, September 02, 2005




* Running Out of Gas

Posted on September 6th, 2005 by Dave Johnson. Filed under Energy, Uncategorized.


What To Do
In the wake of Katrina, many gas stations in the southern US last week either ran dry or had to stop selling gas because the price was rising so quickly [1]. With the price still near $70 a barrel, many people in the US and Canada are crying foul and want their governments to step in and lower taxes on their precious fuel. It seems that the US government has tried to step in many times before, and after their latest plan they are still trying to fight off insurgents and bring all the greatness that is democracy to Iraq.

Addiction
In recent weeks and months, with fuel prices soaring, there has been no decrease in demand; in economic terms this is behaviour indicative of an addictive substance, a pattern also evident with crack addiction. In fact, crack addiction and gasoline addiction have many things in common - not the least of which is that both have had expensive, and generally ill-conceived, “wars” waged in their names. Sure, it can be tough to kick an addiction, but the oil addiction could well be behind the problems in the Gulf of Mexico through the phenomenon known as global warming [2]. And it appears that there are still more hurricanes in store [3].

I think that the people who made this lovely sign on a train bridge over Holloway Road said it best (sorry it’s a bit blurry).

[1] Some U.S. gas stations run dry - CBC, September 01, 2005
[2] Hurricanes getting worse with global warming - CNet News, August 03, 2005
[3] Scientists forecast more US Atlantic hurricanes in upcoming months - September 03, 2005




* Internet Explorer OnResize

Posted on September 2nd, 2005 by Dave Johnson. Filed under AJAX, JavaScript.


I recently came across a very strange behaviour in Internet Explorer - shock, horror!

It is related to the OnResize event. In Internet Explorer the onresize event will fire on any node for which the event is defined. Take, for example, the following HTML snippet:
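(A minimal sketch of such a snippet; the nesting and alert messages are illustrative.)

  <body onresize="alert('body')">
    <div onresize="alert('outer div')">
      <div onresize="alert('inner div')">Resize the window to see the order of events.</div>
    </div>
  </body>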


If you have this in a web page and resize the window, in Internet Explorer 6 you will first see the alert from the body; it will then start at the deepest part of the hierarchy (i.e. the innermost div child node) and fire all of the resize events until it reaches the top of the node tree, at which point it will fire the event on the body tag once again! There are also situations where resizing an element through JavaScript will cause the body resize event to fire - but only once, rather than twice as it does when you manually resize the window.

When run in Firefox, Opera or Netscape my sample code only fires the body onresize event once and does not fire the onresize events on elements contained in the body element.

So be careful when building AJaX components that take advantage of the onresize event.




* SOAJaX: Where SOA Stops and AJaX Begins

Posted on September 2nd, 2005 by Dave Johnson. Filed under AJAX, Semantic Web, Web2.0.


There has recently been a maelstrom brewing over SOAJaX, with some people claiming there is no correlation whatsoever between the two [1], some comparing the software industry to the fashion industry [2], some making nice graphics outlining important implications for SOA designers [3,4], some rolling out the old “it’s all semantics” argument [5] and some being completely inane [6].

As many before have noted, SOA and AJaX are both just ridiculous acronyms describing architectural paradigms that encompass entire families of web technologies - but let’s look beyond that :) Let’s try to answer the question of what exactly SOA and AJaX have to do with life, the universe and everything.

To start with, the “it’s all semantics” argument is correct. If you look around you can find a different definition of SOA depending on the time of day [8]. So, as Dion Hinchcliffe discusses [7], I think a good place to start is looking at what exactly SOA and AJaX are.

To get the definition of SOA I went straight to the horse’s mouth - OASIS. Since some smart people realized that SOA was completely ambiguous and didn’t mean anything in the real world, a technical committee was created specifically to define a SOA reference model (they call this the SOA-RM TC). Hopefully, the result of all the hard work being done by the SOA-RM TC will be some guidelines to help in defining what components are required to actually call something a SOA. The work is not completely done, but there is a recent SOA-RM Technical Committee overview presentation [9] by Duane “cosmic genius” Nickull (I hope that some of the absurd smarts rub off on this Canadian technologist) of Adobe. From this presentation there are at least five things that are required for something to be considered a SOA: a service that can be called through a prescribed interface, a service description declaring all relevant aspects of the service for would-be consumers, discoverability, abstract data and behavioural models, and finally a policy which imposes constraints on consumers of the service. Of course, loose coupling is also a SOA hallmark.

Ok. I did not see any mention of Flickr, CSS, XML or the “yellow fade” technique there. Things are looking grim.

Now let’s consider what a RM for AJaX might look like. I am thinking that the important things for AJaX must be some degree of Asynchronicity, JavaScript and XML? Let’s knock off the last two first. Since SOA is quite technology agnostic, it cannot really have anything specifically to do with JavaScript or XML (although most implementations use the latter). However, we may be able to weave a connection around the thin thread that is the capital “A” in AJaX - of course the second “a” is part of the word JavaScript and so should not be capitalized, but that is another kettle of fish as they say. Asynchronous. Both SOA and AJaX (for the sake of argument) use either a synchronous or an asynchronous communication pattern. So in the strictest sense AJaX can be a nice way to consume SOA services and provide a usable interface to them. That being said, if today’s SOAs are defined using the likes of WS-* then AJaX will never rise to the God-like status it is striving for, because you don’t want the WS-* stack written in JavaScript. So AJaX can consume services based on a SOA if AJaX developers want to play in the same league, but today I doubt it. This is where the commonalities start and end - but hey, it’s better than nothing.

Strictly speaking, AJaX is simply an important layer above a SOA, like any other web application framework today; they are, for the most part, separate and discrete entities. Their paths may cross at some point in an optical fibre in the middle of the Atlantic Ocean, but that is as close as they come. Before AJaX rose to super-stardom, developers would simply utilize SOA from their Ruby on Rails or .NET or Java application running on the server and then convert the returned data to HTML and serve that up to the client. Now that AJaX has landed, people have the opportunity to bypass that server layer and go straight to the source - if they want to deal with SOAP, WS-* etc they can do that. In general this is not the case. Since developers are lazy by design (at least the good ones), AJaX developers (the laziest of the bunch) have shunned XML and, in an effort to reduce the amount of JavaScript coding to be done, have come up with their own data formats (JSON and JavaScript on the rocks, or JSOR, amongst other more obscure or proprietary ones). These formats were spawned outside of the standards world, in the wild west that is Web 2.0. Sure, you can have a system that follows the tenets of SOA and uses JSON as the data format of choice if you are building a Web 2.0 consumer-facing photo sharing website, but this might not be so helpful when trying to integrate supply chains.

Although SOA has not quite hit the fashion industry status that AJaX has, SOA is the bricks and mortar that our software systems of the near future will be built upon while AJaX is but the decoration nailed to the walls. It just so happens that in Web 2.0 the walls are generally quite thin and AJaX appears to, and does, blend into a bit of an ad hoc, loosely defined, SOA.

So what implications do SOA and AJaX have for each other? Dion mentioned in one of his articles [3] that AJaX would likely push SOA away from the WS-* way of doing things, but I contend that AJaX will not have as much influence on SOA as he suggests because:
1) the SOA crowd is more established than the Web 2.0 cowboys, so it will take more than a few rogues to completely turn the tables
2) in web browsers today there is no support for discovery and policy binding, which are necessary for SOA, particularly in the enterprise
3) people have been developing web applications that consume services for many years - to think that, because of the re-introduction of asynchronous requests from the browser, developers will suddenly find that they need to access enterprise Web Services directly from the browser seems unfounded (and a security risk to boot)

If nothing else, AJaX is creating a new generation of developers that will at least think about rich clients and how they interact with SOAs - this is good. Also, the visual side of AJaX helps to put a pretty face to the SOA name (guilt by association) - this is also good.

To finish off on a positive note, I think the biggest implication that AJaX has for SOA is that AJaX (and Web 2.0 in general) represents a vast improvement in the usability of client applications, which opens up new, uncharted territory for data manipulation and visualization on the client. This new territory will likely increase the amount and variance of data that web application developers will want; thus, developers will increasingly be faced with situations in which the only logical choice will be to access data through a SOA and to buy into the SOA way of doing things. If SOA gets buy-in from the vocal and loveable AJaX crowd it could be a real shot in the arm for SOA, as well as for the implications that SOA has in store for the future. We are already seeing this trend with our AJaX based components on various platforms, which apparently are “ready for prime-time, white collar, Fortune 1000 usage” [3], as can be seen by our customers such as Time Warner, BMW, Bank of America, Goldman Sachs, and Siemens, to name a few.

The question is: will AJaX stagnate as purely an extension of current web development techniques, will it mature into its own “light” SOA for client-side development, or, even better, will browser vendors decide to build WS-* into the browsers of the future so that AJaX can play ball with the big boys?

[1] On Atlas/AJaX and SOA - Nick Malik
[2] SOA, AJAX and REST: The Software Industry Devolves into the Fashion Industry - Dare Obasanjo
[3] State of Ajax: Progress, Challenges, and Implications for SOAs - Dion Hinchcliffe
[4] Ajax: User Interface Pattern or SOA Dissemination Engine? - Dion Hinchcliffe
[5] AJAX, SOA, and FWCAR - Mohair Sam
[6] New Specification for SOA using AJAX = JAXASS - Titus
[7] Beating a Dead Horse: What’s a SOA Again? All About Service-Orientation… - Dion Hinchcliffe
[8] Revisiting the definitive SOA definition - SearchWebServices.com
[9] An Introduction to the OASIS Reference Model for Service Oriented Architecture (SOA) - Duane Nickull
