Archive for July, 2005

* JavaScript Benchmarking - Part II

Posted on July 29th, 2005 by Dave Johnson. Filed under AJAX, JavaScript, Web2.0.


We all know that there are about a dozen ways to do the same thing in JavaScript, and the trick is knowing which one is the most efficient under various conditions. Admittedly, the subject of today’s test is not necessarily a realistic one, but it nicely illustrates how JavaScript can do things in very different ways to obtain the same result.

Another thing most of us know is that changing object.className is quicker (and cleaner in terms of separation of style) than changing an individual object.style attribute [1]. If you don’t believe me, I will go into more detail about how best to set the style of an object in a later post. However, there are still other means of creating the same effect as setting a style or class, which brings us to the task at hand.
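
As a quick illustration, the two approaches look like this (the element ID and class name are made up for the example):

// Setting style properties one at a time from script - it works, but the
// styling information ends up living in the JavaScript.
var cell = document.getElementById('cell1'); // illustrative ID
cell.style.backgroundColor = '#ffcc00';
cell.style.fontWeight = 'bold';

// Setting the className instead - a single assignment, and the actual
// style rules stay in the stylesheet:
//   .highlighted { background-color: #ffcc00; font-weight: bold; }
cell.className = 'highlighted';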

Initially, I just wanted to compare two different methods of setting the background colour on a table while changing the number of rows in the table. The two methods were:
1) the classic method of setting the className attribute on the root tag of the table (boring)
2) re-purposing a DIV tag that has a background-color style specified, which we can move and resize so that it shows up under the area whose background colour we want to change (a sketch of this follows below)
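
For the curious, the second method boils down to something like this; the element IDs are my own illustration rather than the exact benchmark code, and I assume the DIV is absolutely positioned with a z-index below the table and shares the table’s offset parent:

// Re-use a single absolutely positioned DIV (with a background-color set)
// as the apparent background of the table.
function highlightTable(tableId) {
  var table = document.getElementById(tableId);
  var bg = document.getElementById('backgroundDiv'); // illustrative ID
  bg.style.left = table.offsetLeft + 'px';
  bg.style.top = table.offsetTop + 'px';
  bg.style.width = table.offsetWidth + 'px';
  bg.style.height = table.offsetHeight + 'px';
  bg.style.visibility = 'visible';
}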

Having done that, I also decided to see how things changed when I did this with a table structure created using TABLE tags versus using SPAN and DIV tags.

First the results for the latest Firefox on WinXP Pro are shown below.

[Figure: Firefox highlight benchmark]

One can see that Firefox is extremely fast at resizing and moving the background DIV to the position of the table (black and red lines) - far faster than setting the className (green and blue). This is true both for the table made from TABLE tags and for the one made from DIV tags, although setting the className on the table made of DIV tags does slightly outperform doing so on the TABLE version.

Now for the slightly stranger case of Internet Explorer 6 on the same machine.

[Figure: IE highlight benchmark]

Here we see that, again, the fastest method is using the background DIV with a table made from DIV tags (red). However, setting the className on the table of DIV tags is also very fast - in fact, it is faster than the same thing in Firefox. But the truly strange thing here is that when you use the background DIV and create the table using TABLE tags (black), it goes slower than molasses during a Canadian winter! After carefully commenting out each line of code, I discovered the culprit: calculating the offsetHeight and offsetWidth of a TABLE.
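
I should stress that the mitigation sketched below is hypothetical rather than what the benchmark did; the simplest fix I can think of is to cache the TABLE’s dimensions and only re-read them when the row count actually changes:

// Hypothetical mitigation: re-read the expensive offsetWidth/offsetHeight
// of the TABLE only when the number of rows has changed, rather than on
// every highlight operation.
var cachedSize = null;
function getTableSize(table, rowCount) {
  if (cachedSize == null || cachedSize.rows != rowCount) {
    cachedSize = {
      rows: rowCount,
      width: table.offsetWidth,   // the slow call in IE 6 for TABLE elements
      height: table.offsetHeight  // likewise
    };
  }
  return cachedSize;
}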

I also checked the latest versions of Netscape and Opera. Netscape gave more or less the exact same result as Firefox (go figure), while Opera was the exact opposite: it is quickest at setting the className rather than moving and resizing the background DIV.

If you just skipped to the end to see the result then the final word is that setting backgrounds on tables is slow; use DIV and SPAN tags instead. Furthermore, placing a re-usable DIV in the background is fastest on both browsers. Finally, stay away from accessing the offsetHeight and offsetWidth of TABLE tags in IE 6!

[1] Benchmark test: style vs className




* Beyond Model-View-Controller

Posted on July 18th, 2005 by Dave Johnson. Filed under AJAX, JavaScript, Web2.0, XML.


Bill Scott of Sabre / Rico LiveGrid fame (who is now on his way to Yahoo!) recently posted an excellent blog entry about Ajax and its relationship with the Model-View-Controller architecture pattern [1]. In particular, he focuses on how it applies to the Rico LiveGrid.

At first glance, using Ajax to implement an MVC architecture seems like a good idea - and don’t get me wrong, it is without a doubt an improvement over an MVC architecture in a “traditional”, pre-XMLHttpRequest application (though I am sure there are many MVC purists who would say Ajax is an abomination). The difference between Ajax and traditional web applications is that Ajax gives you the ability to choose what data to send and receive, as well as which parts of the user interface get updated. Anyone concerned about application latency should use Ajax to send small packets of data between the View and Model through the Control layer, which improves performance because no full page refresh is required.

So Ajax can, in many cases, cut down on the amount of data flowing between the View and Model. Having said that, one can envision situations where the MVC architecture pattern is not necessarily the best solution. One of Bill’s examples is sorting. To sort data in an Ajax grid control using MVC, some event causes a request to be sent to the server, where all the data is sorted; a small subset is then returned and presented in the user interface. This is very nice if you have a very large amount of data and/or if the data on the server changes often, but it can also introduce considerable latency. If you can afford to get all your data into the browser (obviously not the case for Sabre), either because it is small or changes infrequently (like a contact list, say), then it can be very advantageous from a latency perspective to do data manipulation, such as sorting, in the browser - a sketch follows below. Some of this kind of data can even be stored on the client machine in certain browsers [2]. And if you have an Ajax grid that deals with smaller data sets, you may want to pre-sort the data by each column to decrease the latency even further.
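
The kind of client-side sort I mean is only a few lines. In this sketch the row format, the column index and the renderTable function are all made up for illustration:

// Sort an in-memory array of row arrays on one column and redraw the grid,
// avoiding a round trip to the server entirely.
function sortByColumn(rows, column, descending) {
  rows.sort(function (a, b) {
    if (a[column] < b[column]) return descending ? 1 : -1;
    if (a[column] > b[column]) return descending ? -1 : 1;
    return 0;
  });
  renderTable(rows); // placeholder for whatever redraws the grid
}

// To pre-sort, one could build and keep a sorted copy of the rows for each
// column up front, making a later header click essentially instant.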

Given the power of today’s web browsers, there are various methods one can envision to improve the latency of Ajax operations, some of which deviate significantly from the MVC model. It may mean writing less clean code or departing from traditional architecture patterns, but it can result in a much better product.

[1] Model-View-Controller at Wikipedia
[2] MSDN Persisting User Data




* A is for Asynchronous

Posted on July 15th, 2005 by Dave Johnson. Filed under AJAX, JavaScript, Web2.0, XML.


There has been a flurry of activity over at Ajaxian [1] regarding the asynchronicity of Ajax and Nick Lothian’s [2] two ideas for dealing with it. Nick’s initial ideas were 1) locking the view and 2) sending view state data with all requests [3]. The first certainly applies in some special cases, but I think the second is closer to a general solution.

One of the comments on Ajaxian, from Matt, pretty much sums it up though; it goes something like “yes, well, async programming is nothing new … it is used in Swing apps [and many others] all the time”. That was also the first thought I had when I saw the headline in my RSS reader. So here are my thoughts.

Building on Nick’s second idea, I think that rather than sending ALL the view state data to the server, it makes sense to “store” the view state on the client (i.e. leave it alone) and create a unique identifier that is sent with each request and returned with the response. This keeps the request/response less complicated and less bloated. On the client it is then a simple task to determine which data belongs to which request and perform the appropriate action. Furthermore, you can keep track of the timing of the requests. In the case of Nick’s Ajax tree control, one may come across a situation where a response has not yet returned from the server after clicking on a tree node (it may take a long time because of many child nodes, say) and the user eagerly clicks on another node in the tree. If the second node-click request gets back before the first, the client has to decide which request takes precedence. The client can look up the request timestamp and see that there was a previous TreeNodeClick event which is still waiting for a response. As I see it there are three main paths to choose from:

1) Let the events go at their own pace (if the requests don’t change the same area of the view then who cares)
2) Cancel the second, quicker event (slow down tiger, let’s see what is in this first node you clicked)
3) Cancel the first slow event and move on (obviously if they clicked somewhere else they don’t care about the first)
4) Keep track of all response data and queue the response from the quick server request to occur after the slow request returns (aye that’s the rub)

Ok, that’s four. Given these four options, one can make up rules to decide which route to take. For example, given two events such as TreeNodeClick and TreeClose, the latter should of course take precedence and have the other event cancelled. In the end it boils down to the idea that, depending on the situation, asynchronous data requests should be able to cancel, block or ignore each other. A minimal sketch of the bookkeeping this implies follows.
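
In the sketch below, decide() and updateView() are placeholders for whatever precedence rules and rendering a real application would use, and I only show the Mozilla-style XMLHttpRequest creation:

var pending = {}; // request id -> { event: name, sent: timestamp }
var nextId = 0;

function send(eventName, url) {
  var id = ++nextId;
  pending[id] = { event: eventName, sent: new Date().getTime() };
  var xhr = new XMLHttpRequest(); // IE 6 would need new ActiveXObject('Microsoft.XMLHTTP')
  xhr.open('GET', url + '&requestId=' + id, true); // the server echoes the id back
  xhr.onreadystatechange = function () {
    if (xhr.readyState == 4) handleResponse(id, xhr);
  };
  xhr.send(null);
}

function handleResponse(id, xhr) {
  for (var otherId in pending) {
    // Is an earlier request still waiting for its response?
    if (parseInt(otherId, 10) < id) {
      // Apply the rules above: decide() returns false if this newer
      // response should be dropped (or queued) in favour of the older one.
      if (!decide(pending[otherId].event, pending[id].event)) {
        delete pending[id];
        return;
      }
    }
  }
  delete pending[id];
  updateView(xhr.responseText); // placeholder for the real view update
}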

What I see as the hard part about the A in Ajax is understanding how the data from various requests may change the view on the client, and what the dependencies between those requests are. This can certainly be onerous for the developer, but the end result is a responsive and intuitive web-based user interface.

[1] Ben/Dion and Dion’s/Ben’s AJaX Mission
[2] BadMagicNumber
[3] BadMagicNumber - AJAX: Best Practice for Asynchronous JavaScript




* MyWebOS

Posted on July 14th, 2005 by Dave Johnson. Filed under Web2.0.


MyWebOS [1] was pretty cool back in the good old dot-com days of yore - a true Ajaxian app before its time. They are still around and are now focusing on their web office suite, HyperOffice [2]. Sadly, it is pretty trimmed down, with little Ajax to speak of.

[1] Internet Archive
[2] HyperOffice




* JavaScript Benchmarking - Part I

Posted on July 10th, 2005 by Dave Johnson. Filed under AJAX, JavaScript, Web, XML, XSLT.


As the name suggests, this is Part I of a series of JavaScript benchmarking posts. The purpose of the series is to investigate the performance of various Ajax programming tasks. This first entry looks at how the XSLT processors of Internet Explorer and Firefox / Mozilla (on Windows 2000) compare, both against each other and against pure JavaScript code that arrives at the same end result.

So what I have done is load some XML and XSL for building a table structure in an HTML page. The transformation is timed, and we take an average and standard deviation for each browser. I used the Msxml2.DOMDocument.3.0 object in Internet Explorer and the XSLTProcessor in Firefox. The XSLT transformation speed is then also compared to a pure JavaScript implementation, which uses the fastest method one can use to insert HTML into a web page [1]: a string array stores all the rows of the table, after which the array join method is called to return a string that is inserted into the DOM using innerHTML, just as the XML/XSL approach does.
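
For reference, here is roughly what the three code paths look like; the function names are mine, and the XML and XSL documents are assumed to have been loaded elsewhere:

// Pure JavaScript: build every row into a string array, join once, and
// insert the whole table with a single innerHTML assignment.
function buildTableWithJavaScript(rows, container) {
  var html = ['<table>'];
  for (var i = 0; i < rows.length; i++) {
    html.push('<tr><td>' + rows[i] + '</td></tr>');
  }
  html.push('</table>');
  container.innerHTML = html.join('');
}

// Firefox / Mozilla: transform with the native XSLTProcessor.
function buildTableWithXsltFirefox(xmlDoc, xslDoc, container) {
  var processor = new XSLTProcessor();
  processor.importStylesheet(xslDoc);
  container.appendChild(processor.transformToFragment(xmlDoc, document));
}

// Internet Explorer: transform with MSXML; xmlDoc and xslDoc are
// Msxml2.DOMDocument.3.0 instances here.
function buildTableWithXsltIE(xmlDoc, xslDoc, container) {
  container.innerHTML = xmlDoc.transformNode(xslDoc);
}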

The results are somewhat surprising and can be seen below.

[Figure: XSLT vs. JavaScript time, Firefox and IE, 50 columns - note the y-axis should be in ms, not s]

One can see that the XSLT processor in Firefox / Mozilla leaves much to be desired: it is no match for the JavaScript method, nor for either the XSLT or the JavaScript method in Internet Explorer. On the other hand, the XSLT and JavaScript methods in Internet Explorer are more or less the same, with a slight edge going to the XSLT method.

It is curious just how much variance there is in the Firefox XSLT data. I am not sure what is causing it; all measurements were repeated 50 times to get the statistics, and there was nothing significantly different about the system on which the tests were run.

So for the best cross-browser performance, going with pure JavaScript is not a bad choice when presenting large amounts of data to the user. Further tests will look at the performance of XSLT and JavaScript for sorting data, at object- and class-level CSS manipulation, and at the recently released Google JavaScript XSLT implementation [2].

These types of JavaScript speed issues are very important for companies like us [3] that make high-performance Ajax controls and web-based information systems.

[1] Quirksmode
[2] Google AJAXSLT
[3] eBusiness Applications




* Live 8 Been and Gone

Posted on July 6th, 2005 by Dave Johnson. Filed under Politics.


With the excitement of Live 8 slowly fading from our memories, people are rapidly reverting to their same old selves - and rightly so; thinking about difficult problems like poverty between sets of very rich and famous people who pretend to give two shits can be tough. Everyone just wants to get back to the simple life of shopping at Wal-Mart, driving their cars and demanding prices so low that Northern countries have no choice but to impose unfair trade rules on the poor South. We must maintain the global apartheid, after all.

But there is still a glimmer of hope! The eight richest nations on earth, the G8 (the United States, Canada, Britain, France, Germany, Italy, Japan and Russia), have recently told the media that they are writing off $40bn in debt that 18 of the poorest countries in the world owe to them. Everyone is talking about how great this is - won’t they just have a nice time now. Well, maybe not.

Although these countries no longer owe some $40bn, they will also no longer benefit from a similar amount of aid. Of course this is good, since it will cut down on paperwork, and therefore jobs. It means that the countries which received the debt relief are hardly better off. And let us not forget that the majority of the problems in these countries were caused by Northern countries, which either:
a) colonised those countries to get at natural resources
b) uphold unfair trade rules such that it is cheaper for an African to buy wheat from the United States than it is to buy it from their own backyard
c) support corrupt leaders for various reasons (usually war or natural resources)
d) support them to use their aid or oil money to buy arms from Northern countries
e) spend 25 times more on defence than on aid
f) pave the way for multinationals to have access to those markets
g) etc etc etc

Now that the Live 8 hype is over and the super rich and famous like Chris Martin, Madonna et al. have derided us cynics as “stupid” while relentlessly jet-setting all over the world, all the “smart” groupies who attended Live 8 can get back to their lives, forgetting that there is even a place called Africa, while the G8 protestors can do the real work … well, they are somewhat misguided too, as Monbiot points out.
