Archive for January, 2006

* The Myth(?) of the Cheap Startup

Posted on January 30th, 2006 by Dave Johnson. Filed under Business, Web.

JotSpot is a wiki and Excite is a search engine, right? Apples are apples and oranges are, well, oranges.

Ever since Paul Kedrosky’s presentation at VEF in November (at which Andre also famously presented), I have been contemplating the four factors that cause the 30x difference in the amount of money required to start the two aforementioned services over 10 years apart.

The post says the top four factors are:
1) Hardware is 100x cheaper
2) Software is free
3) Access to global labour markets
4) SEM

However, I wonder how much more difficult it is to actually build a search engine than a wiki. Even today, could you do it on cheap hardware and free software? I am also sure that building a search engine from the ground up during the first nascent Internet boom gave Joe Kraus some idea (or maybe he already knew) about what it takes to build a company.

Is the market flooded with start-ups today due to these factors? Yup, open source software and cheap hardware are helping them a bit, but a small three-person startup (which is of course so fashionable these days) that focuses on only a few high-quality features (another popular thing to do) doesn't need that much in terms of software and hardware anyhow. Speaking of small three-person teams, they generally work in the same room, not halfway around the world taking advantage of the low cost of living in Biharipur, India. Sure, SEM is important, but in this day and age getting air time on influential blogs like TechCrunch is certainly the way to go - this is possibly the factor most influential in creating an atmosphere where $100k can really get a small project out of the basement and into the bookmark(let) list of everyone who uses digg.

In my mind the real challenge always has been and always will be finding the right balance between complexity and value. Way back when Excite was founded, something like JotSpot might not have offered enough value to be worth raising $3 million for (even today), whereas the value and market potential of a search engine clearly were. Can someone go out today and build the next Google (or even Excite) on $100k, all thanks to offshore programmers and cheap hardware?

These days everyone and their pet goldfish has a startup doing something about RSS or blogging or tagging. Sadly few are providing much value and the three founders (of course) will have no choice but to go back to working as “consultants” once the $100k dries up and there is no sign of profitability in the long tail.


* Advisory Board

Posted on January 30th, 2006 by Dave Johnson. Filed under Business, Web.

I just thought that I would let everyone know a little more about EBA (some also like to call us eBusiness Applications ;) ) since we have not had time to get all this info onto our “corporate” web site.

We have recently established an advisory board of local tech gurus and I will finally get into the fray in Feb when I get back from London. The people who are on the board at this time are:

Duane Nickull – Senior Standards Strategist at Adobe Systems
Steven Fitzgerald – President of Habañeros Consulting
Chris Gora – IP Lawyer at Farris
Kris Sutherland – Director and Executive Vice President of Chalk Media

So far they have been a very valuable addition and I am sure they will continue to be well into the coming years.

Exciting times!


* XML with Padding

Posted on January 27th, 2006 by Dave Johnson. Filed under AJAX, JavaScript, Web2.0, XML, XSLT.

So Yahoo! supports the very nice JSONP data formatting for returning JSON (with a callback) to the browser - this of course enables cross domain browser mash-ups with no server proxy.

My question to Yahoo! is then: why not support XMLP? I want to be able to get my search results in XML so that I can apply some XSLT and insert the resulting XHTML into my AJAX application. I am hoping that the “callback” parameter on their REST interface will soon be available for XML. It would be essentially the same as the one for JSON and would call the callback after the XML data is loaded into an XML document in a cross-browser fashion. While that last point would be the stickiest, it is, as everyone knows, dead simple to make cross-browser XML documents :)

Please Yahoo! give me my mash-up’able XML!

If you want to make it really good then feel free to either return terse element names (like “e” rather than “searchResult” or something like that) or add some meta-data to describe the XML (some call it a schema but I am not sure JSON people will be familiar with it ;) ) so that people will not complain about how “bloated” the XML is. For example:


    <searchResult encoding="e" />
    <e>Search result 1</e>
    <e>Search result 2</e>
    <e>Search result 3</e>
    <e>Search result 4</e>
    <e>Search result 5</e>
    <e>Search result 6</e>
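Server-side, XML with padding could be as simple as wrapping the payload in a call to the requester's callback. Here is a minimal sketch - the function and parameter names are made up, and JSON.stringify is used purely as a convenient way to escape the XML as a JavaScript string literal:

```javascript
// Hypothetical server-side helper: wrap an XML payload in a script
// response so that the injected <script> tag calls the requester's
// callback with the raw XML string.
function wrapXmlp(callbackName, xml) {
    // JSON.stringify escapes the XML as a JavaScript string literal
    return callbackName + "(" + JSON.stringify(xml) + ");";
}

// wrapXmlp("handleResults", "<e>Search result 1</e>")
// → 'handleResults("<e>Search result 1</e>");'
```

The client's callback would then parse that string into a cross-browser XML document before applying its XSLT.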

Come on Yahoo! help me help you!


* Injected JavaScript Object Notation (JSONP)

Posted on January 25th, 2006 by Dave Johnson. Filed under AJAX, JSON, Web2.0, XML.

I had a few comments on one of my previous posts from Brad Neuberg and Will (no link for him). Brad suggested using script injection rather than XHR + eval() to instantiate JavaScript objects as a way of getting around the poor performance that Will was having with his application (he was creating thousands of objects using eval(jsonString) and it was apparently grinding to a halt).

As a quick test I created a JavaScript file to inject into a web page using:

var s = document.getElementById("s");

The script file contained an object declaration using the terse JSON type of format like:

var myObj = {"glossary": [
    {"title": "example glossary","GlossDiv":
        {"title": "S","GlossList": [
            {"ID": "SGML","SortAs": "SGML","GlossTerm": "Standard Generalized Markup Language","Acronym": "SGML","Abbrev": "ISO 8879:1986","GlossDef": "A meta-markup language, used to create markup languages such as DocBook.","GlossSeeAlso": ["GML", "XML", "markup"]}
        ]}
    }
]};

(this is the well-known glossary sample from json.org)

I compared this with the bog standard:
var myObj = eval("(" + jsonString + ")"); // parentheses make the braces parse as an object literal

I timed these two methods running on my localhost to minimize network effects on the injected script as much as possible. The eval() method had the same poor results as usual, but the injected script was uber fast on IE 6. I have not checked Mozilla yet.

In the real world, one might want a resource such as http://server/customer/list?start=500&size=1000&var=myObj, which would return records 500 to 1500 of your customer list. Rather than pure JSON, it would return them in a format that can be run as injected script, the result being the instantiation of a variable called myObj (as specified in the querystring) referring to an array of the requested customers. Pretty simple and fast - the only drawback being that you are no longer returning purely data.
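To make the idea concrete, here is a sketch of what such an endpoint could emit. The function name and endpoint shape are illustrative assumptions, not a real API:

```javascript
// Hypothetical server-side helper for /customer/list?...&var=myObj:
// instead of pure JSON, emit runnable JavaScript that instantiates
// the requested variable when injected as a <script>.
function toInjectableScript(varName, records) {
    return "var " + varName + " = " + JSON.stringify(records) + ";";
}

// toInjectableScript("myObj", [{ id: 500 }, { id: 501 }])
// → 'var myObj = [{"id":500},{"id":501}];'
```

Injecting the returned text via a script tag then defines `myObj` in the global scope, ready to use with no eval() call at all.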


* More on Wink and Tag Search

Posted on January 24th, 2006 by Dave Johnson. Filed under Search, Semantic Web, Tagging, Web2.0.

I read an interesting post by Jeff Clavier the other day and have been wondering about how an implicit search context, such as that used by Wink, could work for or against you. Btw I also still get a JavaScript error on the Wink homepage when I try to click on the search box :( .

I have posted on various issues regarding tag based search before and there was good discussion on a recent(ish) post by Om Malik entitled People Power vs Google. The new problem I envision is that when you are searching for something that is syntactically the same but semantically different from concepts which you or other people have tagged, then the results will be skewed in the wrong direction. It is a very good idea on Wink’s part to put Google search results on the same page.

Of course this problem can be overcome with a little work by the searcher, who can craft a more exact search string; however, one could then argue that if you have to craft a more exact search string to find things outside of your tagosphere, then why bother, when it is likely that Google searching (ie not using tags) in your area of interest will generally return the results you want with or without tags. The same is generally true of tag-based bookmarking, in that it is often faster to go and search on Google than to dig what you are looking for out of your bookmarks.

It is interesting to think about the problem in terms of information theory. When you encode the western alphabet for transmission, as in Morse code, you want to devote as few bits as possible to letters like “e” and “s” because they occur so frequently. Tag supported search is similar, in that it reduces the effort needed to find frequently accessed information (like reducing the number of bits that represent the letter “e”) by leveraging the work that people have put into tagging pages. This can also backfire, of course, when you are looking for AJAX the football club rather than AJAX the wicked-awesome programming technique and most of the pages you tag with AJAX are those relating to the technology. The user essentially has to climb out of this “context pit” created by their tagging habits by specifying “AJAX amsterdam” or “AJAX football”. Really it all depends on your search habits.
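The coding analogy can be made concrete: in an ideal code, a symbol with probability p deserves roughly -log2(p) bits, so frequent symbols earn short codes. A quick sketch (the probabilities in the comments are rough figures for English text):

```javascript
// Ideal (Shannon) code length in whole bits for a symbol occurring
// with probability p: the more frequent, the shorter the code.
function idealCodeLength(p) {
    return Math.ceil(-Math.log2(p));
}

// "e" makes up roughly 12.7% of English text → about 3 bits
// a rare symbol at 0.1% → about 10 bits
```

Morse code approximates exactly this: “e”, the most common letter, is a single dot.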

I am not sure we can prevent this problem when searching for obscure or syntactically different topics. While this might be a slightly larger problem with tag based searching it can also be a problem with Google - the main difference being that Google bases its results on actual HTML links between pages, which, in my opinion, should generally result in a more robust and less biased result set. Will this problem become even worse once we start using things like the Semantic Web?


* What Makes a Service Last?

Posted on January 20th, 2006 by Dave Johnson. Filed under Semantic Web, Web2.0, XML.

I have been intently following Dion, as you do, over at the good old SOA Blog. One recent post is, as usual, more of the same commentary about Web 2.0 and SOA.

In his latest post Dion suggests that:

“Writing software from scratch will continue going away. It’s just too easy to wire things together now. Witness the growth of truly amazing mash-ups of which things like Retrievr and Meebo are only two tiny examples.”

This is a bit too far off the Web 2.0 global SOA deep end for me. Retrievr is admittedly an interesting mash-up, but is it really “truly amazing”? Is it something you need to use every day - something to write home to Mom about? I suppose it could be considered amazing relative to the available mash-ups, but in general mash-up quality and usefulness is relatively low. From what I can tell, the main reasons to provide APIs for your software are to:
a) get more users and increase your valuation when selling your Web 2.0 company to Yahoo!
b) hope that Google likes your mash-up and hires you
c) gain the support of the increasingly trendy niche of hybrid “programmer / blogger / never the cool kid in school” types to help you achieve goals a) or b)
d) attract attention to attain the status of trendy hybrid “programmer / blogger / never the cool kid in school”
(please leave any other ideas in the comments below)

Flickr in itself is only marginally amazing, and it was written from scratch - shock horror!

If one even considers what a mash-up really is, one finds that we have always developed software by “wiring things together”, have we not? I can imagine that with every level of programming language abstraction there is some journalist somewhere who heralds it as evidence of a new golden age of programming productivity. The only difference here is that programming languages - unlike mash-ups - can actually be useful!

The really amazing software that I find myself using is the software that actually *enables* the mash-ups; for example, Google and eBay have great technology and are products/services that cannot simply be created by mashing up a few JSON based JavaScript streams in a browser.

In his latest post, Dion even says:

“Maybe software developers should just go back to sprouting acronyms and delivering software that doesn’t do what people want.”

To me, he is trying to say that Web 2.0 lets people build good, usable software - this is sort of true and I am a big believer in AJaX of course. However, I would like to know how many social networking, tagging, blogging, sharing //insert buzz word here// Web 2.0 applications we need!

The actual point I had in mind when I gave this post its title is that I just don't understand why creating REST based services is considered so open, easy, or robust. At least with Web Services and WSDL one can automatically build a C# or Java proxy for a service, and even have JavaScript emitted for use on the client; can you do the same for a REST API so easily? In fact I find it astounding that an API such as Flickr's, which is actually quite robust, does not even have a standard WSDL based description of its bindings (admittedly some aspects of the API are not complicated enough to warrant SOAP based services, but at least a binding description would be nice). My point is that WSDL descriptions (or any kind of machine readable description, for that matter) of mash-up enabling APIs are few and far between, despite the fact that they are actually quite useful for generating proxies and the like.

Also, how will these supposedly simple services work with the Semantic Web? I am not sure the Semantic Web will be that easy in itself, so does that mean we should forego it and just settle for Web 2.0, or maybe 1.5? Well, yeah, maybe we should :S I guess I could be alone in thinking that the Semantic Web is what we should really be talking about rather than mashing up Google with Craigslist (I know, Google + Craigslist is sooooooo 2005). The whole idea of an API for a service that one has to actually physically read makes me shudder - haven't people had enough of mapping inputs and outputs to services (whether they are REST or otherwise)? Maybe I should quit complaining and define a REST service description language (RSDL) - a simple version of WSDL …
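A machine-readable REST description would not have to be heavyweight, either. Here is a purely hypothetical sketch of what one “RSDL” entry might hold - the endpoint, operation, and parameter names below are all invented for illustration - just enough metadata for a tool to generate a proxy:

```javascript
// Entirely hypothetical "RSDL" entry: the minimum a generator would
// need to emit a typed C#/Java/JavaScript proxy for one operation.
var searchOperation = {
    endpoint: "http://example.com/services/rest/",
    method: "GET",
    operation: "photos.search",
    parameters: { api_key: "string", tags: "string", per_page: "number" },
    returns: "XML"
};
```

A proxy generator could walk `parameters` and emit a function whose arguments are checked against the declared types before the HTTP request is ever built - exactly what WSDL tooling does for SOAP today.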

I suspect this drive to simplicity is going to lead us down a path we have been on before. As you make things simpler you also, generally, make them less valuable. I know that many take the KISS principle too literally sometimes and apply it to the nth degree. Sure, Google is pretty damn complex, but they also have billions of dollars in revenue - complex and valuable. On the other hand, look at Retrievr - simple and worthless. Choose your poison.


* XML/XSLT in Mozilla

Posted on January 17th, 2006 by Dave Johnson. Filed under AJAX, XML, XSLT.

I had just clicked the “save and preview” button and lost my entire post … anyhow I will give it another shot but it will surely not be anywhere as lucid given my rising urge to kill :)

Given that we have been developing AJaX solutions for some time now based on Internet Explorer, it is becoming a wee bit annoying that we have to cater so much to the Firefox/Mozilla crowd simply because they are the most vocal and influential! Luckily most of our customers still use Internet Explorer. Nonetheless we are doing our best and hope to have a cross-browser architecture for our AJaX components very soon. In so doing, I have been having a fun time figuring out XPath and XSLT in Mozilla so that it emulates Internet Explorer (I will likely just end up using Sarissa in the end though). Having gone through most of this process, I finally understand why the majority of Mozilla developers hate XML/XSLT and love JSON! It helps that MSDN has such great documentation, I guess :S
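The IE-versus-Mozilla split boils down to two different XSLT entry points. A rough sketch of the branch (the API names are the real browser ones; the wiring around them is simplified, not a drop-in library):

```javascript
// Apply an XSLT stylesheet to an XML document in either engine.
// Mozilla exposes a global XSLTProcessor; in IE/MSXML every DOM
// node instead has a transformNode() method.
function transform(xmlDoc, xslDoc) {
    if (typeof XSLTProcessor !== "undefined") {
        // Mozilla/Firefox path: returns a document fragment
        var proc = new XSLTProcessor();
        proc.importStylesheet(xslDoc);
        return proc.transformToFragment(xmlDoc, document);
    }
    // IE/MSXML path: returns the transformation result as a string
    return xmlDoc.transformNode(xslDoc);
}
```

Note the mismatch even in the return types (fragment vs string) - a taste of why a wrapper library like Sarissa is so tempting.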

Most of this work has been in an effort to create a small library that I call J4X - JSON for XML - which dynamically creates a JavaScript object representing the XML behind it. This liberates developers from having to use XML interfaces to access their objects and instead makes it just like JSON. So you get the best of both worlds - easy programmatic access and XML based message formatting! In that respect it is more or less a stop-gap technology until E4X becomes more widely supported.
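The core idea can be sketched in a few lines: walk the XML DOM and build a plain object so callers can write `obj.title` instead of calling getElementsByTagName(). This is a deliberately naive illustration of the concept, not J4X itself (no attributes, no arrays for repeated elements):

```javascript
// Convert an XML DOM element into a plain JavaScript object.
// Child elements become properties; leaf elements become strings.
function xmlToJson(node) {
    var ELEMENT = 1, TEXT = 3;
    var obj = {};
    var hasChildElements = false;
    for (var i = 0; i < node.childNodes.length; i++) {
        var child = node.childNodes[i];
        if (child.nodeType === ELEMENT) {
            hasChildElements = true;
            obj[child.nodeName] = xmlToJson(child);
        }
    }
    if (!hasChildElements) {
        // Leaf element: return its concatenated text content
        var text = "";
        for (var j = 0; j < node.childNodes.length; j++) {
            if (node.childNodes[j].nodeType === TEXT) {
                text += node.childNodes[j].nodeValue;
            }
        }
        return text;
    }
    return obj;
}
```

So `<book><title>Ajax</title></book>` comes back as an object where `obj.title` is the string "Ajax" - JSON-style access over an XML wire format.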


* MooseCamp

Posted on January 14th, 2006 by Dave Johnson. Filed under Business, Web.

First of all, I hope that everyone had a great holiday season! I know that I had a nice quiet time here in London.

Anyhow, apparently Andre was kind enough to sign me up to talk about geeky AJAX stuff at MooseCamp in Vancouver on Feb 10-11. So if you want to hear some good AJAX info only a few hours after I arrive back in Vancity from a 9.5 hour flight then by all means ;)

I will also be going to the Future of Web Apps conference in London on Feb 8 if anyone is in the neighbourhood. I am really looking forward to meeting Eric Costello from Flickr :)

And no one has responded to my question about declarative data binding … :(