
Musings on Web 2.0

Posted by javakiddy on December 7, 2006 at 3:06 PM PST

David Van Couvering wrote an interesting blog entry recently which caught my attention. He gave a 'heads up' to a discussion stemming from a recent Tim O'Reilly blog entry (which in turn referenced other blogs; go check the original for details) debating the merits of the term "Web 2.0". The debate bounced back and forth across a couple more blog entries (and some private emails, by the sounds of it) and ended with Tim seeming to agree that the term Web 2.0 was misleading, as it suggests an HTTP/HTML-centric approach.

David brings the rather long discussion into focus by asking whether the web (HTTP/(X)HTML/CSS/Ajax/etc.) is really an ideal platform to deliver the promise of Web 2.0, and ponders what is stopping other platforms (JWS, for example) from gaining traction in this space.

As you may recall from a blog entry some time back, I'm no great fan of the web browser as an application delivery platform. Proponents coo and whoop every time someone manages to bludgeon a rebellious HTML rendering engine into mimicking a user interface 'effect' we had on the desktop two decades ago! Sometimes it feels like UI software development suffered a massive stroke sometime back in the late Nineties, and is having to slowly relearn how to do basic things like menus, lists, text highlighting and cursor tracking.

I broadly agree with David's blog (although one comment about Derby makes me wonder if he subscribes to the belief that relational databases are appropriate on the desktop when one merely wants data storage). I'm not going to rehash his arguments here. Instead I want to throw out some random ideas as to what a desktop Web 2.0 platform would require.

About six or seven years ago I started to get seriously p*ssed off about having to put a web interface on everything I wrote. Ninety percent of the development time seemed to go into finding workarounds for user interface techniques which were standard in desktop UI APIs, and then making them work in the ever-shifting quagmire of browser bugs, incomplete W3C implementations and user settings (JavaScript on, JavaScript off, JavaScript on, JavaScript off...)

The situation was neatly summed up by one comment to Tim's blog from someone called Steve: "The browser exists because of Windows and people being frightened to install new software. The browser and all of its extensions provides one of the most hideous and limited development environments that barely, and only by dint of amazing amounts of effort, qualifies as cross-platform."

At first I thought the problem could be solved by complementing the usual HTML form elements with a new plugin element, which would allow the likes of Java applets and Flash to participate in web forms just like 'native' components via a simple software interface. There would be event callbacks to inform the applet when the form had been reset, to request a string representation of its state for when the submit button was clicked, and so on... Eventually I realised that this wasn't sufficient. What was required was a complete sacking of the web browser, and the building of a new platform which catered specifically for the needs of dynamic online applications.
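To make that a little more concrete, here's roughly the callback contract I had in mind for such a plugin element. This is purely a hypothetical sketch; every name in it is invented for illustration and belongs to no real browser or applet API:

```java
// Hypothetical sketch only; none of these names exist in any real API.
// The idea: a plugin (applet, Flash movie, whatever) implements this
// interface so the browser can treat it like any other form field.
public interface FormParticipant {

    // Called when the enclosing form is reset, so the plugin can
    // restore its default state.
    void formReset();

    // Called when the submit button is clicked; the returned string is
    // sent as this plugin's field value, just as a text input's would be.
    String formValue();

    // The field name this plugin contributes to the form submission.
    String fieldName();
}
```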

I called the project SNAP (Secure Networked Application Platform) and began experimenting with just how much of the web browser 'motif' I could move over into this new type of browser. The basic idea was that the new browser would allow thin client applications to flow across the internet with the same ease as traditional web browsers had allowed hypertext documents to. The application arrived via a URL as an XML file which detailed the user interface, including references to UI components (buttons/scrollbars/textareas/media players/whatever) drawn from libraries which could be downloaded on demand. Scripts inside the XML would bind these components together by manipulating a DOM which exposed their properties and functionality, to form a coherent user interface. Seasoned coders would write libraries of heavyweight components, and less experienced coders could tie them together with scripts.
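For a flavour of the client side, here's roughly the contract I imagined a downloadable component fulfilling so that the page scripts could poke at it through the DOM. Again, every name here is invented; this is a sketch of the idea, not code Snapdragon actually contained:

```java
// Invented names throughout: a sketch of the contract a downloadable
// UI component might fulfil so the DOM can expose its properties and
// events to the lightweight scripts that glue the interface together.
public interface SnapComponent {

    String getProperty(String name);              // e.g. "text", "enabled"

    void setProperty(String name, String value);

    void addListener(String event, Listener l);   // e.g. "click", "change"

    // Callback fired when the named event occurs on the component.
    interface Listener {
        void handle(SnapComponent source, String event);
    }
}
```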

There was also a server-side XML file which detailed the objects which needed building, or locating, remotely whenever the application ran. You no longer saved a file; you 'bookmarked' your work against the primary server. Taking that bookmark to a totally different computer would then reload the application and your work, in the same state as when 'saved'. SNAP used web-service-like calls to transparently import a server-side object model onto the client, allowing the thin client scripts to kick off processes which could survive after the client closed down. This meant you could begin a long-winded process at work, then go home and check in to see how far it had progressed, simply by taking the bookmark with you.
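In spirit, the server half boiled down to something like the sketch below. The names are made up and the details hand-waved, but it shows the shape of it: state lives against the server as a bookmark, and jobs carry on after the client has gone home:

```java
// Made-up names: a sketch of the server-side contract, not real SNAP
// code. Work is 'bookmarked' against the server rather than saved to
// disk, and long-running jobs outlive the client that started them.
public interface SnapSession {

    // Capture the current state of the application and return a token
    // (in practice a URL) that can be taken to any other machine.
    String bookmark();

    // Rebuild the application exactly as it was when bookmarked.
    void resume(String bookmarkToken);

    // Kick off a server-side process that keeps running after the
    // client disconnects; check on it later from anywhere.
    String startJob(String description);

    boolean isJobFinished(String jobId);
}
```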

If that all sounds like too big a mouthful for one person to bite off, well, quite clearly it was! I managed to get a basic prototype up and running which trialled some of the client-side stuff, but it was blindingly obvious from the start that my ambition was well in excess of what I could realistically devote to such a project in terms of time and resources. And, remember, back then I was the only person in the world who had realised that the traditional HTTP/HTML model was seriously broken (or did it just seem that way?)

Getting back to the point: as I understand it, David's blog entry suggests that the demands of Web 2.0 require a new type of client-side platform. A thin client platform which allows the server to get on with processing data, freeing it from the requirement of servicing the client-side user interface. A yin-and-yang type affair, where the new browser (let's call it Desktop 2.0) allows 'location-less' (everywhere yet nowhere) applications to dovetail into 'location-less', community-built sources of data.

Since my jottings on the back of various envelopes in aid of my doomed (but highly educational) SNAP project, things have moved on. Many technologies have matured which would have slotted right into Snapdragon (my experimental browser) — indeed today Snapdragon would be more a case of assembling the off-the-shelf pieces, rather than building something from the ground up. But if the software landscape has shifted, how about the original goals? Could a completed Snapdragon have served as a suitable Desktop 2.0? What would be required by such a piece of software in today's Web 2.0 climate?

If I were to rewrite Snapdragon today, with one eye on the promise of Web 2.0, here's a wish list of what I might try to include:

  • Not just a Web 2.0 thing : Any Desktop 2.0 client must also be a replacement for Ajax. Strictly speaking, GMail isn't a Web 2.0 application, but it should still benefit from a Desktop 2.0 remake.

  • Low barrier to entry : Lightweight scripting coupled with heavyweight UI components, giving ease of development for the majority while not tying the hands of power users.

  • A consistent model for accessing data held locally and remotely : Ideally we should spend our time thinking about how to use the data, not worrying about where it physically lives.

  • Security on a per-datum level, not a per-connection level : Instead of denoting network connections as secure or not, we should be able to mark bits of data as sensitive, and the platform should promise to do any necessary encryption whenever said data travels via an 'accessible' channel.

  • An expiry date for data : We are used to information on the web carrying an expiry date; it comes with the HTTP territory. Such concepts are rare in traditional desktop software, but if data is to be cached as it travels around any application we develop in Desktop/Web 2.0, it needs to come with some kind of use-by date (the first sketch after this list combines this idea with the per-datum security one above).

  • Standardised mechanisms for common collaborative working patterns on the client and server : With applications in which several people can work on the same data, certain bits of functionality appear over and over, such as the locking and/or checking in and out of data. If the client and server had some kind of recognised protocol for doing this, it would make life a lot easier for developers, as well as reduce the opportunity for bugs (the second sketch after this list gives a flavour of what I mean).

  • Some kind of standard vocabulary and UI motifs for representing common collaborative actions : Just as the GUI brought a simple, predictable and intuitive means of interacting with software, we need a consistent and unambiguous means of representing common collaborative working techniques (locking, merging, updating, etc.)

  • A standard, simple vocabulary and UI motif for representing security : With applications living 'across' the internet, security is a priority. The problem with concepts like certificates and signers is that one has to understand the mechanics in order to truly understand any given risk. We need to devise a means by which the average user can make informed decisions about what to trust, perhaps by employing recognisable concepts from 'real life' as a means of explaining some of the more abstract concepts in computer security (just as the desktop uses familiar concepts like 'folders' and 'trashcans'.)
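To put a little flesh on the per-datum security and expiry points above, here's the sort of wrapper I have in mind. The names are hypothetical, and in a real platform this metadata would be enforced far lower down than application code, but it shows the idea of data carrying its own rules around with it:

```java
import java.util.Date;

// Hypothetical sketch: each piece of data carries its own security and
// freshness metadata, so the platform (not the application) can decide
// when to encrypt it in transit and when a cached copy has gone stale.
public final class Datum<T> {

    private final T value;
    private final boolean sensitive;  // encrypt whenever it crosses an 'accessible' channel
    private final Date expires;       // use-by date for any cached copy

    public Datum(T value, boolean sensitive, Date expires) {
        this.value = value;
        this.sensitive = sensitive;
        this.expires = expires;
    }

    public boolean isSensitive() { return sensitive; }

    public boolean isStale()     { return new Date().after(expires); }

    public T get()               { return value; }
}
```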
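And for the check-out/check-in point, a recognised client/server protocol might present something like the following to developers. Once again this is a sketch with invented names, not any existing API:

```java
import java.util.Date;

// Invented names: a sketch of a standard client/server contract for the
// check-out/check-in pattern, so each application needn't reinvent
// (and re-debug) its own locking scheme.
public interface CollaborationService {

    // Obtain an exclusive lock on a piece of shared data.
    Lease checkOut(String documentId, String userId);

    // Hand the lock back along with the changed content.
    void checkIn(Lease lease, byte[] newContent);

    // Abandon the lock without making changes.
    void release(Lease lease);

    // A lease times out, so an absent user can't block everyone else.
    interface Lease {
        String getDocumentId();
        Date getExpiry();
    }
}
```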

A lot has been said about Web 2.0 since the term was first coined. As I understand it, Tim (and presumably David) seems to think it has little to do with flashy web site design (as many think it does) and a lot to do with "user generated content" and "harnessing collective intelligence".

The above is not an exhaustive list, naturally, but it serves to put some flesh on the bones. If Web 2.0 is destined to move away from the confines of the humble web browser, onto a desktop client truly designed to accommodate it, then I suspect something like the platform I've begun to outline above would be necessary.

Or perhaps I'm getting confused with Web 3.0 ??? :)
