“Google kills ‘http’ URLs”!
An announcement was made last month on the ZDNet blog: “Google kills ‘http’ URLs; about time to follow suit?”. The post described how “Google’s Chrome browser will no longer include http:// as part of the URL field”. The post went on to add that “this has indeed ruffled some veteran’s feathers” as “FTP, HTTPS and other protocols which are non-HTTP are still used”. However Zack Whittaker, the author of the post, felt that “I don’t think it’s that much of a deal, frankly. When have you ever heard on the television, radio, or in print media the use of ‘http://’?”
He’s correct – if you listen to TV or radio you don’t hear an announcer inviting the audience to visit “aitch-tee-tee-pee-colon-slash-slash”. The scheme name in URIs has become invisible – an example of a comment I made in a recent IWR interview in which, invited to describe how much of a techno-geek I was using IWR’s ‘digitometer’, I said: “My iPod Touch, mobile phone and PC are now my pen and paper – not technologies but essential tools I use every day”.
The ‘Disappearance’ of HTTP
But what does the disappearance of a technology tell us? In the case of the growing disappearance of the HTTP scheme from URIs, from the perspective of the general public, I think it tells us that the standard is so ubiquitous that it no longer needs to be referred to. The flip side is that when something ubiquitous starts to be challenged by something new, we have to start referring to the old thing in new ways – remember, for example, when watches were just watches, and we didn’t need to differentiate between analogue and digital watches?
The ZDNet blog post, then, provides us with a reminder of the success of the HTTP protocol – it has become so successful that we don’t think about it any more.
But how did HTTP achieve such a dominant role? I have been around the Web environment long enough to have seen the evolution of HTTP from HTTP 0.9 through to HTTP 1.0 and then HTTP 1.1 – and I’ve even read all three specifications (although many years ago, so please don’t test me)!
If I recall correctly, HTTP 0.9 was the first published version of the Hypertext Transfer Protocol, which I used when I first encountered the Web (or W3, as it was often referred to in the early 90s). It had the merit of being simple – the specification fits on a single page, as I recently rediscovered.
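Just how simple HTTP 0.9 was can be sketched in a few lines. The contrast below is illustrative rather than a full implementation: an HTTP/0.9 request was a single line with no version token and no headers, whereas HTTP/1.0 added both (the hostname and path here are made up for the example).

```python
# Illustrative sketch: the entire HTTP/0.9 request was one line --
# "GET" plus a path. No version token, no headers, no methods other
# than GET; the server replied with the raw document and no headers.

def http09_request(path: str) -> bytes:
    """Build the one-line request that HTTP/0.9 allowed."""
    return f"GET {path}\r\n".encode("ascii")

def http10_request(path: str, host: str) -> bytes:
    """Build a minimal HTTP/1.0 request, which adds a version token
    and headers, each on its own line, ending with a blank line."""
    return (f"GET {path} HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            f"\r\n").encode("ascii")

print(http09_request("/index.html"))          # b'GET /index.html\r\n'
print(http10_request("/index.html", "example.org"))
```

The single-line form is essentially the whole protocol – which is why the original specification could fit on one page.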
HTTP/1.0 introduced MIME types so that documents retrieved over the Web could be processed by helper applications based on the MIME type rather than the file name suffix – much of the additional length of the specification is due to the formal documentation of features provided in HTTP 0.9, I think.
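The shift HTTP/1.0 made possible can be sketched as follows: the client dispatches on the Content-Type header the server sends, not on the file name suffix. The handler names in this sketch are invented for illustration and not any real browser's API.

```python
# Sketch of MIME-type dispatch, assuming a made-up set of handlers.
# Before HTTP/1.0, a client had to guess from a suffix like ".gif";
# with a Content-Type header the server states the type explicitly.

def handler_for(content_type: str) -> str:
    # Strip any parameters, e.g. "text/html; charset=utf-8"
    mime = content_type.split(";")[0].strip().lower()
    handlers = {
        "text/html": "render in browser",
        "image/gif": "image viewer",
        "application/postscript": "PostScript viewer",
    }
    return handlers.get(mime, "save to disk / external helper")

print(handler_for("text/html; charset=utf-8"))  # render in browser
print(handler_for("image/gif"))                 # image viewer
```

A suffix like “.cgi” on a URL tells the client nothing about the bytes coming back; the MIME type does.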
Then HTTP/1.1 was released, which, I remember, provided much stronger support for caching (the UK was the first country to support a national caching service across a large community – UK HE – and the protocol support for caching in browsers and servers introduced in HTTP 1.1 was needed to allow old versions of resources held in caches to be refreshed). A paper on “Key Differences between HTTP/1.0 and HTTP/1.1” provides a more detailed summary of the enhancements provided in HTTP/1.1.
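The revalidation mechanism at the heart of this can be sketched briefly: a cache sends a conditional GET with an If-Modified-Since header, and the origin server answers “304 Not Modified” if the cached copy is still current (the conditional GET appeared in HTTP/1.0 and was considerably refined in HTTP/1.1 with entity tags and Cache-Control). The dates below are invented for illustration.

```python
# Sketch of the server side of a conditional GET. If the resource
# has not changed since the cache's copy, the server returns 304
# and need not resend the body; otherwise it returns 200.
from email.utils import parsedate_to_datetime

def respond(if_modified_since: str, last_modified: str) -> int:
    """Return the status code for a conditional GET, given the
    cache's If-Modified-Since value and the resource's current
    Last-Modified value (both in HTTP date format)."""
    cached = parsedate_to_datetime(if_modified_since)
    current = parsedate_to_datetime(last_modified)
    return 304 if current <= cached else 200

print(respond("Sat, 29 Oct 1994 19:43:31 GMT",
              "Sat, 29 Oct 1994 19:43:31 GMT"))  # 304
print(respond("Sat, 29 Oct 1994 19:43:31 GMT",
              "Mon, 01 May 1995 12:00:00 GMT"))  # 200
```

This is what allows a national cache to hold copies of popular resources while still guaranteeing that stale copies get refreshed.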
And after that – nothing. A successful standard goes through a small number of refinements until the bugs, flaws and deficiencies are ironed out and is then stable for a significant period.
The Flaws in HTTP
But is this really the case? HTTP may be ubiquitous, but it has flaws which were initially pointed out by Simon Spero way back in 1995 (I should mention that I met Simon last month at the WWW 2010 conference after discussing the history of HTTP in the coffee queue!).
Building on this work, in November 1998 an IETF Internet-Draft on “HTTP-NG Overview: Problem Statement, Requirements, and Solution Outline” was written, which pointed out that “HTTP/1.1 is becoming strained modularity wise as well as performance wise”. The document went on to state that:
Modularity is an important kind of simplicity, and HTTP/1.x isn’t very modular. If we look carefully at HTTP/1.x, we can see it addresses three layers of concerns, but in a way that does not cleanly separate those layers: message transport, general-purpose remote method invocation, and a particular set of methods historically focused on document processing (broadly construed to include things like forms processing and searching).
The solution to these problems was HTTP-NG, which would “produce a simpler but more capable and more flexible system than the one currently provided by HTTP”. And who could argue against the value of having a simpler yet more flexible standard that is used throughout the Web?
We then saw an HTTP-NG Working Group proposed within the W3C, which produced a number of documents – but nothing after 1999.
We now know that, despite the flaws which were well-documented over 10 years ago, there has been insufficient momentum to deploy a better version of HTTP/1.1. And there has also been a failure to deploy alternative transfer protocols to HTTP – I can recall, in the mid 1990s, former colleagues at Newcastle University who were involved in research on reliable distributed object-oriented systems suggesting that IIOP (Internet Inter-ORB Protocol) could well replace HTTP.
What can we conclude from this history lesson? I would suggest that HTTP hasn’t succeeded because of its simplicity and elegance – rather it has succeeded despite its flaws and limitations. It is ‘good enough’ – despite the objections from researchers who can point out better ways of doing things. This relates to a point made by Erik Duval who, in a position paper presented at CETIS’s Future of Interoperability Standards meeting, argued that “Standards Are Not Research” and pointed out that “Once the standardization process is started, the focus shifts to consensus building”.
The consensus for HTTP is very much “it’s good enough – we don’t care about it any more”. So much so that it is becoming invisible. I wonder if there are other examples of Web standards which have been stable for over a decade and which we fail to notice?