
How the Internet Works, Chapter 20
The Evolution of HTTP and AJAX


Hypertext Transfer Protocol, or HTTP, began as a means of transferring hypertext documents from a server. A hypertext document is a document that contains clickable links to other places in the same document or to other documents; on the modern web, these are the links found on practically any web page. As the internet evolved, HTTP evolved along with it to meet new needs.

HTTP/0.9

The original version of HTTP could only fetch hypertext documents from the server the client was already connected to. It supported only the GET method, and GET took only a path parameter, with no way to name a different host. As internet needs evolved beyond this original purpose, newer, enhanced versions of HTTP emerged, and the original version was retroactively assigned version number 0.9.
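The exchange itself was as simple as the protocol. Below is a minimal sketch, in TypeScript for Node, of what an HTTP/0.9 request looked like on the wire; example.com is only a placeholder, and most modern servers no longer accept HTTP/0.9, so treat this as an illustration of the format rather than something to run against a real site.

    import { connect } from "node:net";

    // HTTP/0.9: the entire request is one line, "GET <path>", and the
    // response is the raw document itself, with no status line or headers.
    const socket = connect(80, "example.com", () => {
      socket.write("GET /index.html\r\n");
    });

    socket.on("data", (chunk) => process.stdout.write(chunk));
    socket.on("end", () => socket.end());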

HTTP/1.0

HTTP/1.0 added the HEAD and POST methods. It also allowed the request line to carry a full URI rather than just a path, so a request could name a server other than the one implied by the connection (useful, for example, when going through a proxy). The addition of headers let requests and responses carry metadata, which greatly increased the flexibility of the protocol, and every server response now began with a status code, for example 404 if a page was not found.
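As a rough sketch of these additions (a method other than GET, request headers, and response status codes) seen from modern TypeScript, the snippet below issues a HEAD request with the fetch API; example.com is just a placeholder host.

    // HEAD asks for the response's headers and status without the body.
    async function probe(url: string): Promise<void> {
      const response = await fetch(url, {
        method: "HEAD",
        headers: { "Accept": "text/html" },   // request metadata via a header
      });
      console.log(response.status);           // e.g. 200, or 404 if not found
      console.log(response.headers.get("content-type"));
    }

    probe("https://example.com/");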

HTTP/1.1

HTTP/1.1 added the PUT, DELETE, TRACE, and OPTIONS methods. Perhaps most importantly, it also introduced persistent connections and request pipelining: multiple requests could be submitted, and multiple responses returned, over a single TCP connection instead of one connection per request.

Web pages had been evolving from simple hypertext documents into pages with embedded images, associated CSS and JavaScript files, and other content. Previously, each of these items required its own connection, so a single web page meant establishing and dropping many connections one after another (at present, an average web page requires over 100 requests). The ability to reuse a single connection for the many files needed to assemble one page, as sketched below, considerably improved performance.
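Here is a minimal sketch of that reuse from the client side, using Node's keep-alive Agent in TypeScript; the host and paths are placeholders.

    import http from "node:http";

    // One keep-alive agent limited to a single socket, so the three requests
    // below share one TCP connection instead of opening one each.
    const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });

    for (const path of ["/styles.css", "/app.js", "/logo.png"]) {
      http.get({ host: "example.com", path, agent }, (res) => {
        console.log(path, res.statusCode);
        res.resume(); // drain the body so the socket can be reused
      });
    }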

A further performance improvement came from the introduction of cache-control mechanisms, which let elements of a page be cached locally in the browser or on intermediate servers such as proxies, for faster access on subsequent requests.
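Caching behavior is driven largely by response headers. The sketch below shows a toy Node server, in TypeScript, marking a stylesheet as cacheable for an hour; the header value is just an example policy.

    import http from "node:http";

    // Any client or proxy that honors Cache-Control may reuse this response
    // for up to an hour instead of requesting it again.
    http
      .createServer((_req, res) => {
        res.writeHead(200, {
          "Content-Type": "text/css",
          "Cache-Control": "public, max-age=3600",
        });
        res.end("body { margin: 0; }");
      })
      .listen(8080);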

When HTTP/1.1 was released in 1997, it quickly became the standard means of transferring pages on the internet, and at present about 80% of websites use its secure variant, HTTPS (HTTP over TLS).

HTTP/2

While pipelining was an improvement, it still had to process requests and responses in the order in which they were received. This can create a bottleneck called head-of-line blocking (or HOL blocking) when one request involves a large amount of data (such as an image) or one response is delayed (for example, has to be resent). In such a case, all subsequent responses must wait until the blocking one has been handled.

HTTP/2 addresses this problem by multiplexing messages on a single connection, meaning that responses can be sent in parallel rather than serially. This eliminates the HOL blocking problem of HTTP/1.1 (though not HOL blocking at the transport layer, where a lost TCP packet can stall the entire multiplexed stream). Headers are also sent in separate frames from data, which allows them to be compressed.
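As a small sketch of multiplexing from the client side, the TypeScript snippet below uses Node's built-in http2 module to send three requests over one connection; example.com stands in for any HTTP/2-capable server, and the responses may complete in any order.

    import http2 from "node:http2";

    const session = http2.connect("https://example.com");
    let pending = 3;

    for (const path of ["/", "/styles.css", "/app.js"]) {
      // Each request is a separate stream multiplexed on the same connection.
      const stream = session.request({ ":path": path });
      stream.on("response", (headers) => console.log(path, headers[":status"]));
      stream.resume(); // discard the body
      stream.on("end", () => {
        if (--pending === 0) session.close();
      });
    }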

HTTP/3

HTTP/3 grew out of the IETF's work on HTTP over QUIC, which was renamed HTTP/3 in late 2018 and later published as a standard (RFC 9114). It is supported by most major browsers.

HTTP/3 uses a transport protocol called QUIC that runs over UDP instead of TCP. QUIC builds TLS 1.3 encryption into the transport itself rather than layering it on top; it improves on HTTP/2's multiplexing by mitigating HOL blocking at the packet level, since a lost packet stalls only the stream it belongs to; and it allows for connection migration, meaning that, for example, a user on a mobile phone can move a connection from a cellular data network to WiFi when one becomes available, rather than having to re-establish the connection after changing networks.

AJAX

Asynchronous JavaScript and XML, or AJAX, is a programming strategy that evolved over time.

In the early days of the web, browsers could only request entire pages. If a web application wanted to make even a minor update to a page, it had to reload the whole page, which limited how dynamic web applications could be.

It became clear that to move forward, web applications needed to be able to do several things that simple full-page HTTP requests could not:

  • Requesting and receiving data from a server at any point, not just when loading a page
  • Re-rendering parts of a web page with new information without reloading the entire page
  • Sending data to a server in the background

AJAX functionality addressed these needs. AJAX is asynchronous, in that it allows for partial updating of web pages behind the scenes — in other words, without reloading the entire web page.

When a user makes a change that needs to be reflected in the web page (for example, adds an item to a shopping cart), JavaScript runs an AJAX routine that sends the update to the server and dynamically rewrites the portion of the page's HTML that changed. The request runs in the background, and the page is updated when the response arrives, without blocking the rest of the page; this is where the “asynchronous” part of AJAX comes in.

AJAX functionality was originally exposed to JavaScript in the browser as the XMLHttpRequest API.
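To make the shopping-cart example above concrete, here is a minimal TypeScript sketch using XMLHttpRequest in the browser; the /cart endpoint, its JSON response, and the cart-count element are all hypothetical.

    // Send the new item to the server and update only the cart counter.
    function addToCart(itemId: string): void {
      const xhr = new XMLHttpRequest();
      xhr.open("POST", "/cart", true);                  // true = asynchronous
      xhr.setRequestHeader("Content-Type", "application/json");
      xhr.onload = () => {
        if (xhr.status === 200) {
          // Re-render just the part of the page that changed.
          const cart = JSON.parse(xhr.responseText);    // assumed shape: { itemCount: number }
          const counter = document.querySelector("#cart-count");
          if (counter) counter.textContent = String(cart.itemCount);
        }
      };
      xhr.send(JSON.stringify({ itemId }));
    }

Modern code more often uses the fetch API, but the idea is the same: the request happens in the background, and a callback updates just the affected part of the page.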

The next article is an overview of Web APIs.