HTTP is the protocol that makes the web go.
Recently, several large tech companies announced that version three-ish of that protocol, known as HTTP/3, was ready to rock and being rolled out.
Now, that’s nice, but why do we care?
Well, we care because the internet is about to get zippier for everyone – for free!
A Brief History of HTTP
HTTP, or the Hypertext Transfer Protocol, was created way back around 1990.
The official HTTP Version 1.0 specification was released to the world in 1996. Before that, there were some early 0.X versions of HTTP, but we’ll just kind of ignore those.
Before we continue, there are a few terms you need to know.
TCP = Transmission Control Protocol
TCP is generally used alongside IP, the Internet Protocol. Hence, we have IP addresses!
TCP is a protocol that allows a connection from, say, your browser to an internet site. TCP also provides a layer of reliable, ordered, and error-checked delivery of data packets. This also means that TCP can ask for lost or corrupt data to be re-transmitted, which is a rather slow affair.
In order to work its magic, TCP is kinda pokey. For each request made from your browser, there are TCP handshakes and TLS (encryption) handshakes that must occur before data can be transferred. So, we can say that TCP is a relatively “high-latency” protocol.
In other words, with TCP it takes a bit longer to establish the connection (longer wait time) before data can be shuffled to and fro.
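To make that concrete, here's a minimal sketch in Python of what a TCP connection looks like from a program's point of view. It uses a throwaway echo server on localhost (so it runs anywhere, no network needed) – the key bit is that `connect()` performs the handshake before any actual data can flow.

```python
import socket
import threading

# A tiny TCP server on localhost that echoes back whatever it receives.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def echo_once():
    conn, _ = server.accept()        # completes the TCP handshake
    conn.sendall(conn.recv(1024))    # echo the data back
    conn.close()

threading.Thread(target=echo_once).start()

# Client side: connect() performs the TCP three-way handshake
# before any application data can flow -- that's the latency cost.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
data = client.recv(1024)             # TCP delivers it intact and in order
print(data)                          # b'hello'
client.close()
server.close()
```

Notice the order of operations: no byte of "hello" moves until the handshake round-trip is done – and on a real connection, a TLS handshake would add more round-trips on top.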
UDP = User Datagram Protocol
UDP is a different animal entirely. It’s also paired with IP, the Internet Protocol, and it’s also used to send and receive data.
But that’s where the similarity to TCP ends. With UDP, there is no ordering, no error-checking, and far less reliability than with TCP.
Put it this way: If TCP is like a mailman delivering packets of data in a precise and orderly fashion, UDP is a bit like driving the mail truck down the street and just dumping envelopes as it goes.
UDP is often used for things like streaming video where a whole bunch of data packets are shot down the pipe. If some packets are lost, no big deal! The video player will compensate for the lost data, and the show will go on.
Of course, UDP can be paired with error checking at the application level. This means one can use UDP to stuff lots of data down the pipe for things like file downloads. If any packets are missing, the application can request those missing chunks again – possibly via TCP instead.
The benefit of UDP is that without all the overhead of TCP, you end up with a low-latency (low wait time) connection that can also be faster due to the lack of mailman-like checking of each envelope of data.
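Here's the UDP version of the same idea, again sketched on localhost. Note what's missing compared to the TCP example: no `listen()`, no `accept()`, no handshake – the sender just fires datagrams at an address.

```python
import socket

# A UDP receiver on localhost -- no listen(), no accept(),
# because there is no connection to set up.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# The sender just fires datagrams at the address -- no handshake first.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"packet 1", ("127.0.0.1", port))
sender.sendto(b"packet 2", ("127.0.0.1", port))

# Each recvfrom() pulls in one datagram. UDP itself makes no promise
# that either one arrives, or in what order (on loopback they will,
# but across the real internet all bets are off).
msg1, _ = receiver.recvfrom(1024)
msg2, _ = receiver.recvfrom(1024)
print(msg1, msg2)
sender.close()
receiver.close()
```

That lack of ceremony is exactly the low-latency benefit described above – and also why any ordering or re-requesting of lost packets has to be bolted on by the application.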
Where were we? Ah yes…
So, HTTP/1.0 came out. It used TCP, but it was a bit slow.
Enter HTTP/1.1 which introduced the concept of “keep-alive”.
In short, keep-alive just means that since establishing a TCP connection is slow, browsers could keep connections “alive” and re-use them instead of having to re-negotiate the connection with each request.
This made HTTP better, and faster.
But each request/response cycle still needed 1 connection, even if that connection was “kept alive”. This was better, but still not as efficient as it could be…
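You can see keep-alive in action with Python's standard `http.client`. This sketch spins up a throwaway local web server (so it runs without network access) and then sends three requests over one TCP connection – the handshake cost is paid once, not three times. The file paths are made up for illustration.

```python
import http.client
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

# Throwaway local server so the example runs anywhere.
class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # HTTP/1.1 = keep-alive by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):          # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One TCP connection, reused for several request/response cycles.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
results = []
for path in ("/page.html", "/style.css", "/script.js"):
    conn.request("GET", path)
    resp = conn.getresponse()
    results.append((path, resp.status, resp.read()))
print(results)
conn.close()
server.shutdown()
```

Even so, notice the requests run strictly one after another: each response must finish before the next request goes out. That's the limitation HTTP/2 tackles.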
Then, a LOOOOONG time passed. Finally, around 2015, HTTP/2 came along. That’s what you’re using right now, and it’s built into your browser.
HTTP/2 is super-cool since instead of just keep-alive, it allowed browsers to multiplex connections.
In other words, your browser could make a connection, keep it alive, and also use it for multiple request/response cycles (HTML for a web page, a JavaScript file, a CSS file, and some images – all in 1 connection).
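Here's a toy illustration of the multiplexing idea – this is NOT real HTTP/2 framing, just a sketch with made-up file names: chunks from several "streams" are tagged with the stream they belong to, interleaved over one pipe, and sorted back out at the other end.

```python
# Toy illustration of multiplexing (not real HTTP/2 wire format).
files = {
    "index.html": b"<html>...</html>",
    "style.css":  b"body { color: red }",
    "app.js":     b"console.log('hi')",
}

# Sender: split each file into small frames, then interleave them
# so all three files share one "connection" at the same time.
chunk = 6
streams = {name: [data[i:i + chunk] for i in range(0, len(data), chunk)]
           for name, data in files.items()}
frames = []
while any(streams.values()):
    for name, chunks in streams.items():
        if chunks:
            frames.append((name, chunks.pop(0)))   # (stream id, payload)

# Receiver: reassemble each stream from the single interleaved feed.
received = {}
for name, payload in frames:
    received[name] = received.get(name, b"") + payload

print(received == files)   # True: every file arrives intact over one pipe
```

The stream tag on each frame is what lets one connection carry many files at once instead of one at a time.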
The Problem with HTTP/2
Well, that sounds pretty good, so what’s the big deal?
Recall that TCP does error checking and the like.
So, in our multiplexed example above, if one chunk of the JavaScript file is lost or corrupted on its way to your browser, TCP holds back everything that arrives after it – including the CSS and image data – until that chunk is re-transmitted.
Unless everything goes swimmingly, the whole chain of files gets held up, sort of like a traffic jam. This is known as “head-of-line blocking”.
Behold: HTTP/3
HTTP/3 is the new kid on the block.
It’s based on QUIC, a transport protocol originally created by Google. (Similarly, HTTP/2’s core upgrade was based on SPDY, which was also mostly a Google invention.)
Anyway, HTTP/3 uses a single connection (with faster initial handshakes) to send multiple streams such as HTML, JavaScript, CSS, and image files.
It does this using UDP with some QUIC magic added on top: each packet contains an “ID” of sorts that provides ordering and some of the other goodies TCP offers – but in a WAAAAY more efficient way.
With HTTP/3, that corrupt JavaScript file packet won’t stop the CSS and images from downloading. Everything will keep on shooting down the pipe.
HTTP/3 is a bit like an accident-resistant highway. If 2 cars are involved in an accident, normally all of the vehicles behind them slow to a crawl.
With HTTP/3, the damaged cars are instantly teleported to the side of the road, and the rest of the traffic can continue flowing unimpeded.
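The packet-ID idea above can be sketched in a few lines of Python. This is a toy model, not the real QUIC wire format: each packet carries a (stream ID, sequence number, payload) triple, so the receiver can finish intact streams even while another stream waits for a retransmit. The file names and packet counts are invented for the example.

```python
# Toy sketch of QUIC-style delivery (not the real wire format):
# packets tagged with (stream id, sequence number, payload) mean a
# loss in one stream never holds up the others.
packets = [
    ("app.js",    0, b"consol"),
    ("style.css", 0, b"body {"),
    ("app.js",    1, b"e.log("),
    ("style.css", 1, b" }"),
    # ("app.js", 2, b"'hi')")  <-- imagine this packet was lost in transit
]

expected_packets = {"app.js": 3, "style.css": 2}   # packets per stream

# Group arrivals by stream and index them by sequence number.
streams = {}
for stream_id, seq, payload in packets:
    streams.setdefault(stream_id, {})[seq] = payload

# Any stream with all its packets is delivered immediately.
complete = {}
for stream_id, chunks in streams.items():
    if len(chunks) == expected_packets[stream_id]:
        complete[stream_id] = b"".join(chunks[i] for i in sorted(chunks))

print(sorted(complete))   # ['style.css'] -- CSS isn't stuck behind the JS loss
```

Under TCP, that one lost JavaScript packet would have stalled the CSS bytes behind it too; here the CSS stream completes on its own while only the JavaScript stream waits.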
Dang, that’s great! How do I get it?
Do nothing! Just wait…
At the moment, several large internet companies like Google and Cloudflare are enabling HTTP/3.
In the very near future, your browser (Chrome or Firefox, anyway) will get an update that enables HTTP/3 on your end.
All you have to do is keep your web browser up to date. As more and more web sites out there on the net upgrade to HTTP/3, the entire internet will get a lot zippier.
You will probably not see download times cut in half or anything quite so dramatic, though.
The HTTP/3 upgrade is more like a smaller speed-up for A LOT of internet users. Given the sheer number of users out there, this protocol upgrade ultimately will result in lower web server loads, shorter wait times to download pages on the internet, and less bandwidth used overall.
So, over time, we will all get a free effective increase in internet speed. Which is nice.