Client- vs. Server-Side Routing

☕ 4 minutes read · Last updated 1/2/2020

So I started a new side-project and thought about this question once again. Starting from scratch carries a curse: you want to build something great, so you question everything. This time I will document my thoughts and hope some people find value in them.

Motivation

I started off by creating some static HTML pages because we all know bare-bones HTML is unchallenged in performance. Don’t overdo it with JavaScript. Crawlers of any kind will understand your content better - even though the Googlebot is evergreen, there are plenty of other bots besides Google’s. Not everyone runs JavaScript, though those people have to expect a shitty experience on the web these days anyway.

Problem

Serving mixed content for a URL, or mapping multiple URLs to the same HTML document. Well, this can be done either with a smart web server or in JavaScript land. Why you don’t want to provision your own web server deserves its own blog article.

Paint flashing (blinking): who doesn’t despise those hard reloads while routing? The browser has to unload the current page while it simultaneously downloads the new one. Unless we have paint holding (chrome://flags/#enable-paint-holding) enabled, we will see a blank page for a few milliseconds. Once the download finishes, the browser renders the page - a new rendering thread is spun up due to Spectre and site isolation. Many little things happen blazingly fast in succession until we see our page.

We can opt out of this by simply never “changing the page” - at least not from the browser’s point of view. We patch it up with a preventDefault in a global click handler and voilà, we have opted out. Now we only need to sprinkle in some History API and we’re done, right?
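A minimal sketch of that opt-out, assuming a hypothetical `render` function you would supply yourself - intercept clicks on internal links, push the new URL, and never let the browser navigate:

```javascript
// Pure helper: is this href an internal link we should intercept?
function isInternalLink(href, origin) {
  try {
    return new URL(href, origin).origin === origin;
  } catch {
    return false;
  }
}

// Hypothetical render function - in a real app this draws the new page.
function render(pathname) {
  console.log('rendering', pathname);
}

if (typeof document !== 'undefined') {
  document.addEventListener('click', (event) => {
    const anchor = event.target.closest('a[href]');
    if (!anchor || !isInternalLink(anchor.href, location.origin)) return;
    event.preventDefault();                  // stop the browser's navigation
    history.pushState({}, '', anchor.href);  // update the address bar only
    render(location.pathname);               // take over rendering ourselves
  });

  // Back/forward buttons fire `popstate` instead of clicks.
  window.addEventListener('popstate', () => render(location.pathname));
}
```

Note the `popstate` listener: once you hijack clicks, the back button is your problem too.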

HTML vs JavaScript

You might ask why the question is now HTML vs. JavaScript - bear with me, this is what the question boils down to, right? If we take over responsibility for routing, we need to do manually everything the browser normally does, and all we have to do it with is JavaScript. Of course you need to wrap your JavaScript in HTML, but let’s not be picky - by all this I simply mean internal routing.

By the way, setting window.location to some URL still counts as server-side routing by a detour. Client-side routing really begins when you prevent the browser from doing its routing work at all.

Why HTML & JavaScript is crafty

After stopping the browser from doing its natural thing - intercepting the click on an anchor tag and preventing its default action of firing a GET request and parsing the document - we might think we can at least reuse our precious HTML pages. So we go ahead and fetch them, but at this point all the HTML is to the browser is text. Well, it’s text that the DOMParser object and friends can understand, so we get some help parsing it, but that’s it. No streaming parse. No look-ahead resource loading. Nothing - just text.
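Roughly, fetching a page and parsing it yourself looks like this (browser-only sketch; `/next-page.html` is a made-up URL, and the helper names are mine):

```javascript
// Pure helper: pull the interesting parts out of a parsed Document.
function extractParts(doc) {
  return {
    title: doc.querySelector('title')?.textContent ?? '',
    body: doc.body,
  };
}

if (typeof DOMParser !== 'undefined') {
  fetch('/next-page.html')
    .then((res) => res.text())
    .then((html) => {
      // To the browser this is just text: no streaming parse,
      // no look-ahead resource loading while it downloads.
      const doc = new DOMParser().parseFromString(html, 'text/html');
      const { title, body } = extractParts(doc);
      document.title = title;
      document.body.replaceWith(body); // insertion adopts the node
    });
}
```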

You eventually end up with some DOM nodes with which you can replace document.body. Add some scripts, just to see that the scripts are not being executed. Well, yeah - opting out also means the browser won’t execute scripts for you; that’s on you now. At this point you will discover all sorts of problems, like having to merge the head nodes, change the title and so on. At some point, you’ve rebuilt turbolinks.
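Scripts inserted this way are inert by design. The usual workaround in turbolinks-style libraries is to clone each script into a freshly created element, which the browser will execute - a sketch, with all names mine:

```javascript
// Build a fresh, executable copy of an inert <script> element.
function reviveScript(oldScript, doc = document) {
  const fresh = doc.createElement('script');
  // Copy attributes (src, type, …) onto the new element.
  for (const { name, value } of oldScript.attributes) {
    fresh.setAttribute(name, value);
  }
  fresh.textContent = oldScript.textContent;
  return fresh; // swapping this in makes the browser run it
}

if (typeof document !== 'undefined') {
  document.body.querySelectorAll('script').forEach((s) => {
    s.replaceWith(reviveScript(s));
  });
}
```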

If you want to add resource preloading on hover, you’d have to rebind a hover listener on every link tag that comes with new HTML - from here on it just gets more and more crafty. It is needless to say that sooner or later dropping that adventure becomes really appealing. Even if you get this all to work, I don’t know what kind of wacky problems are still lurking.
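One way to dodge the rebinding problem is event delegation: a single listener on the document catches hovers on links that arrive later, too. A sketch, assuming the `<link rel="prefetch">` trick for the actual preloading:

```javascript
const prefetched = new Set();

// Pure helper: prefetch each URL at most once.
function shouldPrefetch(href, seen) {
  if (!href || seen.has(href)) return false;
  seen.add(href);
  return true;
}

if (typeof document !== 'undefined') {
  // One delegated listener covers every link, including future ones.
  document.addEventListener('mouseover', (event) => {
    const anchor = event.target.closest?.('a[href]');
    if (!anchor || !shouldPrefetch(anchor.href, prefetched)) return;
    const link = document.createElement('link');
    link.rel = 'prefetch';
    link.href = anchor.href;
    document.head.append(link);
  });
}
```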

How do you create DOM if the browser is not doing it?

If you chew on that problem long enough it becomes: how do you create DOM if the browser is not doing it for you? You either stick to fetching text and interpreting it, as we did before, or you go the established way of JavaScript land’s rendering libraries, DOM diffing and template-markup alchemy. There are plenty of libraries around that help with this: lit-html, hyperHTML, htm, preact, react, nanomorph, just to name a few - however, all of them resort to loading JavaScript in the end.
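To make “creating DOM from JavaScript” concrete, here is the smallest hyperscript-style sketch I can think of - a virtual node builder plus a materializer. Real libraries add diffing, keys and event handling on top; nothing here is any particular library’s API:

```javascript
// Build a plain-data virtual node: { tag, props, children }.
function h(tag, props = {}, ...children) {
  return { tag, props, children: children.flat() };
}

// Materialize a virtual node tree into real DOM nodes.
function toDom(vnode, doc = document) {
  if (typeof vnode === 'string') return doc.createTextNode(vnode);
  const el = doc.createElement(vnode.tag);
  for (const [k, v] of Object.entries(vnode.props)) el.setAttribute(k, v);
  for (const child of vnode.children) el.append(toDom(child, doc));
  return el;
}

// Usage (in a browser):
// document.body.append(toDom(h('p', { class: 'greeting' }, 'Hello')));
```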

Conclusion

I love simple HTML with a little jQuErY convenience - keeping stuff simple has always proven more solid, right?

In the end, it depends. Going client-side means much more than just handling routing: you also take on the burden of rendering alongside it, which creates its own fascinating problems.

Still, I think client-side routing beats server-side routing, because you gain a lot of performance through decoupled components and preloading opportunities.

Keep in mind it is not all black and white: we can still mix things, and with paint holding we won’t even get brain damage.

Creating DOM from JavaScript is pretty fast these days. We get some fancy page transitions. Not least, the platform is evolving (for example, a native scheduler), so the burden of loading too much JavaScript keeps shrinking - but it still is not free.

Let’s see where it carries us!