- Fetch the bootstrapping document
- Fetch the JS renderer
- Fetch the content
You can mitigate this in various ways, but the best case is still at least two round trips, paying the latency cost twice. That doesn't sound bad until you see the latencies real users put up with: the bottom 10% can only be described as pitiful. And filesize, well...
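To put rough numbers on that waterfall (hypothetical figures, ignoring DNS, TLS, parse, and execute time): each fetch depends on the previous one, so the round trips add up serially.

```javascript
// Rough time-to-first-content when each fetch depends on the last,
// so round trips happen in sequence. Hypothetical numbers only; real
// pages also pay DNS, TLS, parse, and execute time on top of this.
function timeToContent(rttMs, sequentialRoundTrips) {
  return rttMs * sequentialRoundTrips;
}

const rtt = 400; // ms; plausible for the slower tail of mobile users

// Server-rendered HTML: one round trip to readable content.
console.log(timeToContent(rtt, 1)); // → 400

// Bootstrap document, then JS renderer, then content: three in a row.
console.log(timeToContent(rtt, 3)); // → 1200
```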
Frameworks know this. They all have "isomorphic rendering" efforts, where the same code renders on both the client and the server. But isomorphic rendering won't be popular until frameworks require it. And their current implementations run into the "Interface Uncanny Valley" problem.
This is why I prefer Progressive Rendering + Bootstrapping. I’d love to see more frameworks support this approach:
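As a minimal sketch of what that could look like on a Node server (the `fetchContent` parameter is a hypothetical stand-in for your real data source, not any framework's API): the server flushes usable HTML immediately, streams the content as soon as it exists, and treats JS as an enhancement layer that bootstraps on top of markup that already works.

```javascript
// Progressive rendering sketch: write chunks to any sink shaped like
// an HTTP response's write(), so the page starts painting before the
// data layer finishes. `fetchContent` is a hypothetical stand-in for
// a real data source.
async function renderProgressively(write, fetchContent) {
  // First flush: enough HTML for a readable, usable page shell.
  write('<!doctype html><title>Article</title><header>Site name</header>');

  // Stream the content as soon as it exists, instead of making the
  // client fetch it in a second round trip.
  write('<main>' + (await fetchContent()) + '</main>');

  // Bootstrapping: the script enhances markup that already works
  // without it, so no-JS clients still get the whole article.
  write('<script src="/enhance.js" defer><\/script>');
}

// With Node's http module, this would be called roughly as:
//   renderProgressively((chunk) => res.write(chunk), getArticleHtml)
//     .then(() => res.end());
```

The key design choice is that the write sink is a parameter: the same render path drives a streaming HTTP response in production and a plain array in a unit test.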
Our frameworks are a decade behind
The Ember framework's official stance on performance is that browsers will catch up. Indeed, now that Apple devices are "fast enough" for Ember, it's really caught on.
I do agree with Ember's head, Tom Dale, when he says that without some sort of unifying framework, it's too easy to grow your own that performs even worse. But our current choices aren't enough, either; they're always a decade behind the next thing the Web runs on. As soon as one platform catches up, the next highly-limited platform arrives that can't handle them.
Some computers and networks get faster, but others just get more widespread. Now that Ember's almost fast enough for mobile, the Internet of Things, the Physical Web, and other exciting developments are poised to restart the cycle.
For example, the proposed Physical Web "fat beacons" can be at most 40 kilobytes. Technologies like Service Worker are shaking up web app architecture as we know it. Heck, Mozilla's got a new browser engine that turns performance knowledge on its head: HTML+CSS is faster than <canvas>! The golden path shifts under our feet.
No-JS is future-friendly
The only performance strategy that holds true across every platform and network is "do as little as possible". The ability to withhold JS from clients that can't handle it is critical. What else could work across:
- Game consoles with hyperspecialized capabilities (games never looked better, but their browsing stutters)
- Cheapo tablets and smart TVs, with scads of pixels but a wimpy chip to paint them all with
- Outdated hardware, which grows a longer tail every day
- One core, instead of multiple
- Many slow cores, instead of a few fast ones
- Devices lacking hardware acceleration
- Smart refrigerators and other blasted "Internet of Things" fripperies
- CPU architectures that aren't x64 or ARM
- New devices we've only begun to imagine
- Crowded public WiFi with unpredictable latency
- Fallback data connections because carriers exaggerate their coverage for some reason
- Shaky connections and their Lie-Fi
- Network snags like retransmitted packets, bad routing, reception changes, moving around with your phone, etc.
The ability to cut the mustard is critical. If your site works with just HTML, then you can detect poorly-performing browsers and send them only that HTML, on the fly.
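One way to run that detection client-side is the "cut the mustard" feature test that BBC News popularized. Here it's sketched as a pure function of a window-like object (a hypothetical shape, chosen so it can be unit-tested outside a browser); browsers that pass get the enhanced script, and everyone else keeps the working HTML.

```javascript
// "Cut the mustard" check: a few cheap feature tests stand in for
// "modern enough to run our JS". Written as a pure function of a
// window-like object so it's testable outside a browser; in a real
// page you'd pass the actual `window`.
function cutsTheMustard(win) {
  return Boolean(
    win.document &&
    'querySelector' in win.document &&
    'localStorage' in win &&
    'addEventListener' in win
  );
}

// In the page itself, something like:
//   if (cutsTheMustard(window)) {
//     const s = document.createElement('script');
//     s.src = '/enhanced.js';
//     document.head.appendChild(s);
//   }
```

Browsers that fail the test never even download the enhanced bundle, which is the "do as little as possible" strategy in its purest form.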
Whether or not such devices "should" include the Web, they will. People will visit with whatever bizarro browser is nearest, whether they're supposed to or not. As device diversity accelerates, "normal" browsers will no longer be a majority, and the average experience becomes less and less of a useful metric. See Tom's "Unpredictable Performance" heading.
We can't rely on browsers, HTTP/2, CDNs, or anything else to make our web fast. (Indeed, devs get angry if they do, if Opera Mini and Google Web Light are any sign.) And we have to start from the bottom up, with the simplest thing that could possibly work.
[This post is written for #startYourShift's August theme of performance. And the previous month's theme, really.]