Speed is the first competency test
6th March 2026

Your website is slow. And that’s costing you money.
But the commercial case isn’t the interesting part. Slow websites are usually a symptom of something deeper: organisations that have lost the ability to reason about their own systems. Speed is one of the few places where that dysfunction becomes measurable.
This isn’t new
That speed affects revenue, and that it reflects organisational challenges, shouldn’t be controversial anymore. The relationship between page speed and commercial performance has been measured so many times that it’s barely interesting. Faster websites convert better. Users abandon fewer journeys. Revenue goes up. Milliseconds matter. But speed also matters for a reason most dashboards never capture.
We’ve known this for years. And yet, most websites are still slow.
Not just the small ones. The big ones, too. The companies with the resources, the engineers, the tooling, and the budgets to get this right are often shipping pages that take several seconds to become usable.
If speed were purely a technical problem, this would be puzzling. The web platform certainly isn’t holding anyone back. Modern browsers are extraordinary pieces of software. They speculate, prioritise, stream, cache, and optimise in ways that would have seemed implausible not long ago. The techniques required to build fast sites are well understood, widely documented, and increasingly baked into the platform itself, thanks to the people willing to contribute upstream.
And yet, still slow.
Which suggests that the interesting question isn’t how to make websites fast. We already know how to do that. The interesting question is why organisations consistently fail to do it.
Part of the answer is that performance is one of the few signals in a business that’s brutally honest. You can tell comforting stories about almost any other metric. Attribution models can be massaged. Brand surveys can be interpreted generously. Dashboards can always be framed to emphasise the narrative leadership prefers.
Latency doesn’t really care about narratives. A page either loads quickly or it doesn’t. Users either wait or they leave. The browser simply executes the system it’s given.
That makes site speed unusually revealing. It’s where the architecture of a system becomes visible. It’s where the consequences of technical debt show up. It’s where years of small compromises, convenient shortcuts, and organisational drift quietly accumulate into seconds of delay.
In other words, speed isn’t just a technical metric. It’s one of the clearest places where the health of an organisation’s digital systems becomes measurable.
Despite two decades of evidence
The speed–revenue relationship isn’t a new discovery.
The web industry has been producing the same graph for the better part of twenty years. Reduce page latency and conversion rates improve. Users abandon fewer sessions. Basket sizes increase. Lead forms get completed more often. Engagement goes up.
Different sectors, different audiences, same pattern.
Retail sites routinely see measurable increases in conversion when pages load faster. Travel sites reduce bounce rates. Lead generation flows complete more often. Even luxury brands, where you might expect patience, see meaningful changes in add-to-basket behaviour when latency drops.
Milliseconds matter.
What’s strange is not that speed affects business outcomes. That part is well understood. The strange part is that, despite two decades of evidence, most organisations still treat performance as a peripheral concern.
Security is taken seriously. Reliability is taken seriously. Data governance is taken seriously. Entire departments exist to manage those risks.
Speed, by contrast, tends to appear in short bursts of enthusiasm. A performance sprint here. A Lighthouse audit there. Maybe a dashboard widget that gets glanced at occasionally and ignored the moment something more urgent lands on the roadmap.
That raises an awkward question. If faster websites reliably produce better business outcomes, why is performance almost never treated as a core organisational capability?
The answer, it turns out, has very little to do with JavaScript.
Speed is a capability filter
Part of the reason performance never quite becomes a priority is that it rarely belongs to anyone.
Security has owners. Reliability has owners. Compliance has owners. There are teams, processes, and budgets dedicated to making sure those things happen. Speed sits awkwardly between departments.
Marketing teams care about campaigns, measurement, and acquisition. Product teams care about shipping features. Engineering teams care about developer velocity and system stability. Each of those goals is perfectly reasonable, but none of them naturally reward restraint.
And restraint is exactly what performance requires. Every dependency, every framework, every analytics script, every convenience library adds a little weight to the system. Individually, they look harmless. Collectively, they turn a simple document request into a sprawling execution pipeline.
Modern websites often pass through an extraordinary number of transformations before anything reaches the browser. Templates become components. Components become bundles. Bundles become chunks. Chunks are compiled, transpiled, minified, optimised, sanitised, and piped through layers of tooling designed to make the development experience smoother and the deployment process safer.
All of which is well-intentioned. But somewhere along the way, the system becomes so abstracted that it’s almost impossible for any individual developer to understand the performance consequences of their work. A small change in one component might ripple through half a dozen build stages, alter the shape of the dependency graph, and quietly add several hundred milliseconds to the critical rendering path.
By the time it reaches the browser, the code barely resembles what the developer originally wrote.
This is one of the less discussed trade-offs of modern web engineering. Over the past decade, we’ve become extremely good at optimising for developer experience. Tooling, frameworks, and build systems have evolved to make software easier to write, easier to refactor, and easier to ship.
But those same systems often hide the real cost of the software they produce. Developers experience clean abstractions and rapid iteration. Users experience megabytes of JavaScript and seconds of delay.
That disconnect matters – because once the system reaches a certain level of complexity, performance stops being something individuals can manage locally. No single engineer sees the whole pipeline anymore. No single team owns the outcome. The system simply grows heavier with each release.
And even when someone does see the problem, fixing it is rarely within their control.
Performance problems almost always cross team boundaries. The page might belong to Team A, but the design system belongs to Team B, the analytics stack is owned by Team C, and the build pipeline lives somewhere inside the platform team. Improving one part of the system often means asking three other teams to change theirs.
Which means roadmaps have to shift. Budgets have to move. Engineers have to work on something that doesn’t advance their own backlog. And, of course, almost nobody is incentivised to do that.
So the performance issue survives another quarter. Then another. Each team has perfectly reasonable priorities, and none of them involves spending time helping somebody else make their thing faster.
Which is why speed behaves less like a tuning exercise and more like a capability filter.
Fast websites tend to emerge from organisations that understand their systems end-to-end. Where teams are aware of the consequences of their decisions. Where architecture is deliberate, dependencies are scrutinised, and performance is treated as a property of the whole system rather than an afterthought.
Slow websites, more often than not, reveal the opposite.
The myth that performance is difficult
Once you start thinking about speed as a capability filter, another common explanation quickly falls apart.
Teams often argue that performance is difficult. That modern web applications are inherently complex, and that making them fast requires specialist expertise and heroic engineering effort.
That story used to contain some truth.
Fifteen or twenty years ago, the web platform was inconsistent and unpredictable. Browsers behaved differently, tooling was primitive, and building a reliably fast experience often meant fighting the platform itself. But that’s no longer the world we’re operating in.
Today, the fundamentals of web performance are almost embarrassingly simple. Send less data. Avoid blocking the browser. Cache aggressively. Render something useful as early as possible. Don’t ship work the user didn’t ask for.
None of this is obscure knowledge. The web platform has spent the last decade making these outcomes easier, not harder. Browsers prioritise resources intelligently. Images can be natively lazy-loaded. Content can stream while it’s still being generated. Navigation can be speculatively prefetched or even pre-rendered before the user clicks. Simple HTML and CSS can now handle entire classes of interactions that once required substantial JavaScript.
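To make that concrete, here’s a rough sketch of what some of those capabilities look like in markup today (URLs illustrative; browser support for speculation rules still varies):

```html
<!-- Native lazy loading: the browser defers offscreen images by itself -->
<img src="/images/gallery-4.jpg" alt="Fourth gallery image"
     loading="lazy" decoding="async">

<!-- Speculation Rules: invite the browser to prerender a likely next page -->
<script type="speculationrules">
  {
    "prerender": [{ "source": "list", "urls": ["/checkout"] }]
  }
</script>

<!-- A disclosure widget that once needed JavaScript, now plain HTML -->
<details>
  <summary>Delivery options</summary>
  <p>Standard delivery takes 3 to 5 working days.</p>
</details>
```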
In many ways, the platform has quietly solved the hardest parts of performance. Building a fast website today is often simply the natural outcome of using the platform as intended. And yet the web keeps getting slower.
Part of the explanation lies in habits the industry formed while the platform was still catching up. Developers learned to rebuild missing browser capabilities in JavaScript. Routing systems, rendering layers, state managers, and component frameworks. Entire application platforms running inside the page.
At the time, that made sense.
Today, the platform can do much of what those frameworks were originally invented to simulate. But the ecosystem didn’t revert. Engineers keep using the tools they learned, teams copy the patterns they see elsewhere, and projects inherit stacks that were designed for a very different era of the web.
The result is a kind of architectural inertia. Frameworks on frameworks. Dependencies pulling in other dependencies. Toolchains that transform simple components into sprawling bundles of runtime logic.
And from inside the development environment, everything can feel perfectly tidy. Abstractions hide the machinery. Tooling optimises for developer productivity and rapid iteration, at the cost of visibility into what actually ships.
So, somewhere along the way, the system accumulates overhead and debt that nobody quite intended.
Developers write one component. The build system compiles it, bundles it, hydrates it, wraps it in a runtime, pulls in dependencies, and ships the result to the browser as a large block of code that has to be downloaded, parsed, executed, and coordinated before the page becomes usable.
The browser never sees the elegant abstractions. It just receives the output. And so, performance appears mysterious from inside large systems. Not because the underlying principles are complex, but because the machinery producing the problem has grown too complicated to see clearly.
The web platform keeps getting faster, but the systems we build on top of it keep getting heavier.
Another subtle problem is that many teams have simply lost sight of what “fast” actually looks like.
Spend enough time working inside modern web stacks, and the baseline drifts. Pages that take several seconds to become usable start to feel normal. Large bundles feel expected. Waiting becomes part of the mental model.
Engineers learn from existing codebases, tutorials, frameworks, and increasingly from AI tools trained on the public web. Those systems faithfully reproduce the patterns they see most often, which means they also reproduce many of the same architectural mistakes.
The ecosystem quietly reinforces its own habits. What should feel heavy becomes the norm. What should feel unnecessary becomes standard practice.
Speed is a surprisingly good test
This is why site speed is such a revealing signal.
Not because milliseconds are magical, but because fast websites require a surprising number of things to go right at the same time.
You need sensible architecture. You need teams that understand the platform they’re building on. You need restraint when choosing dependencies. You need engineers who can reason about the consequences of their work beyond the component they’re currently editing. And you need coordination.
Performance is one of the few properties of a system that cuts across everything: product decisions, engineering choices, infrastructure, content, and organisational structure. It is shaped by every team that touches the system.
Which means it behaves less like a feature and more like a test.
Site speed, in other words, is an unusually honest signal of organisational health.
If an organisation can consistently ship fast websites, it usually means the underlying system is healthy. Engineers understand the platform. Teams can collaborate across boundaries. Architectural decisions are deliberate rather than accidental. Complexity is managed rather than allowed to accumulate.
If the site is slow, the opposite is often true. Not because the engineers are incapable, but because the organisation has lost the ability to reason about the system as a whole. Too many layers, too many dependencies, too many teams making local decisions that nobody is responsible for reconciling globally.
Speed simply makes that visible.
Browsers are brutally literal environments. They execute exactly what they’re given. Every abstraction layer, every dependency chain, every unnecessary kilobyte eventually becomes latency.
Which is why performance is such a useful diagnostic. It’s the smallest possible test of digital competence.
If an organisation struggles to pass this one, it’s rarely the only system that’s struggling.
Speed is a system property
One of the reasons performance causes so much confusion is that teams tend to treat it as a tuning exercise.
A slow site gets an audit. Engineers identify a few obvious bottlenecks. Images get compressed, some JavaScript is deferred, a Lighthouse score improves, and everyone moves on.
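The patches themselves tend to be small and declarative; a typical audit produces changes like these (filenames illustrative):

```html
<!-- Defer third-party scripts so they no longer block parsing -->
<script src="/js/analytics.js" defer></script>

<!-- Offer a smaller, modern image format where the browser supports it -->
<picture>
  <source srcset="/images/hero.avif" type="image/avif">
  <img src="/images/hero.jpg" alt="Hero image" width="1200" height="600">
</picture>
```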
Occasionally, those fixes make a meaningful difference. More often, they shave a few milliseconds off a system that was architecturally slow to begin with.
Because performance is rarely determined by a single mistake. It’s determined by the shape of the system.
If the architecture assumes large bundles of client-side code, the browser will spend time downloading and executing them. If every page depends on a complex chain of runtime logic, the rendering pipeline will wait for that logic to complete. If data fetching, layout, and interaction all happen in the browser after the page loads, users will feel that delay.
You can optimise around those constraints, but you can’t eliminate them.
Fast systems tend to make different architectural choices from the beginning. They deliver meaningful content early. They minimise the amount of work the browser has to perform before the page becomes useful. They treat JavaScript as an enhancement rather than the foundation of the experience.
Those decisions are made long before anyone opens DevTools.
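You can caricature the two shapes in a few lines of markup (all names illustrative):

```html
<!-- Shape one: an empty shell. Nothing is usable until the bundle
     downloads, parses, and executes. -->
<body>
  <div id="root"></div>
  <script src="/static/bundle.js"></script>
</body>

<!-- Shape two: meaningful content in the first response, with script
     as an enhancement rather than a prerequisite. -->
<body>
  <main>
    <h1>Your order</h1>
    <p>3 items, dispatching tomorrow.</p>
  </main>
  <script src="/js/enhancements.js" defer></script>
</body>
```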
Which is why performance is so hard to retrofit. By the time a system becomes visibly slow, the underlying architecture has often already committed to the behaviour that causes the delay.
Trying to “optimise” that system afterwards is a bit like trying to make a cargo ship accelerate like a speedboat. You can polish the hull, but the shape of the vessel determines what it can do.
Fast websites rarely emerge from optimisation sprints. They emerge from systems that were designed to be fast in the first place.
And designing systems that way requires something most organisations struggle with: the ability to think about the web as a whole system, rather than a collection of independent parts.
For a long time, getting this wrong mostly affected human users. But the web is no longer consumed only by people.
Why this matters more than ever
For most of the web’s history, the consequences of slow websites were mostly “just” commercial.
Pages loaded slowly, users got impatient, and some proportion of them disappeared before completing whatever journey the business had hoped for. Conversion rates dipped, engagement softened, and revenue quietly leaked away. Annoying, certainly, but rarely catastrophic.
Humans are remarkably tolerant of friction. We sigh, we wait, we refresh the page, we try again later. A slightly sluggish interface is irritating, but it’s rarely enough to stop us entirely. Over time, we’ve become quite good at working around the web’s imperfections.
The systems beginning to interact with the web now are much less forgiving.
Increasingly, the web is being navigated not just by people but by software. Search engines, assistants, automation systems, crawlers, and AI tools that discover, retrieve, analyse, and combine information across large parts of the internet.
Those systems don’t browse in the way humans do. They operate as chains of tasks. A resource is discovered, fetched, parsed, analysed, and often used to trigger the next request somewhere else. A single answer might require dozens of these steps, sometimes hundreds.
Latency compounds as those chains grow. A chain of forty sequential fetches at 250 ms each is already ten seconds before any reasoning has even started.
A slow website in that environment doesn’t just inconvenience a user. It slows the entire chain of retrieval and reasoning that depends on it. Multiply that across thousands of documents, APIs, and endpoints and performance stops being a minor UX detail. It becomes a structural property of the system the machine is interacting with.
Fast websites are easy to consume. Their structure is predictable, their responses arrive quickly, their content can be processed without unnecessary overhead. They behave like reliable infrastructure.
Slow ones behave like obstacles.
And as more of the web becomes mediated by automated systems, that distinction becomes increasingly important. Systems that are fast, structured, and predictable are easy for machines to retrieve, parse, and combine. Systems that are slow or cumbersome simply become less attractive building blocks in that ecosystem.
Which means performance stops being a minor front-end concern.
It becomes a property of whether your systems are usable infrastructure or friction for everything built on top of them.
The simplest test
If you want to know whether an organisation understands the web, there are plenty of complicated ways to find out.
You can review their architecture diagrams. Audit their infrastructure. Interview their engineering teams. Map the dependency graph of their front-end stack and try to reason about how it evolved.
Or you can just load their homepage.
Web performance is unusual in that it collapses a huge amount of organisational behaviour into a single observable outcome. Architecture choices, tooling decisions, team structures, dependency management, development culture, platform knowledge, and operational discipline all eventually pass through the same narrow bottleneck: the code that reaches the browser.
If that system is coherent, the result tends to be fast. If it isn’t, the result usually isn’t.
The platform itself isn’t the limiting factor anymore. Modern browsers are extraordinarily capable, and the techniques required to build fast sites are neither obscure nor experimental. In many cases, they’re simply the natural outcome of building for the web as it actually exists.
Which is what makes speed such a revealing test.
It’s very difficult to ship a consistently fast website without understanding the platform, controlling complexity, and coordinating the many teams that contribute to the system. Those capabilities tend to show up in other parts of the organisation too.
Slow websites often indicate the opposite. Not because the engineers are incompetent, or because the framework is wrong, but because the organisation has drifted away from the shape of the platform it’s building on.
And that’s the uncomfortable implication. When a website is slow, the problem usually isn’t the website. It’s the organisation behind it.
