Thursday, June 13, 2024

Measure What You Impact, Not What You Influence – CSS Wizardry – Web Performance Optimisation

Written on CSS Wizardry.

Table of Contents
  1. Problems When Measuring Performance
  2. Indirect Optimisation
  3. Isolate Your Impact
  4. Signal vs. Noise
  5. Final Thoughts

A thing I see developers do time and time again is make performance-facing
changes to their sites and apps, but errors in how they measure them often
lead to incorrect conclusions about the effectiveness of that work. This can go
either way: under- or overestimating the efficacy of those changes. Naturally,
neither is good.

Problems When Measuring Performance

As I see it, there are two main issues when it comes to measuring performance
changes (note, not improvements, but changes) in the lab:

  1. Site-speed is nondeterministic. I can reload the exact same page
    under the exact same network conditions over and over, and I can guarantee
    I will not get the exact same, say, DOMContentLoaded each time. There are
    myriad reasons for this that I won't cover here.
  2. Most metrics are not atomic: FCP, for example, isn't a metric we can
    optimise in isolation—it's a culmination of other, more atomic metrics such as
    connection overhead, TTFB, and more. Poor FCP is the symptom of many causes,
    and it is only those causes that we can actually optimise. This is
    a subtle but important distinction.
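
Given that nondeterminism, a single reading tells us very little; a common
defence is to take the median of several runs. A minimal sketch (the sample
timings below are invented purely for illustration):

```javascript
// Median of repeated timing samples: a more stable summary than any
// single reading, since site-speed measurements are nondeterministic.
function median(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Hypothetical DOMContentLoaded timings (ms) from five reloads of the
// same page under the same throttled network conditions.
const dclSamples = [1180, 1342, 1205, 1267, 1198];
console.log(median(dclSamples)); // → 1205
```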

In this post, I want to look at ways to help mitigate and work around these
blind spots. We'll be looking mostly at the latter scenario, but the same
principles will help us with the former. However, in a sentence:

Measure what you impact, not what you influence.

Indirect Optimisation

Something that almost never gets talked about is the indirection involved in
a lot of performance optimisation. For the sake of ease, I'm going to use
Largest Contentful Paint (LCP) as the example.

As noted above, it's not actually possible to improve certain metrics in their
own right. Instead, we have to optimise some or all of the component parts that
might contribute to a better LCP score, including, but not limited to:

  • redirects;
  • TTFB;
  • the critical path;
  • self-hosting assets;
  • image optimisation.
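
Several of these component optimisations live entirely in markup. As an
illustrative sketch (the hostnames and filenames here are hypothetical, not
taken from the article):

```html
<head>
  <!-- Warm up a third-party origin before it's needed (hypothetical host) -->
  <link rel="preconnect" href="https://fonts.example.com" crossorigin>

  <!-- Self-hosted stylesheet: no extra connection overhead -->
  <link rel="stylesheet" href="/assets/app.css">

  <!-- Tell the browser about the LCP image as early as possible -->
  <link rel="preload" as="image" href="/assets/hero.jpg">
</head>
```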

Improving each of these should hopefully chip away at the timings of the more
granular events that precede the LCP milestone, but whenever we're making these
kinds of indirect optimisation, we need to think much more carefully about how
we measure and benchmark ourselves as we work. Not about the ultimate outcome,
LCP, which is a UX metric, but about the technical metrics that we're impacting.

We might hypothesise that reducing the amount of render-blocking CSS should help
improve LCP—and that's a sensible hypothesis!—but this is where my first point
about atomicity comes in. Trying to proxy the impact of reducing our CSS from
our LCP time leaves us open to a lot of variance and nondeterminism. When we
refreshed, perhaps we hit an outlying, huge first-byte time? What if another
file on the critical path had dropped out of cache and needed fetching from the
network? What if we incurred a DNS lookup this time that we hadn't the previous
time? Working in this manner requires that all things remain equal, and that
just isn't something we can guarantee. We can take reasonable measures (always
refresh from a cold cache; throttle to a constant network speed), but we can't
account for everything.

This is why we need to measure what we impact, not what we influence.

Isolate Your Impact

One of the most useful tools for measuring granular changes as we work is the
User Timing API. This
allows developers to trivially create high-resolution timestamps that can be
used much closer to the metal to measure specific, atomic tasks. For example,
continuing our task to reduce CSS size:



  <script>performance.mark('CSS Start');</script>

  <link rel="stylesheet" href="app.css" />

  <script>
    performance.mark('CSS End');
    performance.measure('CSS Time', 'CSS Start', 'CSS End');
    console.log(performance.getEntriesByName('CSS Time')[0].duration);
  </script>



This will measure exactly how long app.css blocks for and then log it out to
the console. Even better, in Chrome's Performance panel, we can view the
Timings track and have these measures (and marks) graphed automatically:

Chrome's Performance panel automatically picks up User Timings.

The key thing to remember is that, although our goal is ultimately to improve
LCP, the only thing we're impacting directly is the size (thus, time) of our
CSS. Therefore, that's the only thing we should be measuring. Working this way
allows us to measure only the things we're actively changing, and make sure
we're headed in the right direction.

If you aren't already, you should absolutely make User Timings a part of
your day-to-day workflow.

On a similar note, I'm obsessed with the head. Like,
obsessed. As your head is completely render blocking, you could proxy
your head time from your First Paint time. But, again, this leaves us
susceptible to the same variance and nondeterminism as before. Instead, we lean
on the User Timing API and performance.mark and performance.measure:


  <script>performance.mark('HEAD Start');</script>

  <!-- the rest of the head's contents go here -->

  <script>
    performance.mark('HEAD End');
    performance.measure('HEAD Time', 'HEAD Start', 'HEAD End');
    console.log(performance.getEntriesByName('HEAD Time')[0].duration);
  </script>


This way, we can refactor and measure our head time in isolation without also
measuring the many other metrics that comprise First Paint. In fact, I do
exactly that on this very page.

Signal vs. Noise

This next example was the motivation for this entire article.

Working on a client site a few days ago, I wanted to see how much (or whether)
Priority Hints might improve their LCP time.
Using Local Overrides,
I added fetchpriority=high to their LCP candidate, which was a simple <img />
element (which is naturally quite fast by default).
I created a control, reloaded the page five times, and took the median LCP.
Despite these two defensive measures, I was surprised by the variance in the
results for LCP—up to 1s! Next, I changed the HTML to add fetchpriority=high to
the <img />. Again, I reloaded the page five times. Again, I took the median.
Again, I was surprised by the level of variance in LCP times.

The reason for this variance was quite clear—LCP, as discussed, encompasses a
lot of other metrics, whereas the only thing I was actually affecting was the
priority of the image request. My measurement was a loose proxy for what I was
actually changing.

To get a better view of the impact of what I was changing, we need
a little understanding of what priorities are and what Priority Hints do.

Browsers (and, to an extent, servers) use priorities to decide how and when
they request certain files. This allows deliberate and orchestrated control of
resource scheduling, and it's pretty smart. Certain file types, coupled with
certain locations in the document, have predefined priorities, and developers
have limited control over them without also potentially changing the behaviour
of their pages (e.g. we can't just whack async on a <script> and hope for the
best).
Priority Hints, however, offer us that control. Our options for fetchpriority= are:

  • high: sets the initial priority to High;
  • auto: effectively redundant—it's the same as omitting the attribute;
  • low: sets the initial priority to Low.
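
In markup, that might look like the following (the filenames are placeholders):

```html
<!-- Raise the initial priority of the LCP candidate -->
<img src="hero.jpg" fetchpriority="high" alt="Hero image">

<!-- Lower the priority of an image far below the fold -->
<img src="footer-banner.jpg" fetchpriority="low" alt="Footer banner">
```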

Now comes the key insight: changing a file's priority doesn't change how soon
the browser discovers it—that's not how browsers work—but it does affect how
soon the browser will put that request out to the network. In browserland, this
is known as Queuing. Changing a file's priority will impact how long it spends
queuing. That is what I should be measuring.
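
Rather than eyeballing overall LCP, something close to queuing time can be read
programmatically. This is a rough sketch, not the article's own method: in the
Resource Timing API, the gap between an entry's startTime and its fetchStart
approximates time spent queued, assuming no redirects in between:

```javascript
// Approximate queuing time from a Resource Timing entry: startTime is
// when the request was queued; fetchStart is when the browser actually
// started fetching it (this assumes no redirects in between).
function queuingTime(entry) {
  return entry.fetchStart - entry.startTime;
}

// Browser-side usage: log queuing time for each resource as it arrives.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.name, `queued for ${queuingTime(entry).toFixed(1)}ms`);
    }
  }).observe({ type: 'resource', buffered: true });
}
```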

Let's take a look at the before and after:

Before, without Priority Hints:

Without Priority Hints, the LCP candidate spent a whole 1.7s queuing!

After, with Priority Hints:

The addition of Priority Hints dropped queuing time to a mere 1.21ms!

Remember, the only thing that Priority Hints affect is Queuing time, yet if we
look at the two screenshots, we see huge variance across almost all resource
timing phases. Judging the efficacy of Priority Hints on overall time would be
quite inaccurate (we'd still arrive at the same conclusion—Priority Hints do
help improve LCP—but via the wrong workings-out).

Ultimate Ideas

There may be numerous oblique work with regards to optimising sure metrics.
In the end, particular person duties we undertake will assist with our total targets, however
whereas working (i.e. writing code) it’s essential to isolate our benchmarking
solely to the granular activity at hand. Solely later ought to we zoom out and measure the
affect these adjustments had on the top purpose, no matter that could be.

Inadvertently capturing an excessive amount of knowledge—noise—can obscure our view of the progress
we’re really making, and though we’d find yourself on the desired consequence,
it’s at all times higher to be extra forensic in assessing the influence of our work.

It's important to understand the remit and extent of the things we're changing.
It's important to benchmark our changes against only the things we're changing.
It's important to measure what you impact, not what you influence.

☕️ Did this help? Buy me a coffee!


