How Do We Measure Website Speed?
In this article, you'll learn exactly how we download and process website speed data in our monitoring at app.pagespeed.cz.
Before diving in, make sure you're familiar with the differences between the various speed measurement types (CrUX, synthetic, and RUM) and the basics of our speed monitoring.
Data from Google Users
We gather Core Web Vitals metrics (LCP, INP, CLS) from the Chrome UX Report (CrUX) for display in our reports as follows:
- In PLUS tests, we download them and add them to reports every night. This applies to Page Reports, Domain Reports, and every other place CrUX data appears in the application, for both mobile and desktop.
- In free tests, we don't download page data at all. We fetch domain data once every two days, alternating between mobile and desktop.
You can see data from the Chrome UX Report, for example, on your team's main dashboard.
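If you want to look at the same data yourself, Google exposes it through the public CrUX API. Below is a minimal TypeScript sketch of such a query; it illustrates the data source only, not our internal pipeline, and `CRUX_API_KEY` is a placeholder for your own Google API key.

```ts
// Query the public CrUX API for an origin's Core Web Vitals.
// Sketch only: this is not pagespeed.cz's internal pipeline.
async function fetchCrux(origin: string, formFactor: "PHONE" | "DESKTOP") {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${process.env.CRUX_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ origin, formFactor }),
    },
  );
  if (!res.ok) throw new Error(`CrUX API error: ${res.status}`);
  const { record } = await res.json();
  // 75th-percentile values of the three Core Web Vitals
  return {
    lcp: record.metrics.largest_contentful_paint?.percentiles.p75,
    inp: record.metrics.interaction_to_next_paint?.percentiles.p75,
    cls: record.metrics.cumulative_layout_shift?.percentiles.p75,
  };
}

fetchCrux("https://example.com", "PHONE").then(console.log);
```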
Data for monthly metric trend graphs (in the Domain Report) are downloaded once a month. The data is released every second Wednesday of the month, and we process it within a few days afterward.
Monthly metric trends in the Chrome UX Report, which we display in the Domain Report.
Synthetic Data
We obtain data from synthetic Lighthouse tests in two ways:
- In PLUS tests, we run Lighthouse on our own infrastructure several times a day. More details are provided below.
- In free tests, we download less precise data from the PageSpeed Insights API once every two days for both desktop and mobile.
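For illustration, a one-off check against the public PageSpeed Insights API (the same source our free tests use) could look roughly like this. It's a sketch under assumptions: `PSI_API_KEY` is a placeholder for your own key, and the exact parameters we send are not shown.

```ts
// Fetch a single Lighthouse result from the public PageSpeed Insights API.
// Sketch only; PSI_API_KEY is a placeholder for your own key.
async function runPsi(url: string, strategy: "mobile" | "desktop") {
  const endpoint = new URL("https://www.googleapis.com/pagespeedonline/v5/runPagespeed");
  endpoint.searchParams.set("url", url);
  endpoint.searchParams.set("strategy", strategy);
  endpoint.searchParams.set("key", process.env.PSI_API_KEY ?? "");
  const data = await (await fetch(endpoint)).json();
  return {
    score: data.lighthouseResult.categories.performance.score, // 0–1
    lcpMs: data.lighthouseResult.audits["largest-contentful-paint"].numericValue,
  };
}
```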
Let's now look at how we conduct synthetic measurements in PLUS tests.
Synthetic data is used for day-to-day insights, for example in the Speed Watchdog.
Synthetic Measurements in PLUS Tests
🔐 This type of measurement is carried out in PLUS tests.
Based on our experience with other tools during web speed consulting and numerous experiments conducted during the development of our monitoring, we arrived at the following testing method for each URL.
We test during night hours, five times in quick succession, and run the whole process once a day.
Night Hours
For short-term (days) and longer-term (months) data collection of Core Web Vitals metrics, we consider nighttime testing to be a best practice.
At night, your servers are under less load, so we can measure under calmer conditions and observe long-term trends of speed improvement or degradation. Server response speed (the TTFB metric) affects the user metrics we're interested in, such as LCP or FCP.
We've found that nighttime results are much more stable and provide better insights into metric trends over time.
If nighttime hours are inconvenient for you, such as clashing with website maintenance, you can change the test time in the test settings.
Five Times in Quick Succession
We know that one-off tests, like those performed with PageSpeed Insights in our free tests, can show inaccurate results.
Through experimentation, we've found that conducting five tests minimizes inaccuracies and maximizes the stability of the numbers. Tests are run a few minutes apart, and the exact times are always visible in the Lighthouse test details.
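To illustrate the repeat-and-aggregate idea, here is a sketch using the `lighthouse` and `chrome-launcher` npm packages that runs five tests and takes the median LCP. It demonstrates the principle only; it is not our actual pipeline or aggregation method.

```ts
// Run Lighthouse five times and take the median LCP (in milliseconds).
// Sketch of the repeat-and-aggregate principle, not pagespeed.cz's pipeline.
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

async function medianLcp(url: string, runs = 5): Promise<number> {
  const lcps: number[] = [];
  for (let i = 0; i < runs; i++) {
    const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
    try {
      const result = await lighthouse(url, { port: chrome.port });
      lcps.push(result!.lhr.audits["largest-contentful-paint"].numericValue!);
    } finally {
      await chrome.kill();
    }
  }
  lcps.sort((a, b) => a - b);
  return lcps[Math.floor(lcps.length / 2)]; // median of the five runs
}
```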
Once a Day
Each URL is thus tested within a few minutes, and these tests are run once a day, always at night.
Why Do We Test So "Little"?
We occasionally get asked why testing occurs only within a short daily time frame. Why not test synthetically every minute?
It's crucial to clarify that our monitoring is for testing user metrics like Core Web Vitals and others, not server availability or load monitoring. Other tools like Uptimerobot.com or Updown.io are designed for that purpose.
Synthetic measurements should not replace user data, which provides accurate performance information across all times of day and all user segments.
To track changes on the user side, we collect data from the Chrome UX Report and, for larger websites, deploy SpeedCurve RUM.
What Do We Test On and How Is Measurement Set?
Testing is conducted on European infrastructure from Amazon Web Services (AWS), currently from Frankfurt.
We carefully select test machines to minimize fluctuations, especially for JavaScript metrics like Total Blocking Time (TBT).
Even so, occasional fluctuations caused by the measuring infrastructure may occur. In such cases, we let you know and add automatic notes to the charts.
And how is the measurement configured? Since October 2024, throttling for both device types has been as follows:
| Device | Download | Upload | RTT (round-trip time) |
|---|---|---|---|
| Mobile | 1.6 Mbit/s | 0.75 Mbit/s | 100 ms |
| Desktop | 10 Mbit/s | 10 Mbit/s | 40 ms |
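As a reference point, the mobile profile above maps onto Lighthouse's throttling settings roughly as follows. This is a sketch assuming 1 Mbit/s = 1,000 kbit/s; our full configuration contains more than what is shown here.

```ts
// Lighthouse config sketch reproducing the mobile network profile above.
// Assumes 1 Mbit/s = 1,000 kbit/s; other settings we use are not shown.
const mobileConfig = {
  extends: "lighthouse:default",
  settings: {
    formFactor: "mobile" as const,
    throttlingMethod: "devtools" as const,
    throttling: {
      rttMs: 100,                   // used by simulated throttling
      requestLatencyMs: 100,        // per-request latency under devtools throttling
      downloadThroughputKbps: 1600, // 1.6 Mbit/s
      uploadThroughputKbps: 750,    // 0.75 Mbit/s
    },
  },
};
```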
How to Let Our Bot Through to Your Website?
You might want to test an unfinished version of your website, or perhaps a preview (such as beta, test, staging, pre-production) that's behind some form of protection.
Sometimes speed testing fails even on production websites. Our testing robot might become a "victim" of bot blocking on your infrastructure.
However, you can let our robot through using one of these methods:
- Detect the user-agent string. Our robot's user agent contains the words `Pagespeed.cz` or `Chrome-Lighthouse`.
- Detect the IP address. Our robot comes from the address `18.192.177.19` (see the sketch after this list).
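For example, an application-level allowlist could look like the following Express middleware. This is a hypothetical sketch: the middleware and the surrounding bot-blocking logic are yours, not something we provide.

```ts
// Hypothetical Express middleware letting the pagespeed.cz robot through.
import express from "express";

const app = express();
const PAGESPEED_BOT_IP = "18.192.177.19";

app.use((req, res, next) => {
  const ua = req.headers["user-agent"] ?? "";
  const isPagespeedBot =
    ua.includes("Pagespeed.cz") ||
    ua.includes("Chrome-Lighthouse") ||
    req.ip === PAGESPEED_BOT_IP;
  if (isPagespeedBot) {
    return next(); // skip bot blocking for the test robot
  }
  // ...your usual bot-blocking checks would run here...
  next();
});
```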
If your infrastructure uses a WAF (Web Application Firewall), you'll need to add a new rule that exempts our IP address, mentioned above, from blocking. This applies, for instance, to Cloudflare, Azure, and other providers.
How to Measure Websites with HTTP Authentication?
HTTP authentication (often in the form of HTTP Basic Auth) is a simple way to protect a website, requiring a username and password immediately upon page load.
It's mainly used for staging or development versions of a website, and you can utilize it similarly during synthetic testing in PLUS monitoring.
However, be aware that a website behind a password tends to be slower. Due to HTTP auth, you might see metrics like TTFB, FCP, or LCP worsen by tens of percent. We explain why below.
How to Set HTTP Basic Auth in Measurements?
Our tests can handle HTTP authentication—just enter the URL with login credentials in the tester. Access works through standard Basic Auth, just as a regular browser would log in.
Add URLs with login credentials to the test settings:
```
https://username:password@test.example.com/url
```
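Under the hood, the credentials in the URL become a standard `Authorization` header with base64-encoded `username:password`. Here's a minimal Node.js sketch of what the browser (and our robot) effectively sends:

```ts
// Basic Auth is just an Authorization header with base64("username:password").
const auth = Buffer.from("username:password").toString("base64");

await fetch("https://test.example.com/url", {
  headers: { Authorization: `Basic ${auth}` },
});
```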
Impact on Measurement Results
Websites running on staging environments often have unstable server responses. They usually run without cache, without CDN, sometimes in debug mode. The result is a slowdown of all loading speed metrics (TTFB, FCP, LCP).
HTTP authentication itself adds approximately 450–500 ms of delay based on our measurements, again impacting loading speed metrics.
You can see the impact of HTTP authentication in the HAR or trace files in the test run details, in the "Stalled" or "Request sent" phases.
How to Measure More Accurately on a Staging Server?
If you need to measure a staging or non-public version without distortion:
- Use a backdoor based on the IP address or the user-agent string, as described above.
- Set an exception for our bot in your WAF or bot protection.
This way, you'll get cleaner data that isn't delayed due to HTTP authentication.
Try our PLUS speed monitoring.
Tags: Monitoring, Monitoring PLUS, CrUX, Synthetic, Lighthouse, Developers
