
How Do We Measure Website Speed?

In this text, you'll find out exactly how we collect and process website speed data in the pagespeed.one monitoring system.

Before you proceed, ensure you're familiar with the differences between various types of speed measurements (CrUX, synth, and RUM) and understand the basics of our speed monitoring.

Data from Google Users

We collect Core Web Vitals metrics (LCP, INP, CLS) from the Chrome UX Report (CrUX) for display in our reports as follows:

  • In PLUS tests, we download and add them to reports daily during nighttime hours. This applies to both the Page Report and the Domain Report, to every occurrence of CrUX data across the application, and to both mobile and desktop.
  • In free tests, we do not download page data at all. Domain data is downloaded every two days, alternating between mobile and desktop.
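The two-day mobile/desktop rotation for free tests can be sketched as a simple date-based scheduler. This is an illustrative assumption, not our actual scheduler code; the `crux_form_factor` helper and the epoch date are hypothetical:

```python
from datetime import date

def crux_form_factor(day: date, epoch: date = date(2024, 1, 1)) -> str:
    """Pick the form factor for a free-test CrUX domain download.

    Domain data is fetched every two days, alternating between
    mobile and desktop, so each four-day window covers both.
    """
    days = (day - epoch).days
    if days % 2 != 0:
        return "none"  # no download on off days
    return "mobile" if (days // 2) % 2 == 0 else "desktop"
```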

Chrome UX Report data visible in your team's main dashboard.

Data for monthly metric development graphs (in the Domain Report) are downloaded once a month. They are released every second Wednesday of the month, and we process them within a few days afterward.

Monthly metric development in the Chrome UX Report, displayed in the Domain Report.

Synthetic Data

We obtain data from synthetic Lighthouse tests in two ways:

  • In PLUS tests, we run Lighthouse on our infrastructure multiple times daily. More details below.
  • In free tests, we download less precise data from the PageSpeed Insights API every two days for both desktop and mobile.
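For free tests, the data come from the public PageSpeed Insights API. A minimal sketch of building such a request; the `psi_request_url` helper is our own illustration, while the v5 endpoint and the `strategy` parameter come from Google's documented API:

```python
from urllib.parse import urlencode

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_request_url(page_url: str, strategy: str, api_key: str) -> str:
    """Build a PageSpeed Insights API v5 request URL.

    `strategy` selects the device profile: "mobile" or "desktop".
    """
    if strategy not in ("mobile", "desktop"):
        raise ValueError("strategy must be 'mobile' or 'desktop'")
    query = urlencode({"url": page_url, "strategy": strategy, "key": api_key})
    return f"{PSI_ENDPOINT}?{query}"
```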

Now, let's look at how we conduct synthetic measurements in PLUS tests. As of version 4.10, you can also run manual measurements (beta), set multiple measurement times (beta), and test execution is significantly faster – details in the changelog for release 4.10.

Synthetic data are used for daily information collection, for example, for the Speed Watchdog.

Synthetic Measurements in PLUS Tests

🔐 This measurement type is performed in PLUS tests.

Based on our experience with other tools during web speed consulting and numerous experiments conducted during the development of our monitoring, we've arrived at the following method for testing each URL.

We test during nighttime hours, five times in quick succession, and the entire process is run once a day.

Nighttime Hours

For both short-term (days) and longer-term (months) collection of Core Web Vitals data, we consider nighttime testing best practice.

At night, your servers are under less load, which lets us test under more stable conditions and observe long-term trends or speed regressions. Server response time (the TTFB metric) affects the user-facing metrics we focus on, such as LCP and FCP.

We have found that nighttime results are much more stable and provide more insight into metric trends over time.

If nighttime hours are inconvenient for you, for example, because tests coincide with ongoing web maintenance, you can change the test time in the test settings.

Five Times in Quick Succession

We know that single tests, like those conducted using PageSpeed Insights in our free tests, can show inaccurate results.

Through experimentation, we've determined that five tests are needed to eliminate most inaccuracies and achieve maximally stable numbers. The tests are conducted a few minutes apart, with the exact times always visible in the Lighthouse test details.
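One common way to condense repeated runs into a single stable number is to take the median, which discards outlier runs on both sides. This is an illustrative sketch, not necessarily the exact aggregation we use; `aggregate_runs` is a hypothetical helper:

```python
from statistics import median

def aggregate_runs(values: list[float]) -> float:
    """Aggregate five repeated Lighthouse runs of one metric.

    The median ignores up to two outlier runs on either side,
    which is what stabilises the reported number.
    """
    if len(values) != 5:
        raise ValueError("expected exactly five runs")
    return median(values)
```

For example, one slow outlier run (e.g., a 8.0 s LCP among four ~2 s runs) does not move the reported value at all.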

Once a Day

Each URL is tested over a span of several minutes, and these tests are conducted once a day, always during the night.

Why Do We Test So "Little"?

We occasionally get asked why testing is conducted only within a short daily time frame. Why don't we test synthetically, say, every minute?

It's important to clarify that our monitoring is designed to test user-centric metrics – Core Web Vitals and others. It is not a tool for monitoring server availability or load. Other tools, such as Uptimerobot.com or Updown.io, serve those purposes.

Moreover, synthetic measurements should not replace user data, which provide precise information about web performance – not only across all times of day but also across all user segments.

To track user changes, we collect data from the Chrome UX Report, and for larger websites, we deploy SpeedCurve RUM.

What Do We Test On and How Is the Measurement Set Up?

Testing is conducted on European Amazon Web Services (AWS) infrastructure, currently from Frankfurt am Main.

We carefully select test machines to minimize fluctuations, particularly for JavaScript metrics like Total Blocking Time (TBT).

Even so, rare fluctuations caused by the measuring infrastructure can occur. In such cases, we notify you and add automatic notes to the graphs.

And how is the measurement set up? Since October 2024, the throttling for both device types has been as follows:

Device   | Download   | Upload      | RTT (round-trip time)
Mobile   | 1.6 Mbit/s | 0.75 Mbit/s | 100 ms
Desktop  | 10 Mbit/s  | 10 Mbit/s   | 40 ms
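Expressed as Lighthouse-style throttling settings, the table translates to the following values. Field names follow Lighthouse's `throttling` configuration; wiring them into a runner is omitted, and the `THROTTLING` dict itself is our illustration:

```python
def mbit_to_kbit(mbit: float) -> float:
    """Convert Mbit/s to the Kbit/s units Lighthouse expects."""
    return mbit * 1000  # 1 Mbit/s = 1000 Kbit/s

# Throttling values from the table above, per device profile.
THROTTLING = {
    "mobile": {
        "rttMs": 100,
        "throughputKbps": mbit_to_kbit(1.6),         # download
        "uploadThroughputKbps": mbit_to_kbit(0.75),
    },
    "desktop": {
        "rttMs": 40,
        "throughputKbps": mbit_to_kbit(10),
        "uploadThroughputKbps": mbit_to_kbit(10),
    },
}
```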

What URL Do We Actually Test?

The URL you enter in the test settings might not match the URL we ultimately test. We call this the final URL.

Why does it differ? Web servers commonly perform redirects – from example.com to www.example.com, from HTTP to HTTPS, or add a slash at the end of addresses. Our tool (we call it the "redirector", more in the changelog for release 4.10) follows these redirects and finds the actual final URL, on which the test is then conducted.
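The redirector's job can be sketched as following Location headers until a URL no longer redirects. A minimal, network-free illustration, where the `redirects` map stands in for real HTTP responses and `resolve_final_url` is a hypothetical helper:

```python
def resolve_final_url(start_url: str,
                      redirects: dict[str, str],
                      max_hops: int = 10) -> str:
    """Follow a chain of redirects to the final URL.

    `redirects` maps a URL to its Location target; a URL absent
    from the map is treated as a 200 response, i.e. the final URL.
    """
    url = start_url
    for _ in range(max_hops):
        if url not in redirects:
            return url
        url = redirects[url]
    raise RuntimeError("too many redirects")
```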

Where can you find the final URL? In the application, you'll see it in tooltips over metric icons in tables (under the description "Tested URL") and in the test run details.

Why might the final URL differ between synthetic and user data? Each data source returns the final URL in its own way:

  • Lighthouse (synth)
    returns the URL where the test actually ended after all redirects. This URL usually matches the redirector output, including any query parameters.
  • CrUX API (user data)
    returns the URL for which it has real user data. It may remove some query parameters if it doesn't have sufficient data volume and performs its own redirections.

Thus, it may happen that for the same page, you see a different tested URL in the user data table (CrUX) than in the synthetic measurement table (Lighthouse).
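When comparing the two tables, it can help to normalise the URLs first. A small sketch of the kind of query-parameter stripping the CrUX API may apply; `strip_query` is our illustrative helper, not part of the CrUX API:

```python
from urllib.parse import urlsplit, urlunsplit

def strip_query(url: str) -> str:
    """Drop the query string from a URL, roughly as the CrUX API
    may do when a parameterised URL lacks sufficient traffic."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
```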

How to Allow Our Bot on Your Website?

You might want to test an unfinished version of your website or a preview (e.g., beta, test, staging, pre-production), which is hidden behind some form of protection.

Sometimes, speed testing may not succeed even on production websites. Our testing bot might become a "victim" of bot blocking on your infrastructure.

However, it's possible to allow our bot in one of these two ways:

  • Detect the user-agent string. Our bot contains words like Pagespeed.cz or Chrome-Lighthouse.
  • Detect the IP address. Our bot comes from the address 18.192.177.19.
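On the server side, an allowlist check combining both signals could look like this. This is a hypothetical sketch, not code we ship, and the substring-based user-agent check is deliberately simple:

```python
BOT_IP = "18.192.177.19"
BOT_UA_TOKENS = ("Pagespeed.cz", "Chrome-Lighthouse")

def is_monitoring_bot(remote_ip: str, user_agent: str) -> bool:
    """Allow the testing bot through by IP address or by
    known tokens in its user-agent string."""
    if remote_ip == BOT_IP:
        return True
    return any(token in user_agent for token in BOT_UA_TOKENS)
```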

If your infrastructure uses a WAF (Web Application Firewall), you'll need to add a new rule that bypasses blocking when accessing from our IP address, as mentioned above. This can happen, for example, with Cloudflare, Azure, or other providers.

How to Measure Websites with HTTP Authentication?

HTTP authentication (often in the form of HTTP Basic Auth) is a simple way to protect a website that requires a username and password right when the page loads.

It's mainly used for staging or development versions of a website, and you can use it in the same way for synthetic testing in PLUS monitoring.

However, be aware that a password-protected website tends to be slower. Due to HTTP auth, metrics like TTFB, FCP, or LCP may worsen by several percent. We'll explain why.

How to Set HTTP Basic Auth for Measurements?

Our tests can handle HTTP authentication – simply enter the URL with login credentials in the tester. Access works via standard Basic Auth, just as a regular browser would log in.

Add the URL addresses with login credentials to the test settings:

https://username:password@test.example.com/url
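Under the hood, credentials embedded in the URL become a standard `Authorization: Basic` header, exactly as a browser would send it. A sketch using only the Python standard library; `basic_auth_header` is an illustrative helper:

```python
import base64
from urllib.parse import urlsplit

def basic_auth_header(url: str) -> tuple[str, str]:
    """Turn credentials embedded in a URL into the
    Authorization header a browser would send."""
    parts = urlsplit(url)
    if parts.username is None or parts.password is None:
        raise ValueError("URL carries no credentials")
    raw = f"{parts.username}:{parts.password}".encode()
    return "Authorization", "Basic " + base64.b64encode(raw).decode()
```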

Impact on Measurement Results

Websites running on staging environments often have unstable server responses. They usually run without cache, without a CDN, sometimes in debug mode. The result is a slowdown in all loading speed metrics (TTFB, FCP, LCP).

According to our measurements, HTTP authentication itself adds approximately 450–500 ms of delay, again affecting loading speed metrics.

The impact of HTTP authentication can be seen in HAR or Tracy files in the test run details in the Stalled or Request sent phases.
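In a HAR file (HAR 1.2 records per-request phases under `log.entries[].timings`), the Stalled phase corresponds to the `blocked` field and Request sent to `send`. A sketch of summing those phases for the first request; `auth_overhead_ms` is our illustrative helper:

```python
import json

def auth_overhead_ms(har_json: str) -> float:
    """Sum the 'blocked' (Stalled) and 'send' (Request sent)
    phases of the first HAR entry, where HTTP-auth overhead
    typically shows up.  HAR uses -1 for missing phases."""
    har = json.loads(har_json)
    timings = har["log"]["entries"][0]["timings"]
    return max(timings.get("blocked", 0), 0) + max(timings.get("send", 0), 0)
```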

How to Measure More Accurately on a Staging Server?

If you need to measure a staging or non-public version without distortion:

  • Use a backdoor based on the IP address or the User-Agent string, as described above.
  • Set an exception for our bot in your WAF, as described above.

This way, you'll get cleaner data not delayed by HTTP authentication.


Try our PLUS speed monitoring.

Schedule a monitoring demo