Howdy, perfers! This week is short and sweet: a duo of videos for WebPageTest power users, presented at Velocity 2014 by Patrick Meenan, the creator of WebPageTest.org. Let me highlight some of the great tidbits I learned while watching these videos.
Running multiple tests per URL was highly recommended, as was choosing an odd number of runs. I never thought about it too hard, but this is just basic math: the median is easier to isolate when there's an odd number of runs. 1, 3, 5, 7, and 9 are the supported options.
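As a quick illustration of why odd counts are nice, here's the median pick over five hypothetical run timings:

```javascript
// With an odd number of runs, the median is always a single real run,
// so you can drill straight into its waterfall. Timings are made up:
const runs = [1800, 2100, 1950, 2400, 1875]; // load times in ms
const sorted = [...runs].sort((a, b) => a - b);
const median = sorted[Math.floor(sorted.length / 2)];
console.log(median); // 1950 (an actual run, not an average of two middle values)
```

With an even count you'd have to average the two middle runs, and that average doesn't correspond to any test you can inspect.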
There is a `medianMetric` parameter that any multi-run results page on webpagetest.org can accept. If your test was run multiple times, the param changes how your median run is computed. The `SpeedIndex` metric was demoed as a useful alternative to the default `onLoad`. Here's an example test with two different `medianMetric` values.
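The parameter is just a query string appended to a multi-run result page. A sketch with a placeholder testId (loadTime is, as far as I know, the default):

```
https://www.webpagetest.org/result/<testId>/?medianMetric=loadTime
https://www.webpagetest.org/result/<testId>/?medianMetric=SpeedIndex
```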
When viewing a filmstrip, the yellow-bordered frames indicate visual changes. This is another thing I’ve constantly overlooked, as there was no obvious indication of what they meant. But that’s very useful to know.
Staying logged in to webpagetest.org reveals your testing history. This is especially useful for a team, since a login could be created for each project or team; then all tests can be saved and grouped together conveniently. You can also edit labels after the fact (especially useful if you didn't set one to begin with).
Once you're looking at your test history, it's possible to compare old runs using the `testId`. Perhaps you want to do a side-by-side comparison of a page before and after optimizations, or do competitive analysis between different sites. Any `testId` in your history can be used to create a comparison.
For a given `testId`, you can specify how its data is compared to another test. Pat gave an interesting use case of wanting to compare pages with the same advertisement loading. He would take multiple runs and then use whichever one loaded the same ad as the previous test. Here's the full list of modifiers he revealed:
`-l:[label]` — Label: add or edit how the test is labeled in a given comparison
`-r:N` — Run: if this was a multi-run test, choose which run to use in the comparison
`-c:N` — Cached: set to 0 for First View or 1 for Repeat View
`-e:[end]` — End: set the end time using an integer number of seconds, or one of these event names:
`doc` — equivalent to `onLoad` (the default)
`full` — fully loaded
`visual` — visually complete
`all` — last visual change
Here is an example URL to give the options some context. I've added a label after the fact, used `visual` as the end milestone, and specified `SpeedIndex` as the `medianMetric`.
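The original link hasn't survived in my copy, but a comparison URL of that shape would look roughly like this (the testIds are placeholders; the modifiers are the ones listed above):

```
https://www.webpagetest.org/video/compare.php?tests=<testIdA>-l:Before-e:visual,<testIdB>-l:After-e:visual&medianMetric=SpeedIndex
```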
If you’d like to see the full video, go ahead and watch it on YouTube:
The Block tab is a quick way to simulate resources not being present on the page at all. Blocked resources fail fast, meaning they instantly return nothing. A good example is simulating no JavaScript, which is also an indicator of how fast the page initially loads without render-blocking JS. Or perhaps you'd like to measure the difference between loading ads and blocking them.
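For context, the Block field takes substrings that are matched against request URLs; I believe multiple entries are space-delimited. A hypothetical block list that kills all script plus a couple of ad hosts:

```
.js doubleclick.net googlesyndication.com
```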
Similarly, the SPOF tab allows you to simulate a timeout or complete failure. Unlike the Block tab, SPOF times out slowly. This makes it great for testing resilience to third-party content, such as an interactive social widget (Facebook, Twitter, etc.) or hosted web fonts (Typekit). Find out how your pages perform when the service is slow or completely unavailable.
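As I understand it, SPOF works by black-holing the hosts you list (I believe requests get routed to blackhole.webpagetest.org, a server that accepts connections but never responds, so each request hangs until it times out). The tab just takes hostnames, one per line; these are illustrative:

```
connect.facebook.net
platform.twitter.com
use.typekit.net
```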
Performance timing is as easy as calling `performance.measure('label')` within your application code, as documented in the User Timing spec. The custom milestones will show up in the WPT waterfalls. You'll need to polyfill this in older browsers.
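A minimal sketch of what that looks like in page code (the milestone names are made up):

```javascript
// User Timing API: mark() drops a named point on the timeline, and
// measure() records a named span between two marks. WebPageTest picks
// these entries up and overlays them on its waterfalls.
performance.mark('hero-image-loaded');
performance.mark('comments-rendered');
performance.measure('hero-to-comments', 'hero-image-loaded', 'comments-rendered');

// Entries are queryable from the same API
const [span] = performance.getEntriesByName('hero-to-comments');
console.log(span.duration); // elapsed ms between the two marks
```

The same calls work in modern browsers and in Node (via the global `performance`), which makes the milestones easy to unit test.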
WebPageTest “data center” tour
As a bonus, maybe you’ll enjoy this WebPageTest “data center” tour I found while googling the information mentioned in this post. If nothing else, it’s proof that WebPageTest is a labor of love for web performance.