The Performance Paradox

Just because a test is good at measuring one performance metric doesn't mean it's good for all of them. The other day I posted about some JavaScript Library Loading Speed Tests that were done by the PBWiki team. I drew some conclusions about JavaScript library loading speed that, I think, were pretty interesting – however, I mentioned some browser load performance results (at the end of the post) that were especially problematic. This brings up an important point about performance results:

User-generated performance results are a double-edged sword.

Assuming that there's no cheating involved (which is a big assumption), quietly collecting data from users can provide interesting results. HOWEVER – how that data is analyzed can wildly affect the quality of your results. Analyzed correctly, the data can start to paint a picture of how JavaScript libraries perform on page load; analyzed incorrectly, it might lead you to conclude that specific browsers are broken, slow, or producing incorrect results.

There are plenty of examples of misinformation relating to browsers within the “Browser Comparison” results. I'm just going to list a number of issues – showing how much of a problem user-generated browser performance data can be.

  • The results show the numbers for Opera being heavily skewed. At first glance one might assume “oh, that’s because Opera is slower at loading JavaScript files” – however, this is not the case at all. A more plausible explanation is that users were testing this site from a copy of Opera Mobile (which performs poorly compared to a desktop browser).
  • Both Safari 2 and Safari 3 are grouped together, which is highly suspect. By a number of measurements Safari 3 is much faster than Safari 2, so having these two merged doesn’t do any favors.
  • Firefox 3 only has two results. A commenter mentioned that this was because its results were being grouped into the “Netscape 6” category – which is, in and of itself, a poor grouping.
  • IE 7 is shown as being faster than IE 6. This may be the case; however, it's far more likely that users who are running IE 7 are on newer hardware (think: a new computer with Vista installed), meaning that, on average, IE 7 will run faster than IE 6.
  • Firefox, Opera, and Safari-for-Windows users are, generally, early adopters and technically savvy – meaning that they're also more likely to have high-performance hardware (giving them an unfair advantage in the results).
  • No attempt at platform comparison is made (for example, Safari on Windows vs. Firefox on Windows, and Safari on Mac vs. Firefox on Mac). Lumping the results together provides an inaccurate view of actual browser performance.
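That last point – pooling results across platforms and hardware – is a textbook instance of Simpson's paradox: the combined averages can point in the opposite direction from every like-for-like comparison. Here's a minimal sketch with invented numbers (the browsers, hardware classes, and timings below are hypothetical illustrations, not taken from the actual PBWiki data):

```python
from statistics import mean

# Hypothetical load times (ms). "Browser A" happens to be tested mostly
# on fast hardware; "Browser B" mostly on slow hardware.
results = [
    # (browser, hardware, load_time_ms)
    ("Browser A", "fast", 180), ("Browser A", "fast", 190), ("Browser A", "slow", 400),
    ("Browser B", "fast", 170), ("Browser B", "slow", 380), ("Browser B", "slow", 390),
]

def avg(browser, hardware=None):
    """Average load time for a browser, optionally filtered by hardware class."""
    times = [t for b, h, t in results
             if b == browser and (hardware is None or h == hardware)]
    return mean(times)

# Pooled across all hardware, Browser A looks faster (~257 ms vs ~313 ms)...
print(avg("Browser A"), avg("Browser B"))

# ...but on like-for-like hardware, Browser B wins in BOTH groups.
print(avg("Browser A", "fast"), avg("Browser B", "fast"))  # 185 vs 170
print(avg("Browser A", "slow"), avg("Browser B", "slow"))  # 400 vs 385
```

The skewed hardware mix, not the browser, drives the pooled numbers – which is exactly why unsegmented user-generated results are so easy to misread.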

There's one message that should be taken away from this particular case: don't trust random user-generated browser performance data. Until you control for confounding factors like platform, system load, and even hardware, it's incredibly hard to get data that is meaningful to most users – or even remotely useful to browser vendors.

Posted: February 7th, 2008
