Why Are Testing Metrics So Important?

You’ve probably been in this scenario before: you’re in a meeting and someone asks how you assess the value of your testing work. How do you measure (and therefore demonstrate ROI on) all the testing activities you’ve implemented? How are those activities tied to business outcomes? Where are the cold, hard numbers?

Why are testing metrics important?

Metrics help teams clearly define their testing goals and attach a quantifiable number to success (or failure).


You want to be able to measure and monitor your testing activities, as well as get a snapshot of your team’s test progress, productivity and quality of testing, to ensure you’re hitting your goals. Metrics give better answers to the question, “What have we tested?”


Test metrics give you the data you need to track and improve your test process, as well as to find tangible answers to questions like:

  • How long did it take us to release?
  • How many P1 (critical-level) bugs has the team found in the last three months?
  • How many bugs did the support team receive from customers in the last two releases?
  • What is the biggest bottleneck in our testing process?

What metrics are important?

The answer depends on the type of organisation you are and what your business goals are. Our customers tie their QA and development goals to their larger business (or revenue-generating) goals to ensure each department is aligned over the long term. In this scenario, a photo-sharing app targeting 16-25 year olds will arrive at a different answer than a B2B content management system that publishers use to manage their digital assets.


Below are the top five metrics we’ve helped our customers implement, which we’d recommend evaluating for your own needs:


Test Execution Coverage %: The number of test cases run compared to the total number of test cases. This gives you visibility into your ‘coverage’ relative to ‘total possible coverage’, which can be interpreted as browser/device coverage, application or feature coverage, and more.
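To make the calculation concrete, here’s a minimal Python sketch; the function name and example numbers are illustrative, not taken from any particular tool:

```python
def execution_coverage(executed: int, total: int) -> float:
    """Percentage of planned test cases that have actually been run."""
    if total == 0:
        return 0.0  # avoid division by zero when nothing is planned
    return executed / total * 100

# Example: 180 of 240 planned test cases executed -> 75.0%
print(f"{execution_coverage(180, 240):.1f}%")
```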

Number of System Outages and Length of Downtime Due to Product Errors: Measuring outages and tracking downtime periods caused by software bugs can provide insight into the quality of the application. This metric correlates strongly with the end user’s experience of your product. A post-event analysis can also determine the severity of the issue (which you can associate with a dollar amount).
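As a rough sketch of how you might quantify this, here’s some illustrative Python; the outage timestamps and cost-per-minute figure are assumptions you’d replace with your own data:

```python
from datetime import datetime

# Hypothetical outage log: (start, end) for each bug-related outage.
outages = [
    (datetime(2023, 1, 4, 9, 15), datetime(2023, 1, 4, 10, 2)),
    (datetime(2023, 2, 11, 14, 0), datetime(2023, 2, 11, 14, 38)),
]

COST_PER_MINUTE = 120  # assumed revenue impact per minute of downtime

downtime_minutes = sum(
    (end - start).total_seconds() / 60 for start, end in outages
)
print(f"{len(outages)} outages, {downtime_minutes:.0f} minutes of downtime, "
      f"~${downtime_minutes * COST_PER_MINUTE:,.0f} estimated impact")
```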


Number of Customer Complaints Directly from Product Errors (via support channels): A lot of the customers we work with report this as a key metric they want to reduce over a fixed period of time. Rising complaint volume can affect a team’s morale, so they want to address it immediately, for example via an on-demand solution that provides better test coverage than their internal testing.


Mean Time to Detect (MTTD): A lot of teams moving toward continuous delivery/deployment use this metric to gauge the effectiveness of their tools and processes. There’s also an argument that it goes hand in hand with Mean Time to Resolve (MTTR).
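A minimal sketch of the calculation, assuming you can record when each defect entered the system and when it was detected (the timestamps below are made up):

```python
from datetime import datetime
from statistics import mean

# Hypothetical (introduced, detected) timestamp pairs, one per defect.
defects = [
    (datetime(2023, 3, 1, 10, 0), datetime(2023, 3, 1, 16, 30)),
    (datetime(2023, 3, 5, 9, 0), datetime(2023, 3, 6, 11, 0)),
]

# MTTD: average elapsed time from introduction to detection.
mttd_hours = mean(
    (detected - introduced).total_seconds() / 3600
    for introduced, detected in defects
)
print(f"MTTD: {mttd_hours:.1f} hours")
```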

Time Between Development and Release: If your product only ships every few months, the market has likely moved on by release day. This is an important metric for your team to consider because releasing faster shortens your feedback loop, which lets you improve your product quality over the long term.
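Tracking this can be as simple as a lead-time calculation per release; the milestone dates here are placeholders:

```python
from datetime import date

# Hypothetical milestones for a single release.
development_started = date(2023, 4, 3)
released_to_production = date(2023, 5, 19)

lead_time_days = (released_to_production - development_started).days
print(f"Development-to-release lead time: {lead_time_days} days")
```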

One final thing: while we recommend measuring your key metrics, there’s a slippery slope once you start measuring too much. Too many metrics can be detrimental to your team and cause you to lose focus on the end goal. We’ve heard of companies that made decisions based on the wrong observations. My advice: keep it simple, and don’t give people too much to focus on.


What’s been your experience with metrics being used to show value?