Sunday, August 1, 2010

Tracking Test Metrics

Tracking test metrics is an important and integral part of a test lead's job. It helps track testing progress and trends, and supports telling a story with data. The data is always there; the lead's responsibility lies in determining what to collect and how to present it so as to tell the testing story to the stakeholders, management, project team, and so on.

What to collect?
This question is not as complicated as it looks. It boils down to three things: progress, trend, and quality.

Progress covers what you planned for the release and where you stand on a weekly basis. It is based on two things: test case creation and test case execution. Compare the number of test cases planned versus the number actually created, and the number of test cases planned for execution versus the number actually executed, along with their status (pass/fail).

Some examples of metrics for gathering Progress are
  • Planned Test Cases Created versus Actual Test Cases Created
  • Planned Test Cases Executed versus Actual Test Cases Executed
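As a minimal sketch of how these progress metrics work, the comparison is a simple ratio of actual to planned counts. The numbers below are made up for illustration:

```python
# Hypothetical weekly snapshot: planned vs. actual test case counts
# (illustrative numbers only).
planned_created, actual_created = 120, 95
planned_executed, actual_executed = 80, 72

# Progress is the actual count as a percentage of the plan.
creation_progress = actual_created / planned_created * 100
execution_progress = actual_executed / planned_executed * 100

print(f"Creation progress: {creation_progress:.1f}%")    # 79.2%
print(f"Execution progress: {execution_progress:.1f}%")  # 90.0%
```

Reporting both ratios week over week makes it easy to see whether the team is falling behind on writing tests, running them, or both.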
Trend is more about the direction testing is taking and what you can predict from the metrics. What does the weekly execution data tell you as a lead? Do you see more failures for a new feature or an existing feature? Do you see more test case execution (productivity) when you get a build early in the week than when you get a build midweek? It's all about letting the data talk to you. At times you won't even see a trend until you view the same data in a different context.

Some examples of metrics for gathering Trend are
  • Weekly Test Cases Executed
  • Weekly Test Cases Created
  • Cumulative Defect Density
  • Weekly Defect Density
  • Open and Closed Defects Per Week
  • Defect Age
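Of the trend metrics above, defect age is the one that most often needs spelling out: it is the time from when a defect is opened until it is closed (or until today, for defects still open). A small sketch, using made-up dates:

```python
from datetime import date

# Hypothetical defect records as (opened, closed) date pairs;
# closed is None for defects that are still open.
defects = [
    (date(2010, 7, 5), date(2010, 7, 9)),
    (date(2010, 7, 6), date(2010, 7, 20)),
    (date(2010, 7, 12), None),  # still open
]

today = date(2010, 7, 26)

# Age in days: open-to-close, or open-to-today for unresolved defects.
ages = [((closed or today) - opened).days for opened, closed in defects]

print("Defect ages (days):", ages)  # [4, 14, 14]
print(f"Average age: {sum(ages) / len(ages):.1f} days")
```

A rising average age can indicate that defects are piling up faster than the team can close them, which is exactly the kind of trend these metrics are meant to surface.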
Quality boils down to the defects the testing team finds. Defect data can be sliced and diced in several different ways, and each view says something about the quality of the product. Based on the data gathered, a lot can be said about the requirements, code, design, product, test cases, testing process, customer, and so on. Analysis of defects can tell the organization a great deal about how well we are doing our job and can also expose weaknesses that can be rectified before the product goes out the door. Finding defects is in no way a negative for any one team or department; it's simply a way to judge how we are doing.

Some examples of metrics for gathering Quality are
  • New Defects found per week
  • Defects closed per week
  • Defects found per feature
  • Defects found per build
  • Defects found by Severity
  • Defects found by Priority
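The "slicing and dicing" described above is essentially grouping and counting the same defect list along different dimensions. A minimal sketch, with an invented defect list and feature names:

```python
from collections import Counter

# Hypothetical defect list as (feature, severity) pairs,
# purely for illustration.
defects = [
    ("login", "high"), ("login", "low"), ("search", "high"),
    ("search", "medium"), ("search", "low"), ("checkout", "high"),
]

# The same data, sliced two different ways.
by_feature = Counter(feature for feature, _ in defects)
by_severity = Counter(severity for _, severity in defects)

print("Defects per feature:", dict(by_feature))
print("Defects by severity:", dict(by_severity))
```

Each slice answers a different question: per-feature counts point at risky areas of the product, while per-severity counts speak to how serious the remaining problems are.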


Dave Doble said...

How do you define defect density?

shilpa said...

The standard industry definition of defect density is the number of defects divided by the code size. The goal of this metric is to measure the amount of rework required. Since testers usually don't have access to the code to determine the number of defects found per line of code, they have to be a bit more creative in gathering this data. Some examples would be
1. defects found per build / test cases executed per build
2. defects found per feature / test cases per feature
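As a quick sketch of the first proxy formula in the comment above, with made-up per-build numbers:

```python
# Hypothetical per-build counts, illustrating defect density as
# defects found per build divided by test cases executed per build.
builds = {
    "build_101": {"defects": 8, "tests_executed": 200},
    "build_102": {"defects": 3, "tests_executed": 150},
}

for name, b in builds.items():
    density = b["defects"] / b["tests_executed"]
    print(f"{name}: {density:.3f} defects per test executed")
```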
