Saturday, March 23, 2013

Software “Test Confidence Report” [Test Sign-Off]

I would like to share something new that I learnt from one of my recent engagements. Thanks to Anand Prabhala, my Project Manager, who triggered this chain of thoughts by asking one simple question: “Raj, keeping all the bug metrics and test execution reports aside, as a test lead, what is your ‘test confidence’ to release this application into UAT?”

If you are in testing, I am sure you would have found yourself in similar situations many times before. That was a déjà vu moment, and I found myself saying something like, “Well, it depends. I can’t tell you that, as it’s very subjective; it totally depends on an individual’s perception of quality, and whatever I might say would be my own opinion, not necessarily the opinion of my test team. You should look at my test execution report and bug metrics as the true indicators.”

I knew that he had a point. Metrics are good, but we should be able to convert them into something really meaningful and actionable, so that we can take a decision with confidence and conviction. As a test lead, providing any number of test metrics is not enough if you can’t take a confident decision by looking at them. That’s just half the job done.

I thought it was going to be a no-brainer: call a meeting, ask my test team to vote, and we would know our test confidence. Funny though it may sound, you may end up even more confused, as confidence is highly subjective and can’t be arrived at by just doing a poll. Besides, it wouldn’t be fair, as confidence is like temperature: it can fluctuate drastically based on circumstances, moods, emotions, pressure and state of mind on a given day.

Most of the time there is a strong correlation between a tester’s confidence and metrics like failed tests and active bugs, but there can be exceptions. Imagine a functionality that is working well, but a few eyesores have been bothering the tester for quite some time, and they don’t show up as alarming in the standard metrics like the percentage of test cases passed or the number of critical/high-severity bugs. Test confidence could still be low if the tester believes that end users are going to hate it and it must be fixed. In other words, we can’t conclude that test confidence is “high” just because 90% of test cases passed or there are no S1/S2 issues; that’s where we need to give weightage to the tester’s feeling about the quality of that feature. On the contrary, we could also have scenarios where more test cases failed and the metrics look terrible, but we know those failures are not coming in the way of UAT testing, and hence test confidence could be high or medium rather than low.

A lot of times such stories don’t come out by just looking at plain numbers. Remember, we are now talking about an application that typically consists of a myriad of features. Your brain can’t be expected to accurately take into account your test confidence for each feature and do an intelligent summation of overall test confidence for you. This only gets more complicated when those features are owned by a team of testers; now the test lead’s job gets tougher, having to gather test confidence from all the testers and decide on the overall test confidence level of the team.

In reality, it might just be a matter of finding the real culprits that are bringing down your test confidence and targeting them. You will be surprised how swiftly those issues get fixed once they are identified and prioritized; the trick is to identify them.

I thought, why don’t we add subjectivity to objectivity, as that was the missing ingredient? Let’s start measuring test confidence for each feature while looking at the real metrics for that feature. We took the TFS out-of-the-box requirement traceability report and added just one simple field called “Test Confidence”. The testers who owned each feature started assigning a test confidence of High, Medium or Low. This is a marriage of test confidence and the industry-standard requirement traceability report. :-)

[Screenshot: TFS requirement traceability report with the added Test Confidence column]

Finally, you can just add up the number of features that had high, medium and low confidence and calculate the percentages. For example, now I can say I am 87% highly confident, 12% medium confident and 1% low confident.
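The summation above is straightforward to automate. As a minimal sketch (the feature names and labels here are made up for illustration, not from the actual report), assuming the per-feature confidence labels have been exported from the traceability report:

```python
from collections import Counter

# Hypothetical per-feature Test Confidence labels collected from testers.
feature_confidence = {
    "Login": "High",
    "Search": "High",
    "Checkout": "Medium",
    "Reports": "Low",
    "Profile": "High",
}

# Count how many features fall under each confidence level,
# then convert the counts into percentages of all features.
counts = Counter(feature_confidence.values())
total = len(feature_confidence)
summary = {level: round(100 * counts[level] / total)
           for level in ("High", "Medium", "Low")}
print(summary)  # e.g. {'High': 60, 'Medium': 20, 'Low': 20}
```

With the real report, the same three percentages fall out directly; the only judgment call is the High/Medium/Low label each tester assigns per feature.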

Note: This way, I am not simply saying whether I am confident or not. My test confidence is not a Boolean value anymore.

[Screenshot: test confidence summary showing the percentage of features at each confidence level]

We passed this report on to our developers to prioritize bug fixing, with the aim of increasing test confidence wherever it was low or medium. Now our development team didn’t have to worry about a lot of things like severity, priority, stack ranking, usability issues and so on; we made their life easy by giving them a single indicator. This also helped our customers prioritize their testing and know which features were not ready yet.

Do you want to use it in your next engagement? Let me know by posting your comments.