I would like to share something new that I learnt from one of my recent engagements. Thanks to Anand Prabhala, my Project Manager, who triggered this chain of thought by asking one simple question: “Raj, keeping all the bug metrics and test execution reports aside, as a test lead, what is your ‘test confidence’ to release this application into UAT?”
If you are in testing, I am sure you have found yourself in similar situations many times before. It was a déjà vu moment, and I found myself saying something like: “Well, it depends. I can’t tell you that, as it’s very subjective; it depends entirely on an individual’s perception of quality, and whatever I say would be my own opinion, not necessarily the opinion of my test team. You should look at my test execution report and bug metrics as the true indicators.”
I knew he had a point. Metrics are good, but we should be able to convert them into something really meaningful and actionable, so that we can take a decision with confidence and conviction. As a test lead, providing any number of test metrics is not enough if you can’t take a confident decision by looking at them; that is only half the job done.
I thought it would be a no-brainer: call a meeting, ask my test team to vote, and we would know our test confidence. Funny though it may sound, that may leave you even more confused, as confidence is highly subjective and can’t be arrived at by a simple poll. Besides, it wouldn’t be fair, because confidence is like temperature: it can fluctuate drastically with circumstances, mood, emotions, pressure and state of mind on a given day.
Most of the time there is a strong correlation between a tester’s confidence and metrics like failed tests and active bugs, but there can be exceptions. Imagine a feature that works well, but has a few eyesores that have been bothering the tester for quite some time; standard metrics like the percentage of test cases passed or the number of critical/high-severity bugs capture nothing alarming about it. The tester’s confidence could still be low, because she believes the end user is going to hate it and it must be fixed. In other words, test confidence can’t be concluded as “high” just because 90% of test cases passed or there are no S1/S2 issues; we need to give weight to the tester’s feeling about the quality of that feature. On the contrary, there are scenarios where more test cases failed and the metrics look terrible, but we know those failures won’t get in the way of UAT testing, so test confidence could be high or medium, not low.
A lot of the time, such stories don’t come out by just looking at plain numbers. Remember, we are now talking about an application that typically consists of a myriad of features. Your brain can’t be expected to accurately take your test confidence for each feature into account and do an intelligent summation for you. It only gets more complicated when those features are owned by a team of testers; the test lead’s job becomes tougher, having to gather test confidence from all the testers and decide on the overall test confidence level of the team.
In reality, it might just be a matter of finding the real culprits that are bringing down your test confidence and targeting them. You will be surprised how swiftly those issues get fixed once they are identified and prioritized; the trick is to identify them.
So I thought: why don’t we add subjectivity to objectivity, since that was the missing ingredient? Let’s measure test confidence for each feature alongside the real metrics for that feature. We took the TFS out-of-the-box requirement traceability report and added just one simple field called “test confidence”. The testers who owned each feature assigned its test confidence as high, medium or low. It is a marriage of test confidence and the industry-standard requirement traceability report. :)
Finally, you can just add up the number of features with high, medium and low confidence and calculate percentages. For example, now I can say I am 87% highly confident, 12% medium confident and 1% low confident.
Note: this way, I am no longer forced to say simply that I am confident or not. My test confidence is not a boolean value anymore.
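To make the roll-up concrete, here is a minimal sketch in Python. The feature names and ratings are made-up examples, and the dictionary merely stands in for an export of the traceability report with the new field; none of this is TFS code:

```python
from collections import Counter

# Hypothetical export of the traceability report: one "test confidence"
# rating per feature, as assigned by the tester who owns it.
feature_confidence = {
    "Login": "high",
    "Search": "high",
    "Checkout": "medium",
    "Reports": "low",
    "Profile": "high",
}

def confidence_summary(ratings):
    """Roll per-feature ratings up into a percentage per confidence level."""
    counts = Counter(ratings.values())
    total = len(ratings)
    return {level: 100.0 * counts.get(level, 0) / total
            for level in ("high", "medium", "low")}

print(confidence_summary(feature_confidence))
# {'high': 60.0, 'medium': 20.0, 'low': 20.0}
```

With a real application’s worth of features, this is what produces statements like “87% high, 12% medium, 1% low” instead of a single yes/no.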
We passed this report on to our developers to prioritize bug fixing, with the aim of increasing test confidence wherever it was low or medium. Our development team no longer had to worry about a lot of things like severity, priority, stack ranking and usability issues; we made their life easier by giving them a single indicator. This also helped our customers prioritize their testing and know which features were not ready yet.
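The same data can drive that single indicator for developers. Here is a quick sketch, continuing the hypothetical `feature_confidence` mapping from the previous snippet, that puts the low- and medium-confidence features at the top of the fix list:

```python
# Continuing the sketch above (same hypothetical feature_confidence dict):
# order features so the ones dragging confidence down come first.
rank = {"low": 0, "medium": 1, "high": 2}

fix_list = sorted(feature_confidence, key=lambda f: rank[feature_confidence[f]])
for feature in fix_list:
    print(feature, "->", feature_confidence[feature])
# Reports -> low, Checkout -> medium, then the high-confidence features
```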
Do you want to use it in your next engagement? Let me know by posting your comments.