Wednesday, November 12, 2008

Reduce the number of invalid defects -> improve test productivity and efficiency, and keep your developers happy

Just the other day we were discussing how we (the testing team) can reduce the number of invalid defects. It got me thinking hard about why it is really important to reduce the number of invalid defects.

    • Isn't it a tester's fundamental job to log every potential defect, let it go through the normal defect life cycle, and let business and management decide whether it is a valid defect or not?
    • Isn't it right that a tester shouldn't assume something is not a bug and then regret it later because of a false assumption?
    • Isn't a tester taught to think negatively, always be suspicious, and uncover what isn't seen by someone like a developer?

 

The point here is: what is the big fuss if the testing team raises "invalid" defects unknowingly? At least they don't leave anything to assumptions, which is far more dangerous. Their primary job is to find defects; whether a defect is "valid" or not is a secondary question.

One of the testers I am mentoring complained that his testing team had found 108 defects, of which 104 were valid and only 4 were invalid, but his management didn't seem to appreciate the number of valid defects found, as they were expecting the number of invalid defects to be zero.

My take on this:

Yes, it is important to reduce the number of invalid defects.

Why?

-> Test metrics get skewed (Test Effectiveness, or let's say Test Productivity, goes down with the number of invalid defects)

            Test Effectiveness = No. of Valid Defects / (No. of Valid Defects + No. of Invalid Defects) x 100%

            Example for the above: 104 / (104 + 4) x 100% = ~96.3%
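For illustration only, here is a minimal sketch of that calculation in Python (the function name test_effectiveness is mine, not from any tool mentioned in this post):

    # A minimal sketch of the valid-defect ratio described above.
    def test_effectiveness(valid: int, invalid: int) -> float:
        """Percentage of logged defects that turned out to be valid."""
        total = valid + invalid
        if total == 0:
            return 0.0  # nothing logged yet, so the ratio is undefined; report 0
        return valid / total * 100

    # The example from the post: 104 valid and 4 invalid defects -> ~96.3%
    print(f"{test_effectiveness(104, 4):.1f}%")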

-> Time lost in tracking and logging invalid bugs

When you raise a bug in your reporting tool, like Test Director, it has to go through the complete bug lifecycle. Say you raised a bug, spending effort recording it, and then it turned out to be invalid; your developer rejects it, and finally you have to close it.

-> Management doesn't like invalid bugs

You can bet that "invalid" bugs don't please any manager. It is human behaviour to criticize something that is not right, and it sets them off.

-> Developers stop taking you seriously

When they observe that you raise many invalid bugs, they start expecting that every time. They stop paying due attention even to valid bugs, assuming them to be invalid as well. Quality over quantity.

-> Time lost in triage meeting to discuss invalid bugs

When you log an invalid bug, it is not only your time that gets lost: developers waste their time reading it, testers and developers then waste time arguing over it since it has been officially logged, and, most importantly, the business wastes time in the triage meeting taking a call on it.

-> Spoiling terms with the development team

Developers are under pressure to reduce the number of defects the test team finds in their code. So if you log one, they get defensive and try their best to prove your bug is invalid so that it doesn't spoil their commitments.

 

Now that we have established the problem, let me propose something which we successfully implemented:

 

[Diagram: defect review cycle]

 

Now, with this process, every bug gets verified online by the development team even before we officially log it. They update the sheet saying that they are okay with such-and-such bugs, and we log only those bugs in the bug management tool, hence all "VALID" bugs.

For bugs which they mark as "INVALID" or "REJECTED", we update the SHARED SHEET with more information, like repro steps, and they change the status in the sheet accordingly. If it was our fault and it actually was an invalid defect, we update the sheet and close it there itself, rather than logging it in the bug management tool and going through the entire process.

Now our metrics always show 100% valid bugs. We don't miss any bugs, because we record everything in the shared sheet anyway and triage it with the development team online. The development team feels good, as they get a chance to repro a bug and confirm it before it is actually logged against them. We don't have to waste time in triage meetings discussing whether something is a bug or not. And the business only takes calls on functional bugs, which are what matter most to the end user.
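Purely as an illustration of how such a shared sheet might work (the column names, statuses, and example bugs below are my own assumptions, not a prescribed format), here is a small Python sketch of the filtering step:

    # Hypothetical shared review sheet: each row is a candidate bug reviewed by dev
    # before anything is logged in the bug management tool.
    candidate_bugs = [
        {"id": 1, "summary": "Login fails for expired password", "dev_status": "VALID"},
        {"id": 2, "summary": "Report total rounded differently", "dev_status": "INVALID"},
        {"id": 3, "summary": "Crash on empty file upload", "dev_status": "NEEDS MORE INFO"},
    ]

    # Only dev-confirmed rows are logged in the bug management tool.
    to_log = [bug for bug in candidate_bugs if bug["dev_status"] == "VALID"]

    # Rejected or unclear rows stay in the sheet; testers add repro steps and re-triage.
    to_clarify = [bug for bug in candidate_bugs if bug["dev_status"] != "VALID"]

    print("Log in bug tool:", [bug["id"] for bug in to_log])
    print("Update sheet with repro steps:", [bug["id"] for bug in to_clarify])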

11 comments:

Shrini Kulkarni said...

Raj,

In my opinion, it would not be wise or fruitful to call a bug "invalid". Classifying bugs in a strictly binary way as valid and invalid appears to me to be a vague and not-so-useful idea.

What would be an invalid bug? Developers love to, and managers quickly jump out of their seats to, show the stick to testers for logging "invalid" bugs. Let us face it: the world is not so perfect (developers and managers included) that something can always be called valid or invalid.

Let us examine when a bug gets called invalid (pulling from my own experience at Microsoft IT):

1. Developers tend to call a bug invalid if it is a feature.

2. Developers tend to call a bug invalid if the behaviour of the application is "as designed" ("correct as per the design").

3. Bugs that are logged with insufficient details to reproduce

4. Corner cases: "no one would do that" kind of bugs.

5. Intermittently reproducible bugs

6. Bugs due to incorrect configuration of application

7. Bugs due to incorrect data in the database.

8. Bugs due to a "wrong" installation procedure or "wrong" build being used.

If you analyse these cases, you will understand what could be happening with your team.

It is likely that testers, due to a lack of understanding of application features, might confuse a feature with a bug. Coaching and training the testers on application features can solve this problem. But hold on... do not jump to the conclusion that the tester has to fall in line. One way to look at such "not a bug but a feature" or "works as expected" kinds of bugs is to ask: why did the tester get confused? Can the same thing happen to the end user? Is it a problem with the application's usability?

While it is easy to dismiss a bug as invalid and show the stick to the tester, every invalid bug has a valid story or a reason that made the tester believe "it could be a problem".

As a test lead or a manager, it is your responsibility to explore this untold story and steer the discussion towards what can be improved in the application.

Then, there are the "non reproducible" bugs... I would suggest every tester read James Bach's blog post on "Investigating intermittent problems":

http://www.satisfice.com/blog/archives/34

Developers can quickly push such problems under the carpet by saying "non reproducible". Pay attention to such so-called "invalid" bugs.

Also, testers should defend bugs that are corner cases, the "no one would do that" kind of bugs. Developers and PMs in particular are irritated by such "not likely" bugs. Their view is that "these are not worth fixing", but that should not make them invalid bugs, right?

Your process of checking with developers before logging a bug appears to me to be a clear overhead. You seem to be over-emphasizing the role of the developer. Formalizing a process where the tester needs to take the developer's permission to log a bug looks like an "unhealthy" practice to me.

While a developer can surely help testers identify potential bugs and can add value to bug reports, he/she cannot DICTATE the bug logging process.

If your team is co-located, dev and test together, you can have this as an informal hallway discussion. A tester can walk up to a developer's desk and say, "Hey, here is a strange thing happening -- looks like a bug... want to see?" Such a conversation can happen even with the test lead (the test lead is the first and most important coach for testers).

If I were a test lead, I would not worry much about those "mindless" numbers called "metrics". Given a metric, I can make up any story I want to paint a situation in a way that is favourable to me. It is like corporate accounting: by properly shuffling the numbers between income and expense heads, loss-making companies can be shown as profit-making. More than that, as a test lead I would worry about the credibility of the test team, the value of the information it provides, and the way it helps the PM to ship reasonably "good enough" software at a viable cost and schedule.

In short: developers are not GODs passing judgement on testers' bugs, and bugs cannot be narrowly classified as valid and invalid. Where such a classification exists and is valued, I would look at patterns in the invalid bug list and attempt to get the real story behind them. I would not worry about test metrics being "skewed"; instead I would worry about the quality of information provided by the test team.

Finally, encourage informal communication between tester, test lead, developer and PM (in that order)...

Hope this helps ...

Shrini

pratik said...

Well, one thing that you missed out is that an invalid bug eats into dev productivity too, as many times the dev ends up doing a fair amount of analysis on it.

Raj said...

@Pratik: Couldn't agree more; you are absolutely right. I would even go as far as saying that it reduces the productivity not of dev and test alone, but of analysts, the PM, and the business as well, when triaging and discussing something that is clearly not a bug.

Raj said...

@Shrini: Loved the way you have put down your thoughts. I would recommend everyone go through James's post: http://www.satisfice.com/blog/archives/34.

I can't agree more with Shrini that valid or invalid is largely a matter of perception; many a time people perceive things differently, and that makes the difference.

The valid-vs-invalid debate is analogous to the right-vs-wrong argument: given the right context, the very thing that is wrong can become "right" as well. That is why it is important for testers to put themselves in the end user's shoes and understand the behaviour from the end user's point of view before making a call.

But many times, as Shrini suggested, a tester's wrong understanding of a feature gets reported as a bug, and after logging it turns out to be invalid; along with that, it brings a bad taste and lower productivity for the entire team.

Now, the post was trying to propose a mechanism which is more like a "clarification" method to make sure that it is not your misunderstanding but an actual defect. Let's call it "review by dev" and NOT "permission by dev".

Actually, I have met many good developers who really care about the quality of the product, and I feel that every time a bug is found there is nothing wrong in cross-validating it with them to ensure it is not your own misunderstanding. My experience says it is not about "taking permission from dev"; it is more like working together as one team.

Not just that, it builds great trust between the dev and test teams. When you start talking to them the way Shrini suggested, or through the online bug triage (suggested in the original post), you start forming a great understanding between the teams.

It has happened to me that a couple of times I uncovered Severity 1 bugs in the dev environment while the developer was discussing his doubts about the functionality. He was a happy man, as he could deliver a stable build, and we were even happier that we didn't have to find those bugs later when we could find them sooner.

Now, the last part: as per the process suggested in the post, if dev and test disagree on a certain potential bug, we don't leave it as an "invalid" bug; rather, we shortlist such issues and clarify them with the business and the end users. After their confirmation, we log it in the tool, and hence a higher number of valid defects.

Shrini Kulkarni said...

>>> Now, the post was trying to propose a mechanism which is more like a "clarification" method to make sure that it is not your misunderstanding but an actual defect. Let's call it "review by dev" and NOT "permission by dev".

Fine... then do not make the process formal (keep it lightweight) and have the first round of review done by the test lead. Unless the proportion of invalid bugs is more than 80%, do not make "dev review" mandatory. It slows down not only the tester but also the developer. At times it can be a major distraction for the developer.

It is also important not to introduce "inertia" into the whole group in the name of process... Make sure everything that you do adds value. As the agile manifesto says, wherever possible prefer personal communication and notes to formal documentation (minutes of meetings, etc.).

Shrini

Raj said...

@readers: Please take Shrini's suggestions very seriously. I couldn't agree more with them.

Ruturaj said...

Thanks, Raj and Shrini, for shedding light on test productivity and ways to achieve it. I enjoyed reading both views equally, and I believe in following both approaches in order to achieve the ultimate goal of delivering less error-prone software. I know how simple things become when we believe in the power of informal communication; at the same time, I believe in having the stakeholder/client involved in critical bugs which are kept as 'Not A Bug'.

In many cases, even the client won't be able to understand the value of such a bug, which could potentially be future functionality, or functionality which must be extended in order to be usable.

Arjan Kranenburg said...

I don't think your definition of Test Effectiveness is a good one:
Test Effectiveness = No. of Valid Defects / (No. of Valid Defects + No. of Invalid Defects) x 100%

Was your testing effective if you reached 100%? Or 10%? Does that mean you've tested enough, too much, or too little?

I think this would reflect Test Effectiveness better:
Test Effectiveness = No. of Defects Found / (No. of Defects Found + No. of Defects Not Found) x 100%

Of course, the number of defects not found is difficult to determine, because they are not found. But if you apply this to the test phase, you can use the defects found in later phases for this metric.
Some companies also include the defects found in the first X months of operation for this.
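As a hedged illustration of this alternative (the numbers below are invented, not from the post): if the test phase found 90 defects and 10 more surfaced in later phases or early production, the metric works out like this:

    # Invented numbers to illustrate the escaped-defect version of the metric.
    found_in_test = 90
    found_later = 10   # e.g. defects found in UAT or the first months of operation

    effectiveness = found_in_test / (found_in_test + found_later) * 100
    print(f"{effectiveness:.1f}%")  # 90.0%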

Further, try to base the metrics on Defects and not on Findings. There is a subtle difference.

Just my 2cents...

/Arjan

Raj said...

@Arjan: Thanks for bringing this up. I was wondering why I didn't receive any comments on this part of the post.

Actually, you are right and wrong at the same time. You are confusing Test Effectiveness with Test Efficiency. The definition you have provided is the industry-wide accepted definition of Test Efficiency, not Test Effectiveness.

Let me also say that there is no definition of test effectiveness or test efficiency which holds good in every scenario or is followed consistently across all companies.

This version of test effectiveness says that when you reduce the wastage, in terms of effort and time spent on invalid bugs, your productivity goes up and your testing becomes more effective.

So having test effectiveness alone is not sufficient; test efficiency is equally important, where you calculate the ratio of the number of bugs found to the total number of bugs.

Now, how do we talk in numbers? If we say test effectiveness is 100%, it doesn't mean you have uncovered 100% of the bugs. It just means that your effort is going in the right direction and you are making an impact.

If it is 10%, that implies 90% of your testing effort is going in vain, spent on something which is not essential.
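To make the distinction concrete, here is a small hedged sketch with invented numbers; it computes both ratios side by side, with effectiveness as defined in the post and efficiency as the found-vs-escaped ratio described above:

    # Invented numbers: 104 valid and 4 invalid bugs logged; 12 more bugs escaped to later phases.
    valid, invalid, escaped = 104, 4, 12

    effectiveness = valid / (valid + invalid) * 100   # how much effort went into real bugs
    efficiency = valid / (valid + escaped) * 100       # bugs caught vs. bugs that escaped

    print(f"Effectiveness: {effectiveness:.1f}%")  # ~96.3%
    print(f"Efficiency:    {efficiency:.1f}%")     # ~89.7%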
