Thursday, February 13, 2014

Visual Studio TFS - Bug Reactivation Count Report


Author: Ranjit Gupta & Raj Kamal

Background:

Visual Studio provides a nice reactivation report out of the box to help you determine how effectively the team is fixing bugs. This report helps you answer questions such as “How many bugs have been reactivated in the current iteration?” or “Is the team resolving and closing reactivated bugs and stories at an acceptable rate?”. However, it doesn’t go into details if the team wants a reactivation report at the work item level to answer a follow-up question such as “How many times have bugs been reactivated, and which bugs have been reactivated more than X number of times?”. This kind of report can help teams take corrective action, since bug reactivation is rework and a waste of effort, time and money.

Solution:

There is also a related MSDN discussion thread on a similar topic. If you came to this blog looking for a solution to this very problem, the good news is that there is now an add-in that we have published on the Visual Studio Gallery, which you can download and use to get this kind of report in CSV format (openable in Excel) for further analysis.

In addition, this blog post explains the logic used to generate the report, so you can customize it for your specific needs if you like.

Details of the solution (Walkthrough)

Our add-in uses the TFS Client Object Model to retrieve this information. The solution can be broken down into the following simple steps:

1.  Retrieve the IDs of all bugs and store them in a work item collection. You need to provide the TFS “TeamProject” name as a user parameter. You can also specify the iteration path if you are specifically interested in getting this report for a given iteration.

 

string wiqAllBugs = "SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = '{0}' AND [System.WorkItemType] = 'Bug' AND [System.IterationPath] UNDER '{1}'";

string tfsQuery = string.Format(wiqAllBugs, projName, iteration);
WorkItemCollection wiBugCollection = store.Query(tfsQuery);

 

2.      For each bug, iterate through the revision history and look for the text “Edited (Active to Resolved”, “Edited (Resolved to Active” or “Edited (Closed to Active”.

The logic is to look for bugs that changed from Resolved to Active or from Closed to Active, and count the number of times this happened for each bug as its reactivation count metric. We also capture the Active to Resolved transition to record information such as “resolved by” and “resolved date” that will help in further investigation.

 

foreach (Revision revision in wItem.Revisions)
{
    if (revision.GetTagLine().Contains("Edited (Active to Resolved"))
    {
        // Retrieve the information required, such as resolved date, resolved by, etc.
    }
    else if (revision.GetTagLine().Contains("Edited (Resolved to Active") ||
             revision.GetTagLine().Contains("Edited (Closed to Active"))
    {
        // Increase the counter every time to get the total reactivation count.
    }
}

 

3.      Finally, check whether the bug reactivation counter is greater than X (or 0, to find bugs that were reactivated at least once) and print those bugs to console/CSV/HTML.
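The step above is a simple threshold filter. Here is a minimal language-agnostic sketch in Python (the C# version follows the same shape); `reactivation_counts`, mapping bug IDs to the counts gathered in step 2, is a hypothetical name for illustration:

```python
import csv
import io

def reactivated_bugs(reactivation_counts, threshold=0):
    """Return (bug_id, count) pairs for bugs reactivated more than `threshold` times."""
    return sorted(
        (bug_id, count)
        for bug_id, count in reactivation_counts.items()
        if count > threshold
    )

def to_csv(rows):
    """Render the filtered rows as CSV text with a header line."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["BugId", "ReactivationCount"])
    writer.writerows(rows)
    return buf.getvalue()

# Example: bugs 101 and 103 were reactivated; bug 102 never was.
counts = {101: 3, 102: 0, 103: 1}
print(to_csv(reactivated_bugs(counts, threshold=0)))
```

Raising the threshold is all it takes to narrow the report to chronically reactivated bugs.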

 

4.      Our add-in generates a CSV report with this information.

 

You can tweak this logic as per your needs. We hope you find this quick workaround useful.

MTM ( Microsoft Test Manager) - Getting latest results as email notification


Author: Ranjit Gupta and Raj Kamal

Today, MTM (Microsoft Test Manager) doesn't provide an email report with the latest test results for a given test plan. However, you can build one yourself using the snippets of code below and configure it for your test team and other stakeholders. We hope you find this useful. Your comments are welcome.

1.      Query the test plan

ITestPlanCollection mTestPlanCollection = testProject.TestPlans.Query(string.Format("Select * From TestPlan where PlanName = '{0}'", testPlan));

2.      Getting the status of the desired Test Plan

 

ITestPointCollection teamtestPass = testplan.QueryTestPoints("SELECT * FROM TestPoint WHERE LastResultOutcome = 'Passed'");
ITestPointCollection teamtestFail = testplan.QueryTestPoints("SELECT * FROM TestPoint WHERE LastResultOutcome = 'Failed'");
ITestPointCollection teamtestBlocked = testplan.QueryTestPoints("SELECT * FROM TestPoint WHERE LastResultOutcome = 'Blocked'");
ITestPointCollection teamtot = testplan.QueryTestPoints("SELECT * FROM TestPoint");

teampass = teampass + teamtestPass.Count;
teamfail = teamfail + teamtestFail.Count;
teamblock = teamblock + teamtestBlocked.Count;
teamtotal = teamtotal + teamtot.Count;

 

teamtestPass contains all the test points whose outcome is Passed, so the queries above give you the status of your test plan.
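From those counts you can derive a one-line summary for the report. Here is a minimal sketch of that arithmetic in Python, assuming the pass/fail/blocked/total counts come from the queries above (the numbers here are illustrative):

```python
def plan_summary(passed, failed, blocked, total):
    """Summarize a test plan's status; remaining points are Active (not yet run)."""
    not_run = total - passed - failed - blocked
    pass_rate = 100.0 * passed / total if total else 0.0
    return {"passed": passed, "failed": failed, "blocked": blocked,
            "not_run": not_run, "pass_rate": round(pass_rate, 1)}

print(plan_summary(passed=45, failed=3, blocked=2, total=60))
```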

 

3.      In your report you might want to list all test cases, or all failed/blocked test cases, along with some details.

The statement below will give you the ID, title, configuration, outcome, assigned-to, and duration for each test point:

foreach (ITestPoint point in teamtot)
{
    // MostRecentResult is null for test points that are still Active (never run).
    var last = point.MostRecentResult;
    Console.WriteLine(point.Id + "-- " + point.TestCaseWorkItem.Title + "-- " +
        point.ConfigurationName + "-- " + (last == null ? "Active" : last.Outcome.ToString()) + "-- " +
        point.AssignedToName + "-- " + (last == null ? "" : last.Duration.ToString()));
}

PS: For test points that are in the Active state, point.MostRecentResult is null, so “point.MostRecentResult.Outcome.ToString()” would throw a null reference exception; you need to handle that in your code.

4.      You can capture all this information in an HTML file for better reporting and send it as an automated email.

 

SmtpClient client = new SmtpClient("smtphost");
client.UseDefaultCredentials = true;

string fromAddress = Environment.GetEnvironmentVariable("USERNAME") + "@microsoft.com";
MailAddress from = new MailAddress(
    fromAddress, ProjectSettings.Default.FromName, System.Text.Encoding.ASCII);
List<MailAddress> to = new List<MailAddress>();

string address = ProjectSettings.Default.toAddress;
string[] toRecipent = address.Split(';');

foreach (string add in toRecipent)
{
    to.Add(new MailAddress(add));
}

MailMessage message = new MailMessage();
message.IsBodyHtml = true;
message.From = from;
to.ForEach(entry => message.To.Add(entry));
message.Body = report.ToString();
message.BodyEncoding = System.Text.Encoding.UTF8;
message.Subject = ProjectSettings.Default.mailSubject;
message.SubjectEncoding = System.Text.Encoding.UTF8;
client.Send(message);

 

 

Distributing Coded UI Scripts (with Selenium add-in) for Cross Browser Automation using Visual Studio Lab Management & Environment variables

Author: Ranjit Gupta and Raj Kamal

Background

If you are not already aware, Coded UI now supports cross browser testing using Selenium components. The Visual Studio add-in can be found here, and it works if you are running VS 2012 Update 2 or above. There is also an official blog that talks about this topic in detail, if you are interested.

Customer Story

A large utility service company in the United States has retained us to implement its internet-facing web presence, a USD 7 million engagement. The website will provide timely, business-driven information along with functionality for its customers to handle their regular interactions, such as paying bills, checking historical usage data, turning services on/off, and others mentioned in the business requirements section. As it’s an external-facing site, it needs to be supported on Firefox and Chrome as well as IE 8 and IE 9. Given the size and the criticality of the application to the customer’s core business, selective manual testing on non-IE browsers is not an acceptable approach.

The power of Coded UI + Selenium & Visual Studio Lab Management

We wrote Coded UI Tests using this Selenium add-in along with Visual Studio Lab Management to distribute our automated tests across multiple browsers. Our goal was to dedicate an agent machine to each OS + browser combination, so that all our tests run in parallel on different browsers on pre-defined machines.

One challenge was that there is no way in Visual Studio Lab settings to specify that a particular agent machine should be used for a specific browser, e.g. Chrome, Firefox, IE 8, IE 9, etc. We didn’t want to create multiple copies of our Coded UI test methods and hardcode them to run against specific configurations. We were looking for a solution that didn’t require us to write custom logic to solve this very problem, and fortunately we could achieve this without any additional coding using the solution proposed below.

Solution – Simple yet elegant

To get past this issue, we made use of environment variables. The steps are explained below.

1.      Create an environment variable on each agent machine and set its value to the desired browser name (IE8/IE9/Chrome/Firefox) against which you want to run your automated tests.

 

2.      Create a TestInitialize method and read the value of the environment variable before the test starts on each agent machine. This returns the name of the browser that should be used for playing back the automation.

 

[TestInitialize]
public void Init()
{
    App_Constants.browser = Environment.GetEnvironmentVariable("browser",
        EnvironmentVariableTarget.User);
}

 

 

3.      Inside your test method, set BrowserWindow.CurrentBrowser to App_Constants.browser, which holds the browser name stored in the environment variable.

[TestMethod]
public void CodedUITestMethod1()
{
    BrowserWindow.CurrentBrowser = App_Constants.browser;
    this.UIMap.RecordedMethod1();
    this.UIMap.AssertMethod1();
}

4.      Depending on the value stored in the environment variable, Coded UI launches the corresponding browser and runs the test against it.

 


 


 

5.      You can set these configurations with Default = Yes if you want them to be used for the test cases to be added to the test plan.

 

6.      Associate your automated tests with test cases in MTM (Microsoft Test Manager)

 

7.      Set up the test controller and test agents

                       

8.      Run your automated test against each agent

 

Now you can control which OS/browser you would like to use to run your automated tests, without making any changes to your Coded UI Tests or writing any extra code.

 

Thursday, September 12, 2013

Win 8.1 / IE 11 - Cross Browser Testing, Device Testing and more cool features


I am sure many of you will have discovered these already if you have installed Win 8.1, but I wanted to quickly share the cool features that IE 11 brings out of the box as part of the developer toolbar. Highlighting a few of them below:

 

1.      Cross browser/platform testing: This is cool for seeing how your site renders on different browsers (not just limited to different versions of IE) and OSs, as well as at different resolutions. This will be quite handy for testers.

 
           

 

 

 

2.      UI Responsiveness: Looks pretty nice for benchmarking and quick assessment of performance.

                                   



 

 

3.      Network:

 



 

4.      Memory:

               



 

5.      Profiler

 
              


 

 

Test Effectiveness, Test Efficiency, Defect Removal Efficiency (DRE) - Jargons :)



I was talking to a colleague today about the differences among these test metrics, and I am sharing it with you all in case it helps :)
 

Test Case Effectiveness = Total defects found by test cases / Total defects**

e.g. if 80 defects were found by test cases, 10 through ad hoc testing, and 10 were leaked to UAT/Prod, then test case effectiveness is 80 / (80 + 10 + 10) = 80%.

 

** This includes all defects found after the build phase (i.e. after test cases are designed), so if buddy testing was done, those defects should also be added to the total, since we use our test cases to do buddy testing. The total also includes defects found in UAT and post-production, as they are leakages.

 

Test Efficiency = No. of valid defects  / Total Defects found by test team (including invalid defects)

 

e.g. if 90 defects out of 100 were accepted then test efficiency is 90 / 100 = 90%.

 

The complement of this ratio tells us the rework and wastage caused by invalid bugs that were rejected.

 

Defect Removal Efficiency (DRE) = Total defects found before delivery (through reviews, inspections, testing, etc.) / Total defects over the project lifetime

 

Here we are saying that it doesn’t matter whether we found them via test cases or reviews; if we found them before UAT, that’s good. The defects found in UAT and Prod are added only to the denominator, and the resulting ratio is our DRE.