Friday, February 14, 2014

Future of Testing



Abstract

Someone recently asked me: when we have daily builds going into production, near-daily or weekly OS, browser and platform updates that can break our applications, and a multitude of devices to support (PCs, tablets, mobiles) with different form factors, “Can you do all the testing that is required every day?”

And the next question: “If you need more time than that, then you are probably too slow to keep up with the pace at which our ecosystem is evolving. Why don’t we just enable our developers to test on the fly? Why not automate everything? Can testers really add value anymore?”

AI is the new UI. In other words, future devices might not have a traditional UI on which to run automation using QTP/Selenium. Your future devices will not just be tablets and smartphones but also smart watches, glasses, cars and more, and they will accept multiple forms of input such as touch and voice.

Now we can look at this chasm and think about turning back, or falling in, or we can see the opportunities that lie ahead if we take a big leap forward. My response to these challenges is that not all testing can be commoditized, because testing is both an art and a science. You can’t automate everything and replace the intelligence of human testers. No doubt we (testers) need to be much more agile and change the way we used to think about testing. But my question back was, “Would you ride to the office in a car that was only unit tested by a developer?”

Key Questions:

1.      De-mystifying the role of traditional testers in the years to come and what’s expected of them

2.      Discussing how to embrace daily releases and support the consumer devices that run our apps, including trends like the consumerization of IT and BYOD.

3.      New types of testing for software that runs on embedded devices and takes voice input from humans. Could we be working in hardware labs where hardware and software are tested together, as future devices become more life- and mission-critical?

4.      Why developers still need to focus on development: they can’t replace testers, though they can of course help.


Challenges with today’s testing

The most repeated question to development teams in reputed organizations today is, “Why do I, the client, need to spend additional money on testing efforts when I am already paying a premium for high-quality developers?”

Such questions are not without logic. Let us see where the client is coming from.

To the client, an organization taking up the project must have already delivered similar projects in the past, must have a highly skilled pool of resources to do the work, and must have capable supervisory resources to ensure there are no execution delays. However, as we understand, each project is like a new innings: you cannot simply replicate your past laurels, but you can draw from your experience.

The purpose of this paper is precisely to help the testing community answer such questions. One can face such questions even from CTOs/CIOs when justifying testing efforts. The entire existence of testing services then faces a question mark: do we have answers that ensure the need for testing in the future?

There are two parts to this answer and a larger implication for subsequent projects.

a)     Functional testing vs. Non-functional testing

The first part is that testing is no longer confined to functional testing. With the growing variety of devices, platforms and applications, testing has become an envelope of services covering functional testing and several non-functional types of testing. For any forthcoming project, it will therefore become commonplace for the client to pay for a bouquet of testing services, not merely functional testing, which is commonly considered the plain-vanilla form of testing meant to unearth development pitfalls. We are on the verge of a paradigm shift in which usability and user experience have assumed such significance that usability testing is considered a must-have and the associated functional testing is a given. This is a departure from the earlier industry trend of treating non-functional testing as a good-to-have while the focus remained on functional testing as the must-have. Clearly, effort is more pronounced in value-added testing these days than in traditional testing. Testing services are therefore no longer an added cost but an investment in achieving a wow factor through value-added testing.

b)     Speed of delivery

The second part of the answer goes deeper into what functional testing will comprise. With faster development sprints and release cycles, testing resources need to be one step ahead of the development folks: they must detect a flaw first, and they must do so with relentless regularity and speed. We would have to agree that the higher the quality of the development effort, the higher the testing abilities must be to ensure uncompromising quality. Additionally, customizations and related activities will assume significance in many projects, where testing of core functionality could otherwise end up on the back-burner.

These answers, however, do not merely indicate how testing services will be offered; they point towards a larger implication: testing services will be targeted at reducing release times. This means we must justify the entire bouquet of testing services offered against any codebase for quality assurance, while ensuring it is delivered in the least possible time. In fact, in ever-decreasing time.

Adding to this race against time are bigger pitfalls, such as ensuring that controlled updates, like updating an application over time, do not break it. The bigger challenge in the update space, though, is uncontrolled updates, which are pushed by OS manufacturers and come thick and fast these days across devices. The challenge is therefore to make project sponsors aware of the need for elaborate testing, and to make testing teams adept enough to deliver these services in shorter spans of time without compromising release quality.


Impact of Future Trends

Keeping these key elements in mind, we can attempt to identify the gap that exists today. The future is already here: are we ready for daily test cycles for apps, for OS updates, for daily controlled and uncontrolled updates to production impacting many apps and services, and for services projects that are short in tenure with early go-lives to market? We need to be fast enough to cater to the pace of the market’s appetite, and to match the pace at which development now happens.

This is not merely a classical time-and-work mathematics problem. It’s simply not possible to shrink testing timelines without touching resources or scope; at least, that’s what the age-old scope-budget-time triangle taught us in software engineering classes. Instead, the focus now is to strategize: to choose testing methodologies and practices that let us adapt to the new era of daily releases, platform ubiquity and backward compatibility. In fact, strategizing testing services has never been so important. We are now in a phase where the poster boy of testing, automation, faces a gripping test of its own: in projects requiring daily releases and updates, automation is not only going to be very expensive, it may not even be feasible.

a)     Need to modify the testing approach to accommodate faster releases to production

Let us examine what the gap is today and what can be done to bridge it. We still see widespread prevalence of traditional functional testing services, in several cases with little or no automation, in spite of the emergence of offerings such as testing-as-a-service and pure-play testing firms offering a bouquet of functional and non-functional testing services. The key element here is the adherence to traditional ways of software testing. Despite all efforts by organizations to bring their resources up to speed on cutting-edge trends and services, there is clearly a mismatch between the pace at which development and testing are carried out and the expected time to go live in the market. We need to change our test strategy to get the product into production at that pace while maintaining quality.

It is essential to understand how this change can be brought about. The truth is, there is no single way of planning and adapting our testing methodologies. Each organization is used to a different way of approaching a project, decomposing it into meaningful, testable bits and then running quality tests on the whole.

Minimum Viable Quality (MVQ)

Many products and services, like Bing.com, ship rapidly to production. To keep pace with such rapid shipping, we need to adjust our testing approaches now. We can’t continue with traditional approaches that ran for weeks before a build was certified for production. As an approach to certifying such rapid builds, we should consider the concept of Minimum Viable Quality (MVQ).

MVQ promises to help online service teams ship more frequently, with less testing and lower (initial) quality, and more efficiently, while actually exposing bugs to fewer real users and at lower overall risk than traditional methods involving stabilization phases and orchestrated releases to production. The reality is that at Microsoft we have always been comfortable shipping unbaked software. That is why we have dogfood, and why we have alphas, betas and developer previews. All of these labels tell users that they will find bugs, but that we feel the quality of this version of the software is just good enough (the minimum) to make it worth their time to use. The only time we ever really hit ship quality is the final golden build, and even then we’ve always released products with some known bugs. After all, what is a Zero Day Patch if not a mulligan on signoff?

The problem you run into with modern services development is that you are never actually done shipping. There is just the next feature, the next scenario, the next set of performance improvements, and the never-ending list of bugs to fix. A lot of Microsoft testers bring the ship-quality mindset to services releases instead of treating each release as just another incremental step, where some features are now ready for mass consumption and others may need to remain hidden, exposed only to internal "dogfood" users or external early "beta" adopters. Perhaps the new code is pushed to a subset of users, and if it is too buggy, a quick fail back to last known good takes the code out of use with minimal negative user impact.

What is the advantage of taking an MVQ approach to services delivery? The bottom line is that you start getting data about how the code functions in production, with real users, more quickly. The key is to balance the minimum so that the feature set, and the quality of those features, is not so low that the target user segment won’t use the product. If it is too low, you won’t discover the harder-to-find bugs because the code won’t be exercised.
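As a minimal sketch of such exposure control (the ring names and flag table here are hypothetical, not a specific Microsoft system), a feature can be gated on the user’s release ring, with the gate itself serving as the quick fail-back switch:

using System;
using System.Collections.Generic;

// Hypothetical release rings, ordered from most to least risk-tolerant.
enum Ring { Dogfood = 0, Beta = 1, Production = 2 }

static class FeatureGate
{
    // Hypothetical flag table: each feature lists the widest ring it is exposed to.
    static readonly Dictionary<string, Ring> Exposure = new Dictionary<string, Ring>
    {
        { "NewCheckoutFlow", Ring.Beta },       // ready for early adopters
        { "ExperimentalSearch", Ring.Dogfood }  // internal users only
    };

    public static bool IsEnabled(string feature, Ring userRing)
    {
        Ring widest;
        // Unknown features stay hidden; known features are visible to the configured
        // ring and to any ring that is more risk-tolerant. Removing or narrowing an
        // entry is the "fail back to last known good"; no redeployment needed.
        return Exposure.TryGetValue(feature, out widest) && userRing <= widest;
    }
}

At the call site, the code simply branches: if FeatureGate.IsEnabled("NewCheckoutFlow", user.Ring) run the new path, else run the last known good path.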

The trick is to realize that it is the science behind this process that needs to change, not the process itself. For instance, features of a mobile OS that take a long time to be released to the market may end up obsolete or get zero adoption from the app development community. Instead, the feature set may be broken into smaller sets of sub-features with periodic go-lives, giving the app development community a continuous feed of feature sets they can leverage to build additional apps, which in turn drives further development once the entire feature set is released.

Reduction in test effort for activities like test environment preparation or test case writing

A key element, of course, is how test teams will shape up for such scenarios. Will test teams still spend 20% of their effort preparing test cases and the test environment? In order to test builds daily, can we really afford to spend time upgrading test cases or debugging issues in the test environment? Clearly, we would be moving from an era of end-to-end functional testing to one of greater ownership of application modules and a more granular approach to scheduling testing: for instance, passing results and unlocking test scenarios from one testing resource to another in an assembly-line-like setup. The eventual approach will depend on how the organization plans to set up its testing teams in the near future, evolve its test organization and adopt practices such as crowd-sourcing, testing-as-a-service, etc.

Alpha/beta testing or crowd-sourced testing

A cursory look at these evolving practices highlights trends such as Testing-in-Production, which essentially translates to alpha/beta testing: releasing an MVQ application into production and waiting for users to unearth issues with it. This is widely practiced by Google, for instance, whose applications have been known to remain in beta for years. A similar trend, as mentioned earlier, is crowd-sourcing; the only difference is that the application is exposed to a group of people who excel at testing rather than to the entire intended audience. So why do we need testers at all if we can throw the application open to the audience in production, or to a competent set of crowd-sourced testers? The answer lies in controlled adoption of these trends. For instance, if an application in beta receives frequent reports of issues within a particular module, that should prompt the project team to test intensively within that module and in the flows it affects.
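As a sketch of the kind of triage that turns raw beta feedback into a focused test plan (the types and threshold are illustrative, not from any specific tool):

using System.Collections.Generic;
using System.Linq;

// Illustrative shape of an incoming beta/crowd issue report.
class IssueReport
{
    public string Module;   // e.g. "Payments", "Login"
    public string Summary;
}

static class BetaTriage
{
    // Flag modules whose beta issue volume crosses a threshold, so the
    // project team knows where to concentrate in-house testing.
    public static IEnumerable<string> HotModules(IEnumerable<IssueReport> reports, int threshold)
    {
        return reports
            .GroupBy(r => r.Module)
            .Where(g => g.Count() >= threshold)
            .OrderByDescending(g => g.Count())
            .Select(g => g.Key);
    }
}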

So the horizon of testing has gradually expanded from simply closing testing efforts at UAT to supporting applications even in production, with a variety of variables on top: ensuring the application keeps functioning through controlled and uncontrolled updates, proper support for globalization, adapting to loss of internet connectivity by ensuring offline availability, and even seamless experiences for users moving through the same application from one device to another.


b)     Complexity of systems has increased. Does it impact our testing?

The testing strategy is not going to be revised just for today’s trend of rapid app development and pushing to production without ‘n’ testing cycles, but also for trends that will profoundly change the way we test. The complexity of applications has increased exponentially. We are no longer in an era where we developed an application for the Windows XP operating system that would run only on Internet Explorer 8. Your testing is no longer limited to one OS or two browsers; the number of variables affecting the functionality of the application has multiplied.
 

Let’s take the example of a banking app designed for the Windows 8 operating system. A few years back, if we had to test a banking application, the use cases would have been very limited. One would verify the core functionality on one OS plus one browser; it was acceptable to state that the application would work only with one particular OS and one particular browser. Usability testing was not a key part of the test strategy.



Figure 1: How the permutations and combinations of various devices have increased the scope of testing

 

But today, the realistic use cases for such an app, beyond its pure functionality, would include:

-         Can the app run on Windows Phone 8?

-         Can the app handle operating system updates being pushed?

-         Can the app be used on multiple touch devices like Surface, iPad and other tablets?

-         Can the app work on browsers like Safari, Chrome, Firefox and several versions of Internet Explorer?

-         Can the app handle app updates pushed along with OS updates?

-         Can the app handle all input methods: mouse clicks, touch, keyboard, gestures, voice, etc.? Several permutations and combinations arise from this alone.

-         Can the app handle an update pushed to the platform on which it is built?

 

And the list does go on.
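To see how quickly these variables compound, here is a small illustrative sketch (the device and browser lists are examples only) that enumerates just three of the dimensions above:

using System;

class TestMatrix
{
    static void Main()
    {
        // Illustrative values only; a real matrix would come from the test plan.
        string[] oses = { "Windows 8", "Windows Phone 8", "iOS", "Android" };
        string[] browsers = { "IE9", "IE10", "Safari", "Chrome", "Firefox" };
        string[] inputs = { "Mouse", "Keyboard", "Touch", "Gesture", "Voice" };

        int combinations = 0;
        foreach (string os in oses)
            foreach (string browser in browsers)
                foreach (string input in inputs)
                    combinations++; // each tuple is a distinct environment to cover

        // 4 x 5 x 5 = 100 combinations, before OS/app/platform updates
        // multiply the matrix further.
        Console.WriteLine("Environment combinations to cover: " + combinations);
    }
}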
 

Application under test will run on a “family of devices”


We also have apps that are used from home, from the office network and from public networks. For example, with Lync, a user may start a call on his laptop in the office, continue it on his phone with the Lync app as he walks to his car, and finally switch to his Surface to finish the call when he reaches home. Can our app support this kind of usage?

The concentration of testing has moved from functionality to usability: the app should not break at any point, irrespective of the ‘n’ variables impacting it.


 

Testing in Production
 
With so many variables to test against, how will one define test coverage at the end of the cycle? You can limit your testing scope to particular use cases, but from a usability perspective, in today’s era you can’t direct users to run the app only in Internet Explorer just because you tested only with IE. The answer to such situations is not to grow the test team or add test cycles to cover every combination of devices. With crowd-sourcing and Testing in Production (TiP), we can take care of this. If we do more of our testing with production data rather than dummy data, we are doing much more real-world testing and can uncover more production-environment issues.

 

Will there be such a thing as a test environment?

Traditionally, we used to invest a lot of time in preparing the test environment to make it a replica of the production environment; test leads would keep one test resource dedicated to preparing and maintaining it. But now we have our environments in the cloud, which has reduced test environment setup time drastically, letting test teams dedicate more resources and time to the testing cycles themselves. Having the environment in the cloud also makes it more production-ready in terms of configuration and database setup. In Microsoft Azure, when the testing cycle completes, the same test environment can be swapped into production and the former production environment becomes the test environment.
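One practical implication, sketched below for an Azure cloud service (the setting name is an assumption for illustration): the same build can be the test environment today and production tomorrow only if environment-specific values come from the per-slot service configuration rather than the code.

// Requires the Microsoft.WindowsAzure.ConfigurationManager NuGet package.
// "DatabaseConnectionString" is an assumed setting name, defined per slot in
// the service configuration (.cscfg) rather than hard-coded in the build.
string connectionString =
    Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("DatabaseConnectionString");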

 

The Case for Human Intelligence

 
So far, we have talked about how apps are affected by the presence of so many devices: touch-enabled devices, mobiles, etc. But future devices are not limited to touch. We are going to have smart glasses, smart watches and perhaps artificially intelligent cars. Testing an application for such devices on an emulator is not going to be enough, because an emulator can’t bring the human piece of testing to them. Take the AI-driven car: it should apply the brakes automatically if the sensor detects an obstacle at 30 feet, and should reduce speed to 20 km/h if an obstacle is detected at 60 feet. If such scenarios are only unit tested by the developer on an emulator, would you feel safe in that car? Would you feel safe knowing that only the code was tested, and not the car’s behavior against several types of real obstacles? An emulator cannot validate this, and hence the human element of testing can’t be removed, even in the future and even with the smartest development practices.
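To make the point concrete, here is the braking rule as code, a sketch using the thresholds from the example above. A developer’s unit test can prove that this function returns the right decision; it cannot prove that the car stops.

// Assumed thresholds from the example above; a sketch, not a real control system.
enum CarAction { None, ReduceSpeedTo20Kmph, ApplyBrakes }

static class CollisionAvoidance
{
    // Pure decision logic: easy to unit test on an emulator.
    public static CarAction Decide(double obstacleDistanceFeet)
    {
        if (obstacleDistanceFeet <= 30) return CarAction.ApplyBrakes;
        if (obstacleDistanceFeet <= 60) return CarAction.ReduceSpeedTo20Kmph;
        return CarAction.None;
    }
}

// A unit test can assert Decide(25) == CarAction.ApplyBrakes, but it says nothing
// about sensor latency, braking distance on a wet road, or a misread obstacle:
// the human, hardware-in-the-loop part of testing.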

 

Importance of User Experience Testing


Testing strategy definitely has to change and become smarter, and we need to add a strong usability-verification flavor to our approach. Verifying the user experience is not just about checking tool-tips on mouse hover; it covers factors like whether the app is minimalistic, usable with a minimum of clicks, super responsive and self-explanatory. The next generation of users will differ from today’s, and so will their expectations of an application; we should factor that in when we talk about user experience. Attention spans are shrinking, and if users do not find the app worth their while in the first 30 seconds of usage, they will probably never try it again. That would be not just a product failure but a failure of the joint effort of test and dev. Hence it is fair to say that verifying an application’s user experience is equally important and should be part of the test strategy.

 

Test Effort Distribution:

In essence, it’s time to stop doing things right and instead do the right things!


 
 
Figure 3: How testing strategy is going to change in the future as compared to today




 
Figure 4: How testing effort is going to change in the future as compared to today
 
 
Conclusion

Taking stock of the overall situation, a few things stand out. Bluntly put, functional testing may see a fall in test effort, but that effort will be transferred to other types of testing. From the days of writing test cases and executing them, we will move largely to exploratory testing, where the focus is on intelligent testing that needs a lot of human interaction.

So while the future of testing services is secure, the service catalog will be completely rewritten. It is time to move to smart testing: controlling costs while delivering on higher expectations and ensuring the best user experience, not just testing plain-vanilla functionality, and therefore not just using traditional testing tools.

Testing-in-Production and crowd-sourcing are already here as promising ways of delivering the next generation of testing services, while optimizing testing efforts with smart resourcing and cloud-powered environments will pave the way forward.


Authors: Gunjan Jain and Raj Kamal

Thursday, February 13, 2014

MTM (Microsoft Test Manager) – Test Configuration for handling multiple test environments


Authors: Ranjit Gupta & Raj Kamal
 
While creating a test plan, one thing you need to keep in mind is the configuration against which your test cases need to run. A configuration can be anything: browser, OS, .NET version, different versions of your product, or different environments, i.e. test, staging, etc.

After your test execution, you will want test results against each configuration.

Test Configuration Manager in MTM allows you to have as many test points for a test case as you have configurations.

A test point is a pairing of a test case and a configuration. If you have two test cases, TC1 and TC2, and configurations C1, C2 and C3, then you will have six test points in total: the number of test points is the product of the test cases and the configurations you create.

You can create your own configurations from the Organize tab, as shown below, and assign them to your test cases.

 

Specifying a default test configuration

You can create a default configuration as shown below; every test case that gets added to the test plan then has the default configuration applied to it, even test cases copied from another test plan.



 

Making use of configurations to run automated tests

You can make use of MTM configurations to run your automated tests on different browsers. Say you want to run your tests on Chrome, Firefox and IE: create three configurations under your automation test plan named “chrome”, “firefox” and “IE” and assign them to your automated test cases.

When a test is triggered from MTM, TestContext exposes a lot of information, of which the “__Tfs_TestConfigurationName__” property can be used to determine which browser the test should run on.

 

In your Coded UI test’s test-initialize method, read the name of the configuration assigned to the test case as below:

// Pick the browser from the MTM configuration assigned to this test point.
// Check the property itself for null before calling ToString(), otherwise a
// missing configuration name throws a NullReferenceException.
if (TestContext.Properties["__Tfs_TestConfigurationName__"] != null)
{
    BrowserWindow.CurrentBrowser = TestContext.Properties["__Tfs_TestConfigurationName__"].ToString();
}

 

Now your Coded UI test runs on the browser you have specified in your MTM configuration.

Visual Studio TFS - Bug Reactivation Count Report


Author: Ranjit Gupta & Raj Kamal

Background:

Visual Studio provides a nice reactivation report out of the box to help you determine how effectively the team is fixing bugs. It helps you answer questions such as “How many bugs have been reactivated in the current iteration?” or “Is the team resolving and closing reactivated bugs and stories at an acceptable rate?”, but it doesn’t go into detail if the team wants a reactivation report at the work-item level to answer a follow-up question such as “How many times have bugs been reactivated, and which bugs have been reactivated more than X times?” That kind of report can help teams take corrective action, since bug reactivation is obviously rework and a waste of effort, time and money.

Solution:

There is also a related thread on MSDN discussing a similar topic. If you came to this blog looking for a solution to this very problem, the good news is that there is now an add-in, published on the Visual Studio Gallery, which you can download and use to get this kind of report in CSV format for further analysis in Excel.

In addition, this blog post will explain the logic used to generate the report, so you can customize it for your specific needs if you like.

Details of the solution (Walkthrough)

Our add-in uses the TFS Client Object Model to retrieve this information. The solution can be broken down into the following simple steps:

1.  Retrieve the IDs of all the bugs and store them in a work item collection. You need to provide the TFS “TeamProject” name as a user parameter. You can also specify the iteration path if you are specifically interested in the report for a given iteration.

 

// WIQL: select all bugs in the project under the given iteration path.
// Note the space after UNDER; without it the query string is malformed.
string wiqAllBugs = "SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = '{0}' AND [System.WorkItemType] = 'Bug' AND [System.IterationPath] UNDER '" + iteration + "'";

string tfsQuery = string.Format(wiqAllBugs, projName);
WorkItemCollection wiBugCollection = store.Query(tfsQuery);
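The snippet assumes a WorkItemStore (store) is already available; one way to obtain it (the server URL is a placeholder):

// Namespaces: Microsoft.TeamFoundation.Client and
// Microsoft.TeamFoundation.WorkItemTracking.Client.
TfsTeamProjectCollection tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
    new Uri("http://yourtfsserver:8080/tfs/DefaultCollection")); // placeholder URL
WorkItemStore store = tfs.GetService<WorkItemStore>();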

 

2.      For each bug, iterate through the revision history and look for tag lines containing “Edited (Active to Resolved”, “Edited (Resolved to Active” or “Edited (Closed to Active” (matching the Contains checks in the code below).

The logic is to look for bugs that changed from the Resolved state to the Active state, or from Closed to Active, and count the number of times this happened for a bug as its reactivation count. We also capture the Active-to-Resolved transition to record information like “resolved by” and “resolved date” that will help in further investigation.

 

foreach (Revision revision in wItem.Revisions)
{
    if (revision.GetTagLine().Contains("Edited (Active to Resolved"))
    {
        // Retrieve the information required, e.g. resolved date, resolved by, etc.
    }
    else if (revision.GetTagLine().Contains("Edited (Resolved to Active") ||
             revision.GetTagLine().Contains("Edited (Closed to Active"))
    {
        // Increase the counter every time to get the total reactivation count.
    }
}

 

3.      Finally, check whether the bug reactivation counter is greater than X (or 0, to find bugs reactivated at least once) and print those bugs to the console, a .csv file or HTML (see the CSV sketch after this walkthrough).

 

4.      Our add-in generates a CSV report with this information (see the sample below).

 

You can tweak this logic as per your needs. We hope you find this quick workaround useful.
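If you prefer to emit the CSV yourself rather than use the add-in, a minimal sketch (the record fields are assumed for illustration) could look like this:

using System.Collections.Generic;
using System.IO;
using System.Linq;

class ReactivationRecord
{
    public int BugId;
    public string Title;
    public string LastResolvedBy;
    public int ReactivationCount;
}

static class ReactivationReport
{
    // Write bugs reactivated more than 'threshold' times to a CSV file.
    public static void WriteCsv(IEnumerable<ReactivationRecord> records, int threshold, string path)
    {
        var lines = new List<string> { "BugId,Title,LastResolvedBy,ReactivationCount" };
        lines.AddRange(records
            .Where(r => r.ReactivationCount > threshold)
            .OrderByDescending(r => r.ReactivationCount)
            .Select(r => string.Format("{0},\"{1}\",{2},{3}",
                r.BugId, r.Title.Replace("\"", "\"\""), r.LastResolvedBy, r.ReactivationCount)));
        File.WriteAllLines(path, lines);
    }
}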

MTM (Microsoft Test Manager) - Getting latest results as email notification


Authors: Ranjit Gupta and Raj Kamal

Today, MTM (Microsoft Test Manager) doesn't provide an email report with the latest test results for a given test plan. You can, however, build one using the snippets of code below and configure it for your test team and other stakeholders. We hope you find this useful; your comments are welcome.

1.      Query the test plan

ITestPlanCollection mTestPlanCollection = testProject.TestPlans.Query(
    string.Format("Select * From TestPlan where PlanName = '{0}'", testPlan));
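The snippet above assumes testProject is already in hand; for completeness, one way to obtain it through the TFS client object model (server URL and project name are placeholders):

// Namespaces: Microsoft.TeamFoundation.Client and
// Microsoft.TeamFoundation.TestManagement.Client.
TfsTeamProjectCollection tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
    new Uri("http://yourtfsserver:8080/tfs/DefaultCollection")); // placeholder URL
ITestManagementService tms = tfs.GetService<ITestManagementService>();
ITestManagementTeamProject testProject = tms.GetTeamProject("YourTeamProject"); // placeholder name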

2.      Get the status of the desired test plan

 

// Query test points by their last result outcome, plus the full set.
ITestPointCollection teamtestPass = testplan.QueryTestPoints(
    "SELECT * FROM TestPoint where LastResultOutcome = 'Passed'");
ITestPointCollection teamtestFail = testplan.QueryTestPoints(
    "SELECT * FROM TestPoint where LastResultOutcome = 'Failed'");
ITestPointCollection teamtestBlocked = testplan.QueryTestPoints(
    "SELECT * FROM TestPoint where LastResultOutcome = 'Blocked'");
ITestPointCollection teamtot = testplan.QueryTestPoints(
    "SELECT * FROM TestPoint");

teampass = teampass + teamtestPass.Count;
teamfail = teamfail + teamtestFail.Count;
teamblock = teamblock + teamtestBlocked.Count;
teamtotal = teamtotal + teamtot.Count;

 

teamtestPass contains all the test points whose outcome is Passed, so the queries above give you the status of your test plan.

 

3.      In your report you might want to list all test cases, or all failed/blocked test cases, with some details

The statement below will give you the ID, title, configuration, assignee, outcome and duration for each test point:

foreach (ITestPoint point in teamtot)
{
    Console.WriteLine(point.Id + "-- " + point.TestCaseWorkItem.Title + "-- " +
        point.ConfigurationName + "-- " + point.MostRecentResult.Outcome.ToString() + "-- " +
        point.AssignedToName + "-- " + point.MostRecentResult.Duration);
}

PS: For test points that are still in the Active state (never run), point.MostRecentResult is null, so calling point.MostRecentResult.Outcome.ToString() will throw a NullReferenceException; you need to handle that in your code.
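A guarded version of the loop body, as a sketch, avoids the exception by checking MostRecentResult before dereferencing it:

foreach (ITestPoint point in teamtot)
{
    // Active (never-run) test points have no result yet; guard before dereferencing.
    string outcome = point.MostRecentResult != null
        ? point.MostRecentResult.Outcome.ToString()
        : "Active (not yet run)";
    string duration = point.MostRecentResult != null
        ? point.MostRecentResult.Duration.ToString()
        : "-";
    Console.WriteLine(point.Id + "-- " + point.TestCaseWorkItem.Title + "-- " +
        point.ConfigurationName + "-- " + outcome + "-- " +
        point.AssignedToName + "-- " + duration);
}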

4.      You can capture all this information in an HTML file for better reporting and send it as an automated mailer

 

// Configure the SMTP client (replace "smtphost" with your mail server).
SmtpClient client = new SmtpClient("smtphost");
client.UseDefaultCredentials = true;

// Build the sender address from the logged-in user.
string fromAddress = Environment.GetEnvironmentVariable("USERNAME") + "@microsoft.com";
MailAddress from = new MailAddress(
    fromAddress, ProjectSettings.Default.FromName, System.Text.Encoding.ASCII);

// Build the recipient list from a semicolon-separated setting.
List<MailAddress> to = new List<MailAddress>();
string address = ProjectSettings.Default.toAddress;
string[] toRecipent = address.Split(';');
foreach (string add in toRecipent)
{
    to.Add(new MailAddress(add));
}

// Compose and send the HTML report.
MailMessage message = new MailMessage();
message.IsBodyHtml = true;
message.From = from;
to.ForEach(entry => message.To.Add(entry));
message.Body = report.ToString();
message.BodyEncoding = System.Text.Encoding.UTF8;
message.Subject = ProjectSettings.Default.mailSubject;
message.SubjectEncoding = System.Text.Encoding.UTF8;
client.Send(message);