Thursday, September 12, 2013

Win 8.1 / IE 11 - Cross Browser Testing, Device Testing and more cool features


I am sure many of you would have discovered these already if you have installed Win 8.1, but I wanted to quickly share with others the cool features that IE 11 brings out of the box as part of the developer toolbar. Highlighting a few of them below:

 

1.      Cross Browser/Platform testing: This is cool to see how your site renders on different browsers (not just limited to different versions of IE) and OSes, as well as at different resolutions. This will be quite handy for testers.

2.      UI Responsiveness: Looks pretty nice for performance benchmarking and a quick assessment of performance.

3.      Network:

4.      Memory:

5.      Profiler

Test Effectiveness, Test Efficiency, Defect Removal Efficiency (DRE) - Jargons :)



I was talking to a colleague today about the difference among these test metrics, and I am sharing it with you all in case it helps :)
 

Test Case Effectiveness = Total defects found by test cases / Total defects**

e.g. if 80 defects were found by test cases, 10 through ad hoc testing and 10 were leaked to UAT / Prod, then TC effectiveness is 80 %

 

** This includes all defects found post build phase (after test cases are designed), so if buddy testing was done, those defects should certainly be added to the total, as we use our test cases to do buddy testing. Total defects will also include the defects found in UAT and post production, as they are leakages.

 

Test Efficiency = No. of valid defects  / Total Defects found by test team (including invalid defects)

 

e.g. if 90 defects out of 100 were accepted then Test efficiency is 90 %

 

This gives us the rework and wastage caused by invalid bugs that got rejected.

 

Defect Removal Efficiency (DRE) = Total defects found before delivery (through reviews, inspections, testing etc.) / Total defects found during the project lifetime

 

Here we are saying it doesn't matter whether we found them through test cases or reviews; if we found them before UAT, that is good. The defects found in UAT and Prod are added to the denominator, and that gives our DRE.

Saturday, August 17, 2013

Mob Testing (inspired by Mob Programming)


Testers & QA's,

There is something interesting that we might want to try. I came across an upcoming agile practice called Mob Programming. In gist, this takes pair programming to the next level by coding as a team. The idea is simple: you can write the best code when one person is actually typing it and the rest of the team is providing inputs. This has helped teams deliver builds without bugs. If you think about it, you are developing, unit testing, reviewing and testing the code all at once. How cool is that?

Read about it @ http://mobprogramming.org/mob-programming-basics/

Driver/Navigators

We follow a “Driver/Navigator” style of work, which I originally learned from Llewellyn Falco as a pair programming technique. Llewellyn’s approach is different from any other I have been shown, have seen, or have read about (and I think he invented it or evolved into it.)
In this “Driver/Navigator” pattern, the Navigator is doing the thinking about the direction we want to go, and then verbally describes and discusses the next steps of what the code must do. The Driver is translating the spoken English into code. In other words, all code written goes from the brain and mouth of the Navigator through the ears and hands of the Driver into the computer. If that doesn’t make sense, we’ll probably try to do a video about that or a more complete description sometime soon.
In our use of this pattern, there is one Driver, and the rest of the team joins in as Navigators and Researchers. One important benefit is that we are communicating and discussing our design to everyone on the team. Everyone stays involved and informed.
The main work is Navigators “thinking , describing, discussing, and steering” what we are designing/developing. The coding done by the Driver is simply the mechanics of getting actual code into the computer. The Driver is also often involved in the discussions, but her main job is to translate the ideas into code. Of course, being great at writing code is important and useful – as well as knowing the languages, IDE and tools, etc. – but the real work of software development is the problem solving, not the typing.
If the Driver is not highly skilled, the rest of the team will help by guiding the Driver in how to create the code – we often suggest things like keyboard short-cuts, language features, Clean Code practices, etc.  This is a learning opportunity for the Driver, and we transfer knowledge quickly throughout the team, which quickly improves everyone's coding skills.

I would suggest that if we apply the same to testing, it could be very productive. I know exploratory testing is cool, but what would be even better is the entire test team getting into a room with one person taking the responsibility of "driver": he will follow the instructions of the team and document/execute the test cases. The rest of the team will play "navigators" and suggest scenarios and different test conditions. This would be a great demonstration of collective IQ, and things like mails, meetings and triage calls can be eliminated to cut unproductive overhead.

Let me know your thoughts and if you find this interesting, we can pilot this in one of your projects and evangelize it.
 

Tuesday, May 7, 2013

TFS Live Configuration - TFS Login Details

 
If you are trying to use our TFS Live app with TFS Server 2010/2012 and finding it hard to configure, please refer to the details below:
 
Details about tfs setting screen:
 
tfs server name: <<the name of your TFS Server>> e.g. xyz-tfs-server
 
path: <<the tfs path. By default it is “tfs” >> e.g. tfs
 
port: << the port that is configured for your tfs >> e.g. 8080 as default for http and 443 as default for https
 
protocol: << whether your tfs is http or https>> e.g. http / https
 
project collection name: <<the name of your project collection>> e.g. defaultcollection
 
 
username: <<your domain credential that you use to connect to your TFS >> e.g. abc\rajkamal
 
password:  ***** (your domain password)
 
Sample #1. Enter the TFS details and press authenticate
 
 
[screenshot]
 
Sample #2. Enter the TFS URL with all the details directly as “TFS Servername”. In this case, you don’t have to worry about entering the path, port, protocol or project collection name, as they are part of the URL itself.
 
 
[screenshot]
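To see how Sample #2 relates to Sample #1: the full URL is just the individual settings fields stitched together. A minimal sketch, using the placeholder example values from the fields above (the helper function is mine, not part of the app):

```python
def tfs_url(protocol, server, port, path, collection):
    """Compose the single TFS URL from the individual settings fields."""
    return f"{protocol}://{server}:{port}/{path}/{collection}"

# Using the example values from the settings screen above
print(tfs_url("http", "xyz-tfs-server", 8080, "tfs", "defaultcollection"))
# http://xyz-tfs-server:8080/tfs/defaultcollection
```

If you already have a URL of this shape, pasting it whole (Sample #2) saves filling in the five fields one by one.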
 
 

Enable Alternate Credentials for Hosted TFS / TFS Service

 
If you are trying to use our TFS Live app with a TFS Service instance, then you must log in using “Alternate Credentials”. Essentially, you will need to provide your windows live id and the password newly set under “Alternate Credentials” when logging in to our app. This is required due to limited TFS Service API support.
 
 
Details about tfs setting screen:
 
tfs server name: <<the base URL of your TFS Service account>>  e.g. https://ranjitgupta.visualstudio.com/
 
project collection name: <<the name of your project collection. Most probably it will be defaultcollection>> e.g. defaultcollection
 
username: <<the windows live id that is used to create access>> e.g. raj.kamal@outlook.com
 
password:  ***** (password set using “Alternate credential”)   - Read the section below to enable this setting
 
 
[screenshot]
 
 
Steps to enable “Alternate Credentials”
 
Step 1. Go to your TFS Server home page and login with your windows live id
 
Step 2. Go to the My Profile setting as shown below (at top right)
 
 
[screenshot]

Step 3: Go to “Credentials” tab in “User Profile” window

Step 4: Select “Enable Alternate Credentials”

[screenshot]

Step 5: Set a new password (Note: this can be the same password as your windows live id, so it is easy to remember)

[screenshot]

Friday, April 5, 2013

Don't be afraid to take up a project that's failing

"Why you shouldn't worry about taking up engagements that are in Red? Because they can't become Dark Red now :) and will only become Yellow and then eventually Green."

Funny as it may sound, things get worse but then they get better, and the experience of turning a failure into success is unforgettable. You will learn things that you won't learn if you always play safe. A sailor who hasn't faced a storm is not really a true sailor. Escalated projects have high visibility, and you have an amazing opportunity to fix things; you might be surprised in the end.

Testing in Production?

Living life on the edge for normal people would be having a house near a cliff top, or jumping out of a plane with a parachute, or something of that sort. But for software engineers, it would be delivering code to production without testing it and hoping that nobody ever runs into it, or that it somehow works magically :D These guys are real daredevils :D

I don't promote it, but it does happen. We should let these developers know that they are living on the edge with a time bomb ticking, and it's a matter of time before it explodes. Why take a chance when you can test the code and then release it?
 

Testing - Recursive Features

There should be another 'task manager' to kill the real 'task manager' when it hangs while trying to kill a 'process' that hung :)

Though my above comment on FB was on the lighter side, I feel that there should be testability around such things, where we generally confine our thinking to just one level and leave it there.

Saga of Unsung Heroes


‘Tester Testifies’ is my tribute to all the testers in the world – the unsung heroes, who save millions of dollars by finding bugs and still stay unheard.

Saturday, March 23, 2013

Software “Test Confidence Report” [Test Sign-Off]

I would like to share something new that I learnt from one of my recent engagements. Thanks to Anand Prabhala, my Project Manager, who triggered this chain of thought by asking one simple question: “Raj, keeping all the bug metrics and test execution reports aside, as a test lead, what is your ‘test confidence’ to release this application into UAT?”

If you are in testing, I am sure you have found yourself in similar situations many times before. That was a déjà vu moment, and I found myself saying something like, “Well, it depends. I can’t tell you that, as it’s very subjective; it totally depends on an individual’s perception of quality, and whatever I might say would be my own opinion, not necessarily the opinion of my test team. You should look at my test execution report and bug metrics as the true indicators.”

I knew that he had a point. Metrics are good, but we should be able to convert them into something really meaningful and actionable, to be able to take a decision with confidence and conviction. As a test lead, providing any number of test metrics is not enough if you can't take a confident decision by looking at them. That's just half the job done.

I thought it’s going to be a no-brainer. Call for a meeting and ask my test team to vote and we will know our test confidence. Funny though it may sound but you may get even more confused as confidence is highly subjective and can’t be arrived at by just doing a poll. Beside it won’t be fair as confidence is just like temperature which can fluctuate drastically based on circumstances, moods, emotions, pressure and state of mind on a given day.

Most of the time there is a strong correlation between a tester's confidence and metrics like failed tests and active bugs, but there can be exceptions. Imagine a functionality that is working well, but a few eyesores have been bothering the tester for quite some time, and your test execution and bug reports don't flag this as alarming through standard metrics like % of test cases passed or no. of critical/high-severity bugs. Test confidence could also be low if the tester believes the end user is going to hate it and it must be fixed. A lot of the time, test confidence can't be concluded as “high” just because 90 % of test cases passed or there are no S1/S2 issues; that's where we need to give weightage to the tester's feeling about the quality of that feature. On the contrary, we could also have scenarios where more test cases failed and the metrics look terrible, but we know those failures are not coming in the way of UAT testing, and hence test confidence could be high or medium rather than low.

A lot of the time, such stories don't come out by just looking at plain numbers. Remember, we are now talking about an application that typically consists of a myriad of features. The brain can't be expected to accurately take into account your test confidence for each feature and do an intelligent summation for you. This only gets more complicated when those features are owned by a team of testers, and the test lead's job gets tougher: gathering test confidences from all the testers and deciding on the overall test confidence level of the team.

In reality, it might just be a matter of finding the real culprits that are bringing down your test confidence and targeting them. You would be surprised how swiftly those issues can get fixed once identified and prioritized, but the trick is to identify them.

I thought, why don't we add subjectivity to objectivity, as that was the missing ingredient. Let's start measuring test confidence for each feature by looking at the real metrics for that feature. We took the TFS out-of-the-box requirement traceability report and added just one simple field called “test confidence”. We collected these test confidence indicators (high, medium, low) from the testers who owned the features and started assigning them accordingly. This is a marriage of test confidence and the industry-standard requirement traceability report :)

[screenshot]

Finally, you can just add up the number of features that had high, medium and low confidence and calculate the percentages. E.g. now I can say I am 87 % highly confident, 12 % medium confident and 1 % low confident.

Note: This way I am not simply saying whether I am confident or not; my test confidence is not a boolean value anymore.
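The roll-up above can be sketched in a few lines; the feature names and ratings below are hypothetical, just to show the shape of the calculation:

```python
from collections import Counter

def confidence_summary(feature_confidence):
    """Percentage of features at each test-confidence level."""
    counts = Counter(feature_confidence.values())
    total = len(feature_confidence)
    return {level: 100.0 * counts[level] / total
            for level in ("high", "medium", "low")}

# Hypothetical per-feature ratings collected from the feature owners
features = {"login": "high", "search": "high",
            "checkout": "medium", "reports": "low"}
print(confidence_summary(features))
# {'high': 50.0, 'medium': 25.0, 'low': 25.0}
```

The point is that the subjective per-feature ratings become an objective, comparable number only after this aggregation step.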

[screenshot]

We passed this report on to our developers to prioritize bug fixing, with the aim of increasing test confidence where it was low or medium. Now our development team didn't really have to worry about a lot of things like severity, priority, stack ranking, usability issues etc. We could make their life easy by giving them a single indicator. This also helped our customers prioritize their testing and know which features were not ready yet.

Do you want to use it in your next engagement? Let me know by posting your comments.

Friday, February 22, 2013

TFS Live - Windows 8 App



Note**: if you are trying with hosted TFS, please refer to slide 17 of this PPT for configuration change


Download Link  for Window 8 /RT (or search in Windows Store for TFS Live)
 
http://apps.microsoft.com/windows/en-US/app/tfs-live/6af3a840-1dbd-47c3-8ec8-3f7c1e8ac6a9


Want to just try out and don't have TFS access?




 

TFS Live - Windows Phone 8




Note**: if you are trying with hosted TFS, please refer to slide 17 of this PPT for configuration change



Download Link for Windows Phone 8  (or search in Windows Phone Store for TFS Live)

 
http://www.windowsphone.com/en-us/store/app/tfs-live/6b869573-44a2-4860-b2e7-55806d4b2011
 
 
 
Want to just try out and don't have TFS access?

 
 

 

Thursday, February 14, 2013

Coded UI Usability Automation using JavaScript


After downloading the extension, add the DLL to your references under your Coded UI project. Please find the sample documentation below.

Send your feedback to rankumar@microsoft.com / rajkamal@microsoft.com

     
        //This method retrieves the properties of hyperlinks with inner text "News" and "Hotmail"
        [TestMethod]
        public void CodedUITestMethod1()
        {
            BrowserWindow bw = BrowserWindow.Launch(new Uri("http://www.bing.com/"));
            bw.WaitForControlReady();
            List<UsabilityAutomation.Repository> list = UsabilityAutomation.Usability.GetUsabilityProperties(bw, "a", "News,Hotmail");
            foreach (UsabilityAutomation.Repository prop in list)
            {
                Assert.AreEqual(prop.font_family, "Arial,Sans-Serif");
                Assert.AreEqual(prop.text_decoration, "none");
            }
        }

        //This method retrieves the properties of the div element whose id is "hp_container"
        [TestMethod]
        public void CodedUITestMethod2()
        {
            BrowserWindow bw = BrowserWindow.Launch(new Uri("http://www.bing.com/"));
            bw.WaitForControlReady();
            List<UsabilityAutomation.Repository> list = UsabilityAutomation.Usability.GetUsabilityProperties(bw, "div", "hp_container");
            foreach (UsabilityAutomation.Repository prop in list)
            {
                Assert.AreEqual(prop.margin_left, "117px");
                Assert.AreEqual(prop.font_size, "13.33px");
                Assert.AreEqual(prop.width, "1366px");
                Assert.AreEqual(prop.color, "rgb(0, 0, 0)");
            }
        }

        //This method retrieves the properties of the input element whose id is "sb_form_q"
        [TestMethod]
        public void CodedUITestMethod3()
        {
            BrowserWindow bw = BrowserWindow.Launch(new Uri("http://www.bing.com/"));
            bw.WaitForControlReady();
            List<UsabilityAutomation.Repository> list = UsabilityAutomation.Usability.GetUsabilityProperties(bw, "input", "sb_form_q");
            foreach (UsabilityAutomation.Repository prop in list)
            {
                Assert.AreEqual(prop.padding_left, "9px");
                Assert.AreEqual(prop.padding_right, "5px");
            }
        }

        //This method retrieves the properties of all input elements present on the page
        [TestMethod]
        public void CodedUITestMethod4()
        {
            BrowserWindow bw = BrowserWindow.Launch(new Uri("http://www.bing.com/"));
            bw.WaitForControlReady();
            List<UsabilityAutomation.Repository> list = UsabilityAutomation.Usability.GetUsabilityProperties(bw, "input");
            foreach (UsabilityAutomation.Repository prop in list)
            {
                // your assert logic
            }
        }

Thursday, January 17, 2013

TFS Live - Privacy Policy


TFS Live - Privacy Policy:

We are not affiliated with the Microsoft Visual Studio team, and this is not the official Visual Studio TFS app. We value your privacy. Here is our privacy policy: The application collects your TFS credentials solely for the purpose of transmitting them to your TFS Server. The app does not store any of the information that you read or share. You can send an email to raj.kamal@outlook.com if you have any concerns related to your privacy. All information received via email is treated with strict confidentiality and used solely for the purpose of improving this app. We do not send commercial and/or unsolicited emails.