Abstract
Someone recently asked me: when we have daily builds going into production, near-daily OS, browser and platform updates that can break our applications, and a multitude of devices (PCs, tablets, mobiles) with different form factors to support, “Can you do all the testing that is required every day?”
And then the next question: “If you need more time than that, then you are probably too slow to keep up with the pace at which our ecosystem is evolving. Why don’t we just enable our developers to test on the fly? Why not automate everything? Can testers really add value anymore?”
AI is the new UI. In other words, future devices may not have a traditional UI against which to run automation using QTP/Selenium. Future devices will not just be tablets and smartphones but also smart watches, glasses, cars and more, and they will take multiple forms of input, such as touch and voice.
Now we can look at this chasm and think about turning back, or falling in, or we can see the opportunities that lie ahead if we take a big leap forward. My response to these challenges is that not “all” testing can be commoditized, because testing is both an art and a science. You cannot automate everything and replace the intelligence of human testers. No doubt we testers need to become much more agile and change the way we think about testing. My counter-question, however, was: “Would you drive to the office in a car that was only unit tested by a developer?”
Key Questions:
1. De-mystifying the role of traditional testers in the years to come and what is expected of them.
2. Adapting to embrace daily releases and supporting the consumer devices that run our apps, including trends like the consumerization of IT and BYOD.
3. New types of testing for software that runs on embedded devices and takes voice input from humans. Could we be working in hardware labs where hardware and software are tested together, as future devices become more life- and mission-critical?
4. Why developers still need to focus on development and cannot replace testers, though they can of course help.
Challenges with today’s testing
The question most often put to development teams in reputed organizations today is, “Why do I, the client, need to spend additional money on testing when I am already paying a premium for high-quality developers?”
Such questions are not without logic. Let us see where the client is coming from.
To the client, an organization taking up the project must already have delivered similar projects in the past, must have a highly skilled pool of resources to accomplish the work, and must have capable supervisory resources to ensure there are no execution delays. However, as we understand, each project is like a new innings: you cannot simply replicate your past laurels, but you can draw from your experience.
The purpose of this paper is precisely to help the testing community answer such questions. One may even face such questions from CTOs/CIOs when justifying testing effort. The entire existence of testing services then faces a question mark: do we have answers that ensure the need for testing in the future?
There are two parts to this answer and a larger
implication for subsequent projects.
a) Functional testing vs. non-functional testing
The first part is that testing has ceased to be confined merely to functional testing. With the growing tribe of devices, platforms and applications, testing has become an envelope of services covering functional and several non-functional types of testing. For any forthcoming project, it will therefore become commonplace for the client to pay for a bouquet of testing services, not merely functional testing, which is commonly considered a plain-vanilla form of testing meant to unearth development pitfalls. We are on the verge of a paradigm shift in which usability and user experience have assumed such significance that usability is considered a must-have and the associated functional testing a given. This is a departure from the earlier industry trend of treating non-functional testing as good-to-have while the focus remained on functional testing as the must-have. Clearly, effort today is more pronounced in value-added testing than in the traditional testing of the past. Testing services are therefore no longer an added cost but an investment in achieving a wow factor through value-added testing services.
b) Speed of delivery
The second part of the answer goes deeper into what functional testing will comprise. With faster development sprints and release cycles, testing resources need to be one step ahead of the development folks: first to detect a flaw, and then to keep doing so with incessant regularity and speed. We would have to agree that the higher the quality of the development effort, the higher the testing capability must be to ensure uncompromising quality. Additionally, customizations and related activities will assume significance in many projects, where testing of core functionality could otherwise end up on the back-burner.
These answers, however, do not merely indicate how testing services will be offered; they point towards a larger implication: testing services will be targeted at reducing release times. This means first justifying the entire bouquet of testing services that any codebase needs for quality assurance, and at the same time ensuring it is delivered in the least possible time. In fact, in increasingly less time.
Adding to this race against time are greater pitfalls, such as ensuring that controlled updates, like scheduled application updates, do not break anything. The bigger issue in the update space, though, is uncontrolled updates, which are pushed by OS manufacturers and come thick and fast these days across devices. The challenge is therefore to ensure that project sponsors are aware of the need for elaborate testing, and that testing teams are adept enough to deliver these services in shorter spans of time without compromising release quality.
Impact of Future Trends
This is not merely a classical time-and-work mathematics problem. It is simply not possible to shrink testing timelines without touching resources or scope; at least, that is what the age-old scope-budget-time triangle taught us in software engineering classes. Instead, the focus now is to strategize: on testing methodologies and practices that will enable us to adapt to the new era of daily releases, platform ubiquity and backward compatibility. In fact, strategizing on testing services has never been so important. We are now in a phase where the poster boy of testing, “automation testing”, faces a gripping test of its own. Not only is automation going to be very expensive in projects requiring daily releases and updates, it may also not be feasible.
a) Need to modify the testing approach to accommodate faster releases to production
Let us examine what the gap is today and what can be done to bridge it. We still see widespread prevalence of traditional functional testing services, in several cases with little or no automation, in spite of the emergence of offerings such as testing-as-a-service and pure-play testing firms offering a bouquet of functional and non-functional testing services. The key element here is the adherence to traditional ways of software testing. Despite organizations’ efforts to bring their resources up to speed on cutting-edge trends and services, there is clearly a mismatch between the pace at which development and testing are carried out and the expected time to go live in the market. We need to change our test strategy to get the product into production quickly while still maintaining quality.
It is essential to understand how this change can be brought about. The truth is that there is no single way of planning and adapting our testing methodologies. Each organization is used to a different way of approaching a project, decomposing it into meaningful, testable pieces and then running quality tests on the whole.
Minimum Viable Quality (MVQ)
Many products and services, such as Bing.com, ship rapidly to production. To keep pace with such rapid shipping, we need to adapt our testing approaches now. We cannot continue with traditional approaches in which certifying a build for production took weeks. To certify such rapid builds, we should consider the concept of Minimum Viable Quality (MVQ).
MVQ promises to help online service teams ship more frequently, with less testing and lower (initial) quality, and more efficiently, while actually exposing bugs to fewer real users and at a lower overall risk than traditional methods involving stabilization phases and orchestrated releases to production. The reality is that at Microsoft we have always been comfortable shipping unbaked software; that is why we have dogfood builds, alphas, betas and developer previews. All of these labels tell users that they will find bugs, but that we feel the quality of this version of the software is just good enough (the minimum) to make it worth their time to use. The only time we ever really hit ship quality is the final golden build, and even then we have always released products with some known bugs. After all, what is a Zero-Day Patch if not a mulligan on sign-off?
The problem you get into with modern services development is that you are never actually done shipping. There is just the next feature, the next scenario, the next set of performance improvements, and the never-ending list of bugs to fix. Many Microsoft testers bring the ship-quality mindset to services releases instead of looking at each release as just another incremental step, where some features are now ready for mass consumption and others may need to remain hidden, exposed only to internal “dogfood” users or external early “beta” adopters. Perhaps the new code is pushed to a subset of users and, if it is too buggy, a quick fail-back to the last known good build takes the code out of use with minimal negative user impact.
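As an illustration of that push-and-fail-back idea, here is a minimal sketch of a percentage-based rollout with an automatic fail-back to last known good. It is a sketch only: the in-memory flag store, the feature name and the error-rate threshold are hypothetical stand-ins for whatever configuration service and health signals a real team would use.

```python
import hashlib

# Hypothetical in-memory flag store; a real service would use a
# configuration service or experimentation platform instead.
ROLLOUT = {"new_checkout": {"percent": 5, "enabled": True}}

def bucket(user_id: str) -> int:
    """Deterministically map a user to one of 100 buckets."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def is_exposed(feature: str, user_id: str) -> bool:
    """Expose the feature only to the canary percentage of users."""
    flag = ROLLOUT.get(feature)
    if not flag or not flag["enabled"]:
        return False
    return bucket(user_id) < flag["percent"]

def fail_back(feature: str, error_rate: float, threshold: float = 0.02) -> None:
    """If the canary cohort's error rate crosses the threshold,
    disable the flag so traffic falls back to last known good."""
    if error_rate > threshold:
        ROLLOUT[feature]["enabled"] = False
```

The point is that the blast radius of an under-tested feature becomes a dial the team controls, not an accident.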
What is the advantage of taking an MVQ approach to services delivery? The bottom line is that you start getting data about how the code functions in production, with real users, more quickly. The key is to balance the minimum so that the feature set, and the quality of those features, is not so low that the target user segment does not use the product at all. If quality is too low, you will not discover the harder-to-find bugs, because the code will not be exercised.
The trick is to realize that it is the science behind this process that needs to change, not the process itself. For instance, features of a mobile OS that take a long time to reach the market may end up obsolete or see zero adoption from the app development community. Instead, the feature set can be broken into smaller sets of sub-features with periodic go-lives, giving the app development community a continuous feed of feature sets they can leverage to build additional apps, which in turn ensures further development when the entire feature set is released.
Reduction in test effort for activities like test environment preparation and test case writing
A key question, of course, is how test teams will shape up for such scenarios. Will a test team still spend 20% of its effort preparing test cases and the test environment? If we are to test builds daily, can we really afford to spend time updating test cases or debugging issues in the test environment? Clearly, we will move from an era of end-to-end functional testing to an era of greater ownership of application modules and a more granular approach to scheduling testing: for instance, passing results and unlocking test scenarios from one testing resource to another in an assembly-line-like setup. The eventual approach will depend on how the organization plans to set up its testing teams in the near future, evolve its test organization, and adopt practices such as crowd-sourcing, testing-as-a-service and so on.
Alpha/beta testing and crowd-sourced testing
A cursory look at these evolving practices highlights trends such as Testing-in-Production, which essentially translates to alpha/beta testing: releasing an MVQ application into production and waiting for users to unearth issues with it. This is widely practiced by Google, for instance, whose applications have been known to remain in beta for years before finally coming out of it. A similar trend, as mentioned earlier, is crowd-sourcing; the only difference is that the application is exposed to a group of people who usually excel at testing, rather than to the entire intended audience. So why do we need testers at all if we can throw the application open to the audience in production or to a competent set of crowd-sourcing folks? The answer lies in controlled adoption of these trends. For instance, if an application in beta receives frequent reports of issues within a particular module, the project team should test intensively within that module and in the flows affected by it, as the sketch below illustrates.
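A minimal sketch of that feedback loop, assuming a hypothetical list of (module, issue) pairs pulled from a bug tracker, is to rank modules by incoming beta reports and let the ranking direct where intensive retesting happens:

```python
from collections import Counter

# Hypothetical beta feedback: (module, issue_id) pairs; in practice
# these would come from your issue tracker's API.
beta_reports = [
    ("payments", 101), ("payments", 102), ("payments", 105),
    ("profile", 103), ("search", 104),
]

def hot_modules(reports, top_n=3):
    """Rank modules by the number of beta-reported issues, so the
    team knows where to focus intensive, exploratory retesting."""
    counts = Counter(module for module, _ in reports)
    return counts.most_common(top_n)

print(hot_modules(beta_reports))
# [('payments', 3), ('profile', 1), ('search', 1)] -> retest payments first
```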
So, essentially, the horizon of testing has gradually expanded from simply closing testing efforts with UAT to supporting the application even in production, with a variety of variables on top: ensuring it keeps functioning through controlled and uncontrolled updates, proper support for globalization, adapting to loss of internet connectivity by ensuring offline availability, and even seamless experiences for users accessing the same application from one device to another.
b) Complexity of systems has increased. Does it impact our testing?
The testing strategy is not going to be revised just for today’s trend of rapid app development and pushing to production without spending ‘n’ testing cycles, but also for trends that will greatly impact the way we test today. The complexity of applications has increased exponentially. We are no longer in an era where we developed an application for the Windows XP operating system that would run only on Internet Explorer 8. Testing is no longer limited to one OS or two browsers; the number of variables impacting the functionality of the application has increased.
Let’s take the example of a banking app designed for the Windows 8 operating system. A few years back, if we had to test a banking application, the use cases would be very limited: one would verify the core functionality on one OS and one browser. It was acceptable to state that the application would work only with one particular OS and one particular browser, and usability testing was not a key part of the test strategy.
Figure 1: How testing permutations and combinations across various devices have increased the scope of testing.
But today, the realistic use cases for such an app, beyond its pure functionality, would include the following (a test-matrix sketch follows the list):
- Can the app run on Windows Phone 8?
- Can the app handle operating system updates being pushed?
- Can the app be used on multiple touch devices such as Surface, iPad and other tablets?
- Can the app work on browsers such as Safari, Chrome, Firefox and several versions of Internet Explorer?
- Can the app handle app updates pushed along with OS updates?
- Can the app handle all types of input methods, such as mouse clicks, touch, keyboard, gestures and voice? Several permutations and combinations arise from this alone.
- Can the app handle updates pushed to the platform on which it is designed?
And the list does go on.
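To make the combinatorial explosion concrete, here is a minimal sketch of fanning one functional check out across a browser/OS matrix, assuming pytest and Selenium 3-style remote WebDriver; the grid URL, the app URL and the capability values are placeholders, not real endpoints:

```python
import itertools

import pytest
from selenium import webdriver

# Placeholder matrix; a real project would pull this from its
# supported-configuration list.
BROWSERS = ["chrome", "firefox", "safari", "internet explorer"]
PLATFORMS = ["Windows 8", "Windows Phone 8"]
MATRIX = list(itertools.product(BROWSERS, PLATFORMS))

@pytest.mark.parametrize("browser,platform", MATRIX)
def test_login_page_loads(browser, platform):
    """One functional check fanned out across every browser/OS pair.
    http://grid.example.com/wd/hub is a placeholder Selenium Grid URL."""
    driver = webdriver.Remote(
        command_executor="http://grid.example.com/wd/hub",
        desired_capabilities={"browserName": browser, "platform": platform},
    )
    try:
        driver.get("https://bank.example.com/login")
        assert "Login" in driver.title
    finally:
        driver.quit()
```

Even this small sketch yields eight combinations from one test; multiply by input methods and update states, and the need to prioritize rather than enumerate becomes obvious.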
Application under test will run on a “family of devices”
We also have apps that people use when working from home, on the office network, or on a public network. For example, with Lync, a user may start taking a call on his laptop in the office, move to his car and continue the call over his phone using the Lync app, and, on reaching home, ultimately switch to his Surface to finish the call. Can our app support this kind of usage? The concentration of testing has moved from functionality to usability: at no point should the app break, irrespective of the ‘n’ variables impacting it.
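That device-handoff expectation can itself be written down as a check. The sketch below models the scenario with a hypothetical CallSession class; the join/transfer API is invented for illustration and is not the Lync API:

```python
# Hypothetical session object; join/transfer are invented for
# illustration and do not correspond to any real Lync API.
class CallSession:
    def __init__(self):
        self.active_device = None
        self.connected = False

    def join(self, device: str) -> None:
        self.active_device = device
        self.connected = True

    def transfer(self, device: str) -> None:
        # The call must stay connected while ownership moves.
        assert self.connected, "cannot transfer a dropped call"
        self.active_device = device

def test_call_survives_device_handoff():
    session = CallSession()
    session.join("laptop")                  # office
    for device in ("phone", "surface"):     # car, then home
        session.transfer(device)
        assert session.connected
        assert session.active_device == device
```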
Testing in Production
With so many variables to test, how will one define the test coverage at the end of the cycle? You can state in your testing scope that the app will be tested only for particular use cases, but from a usability perspective, in today’s era, you cannot direct users to use the app only with Internet Explorer just because testing was done with IE only. The answer to such situations is not to grow the test team or add test cycles to cover every combination of devices. Instead, crowd-sourcing and Testing in Production (TiP) can take care of this. If we do more of our testing with production data rather than dummy data, we are doing much more real-world testing and can uncover more production-environment issues.
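One common way to act on that idea is to replay sampled production requests against the new build and diff the responses against the live service. Here is a minimal sketch; the two endpoints are placeholders, and a real setup would sample paths from access logs and scrub any sensitive data first:

```python
import urllib.request

# Placeholder endpoints: the live service and the candidate build.
PROD = "https://service.example.com"
CANDIDATE = "https://canary.internal.example.com"

def fetch(base: str, path: str) -> bytes:
    with urllib.request.urlopen(base + path) as resp:
        return resp.read()

def shadow_compare(sampled_paths):
    """Replay sampled production requests against the candidate build
    and report any responses that diverge from production."""
    mismatches = []
    for path in sampled_paths:
        if fetch(PROD, path) != fetch(CANDIDATE, path):
            mismatches.append(path)
    return mismatches

# Paths would be sampled from production access logs.
print(shadow_compare(["/accounts/balance", "/accounts/history"]))
```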
Will there be such a thing as a test environment?
Traditionally, we invested a lot of time in preparing the test environment to make it a replica of the production environment. Test leads used to keep one test resource dedicated to preparing and maintaining it. Now, however, we have our environments in the cloud, which has reduced test environment setup time drastically. Test teams can therefore dedicate more resources and time to testing cycles. Having the environment in the cloud also makes it more production-ready in terms of configuration and database setup. In Microsoft Azure, when a testing cycle is completed, the test environment can be swapped into production, and the old production environment becomes the test environment.
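The idea behind that swap is the staging/production slot pattern. The sketch below models it abstractly in Python; Environment and swap are hypothetical stand-ins for illustration, not Azure SDK types or commands:

```python
# Hypothetical model of the staging/production slot-swap pattern;
# not an Azure SDK type or command.
class Environment:
    def __init__(self, name: str, build: str):
        self.name = name
        self.build = build

def swap(slots: dict) -> None:
    """Promote the tested staging build to production atomically;
    yesterday's production becomes the next test environment."""
    slots["production"], slots["staging"] = slots["staging"], slots["production"]

slots = {
    "production": Environment("production", "build-41"),
    "staging": Environment("staging", "build-42"),  # just passed its test cycle
}
swap(slots)
assert slots["production"].build == "build-42"
assert slots["staging"].build == "build-41"  # now the test environment
```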
The Case for Human Intelligence
So far, we have talked about how apps are impacted by the presence of so many devices: touch-enabled devices, mobiles and so on. But future devices are not limited to touch. We are going to have devices such as smart glasses, smart watches and, perhaps, artificial-intelligence-driven cars. Testing an application on such devices using an emulator is not going to be enough; an emulator cannot bring the human piece of testing to such devices. Take the example of an artificial-intelligence-driven car: the car should apply automatic braking if the sensor senses an obstacle at 30 feet, and should reduce speed to 20 km/h if an obstacle is sensed at 60 feet. If such scenarios are only unit tested by the developer using an emulator, will you feel safe in such a car? Will you feel safe knowing that only the code was tested, and not the car’s behavior against several types of real obstacles? With an emulator you cannot validate this, and hence the human element of testing cannot be removed, even in the future and even with the smartest ways of development.
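For contrast, here is roughly what that developer-level unit test might look like, using the thresholds from the scenario above; braking_decision is a hypothetical controller invented for illustration. It proves the decision logic against simulated numbers, and nothing more:

```python
# Hypothetical braking controller using the thresholds from the text:
# full braking at <= 30 ft, slow to 20 km/h at <= 60 ft.
def braking_decision(obstacle_distance_ft: float, speed_kmph: float):
    if obstacle_distance_ft <= 30:
        return ("BRAKE", 0)
    if obstacle_distance_ft <= 60:
        return ("SLOW", min(speed_kmph, 20))
    return ("CRUISE", speed_kmph)

def test_braking_thresholds():
    assert braking_decision(25, 80) == ("BRAKE", 0)
    assert braking_decision(45, 80) == ("SLOW", 20)
    assert braking_decision(100, 80) == ("CRUISE", 80)

# This validates the logic only. Whether the sensor reports 30 feet as
# 30 feet in the rain, or the brakes bite on a wet road, only
# hardware-in-the-loop and human testing can tell.
```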
Importance of User Experience Testing
Definitely, the testing strategy has to change and become smarter. We need to add a strong usability-verification flavor to our testing approach. Verifying the user experience is not just about checking tool-tips on mouse hover; it covers factors such as whether the app is minimalistic, whether individuals can use it with a minimum number of clicks, and whether it is highly responsive and self-explanatory. The coming generation of users will be different from today’s, and so will their expectations of an application; we should consider that factor too when we talk about the user experience of an app. Attention spans are short, and if users do not find the app worth their while in the first 30 seconds of usage, they will probably never try it again. That would not be just a product failure but a failure of the joint effort of test and dev. Hence, it is fair to say that verifying an application’s user experience is equally important and should be part of the test strategy.
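Some of those UX expectations can be turned into automated checks. Here is a minimal sketch of a load-time budget test, assuming Selenium WebDriver and the browser’s Navigation Timing API; the URL and the 3-second budget are illustrative placeholders, not standards:

```python
from selenium import webdriver

PAGE = "https://bank.example.com"   # placeholder URL
LOAD_BUDGET_MS = 3000               # illustrative budget, not a standard

def test_first_load_within_budget():
    """Fail the build if the first page load exceeds the UX budget."""
    driver = webdriver.Chrome()
    try:
        driver.get(PAGE)
        elapsed_ms = driver.execute_script(
            "var t = window.performance.timing;"
            "return t.loadEventEnd - t.navigationStart;"
        )
        assert elapsed_ms < LOAD_BUDGET_MS, f"page took {elapsed_ms} ms to load"
    finally:
        driver.quit()
```

Responsiveness is only one slice of the user experience, of course; minimal clicks and self-explanatory flows still need exploratory, human evaluation.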
Test Effort Distribution:
In essence, it’s time to stop doing things right and
instead do the right things!
Figure 3: How the testing strategy will change in the future compared with today.
Conclusion
Taking overall stock of the situation, a few things stand out. Bluntly put, functional testing may witness a fall in test effort, but that effort will eventually be transferred to other types of testing. From the days of writing test cases and executing them, we will move largely to exploratory testing, where the focus is on intelligent testing and a great deal of human interaction is needed.
So while the future of testing services is secure, the service catalog will be completely rewritten. It is time to move to smart testing: controlling costs while delivering on higher expectations and ensuring the best user experience, not just testing plain-vanilla functionality, and therefore not just using traditional testing tools.
Testing-in-Production and crowd-sourcing are already here as promising ways of delivering the next generation of testing services, while optimizing testing effort through smart resourcing and cloud-powered environments will pave the way forward.
Authors: Gunjan Jain and Raj Kamal