As we begin another year and try to predict where quality assurance (QA) will go in the next few years, we need to reflect for a moment on where QA has been — especially with the dire predictions in recent years that QA in software engineering is dead.
One thing that is dead is the traditional way of doing QA. The days of huge QA departments conducting testing mainly through manual methods, usually as a phase after the development team is done, are gone. Market pressures and the fast pace of software releases have made sure that relying on manual testing alone as your QA strategy is no longer acceptable.
Having said this, organizations that think that test automation can be used as their sole method of testing software have already realized — or are in the process of realizing — that this method alone is also not adequate. So where will the QA pendulum stop?
In the middle, as it usually does.
MANUAL TESTING IS DEAD … OR IS IT?
As managers debate what their testing strategy should be in 2016 and beyond, looking back to see how the QA of software has changed over the last decade can help prepare them for what will happen. To help me reflect on what was happening in QA a decade ago, I decided to review what I was doing in 2005 as a QA director in a security software firm. Here is what was at the top of my priority list in 2005:
- Windows XP was the main OS. How could we test it and all of the associated issues with drivers and all the service packs? The phrase of the moment was “testing matrix.”
- Internet Explorer 6 was the main browser, and we faced the challenges of how to test it given all the issues we had with this older browser and the technologies it used.
- Firefox 1 was starting to gain some favor with users, but it was still early.
- Netscape was the other main browser.
- How could we test everything before we had to ship the software on a CD? (Remember when we used to ship software?)
- How could we automate the testing of the software?
- How much manual testing should we do?
- How could we keep up with the development team?
Smartphones were something we had heard about, but they really had no implications for most software teams. We all had flip phones back then.
Most QA managers and directors at the time were being pushed to do more automation when it came to testing. Test automation was going to be the magic bullet that would finally enable QA to keep up with the development team. (I am sure I heard the same thing in the 1990s about test automation — it seems like déjà vu.)
In 2005, manual testing was just not cutting it and was on the way out. Or was it?
2016 AND BEYOND
As I consider what managers and directors are facing today, it looks and feels different, but the challenges are still the same.
Sure, we have come a long way since 2005 in terms of QA. Browsers have matured, and it is this maturity that has simplified the lives of QA teams. Shipping of software is now a foreign concept because all software is delivered via downloads, so the implications of releasing a bug into the field have been significantly reduced. Updating software on consumer devices is now a common everyday task that even the most basic user of a device understands. The flip side of that coin, though, is that releasing a significant bug into the field is now much more damaging to an organization’s reputation thanks to social media. For proof of this, just peruse any of the “top 10 software blunders” lists of the last decade.
“Older” technologies have gotten more mature, but there is a whole slew of new technologies that will help to ensure that both automated and manual testing will be required not only in 2016 but beyond. Software engineering managers and directors will need to keep up and make sure they have the right mix of both kinds of testing. You cannot just plan to use one type given the endless ways that software is being used and will be used in the future. Software has spread to almost every corner of our lives. In addition to the obvious computers and smartphones, here is just a short list of the things in which software is being used today:
- Entertainment devices (Blu-ray players, personal video recorders, music players)
- Appliances (refrigerators, washers, dryers)
- Light bulbs and home lighting systems
- Security devices (cameras, deadbolts)
- Exercise machines
- Wearables (watches, fitness trackers, etc.)
- WiFi everywhere
This Internet of Things (IoT) will require software engineering teams to have the right level of automated testing developed by both development and QA engineers, and they will need to take a balanced approach to manual testing as well. Automated testing is essential to being able to deliver and meet the aggressive deadlines to stay competitive. However, until we have robots delivering software for other robots (somewhere Isaac Asimov is smiling), at the end of the day it is human beings that are using this software. And anybody who has been delivering software for a while knows how unpredictable these humans can be!
When it comes to test automation, the question of which tools to use is the same one that QA people have faced for years. What has changed — and will continue to be true in 2016 and beyond — is that no one tool will do it all. Given the proliferation of Internet-aware devices, software engineering teams have to have many tools in their tool belt, and among these they must consider open source tools. Open source tools have proven to be as good as, and in many cases superior to, the tools that vendors are selling. The reason for the move to open source tools is that the packaged tools have either failed to advance test automation, or the exorbitant cost of these tools has forced organizations to rethink their test automation strategies and consider using free open source tools. Some organizations will argue that the commercial tools can be used “out of the box,” but the reality is you still need someone who knows how to run the tools and maintain the test automation scripts, just as for open source tools, so there are no savings in this area.
In 2016 and beyond, organizations should look at open source tools like Selenium, Appium, Calabash, Ruby, and Swift for iOS; Python as a scripting language for test automation; the TLIB test automation library; and other open source tools. Commercial tools are no longer the only option for test automation, and organizations need to weigh the alternatives depending on their particular reality.
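Whichever tools an organization picks, most automated UI checks share the same basic skeleton. The sketch below illustrates the page object pattern commonly used in Selenium and Appium suites, where tests talk to a page abstraction rather than to raw driver calls. The `LoginPage` and `FakeDriver` classes here are hypothetical stand-ins so the example runs without a real browser or driver.

```python
# Minimal sketch of the page object pattern used in Selenium/Appium suites.
# FakeDriver is a hypothetical stand-in for a real webdriver session so this
# runs anywhere; a real suite would use selenium.webdriver instead.

class FakeDriver:
    """Pretends to be a browser session that accepts one known account."""
    def submit_login(self, user, password):
        return user == "alice" and password == "s3cret"

class LoginPage:
    """Page object: tests call log_in(), never raw element lookups."""
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        # With real Selenium this method would locate fields and click, e.g.
        #   self.driver.find_element(By.ID, "user").send_keys(user)
        return self.driver.submit_login(user, password)

def test_valid_credentials():
    assert LoginPage(FakeDriver()).log_in("alice", "s3cret")

def test_invalid_credentials():
    assert not LoginPage(FakeDriver()).log_in("alice", "wrong")

if __name__ == "__main__":
    test_valid_credentials()
    test_invalid_credentials()
    print("all checks passed")
```

In a real suite, a runner such as pytest would discover and execute the `test_*` functions, and `FakeDriver` would be replaced by an actual Selenium or Appium driver session; the page object layer is what keeps the tests maintainable as the UI changes.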
AGILE AND QUALITY
The last decade has seen a big push for everyone to become “Agile.” While becoming more Agile in your software development processes is a great goal to set for your organization, we should not forget one of the fundamental reasons for doing so: increasing the quality output of the team. As Ken Schwaber and Jeff Sutherland say in their Scrum Guide:
The Scrum Master encourages the Scrum Team to improve, within the Scrum process framework, its development process and practices to make it more effective and enjoyable for the next Sprint. During each Sprint Retrospective, the Scrum Team plans ways to increase product quality by adapting the definition of “Done” as appropriate.
Companies that grasp this fundamental concept about quality in Agile/Scrum and set it as a goal for their team(s) will be the ones that are successful in the future, as the demands for faster releases and more features will surely continue. If you build a culture of quality in your organization it will pay for itself, and in the end it will help your company make money. Organizations that work to build in quality rather than trying to test it in will have a significant market advantage, as many companies (Apple, Honda, and Toyota, to name just a few) have shown.