I'm Paul Maxwell-Walters, a software tester and former test lead, hailing from the UK and living and working in Sydney. I am also the current Social Media Officer and Acting Chair of the Sydney Testers Meetup Group. This is my blog for general thoughts, musings and rants about development, software testing and the IT industry as a whole. My Twitter handle is @TestingRants.
Saturday, 14 April 2018
Reflections on the Scripted Test Case
I have my reservations about the pre-scripted test case. In my own work I am happier using exploratory testing charters and automating the checking part. Being quite confident in coming up with test ideas on the fly, I find exploration far more interesting, and in my experience it surfaces more defects.
In my career I have only ever worked in one company that used exploratory testing in a major way (and it was I who introduced it, as test lead!). In my experience, where manual approaches are taken, the majority of the work is done using pre-scripted tests - based on examination of requirements documents or user stories in a tracking system such as JIRA, approved in advance by a test manager or stakeholder, and then executed sequentially. The test lead or project manager would monitor the execution of each test, produce lovely graphs and might even set rough targets of “X” tests completed per day.
In fact, according to this 2017 survey conducted on behalf of PractiTest, 58% of respondents still used scripted testing (the second most common testing approach, behind exploratory testing) and 56% created detailed scripted test cases - the latter figure actually up slightly from 2016.
As a tester I have an uneasy, visceral relationship with this approach, although I have worked, and still work, on projects that use scripted tests. The idea of testing as crossing off a long list of manual checks is a very poor definition – especially in an agile context. When test execution metrics are applied and a daily execution target is set, the incentive is not to enquire and investigate but to get through as many tests as possible. We are judged on our efficiency (if that is even a measure of it), not on our ability to evaluate the product, think outside the box and come up with new strategies in real time – the tasks that should be the “bread and butter” of the testing role.

It also reduces our conception and reporting of the product to a sum of minute behaviours considered separately from each other: a product is fit for purpose when “enough” of its tests pass. Test cases may be regarded as a type of document, but as far as documentation goes they are usually pretty poor. If they weren’t, we would include them in manuals and post them in our site support section, yet this is never done without substantial rewriting.
Scripted test cases also make great presumptions about the product before its release, especially when detailed steps are included. It takes an incredible amount of detailed documentation, or a very precise oracle, to come up with test case steps that are still relevant at the end of the development process. If they are written afterwards instead, the current behaviour of the product can bias the steps and expected outcomes – not to mention the perverse fact that to write the steps at all you have to “test” the product anyway.
Nevertheless, from the perspective of a busy test manager, scripted test cases can seem like a dream, especially in reporting, communication and negotiation with the PM, the development team and key stakeholders. They provide easy (if somewhat naïve) metrics, almost instant evidence that the test team is “doing something”, and fast answers.
Test cases are a deliverable in themselves (and I have often seen them required as a deliverable to stakeholders before testing is allowed to start).
What are testers doing right now? Writing test cases! Are they testing what we want? Take a look for yourself – read the test cases! Is testing efficient? We completed an average of X tests per day! How serious is that defect? It breaks Y test cases, so it must be serious! Why do you need more testing staff? Our “velocity” shows that we will only reach 75% test case completion by the end of the test cycle; without extra hands we can’t complete testing!
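As a rough illustration of how naïve those numbers are, the arithmetic behind them fits in a few lines. A minimal sketch, with hypothetical figures:

```python
# A minimal sketch (hypothetical numbers) of the arithmetic behind the
# reporting answers above: average daily velocity and a projected
# completion percentage at the end of the cycle.

executed_per_day = [42, 38, 45, 40]   # tests executed on each day so far
total_cases = 800                     # size of the approved test suite
days_remaining = 10

velocity = sum(executed_per_day) / len(executed_per_day)  # avg tests/day
projected = sum(executed_per_day) + velocity * days_remaining

print(f"Velocity: {velocity:.1f} tests/day")
print(f"Projected completion: {projected / total_cases:.0%} of {total_cases} cases")
```

Note what those numbers cannot say: nothing about what the tests cover, how good they are, or what was learned while running them.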
Test cases, in much the same way as widget creation in factories, act to de-skill the testing process. The real mental effort goes into coming up with testing scenarios beforehand; once the cases are written, depending on their level of detail, they can be handed to anyone, tester or not, to execute. At this point the tester becomes a pair of hands, an admin assistant with a checklist. And once we decide we want to, we can replace the tester with an automation script – think of all the money saved!
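Indeed, once the steps and expected results are spelled out in full, the translation into automation is almost mechanical. A minimal sketch of that one-to-one mapping, using a trivial inline stand-in for the product (everything here is hypothetical):

```python
# A sketch of how a fully scripted manual case maps, step for step, onto
# an automated check. The Cart class is a hypothetical stand-in for the
# product under test; the thinking happened when the case was written.

class Cart:
    """Minimal stand-in for the application under test."""
    def __init__(self):
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self):
        return round(sum(self.items), 2)

def test_add_single_item_updates_cart_total():
    cart = Cart()                 # Step 1: open an empty cart (expected: total 0.00)
    assert cart.total() == 0.00
    cart.add(9.99)                # Step 2: add one item priced 9.99 (expected: 1 item)
    assert len(cart.items) == 1
    assert cart.total() == 9.99   # Step 3: check the cart total (expected: 9.99)

test_add_single_item_updates_cart_total()
```

Nothing in the execution of those steps requires a tester's judgement, which is exactly the point.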
Yet despite its inherent inefficiencies, deskilling, ennui and lack of real depth and flexibility, a test strategy of rote execution of a suite of manual scripted checks can be effective, and I have worked on successful projects where this has been done. Much of this depends on how little the initial requirements change between their elaboration into test cases and the completion of test execution, how “high level” and open to exploration the test scenarios are, and the time and human resources available.
The above reasons are why scripted manual tests are so common in IT teams outside of tech companies and those relying on heavy automation, CI and fast, regular releases. In fact, where I see a test approach consisting mostly of scripted manual checks, it is nearly always requested top-down or mandated as part of an overall quality strategy. Otherwise it is enforced by the use of tools such as QTP, which take a very prescriptive view of testing.
Scripted testing methods are very easy to manage and report on, and I do not believe they will ever go away unless we can report on exploratory testing, bug bashes and other methods in ways that address all of the management communication points above. James Bach and Jonathan Bach, with Session Based Test Management, have provided some answers to these reporting challenges – showing that the problem is not intractable.
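To give a flavour of how that reporting works: an SBTM session report records a charter, a timebox and a breakdown of how the session time was actually spent, and those breakdowns aggregate into numbers a manager can use. A rough sketch, with hypothetical values and field names approximated from the published SBTM materials:

```python
# A rough sketch (hypothetical values; field names approximated from the
# published SBTM materials) of what a single session report captures.
# Aggregated across sessions, these give management its numbers without
# reducing testing to a count of scripted cases.

session_report = {
    "charter": "Explore the checkout flow for price and rounding errors",
    "tester": "PMW",
    "start": "2018-04-14 10:00",
    "duration_minutes": 90,
    # How the session time was actually spent, as rough percentages
    "task_breakdown": {
        "test_design_and_execution": 65,
        "bug_investigation_and_reporting": 25,
        "session_setup": 10,
    },
    "charter_vs_opportunity": "80/20",  # on-charter vs off-charter time
    "test_notes": "Totals recompute correctly when items are removed.",
    "bugs": ["Discount applied twice when a coupon is re-entered"],
    "issues": ["No test data available for gift cards"],
}
```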
Comments
It's a good idea to compare manual testing with current practices in test automation or agile/DevOps. In both cases organizations don't have a good understanding of testing, yet they are content with what they are doing (I think this is misguided). I don't think we need to single out manual testing; in the current environment the bigger issue is the lack of understanding of testing among agile/automation/DevOps followers.
Exploratory testing is very effective at finding defects and should be part of the overall test strategy. It tends to go undocumented, though converting the defects it finds into test cases can help document it. Pre-written test cases rarely yield defects, but they do confirm that things are working. The effectiveness of test cases can be measured using code coverage tools, although that is expensive. I have used a structured exploratory testing approach as a middle way between scripted and exploratory testing, so that the exploratory work is documented and has some structure around it.
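For reference, the measurement side of that is straightforward with a tool such as Python's coverage.py; the expense lies in interpreting and acting on the numbers. A minimal sketch, with a hypothetical module under test:

```python
# A minimal sketch of measuring statement coverage with coverage.py's
# Python API (pip install coverage). The command-line equivalent is
# `coverage run -m pytest` followed by `coverage report`.

import coverage

cov = coverage.Coverage()
cov.start()

from myshop import checkout   # hypothetical module under test
checkout.run_smoke_checks()   # hypothetical entry point that runs the checks

cov.stop()
cov.save()
cov.report()                  # prints per-file statement coverage percentages
```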
Do you have some knowledge of scriptless automation, for instance?