Once upon a time, in a company (thankfully) far, far away, I had an idea for a blog post titled “Testilocks and the Three Cares”, in which I planned to detail the extremes of testing and what could be considered “just right”.
Recently the excellent Katrina Clokie blogged about the testing pendulum, which runs along a similar vein, although truth be told her analogy is a lot more fluid than mine and allows for the reality of testing, where every level between too deep and too shallow can be explored. Mine was only going to cover three individual states summarising the extremes – a lazy blog post in all truthfulness, so perhaps that’s why it disappeared from my mind until it was blown away by Katrina’s excellent post – http://katrinatester.blogspot.com.au/2016/12/the-testing-pendulum-finding-balance-in.html
“too hot” was going to be “too deep”
“too cold” was going to be “too shallow”
“Just right” didn’t need a comparison. 😉
In any case, there’s little value in posting details of that phantom blog post beyond those above. Instead I’ll skip forward to a short conversation Katrina and I had on Twitter, where I said there would be times I would deliberately deviate from the ideal testing position on the pendulum Katrina describes.
For me everything is context, and as a micro-term contract tester my daily testing is, I think, fairly different to that of the majority of testers out there. I tend to work on projects that range from half a day to three months at the extremes, but that are generally around three days long. That exposes me to everything from iOS apps to Android apps to responsive websites to API verification to standard “we built this and we’d like it tested” web design.
Because of the micro nature of these contracts, I quite often have to risk-analyse the project, prioritise specific functionality over other functionality based on designs, user stories, quizzing product owners and so on, and concentrate on those areas first – sometimes to the detriment of other functionality I’d also like to have covered in an ideal world – and still get the project tested to a good-enough standard within what are normally extremely tight constraints.
In the vast majority of cases the choice to deviate from more or less ideal testing depth is driven by time or budget constraints the client imposes.
Potential candidates for “deliberately too deep” testing on functions might include:
- The website is brand new, has been built in-house by a client’s dev team, and is based on no known CMS. It needs an extensive look at layout, at how specific objects render and affect others, and at CMS-to-front-end functionality, which I’d normally explain to the client we don’t test.
- The site is live and they’ve added a key new section detailing terms, conditions, business details etc and they’re worried about public perception if this info is incorrect. They’ve placed a higher value on this than the site’s actual functionality.
In the majority of “too deep” scenarios we have plenty of time and budget to thoroughly test many functions to a much higher level than would normally be suggested. The client gets peace of mind that we’ve done an extremely thorough job of the testing and tested much more than we feel is necessary.
And for “deliberately too shallow”:
- It’s an existing website where page templates have been tested extensively already so layout is lowered in priority and testing will be shallower than ideal.
- The app is live and they’ve added a single new feature. They have limited budget/time and want that feature tested thoroughly, but are confident nothing else will be affected and are happy to disregard our advice about bringing other related or key functions into focus.
- The website is a framework and the client has said they’re aware of content issues and aren’t fussed, as they’ll update the content at a later date. I then lower content’s priority and test it at a shallower (or, if needed, non-existent) level.
In almost all cases shallower testing occurs because my focus is on more business-critical functionality or an area that the client has expressed concern about. It’s inevitably a calculated risk but a necessary one.
Now, by no means is this an exhaustive list, but I’m hoping it’s a step toward explaining my mindset when certain functions are deprioritised into shallow testing, or others are prioritised enough to go deeper than I’d ideally like.
If I had full control over everything that came in through the door, I’d always aim for the Goldilocks zone.
Comments always welcome. 🙂