At TestBash Brighton 2016 one of the superb talks was “Building the Right Thing” with Lisa Crispin and Emma Armstrong. The talk started with a very basic task: “make something that flies through the air”. My first thought on hearing this was, “I’ll scrunch up the paper and throw it; that would be funny”, quickly followed by “but I’m sure they don’t mean that, and anyway I can make superb paper planes, which would be more fun”.
I made a plane ready to fly and then we were told that the requirement was one of Minimum Viable Product (I hate this phrase but that’s a separate blog post!) so my initial thought would have been spot on. Gutted.
I’d completely over-thought the task at hand.
This year at TestBash Brighton in their excellent mentoring workshop Shey Crompton and Nicola Sedgewick tasked us with something similar; the task was to build an actual “paper plane” this time though so I got to make a great paper plane and be on task for the challenge. Win win!
This time I knew that, as it was a mentoring workshop, there would likely be a challenge to teach and learn what we created, so I decided to design my plane in as simple a way as I could think of. I based it on a simple dart shape rather than a complex wing design, the idea being that if I threw it hard enough it would fly a nice long way anyway, with the added benefit of being an easily teachable design. As predicted it flew across the room and happy was I!
We were then told to describe the process of building our planes to a partner (nice to know my intuition was right on that).
I was also confident that I could easily describe the process for my partner to replicate my plane…
Except that what is simple for me isn’t necessarily simple for others. I understand what my mind can process, understand and retain easily, but I can’t know that of someone else, especially someone I’d never met until that moment. My partner did successfully follow my instructions, although it was far from as easy as I’d envisioned – I’d have been better off making an even simpler, less detailed plane, trading flight capability for reproduction capability.
(K)eep (I)t (S)imple (S)tupid
As testers we often can’t see the wood for the trees. We look at a system or a website or an app to be tested and immediately our mind fills with the potential things we can subject the system to in order to test it. Sometimes our mind is exactly on task and the intended tests are precisely what’s needed. Other times though we over-think it and end up wasting time in convoluted tests or bug investigations that could have been much simpler and allowed us to move on to other important tests.
We also try to make judgements and estimations of others’ capabilities, whether it’s in our bug reports, the test scripts (ugh!) that clients demand, or in meetings we have with our team.
Let’s look at some practical examples:
Example #1 – “I’m going to create a tool for test data creation”
Except that an account takes 2 minutes to create and the project only has 2 days of test time, so in reality creating the accounts manually for the duration of the project would have taken only two-thirds of the time you spent building the tool.
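A quick back-of-the-envelope check helps here. The numbers below are invented for illustration, not taken from any real project – plug in your own estimates:

```python
# Rough break-even check before building a test-data tool.
# All numbers are hypothetical -- substitute your own estimates.
MANUAL_MINUTES_PER_ACCOUNT = 2    # time to create one account by hand
ACCOUNTS_NEEDED = 30              # accounts the project will realistically use
TOOL_BUILD_MINUTES = 180          # estimated time to write and debug the tool

manual_total = MANUAL_MINUTES_PER_ACCOUNT * ACCOUNTS_NEEDED
decision = "build the tool" if TOOL_BUILD_MINUTES < manual_total else "just do it manually"
print(f"Manual: {manual_total} min, tool: {TOOL_BUILD_MINUTES} min -> {decision}")
```

Two minutes with numbers like these can save two days of tool-building that the project will never recoup.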
Example #2 – “The main banner image is missing on one specific browser”
You’re testing a website and your baseline is Google Chrome. You’re happy with your coverage on the baseline and you move on to test on other browsers. Firefox passes with flying colours, Edge is happy and so is Internet Explorer 11. Safari has a couple of minor issues that you report and the next browser on your hit list is Internet Explorer 10.
You boot the clean image test machine, clear down cookies and cache and load the page but the main banner is missing.
A quick investigation verifies that the image is declared correctly in the code and that it exists on the server when the URL is entered straight into the address bar of the browser, so it seems like a rendering issue.
Better bug it!
Except that you reckon you can track down the exact cause of the problem and provide the developer with a load more info than a simple “Banner image on Internet Explorer 10 does not display” bug report.
You end up spending a further hour figuring out the full problem (the CMS uses a handler that declares a different MIME type than the image actually has) and you report the full details. How great is that?
Wait… you used an entire test session to investigate that? So now there’s another charter that needs to be dropped or postponed?
Maybe just bugging it after 2 mins of investigation and letting the developers investigate the actual cause was a better plan even if you were providing far less information with that bug report.
Over-thought it again (and, ironically, thought nothing about the other charters impacted by over-thinking this one thing).
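For what it’s worth, the two-minute version of that MIME-type check needs no special tooling at all. Here’s a minimal sketch in Python – the payload and declared type are invented for illustration; in practice they’d come from the actual HTTP response:

```python
# Compare a response's declared Content-Type against the file's magic bytes.
# The payload and declared type below are hypothetical stand-ins.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"GIF87a": "image/gif",
    b"GIF89a": "image/gif",
}

def sniff_mime(data):
    """Return the MIME type implied by the leading magic bytes, if recognised."""
    for magic, mime in SIGNATURES.items():
        if data.startswith(magic):
            return mime
    return None

payload = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16   # a (fake) PNG body
declared = "text/html"                           # what the handler claimed to send
mismatch = sniff_mime(payload) != declared
print("Mismatch - worth a bug report" if mismatch else "Types agree")
```

Enough evidence to file the bug and move on; the developer can take it from there.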
So you see, as testers it’s very easy for us to get carried away with our work, over-think the task at hand and produce less value towards the long-term project targets in favour of short-term value to that single task.
Try to be mindful of Pareto’s Principle – you’ll get 80% of the output from 20% of the input and most of the time 80% is more than adequate. If you know you can investigate further perhaps add that offer into the bug and discuss it at the next scrum or with your project manager and allow them to decide.
Recently I did a 99 second talk at TestBash in Brighton in which I compared golf to testing:
Now I know golf is a rather dull subject to many people who may have never played it so I’ll try not to bore you all too much. 😝
The thing with golf, as anyone who has ever watched it on TV knows, is that you only ever need a single golf club to play golf at the highest level… Rory McIlroy, Sergio Garcia, Seve Ballesteros, Tiger Woods… they all only ever used one club! Let me explain…
- Those guys tee off using a driver to hit the ball as far as possible. BOOM and down the fairway the ball flies.
- Then they use the driver again to punch the ball down the fairway or onto the green.
- And if they hit it into the rough or a greenside bunker, guess what club they reach for? That’s right, the driver!
- Then they’re a foot from the pin where just a nice little tap in will win the hole. What club do they use?
Except that actually nobody uses one club all the time because at the very best professional level in golf it would be inefficient (and for the vast majority of weekend warriors it makes the game completely inaccessible and not even remotely enjoyable.)
Every golf club has a purpose and an ideal use case; some clubs have multiple purposes or several use cases, but no club fits all the purposes and all the use cases – and yet I’ve met testers who apply that “one tool fits all” mentality to their testing daily!
I’ve met testers who use one set of browser developer tools and never try others (even berating alternatives without trying them!), and I’ve met testers who have a favoured way of tracking their exploratory test coverage and never look into other possibilities:
- Is a notepad doc sufficient, useful and logical for others to read?
- Would Rapid Reporter give the notes more structure?
- Is the time trade-off worth the benefit to YOUR context?
- Should you be tracking coverage in a spreadsheet because your client is a 3rd party who requires a visual representation of the testing done?
- Does your tool integrate directly with your issue tracker? Should it?
Do people in YOUR context regularly evaluate your testing tools and techniques to make suggestions on improvements or do you sit quietly and not question the status quo?
For a long time I myself had one single choice of proxy software to analyse what’s being sent from and fired back to a test app. I knew what I could do with it, so I never looked into others – what was the point when I knew my chosen app well?
Sometimes tools can be used for purposes other than the intended one, much like someone may choose to “bump and run” a golf ball from the fringe of the green with a club normally intended for long shots off the fairway – for that specific shot it’s a great option. For that specific test you want to do, perhaps cURL + jq is a superb option to pull in some JSON and reorder it for comparison, but for the rest of your testing there may be little value in those tools.
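If cURL and jq aren’t to hand, the same reorder-for-comparison trick can be sketched with nothing but a standard library – the payloads here are invented examples standing in for real API responses:

```python
import json

# Two JSON payloads with the same content but different key order.
first = '{"name": "widget", "id": 7, "tags": ["b", "a"]}'
second = '{"id": 7, "tags": ["b", "a"], "name": "widget"}'

def normalise(raw):
    """Parse JSON and re-serialise with sorted keys so diffs ignore key order."""
    return json.dumps(json.loads(raw), sort_keys=True, indent=2)

print(normalise(first) == normalise(second))  # key order no longer matters
```

Handy for that one comparison; probably overkill for anything else you test that day.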
As testers we should strive to read about tools, try them out, make notes on how we might best use them in our daily testing work, and then maybe leave each tool alone until the task at hand DEMANDS it!
Maybe that will put us in a far less biased position when it comes to using the right tool for the job and it will expose us to more tools, better tools and ultimately make us more efficient in our day to day work.
The driver is not the best club for all shots all the time.
… Continued from TestBash 2016 (part 1)
Coffee done and dusted, more people caught up with and back for more TestBash 2016 goodness.
Talk 3 – Katrina Clokie – “A Pairing Experiment”
As someone who has never pair-tested, never seen developers pair-programming and never heard anybody in real life tell me that pairing is particularly valuable, I was torn 50/50 (well, maybe closer to 70/30 in favour of pairing) when I read about this talk.
Part of me knew immediately of the benefit that sharing knowledge could bring to testers with differing skill sets, different mindsets or that are willing to take on board all ideas and give them a real shot in their daily testing – Testers who want to get better and will entertain all information in that pursuit.
The other part of me simply saw the age-old argument of “while someone is not actively working on a project there are fewer resources on it, and therefore testing isn’t as efficient in that moment of time”.
A few minutes into the talk and the genius is revealed; Native vs Visitor testing!
Native tester (who is involved heavily with the product) “drives” the testing as normal but has the benefit of a second pair of eyes on the screen asking questions; Why did you do that? Can you repeat that? How do you know it works?
Then the Visitor tester (with little knowledge of the product) “drives”, this time with the Native tester’s expertise to navigate and answer questions: “What do I do next?”, “Is this a bug?”, “Can you remind me how to…?”
The benefit here is immense! Having two sets of eyes, each with different domain knowledge, on the same product for the test “session” basically:
- Refocuses the Native tester into questioning *everything*
- Points out potential issues that the Native tester’s domain knowledge may be masking because they instinctively know the answer.
- Potentially provides the Native tester with new approaches to testing their system.
- Provides the Visitor tester with new domain knowledge (useful for future projects)
- Provides the Visitor tester with new approaches which may be useful to their testing and/or project.
- etc etc etc
SO many benefits! Really excited to try and incorporate this type of work if possible!
EDIT: Since this draft was written I’ve spoken to my boss and we’ve agreed to pair test for an hour per week across different projects and skill levels to disseminate experience and knowledge and to generally break the work up a little. Bonus!
Talk 4 – John Stevenson – “Model Fatigue”
(F)requent (I)ntensive (B)usiness Critical (L)egal (O)bvious (T)echnically Risky (S)takeholder Mandated
(S)ecurity (L)anguage Requ(I)rements (M)easurement (E)xisting
(S)couting Obsessively (A)uthentic Problems (C)ognitive Savvy (K)nowledge Attracts Knowledge (E)xperimentation (D)isposable Time
Mnemonics mnemonics everywhere! And lights! And animation! And music! Well that’s certainly woken up the crowd! And now the mnemonics are changing; adapting, combining.
I SLICED UP SACKED COWS
Funny stuff! It flies in the face of those reciting mnemonics like a mantra. I reckon there are a few people with hairs on the back of their neck standing up right about now; it’s borderline blasphemy to some! *chuckle*
The point is well made however. Mnemonics are test models and re-using “cookie cutter” test models without first slicing and dicing them to remove useless inclusions (and add your more useful ones!) wastes time, budget and degrades the effectiveness of the testing effort. It should almost be a pre-requisite that a mnemonic can *only* be used as guidelines.
An example given here is in the photograph; adapting HTSM (http://www.satisfice.com/tools/htsm.pdf) to evaluate valuable areas to automate.
Another fantastic talk!
Talk 5 – Patrick Prill – “Accepting Ignorance”
I am ignorant… To a certain extent and by a certain version of the dictionary definition of it. I think a lot of people are and we all have the ability to correct that issue should we choose to, it’s just that many people don’t have the drive or the need to overcome it (“if it’s not necessary I’m not doing it”).
Ignorance can be boiled down to four letters – DIKW:
- Data – We have an overwhelming amount of objective, raw facts, relatively unordered. The base level of a pyramid.
- Information – Contextual, insightful information. The next level up in the pyramid.
- Knowledge – How to do things the right way. The next level up on the pyramid.
- Wisdom – Doing the right things, and knowing why we should do them. The smallest part of the pyramid; the pinnacle.
Be aware of the borders of your knowledge and use that awareness to improve yourself or speak to others with the knowledge you need in the short term to reach your goals.
Much of the talk seems like common sense to those with a thirst for improvement and knowledge. For those who don’t seek to improve either through apathy or simply not thinking about improving it’s a key talk for the day!
Aaannnddd that’s lunch!
Part 3 coming in the next few days!
So just last week there was another phenomenal Ministry of Testing event; TestBash 2016 and what a great event it was. Unfortunately I wasn’t able to attend the workshops day (next time Gadget; next time!) but the main conference day truly was spectacular.
I absolutely love that feeling of walking into the main bar area before the event starts and seeing wall-to-wall testers and people I’ve not seen for months wandering over to say hi and have a quick catch-up over a coffee. If there’s one thing I regret at this point it’s that our B&B only allows breakfast from 8am, which means no Lean Coffee with people and not enough “catch up” time before the event.
So after a nice catch-up with people we’re ushered into the main auditorium for the festivities to begin. The main man TutuBoss Vern is in front of the stage fending off a bazillion hellos and psyching himself up to be the voice of TestBash 2016; our very own compère. A quick hello, hug and good luck to him and off to find seats we go.
Then we’re off!
With introduction, banter and itinerary for the day delivered more smoothly than a freshly set jelly it’s pretty clear why he’s the voice of the day. Kudos my friend!
So on with the show.
Talk 1 – Emma Armstrong & Lisa Crispin – “Building the right thing”
On the stage are Emma Armstrong and Lisa Crispin to talk about “Building the right thing”, and I have a sneaking suspicion that this talk might be the reason everyone has been given a piece of A4 paper. The talk starts with a challenge: “Build something that flies through the air within 2 minutes”. My gut says to scrunch the paper up and throw it – Minimum Viable Product! My brain, though, has other ideas: it remembers a plane design I created in school that won a distance contest, and it’s definitely achievable within 2 minutes, so I build it with time to spare. Accomplishment!
Emma and Lisa then show a slide of the product they wanted… Can you guess what it was? Damn you brain! Scrunched up paper ball would have been perfect. *sigh*
The talk is then about the fact that, across the many facets of a team, there’s often a disconnect between what the business *say* they want, what they *really* want and what’s *actually* delivered. As testers we’re ideally suited to help piece together all of the disparate parts of the project so that we ALL know what the plan should likely be. We can be information facilitators, creating a shared understanding of the end product expected by the client.
The talk perfectly verbalised why one of my recent projects went so smoothly from inception to delivery, with minimal overspend and only a few days beyond the deadline (solely due to internal IT resource limitations). It’s definitely an approach I like to take where possible and something I’ll push for more often, especially now I can verbalise it.
Talk 2 – Dan Billing – “Testing or Hacking? Effective Security Testing Strategies”
This talk I’ve been looking forward to for ages. I’ve met Dan on several occasions, seen some short talks and his knowledge is fantastic on security testing so to listen to a solid talk on the subject is great for me.
Dan’s advice on getting to know your system under test (your “stack”) is logical but sadly not followed enough in testing. From my perspective it comes down to the length of the projects I work on: in reality I’d spend a good few days getting to know the environment and how best to use/abuse it, but when the total time on a project is a couple of weeks tops, that’s far too much overhead. I think to a certain extent a “cookie cutter” approach would be the only viable option on such short contracts, *if* security testing was allowed and within the project scope.
Dan goes on to reference an advert many people probably won’t remember. Armadillos! Crunchy on the outside, smooth on the inside! Armadillos!
Like systems, armadillos are armoured on the outside to fend off attacks. Predators, though, have become more savvy and understand that the soft underbelly is the only viable point of attack, so predators (hackers) have adapted accordingly.
Dan goes on to describe his “Model” for security testing. Great stuff!
- SCAN – Use Zed Attack Proxy (ZAP), Burp Suite etc.
- VERIFY or CHALLENGE your scan results. Do they warrant further investigation?
- EXPLORE the verified security holes to see how they can be exploited.
- GO TO 1
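The loop above can be pictured roughly as below. The function bodies and findings are placeholders I’ve invented to show the shape of the process, not a real ZAP or Burp Suite API:

```python
# A sketch of the SCAN -> VERIFY -> EXPLORE -> GO TO 1 loop described above.
# Findings are hard-coded stand-ins for what a real scanner would produce.

def scan(target):
    """Stand-in for an automated scan returning raw findings."""
    return [
        {"issue": "X-Frame-Options header missing", "confidence": "low"},
        {"issue": "Reflected parameter in /search", "confidence": "high"},
    ]

def verify(finding):
    """Challenge the result: does it warrant further investigation?"""
    return finding["confidence"] != "low"

def explore(finding):
    """Dig into a verified finding to see how it could be exploited."""
    return f"Exploring: {finding['issue']}"

verified = [f for f in scan("https://example.test") if verify(f)]
for finding in verified:
    print(explore(finding))
# ...then refine the scan configuration and GO TO 1.
```

The verify step matters: automated scanners produce plenty of low-confidence noise, and exploring every raw finding would burn the very test sessions discussed earlier.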
Bug advocacy plays a HUGE role in security testing. Product sponsors often don’t pay attention to the impact a specific issue could have on a system, so it’s best to boil it down to specific, financially-related risks or consequences to get the message across.
Great stuff from Dan, and I’m itching to start investigating the capabilities of ZAP, Burp Suite and BugMagnet now.
Coffee break time!
… Continued in TestBash 2016 (part 2)
EDIT: This blog post was written earlier this year as an introduction to me but in the end I decided to actually talk about it in a 99 second talk at TestBash Brighton 2016 instead.
Because of that I saved it as a draft and never published it but I figured it’s daft sitting on my blog unpublished so here you go. 🙂
I’ve been a professional tester since 2003.
I say I’ve been a professional tester since 2003, but I recently realised that I’ve had a “tester” mindset for much, much longer than that; in fact, as cliché as it sounds, I’ve pretty much been a tester my whole life.
So today I thought I’d like to suggest a little game to everyone – See if you can remember your earliest memory of “testing” something.
For me as far as I can remember it’s when I was 9 years old and was bought a Bandai Lazer Tank for Christmas.
I remember unwrapping it and staring in wonderment; what on earth does this thing do?
Within minutes I had questions:
• Will it work on lino?
• What about carpet?
• What about the pile rug? It’s a tank after all! – Oh nope; stuck on the rug!
• Do the pieces pop off in a different sequence each time?
• What if I shoot it through the patio door? Or the mirror? Or my granddad’s Ray Bans that I’m specifically forbidden from touching?
• How far from the tank can I be and still get the laser to hit it? That’s a great reason to go outside in the cold to the back of the garden in nothing but pyjamas!
I was obsessed with finding the limitations of the thing!
So for me that’s my earliest memory of “testing”, and I think a lot of people with testing careers probably have a similar mindset that started WAY before their “profession” as a tester began.
So have a think about your own earliest testing memories and let’s get sharing.