Do. Or do not…

Recently, at an event, I overheard a conversation between two testers in which one was describing some examples he’d apply to an input field.

As the conversation went on there were a lot of “tests” (which I’d call “checks”) that I found basically pointless. It got me thinking about how, as professional testers, we automatically use our experience, knowledge and heuristics to decide on the best approach and the best set of tests/checks to run – anywhere from applying every test known to man to running no tests at all.

The tester in question had used an example of:

“So in the First Name field you might try a vowel, a consonant, two consonants, two vowels, a vowel and a consonant etc”


To quote the phenomenal Michael Bolton (@michaelbolton): “In traditional parlance, domain testing involves identifying equivalence classes – sets of things that we expect the program to (mis)treat in the same way”. To my mind, from a pure data perspective, whether a consonant or a vowel is used to check an input field’s validation or submission makes no difference.

If I were looking at physical size, letter widths and so on, that would be a different story completely – as would checking Unicode characters, which are most definitely not equivalent to non-Unicode characters. But to my mind the tests I overheard are simply a waste of time that could be better spent testing something that might reveal valuable information.
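
To make the equivalence-class point concrete, here’s a minimal sketch (the validator and its rules are hypothetical, not from any real system): a simple first-name check treats vowels and consonants identically, so one representative value covers the whole class, while Unicode input genuinely belongs to a different class.

```python
import re

def validate_first_name(name: str) -> bool:
    """Hypothetical validator: 1-50 ASCII letters, with hyphens or
    apostrophes allowed after the first character."""
    return bool(re.fullmatch(r"[A-Za-z][A-Za-z'-]{0,49}", name))

# Vowel vs consonant makes no difference to the check - all five inputs
# land in the same equivalence class, so one representative is enough.
same_class = ["a", "b", "ae", "bc", "ab"]
print({validate_first_name(s) for s in same_class})  # one outcome for the whole class

# A Unicode name is NOT equivalent: this ASCII-only rule rejects it,
# which is exactly the kind of check worth spending time on.
print(validate_first_name("Анна"))
```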

Sometimes testing isn’t just about what we DO test/check, it’s about what we DON’T as well…


Do you like it dirty?

When I arrive at work after my morning tube commute, the first thing I do is use some antibacterial hand gel before I check my emails and tasks for the day. I’d rather kill any cold or virus germs as soon as possible – though not badly enough to walk from the ground floor up to the third floor to wash my hands properly.

After I’ve checked emails, tasks and anything else urgent, I’ll head upstairs to the second floor to make a coffee, and en route I’ll pop up one more floor to the bathroom and wash my hands so that they’re clean as well as germ-free.

I’m a clean kind of chap; not OCD clean but perhaps cleaner than I need to be.

With my testing I like my environments to also be as clean as necessary for the system under test:

  • Websites (including mobile sites) – No unnecessary programs running in the background, and a cleared-down browser with no add-ons enabled that might affect the site.
  • Mobile apps – A factory-reset device with no unnecessary apps installed, and no accounts that aren’t essential to the OS. No backgrounded apps or push notifications enabled other than the app under test (except for deliberate testing).
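
For the website case above, a throwaway browser profile gets most of the way there. A minimal sketch (assuming a Chromium-based browser is installed; the binary name is an assumption, though the flags are standard Chromium switches):

```python
import tempfile

def clean_browser_command(url: str) -> list[str]:
    """Build a command that launches the browser against a fresh, empty
    profile so no add-ons, cache or cookies from earlier sessions can
    skew the results of the current test."""
    profile_dir = tempfile.mkdtemp(prefix="clean-test-profile-")
    return [
        "google-chrome",                   # assumed binary name/location
        f"--user-data-dir={profile_dir}",  # brand-new, empty profile
        "--disable-extensions",            # no add-ons interfering
        "--no-first-run",                  # skip first-run dialogs
        url,
    ]

cmd = clean_browser_command("https://example.com")
print(cmd)
```

Passing `cmd` to a process launcher (e.g. `subprocess.Popen`) would start the browser in a state that’s clean enough for most website testing without a full machine re-image.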

The idea behind this is to reduce false positives – any issues I discover will almost certainly be a problem with the system under test, and I won’t be incorrectly reporting issues centred on interactions with other software, much of which could be unique to my own choice of apps.

For instance, if I were testing software on a system that had an IDE installed, and I was unaware that the software relied on certain previously installed runtimes that were not native to the operating system, I’d completely miss the fact that the software might not even run for many users.

Similarly, certain setups from previous testing sessions may not have been returned to their base state, so I could end up spending unnecessary time troubleshooting proxy/DNS settings, environment variables that should have been changed back, and so on.
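
A cheap way to catch that kind of leftover state is to check for it before a session starts. A sketch (the variable list is a common convention, not exhaustive):

```python
import os

# Proxy-related variables that earlier sessions commonly leave behind.
SUSPECT_VARS = ("HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY",
                "http_proxy", "https_proxy", "no_proxy")

def leftover_proxy_settings(environ=None) -> list[str]:
    """Return any proxy variables still set, so they can be reviewed
    (or cleared) before testing starts rather than mid-troubleshooting."""
    environ = os.environ if environ is None else environ
    return [name for name in SUSPECT_VARS if environ.get(name)]

# Example with a stale setting left over from a previous session:
stale = leftover_proxy_settings({"HTTP_PROXY": "http://old-proxy:8080",
                                 "PATH": "/usr/bin"})
print(stale)
```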

I’ve spoken to quite a few people about their test system preferences, and generally there seem to be two camps:

  1. The Angelically Clean Crew – Their machine/device is a freshly imaged desktop machine or mobile device, regardless of any previous testing. If, for example, the only other testing done on that environment was website testing and there’d be an almost non-existent chance of any false positives, it doesn’t matter – “Image all the things!”
  2. The Dirty Dozen – A more “real world” approach – machines/devices are more often than not their development machines or personal mobiles, filled with other apps and accounts, push notifications enabled for lots of apps, backgrounded apps, etc. – “But real users don’t have perfect environments!”

For some reason there don’t seem to be many people with a “good enough” approach to test environments. That may genuinely be the case, but it may also be that the people I’ve spoken to have tacit knowledge of what they’d accept, and it’s not entirely clean or dirty but a happy Goldilocks place in between.

I completely see the benefit of all levels of dirtiness in testing setups. What I struggle with is the lack of time on our many “one hit” (waterfall) projects, and prioritising those levels of dirtiness in terms of time spent setting up, versus bugs encountered, versus time spent investigating and ruling out other system interactions as the cause.

I think that until we have the time to perform mirrored testing on both a clean and a dirty device simultaneously across many projects – to prove one way or the other how much time device cleaning costs versus chasing false positives – sticking to an “everything immaculately clean” approach is the best option, even if it’s overkill.

What are your thoughts? How dirty do you like it? Do you have any real-world experience or statistics on the value of clean versus dirty?

Tracking manual compatibility testing coverage – How do you do it?

I’m hoping this blog post will spark a good discussion about the various methods and techniques you may use to track your coverage when performing manual compatibility testing on a project.

In the interest of not ‘seeding’ any specific ideas or techniques I’m going to avoid discussing the current process we use at work in this blog post but I’ll happily share after you guys have talked in the comments below.

Let’s say a project lands in your lap and you’re told: “Here’s our website redesign. We need it testing. The baseline environment is Windows 10 on the latest Chrome release, but we want it tested on a total of 20 mixed Windows, Android and iOS environments.”

How do you track your coverage while manual compatibility testing those 20 environments?

The “red rag” heuristic

A few weeks ago there was a problem with our phone system in the office: whenever the main phone rang and the call was picked up from a different phone, the main phone registered the call as “missed”.

There was a discussion about how nobody was answering calls except the office manager, and various people who had been in the office at the time were told off, even though everybody insisted that no calls had been missed.

Another discussion took place the following morning, in which this phrase came up:

“It’s a simple phone system so it can’t be broken; it has to be the testers not picking up incoming calls!”

As you’d imagine, this assertion immediately got my Testy Senses™ tingling, so I set out to prove the opposite: that the phone system could indeed have a defect and quite possibly did (or that our domain knowledge of the phone system needed improving to understand why it was behaving in a way that seemed unnatural).

My internal response to the assertion was instant, powerful and, to my mind, absolutely 100% necessitated immediate investigation – this couldn’t wait, and I would stay late that day if I needed to!

I timeboxed 15 minutes to investigate the issue, started straight away, and did indeed find a bug with the system, as expected.

After the fact, I realised that the intensity of my reaction (which made me want to investigate desperately) was caused by the assertion itself – it was a statement of absolute fact with no evidence to back it up, and my experience as a veteran tester immediately disagreed with it being “absolute”.

And that was the moment my “red rag” heuristic was born:

“The red rag heuristic” – Having an almost uncontrollable urge to charge straight into a situation with no pre-planning and no thought for the consequences, or for the effect on other things that require your time or skill.

Language interpretation

Recently, while walking to the shops with my wife and our 22-month-old son in his buggy, we approached a pedestrian crossing we needed to use (no, not Abbey Road!).

As we approached the crossing, my wife noticed that the green man was showing, so we’d be able to cross straight away. However, as we got to the lights the man turned red; we’d have to wait a little while.

“Oh, it’s turned red,” she said as we got to the kerb.

“It’s OK,” I replied.

At which point she took a step toward the road with the buggy, just as the cars started to pull away at the lights…

Talk about a heart-attack moment!

As my mouth opened to speak she stopped and said, “I thought you said it was clear?”

“Nnnooooo!” I said – “it’s OK” as in it’s OK that the light went red because we’re not in a hurry and we can wait, not “it’s OK to walk out in front of the moving cars!”

The consequences of language or terminology assumptions can be horrific; thankfully in this case they weren’t, and good old-fashioned self-preservation won the day. 😱

In testing, make sure that what you’re reading or hearing is absolutely accurate and unambiguous before you make any decisions that affect the project – don’t walk out in front of a car!