Accept misnomers – The bigger picture is helping, not correcting language


EDIT: This post was mid-draft when I was reminded by @NicolaSedgwick on Twitter of @PaulHolland_TWN's #TestBash #99SecondTalk about not attacking people whose understanding of terminology differs from yours. I'm pretty sure there are improvements yet to be made on it, but for the sake of remaining relevant I'm publishing it now. 🙂

In life there are misnomers everywhere:

  • “Coffee beans” – they’re actually coffee “seeds”
  • “Dr Spock” (Star Trek) is actually Mr Spock
  • Panama hats were designed in Ecuador
  • “Personal PIN Number” – PIN is an abbreviation for “Personal Identification Number” so in effect you’re saying “Personal Personal Identification Number Number”
  • “Wherefore art thou Romeo” is often understood to mean “where are you, Romeo?” when in actual fact “wherefore” means “why” – Juliet is asking why he is a Romeo, not where he is.
  • “Peanuts are nuts” – They’re not; they’re legumes. The peanut is a seed that grows in a pod underground, not a nut that grows on a tree.
  • “A tomato is a vegetable” – It’s a fruit.
  • Koala bears are not bears.
  • “Slider” is sometimes used to reference a gallery/slideshow/carousel function
  • “Checkbox” is frequently used to reference a radio button. Checkboxes, though, are multiple-selection “AND/OR” elements, whereas radio buttons are single-choice.
  • “Drop down” has been known to reference an expandable <div>, because the panel “drops down” to display content.
  • “Quality Assurance” is regularly used to reference “Testing”

And so on.
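One of the UI misnomers above trips people up constantly, so here’s a minimal HTML sketch of the checkbox/radio distinction – the field names are invented purely for illustration:

```html
<!-- Checkboxes: multiple selection – any combination can be ticked -->
<label><input type="checkbox" name="toppings" value="cheese"> Cheese</label>
<label><input type="checkbox" name="toppings" value="ham"> Ham</label>

<!-- Radio buttons: sharing a name makes them a single-choice group -->
<label><input type="radio" name="size" value="small"> Small</label>
<label><input type="radio" name="size" value="large"> Large</label>
```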

So let’s use an example misnomer and discuss the potential states of it.

We’re working on a website that is early in development and has had no technical team members on it. There is a header section <div> that expands when clicked to reveal additional content.

The project manager and project sponsors have always referred to the <div> as a “header dropdown” as from their perspective that’s what it is; the header “drops down” when clicked.

With misnomers there are several states of understanding:

  1. Misnomer is accepted as factually correct and not questioned. This often occurs on projects that begin with substantial planning by non-technical team members or those with less domain knowledge.
  2. Misnomer is known to be factually incorrect, but the person understands the context and sees no reason to correct it or hold a discussion about it. Using our example misnomer, a developer may start working on the project and, since the required functionality is not achievable via an actual “drop down” element, there is really only one thing it can be right now – an expandable <div> – so why waste everyone’s time holding a conversation about it?
  3. Misnomer is understood to be factually incorrect. The person feels it necessary to correct the originator of the misnomer with their own understanding. OK, so let’s say a tester joins the project and decides that, to make sure everyone is using the correct terminology and to avoid any ambiguity, the current “header dropdown” misnomer should be corrected. This ensures that any future functionality added to the header is not confused with the misnomer – say, for example, a “language” dropdown is added. There is no room for discussion here; it must be corrected.
  4. Misnomer is known to be factually incorrect. A conversation takes place with the project facilitator to gauge any potential risk associated with the misnomer, and an agreement is reached on the most efficient way to handle it. Using our example, let’s say the tester joins but, instead of taking a “must have” approach, explains to the project facilitator that there are ambiguity risks in the terminology, but that those risks may be extremely low depending on the development plan for the project. For example, if nothing resembling a dropdown will ever be added to the header of the site, then referring to the expandable <div> as a “dropdown” will have little impact. The project manager explains that once development is complete the site will be handed over to a third party, with no maintenance period and no future development for the team. The project is also only three months long, and no team member changes are planned that could cause confusion with terminology. The “header dropdown” misnomer is then deemed unimportant, and everyone is happy to keep using it rather than set about re-educating the team.
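To make the running example concrete, here’s roughly the difference between the two things being conflated – the markup is an illustrative sketch, not any project’s actual code:

```html
<!-- An actual “drop down”: a native single-choice select element -->
<select name="language">
  <option>English</option>
  <option>Español</option>
</select>

<!-- The “header dropdown” of the example: an expandable <div> whose
     hidden panel is revealed (via script or CSS) when clicked -->
<div class="header-expander">
  <button aria-expanded="false">Header</button>
  <div hidden>Additional header content…</div>
</div>
```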

Inadvertent use of misnomers allows quicker communication in groups where all parties share an understanding of the domain being discussed and the context of the misnomer itself. But herein lies the risk of not correcting or discussing a misnomer: there is an inevitable assumption that everyone has similar domain knowledge and understands the terminology used equally, hence no need to discuss or correct it.

Using terminology or descriptions which are even slightly ambiguous to those with different domain knowledge can create issues though.

From my perspective, what we need to strive for is a common “language” on projects – one that lets participants discuss things as efficiently as possible while introducing as little ambiguity as possible – not to pick apart people’s differing descriptions of the same thing.

I test software. I perform quality analysis. If you say I am “QA’ing” your software and I understand the task to be “testing” the software, then there’s little value in me trying to redefine your vocabulary or use of the English language. If anything, it will make everything even less efficient, as there will be a period of readjustment before the new vocabulary is “normal” in day-to-day work.

The approach I normally take is to reiterate the required task using the correct terminology, for example:

“We’ve added Search functionality to the header dropdown on the staging environment. Can you exploratory test it this morning please?”

“Oh cool, yep absolutely! Exploratory test the Search function in the expandable header <div> – got it. I’ll start right after this test session and feed back as soon as possible.”

Now, having said all of the above, I do think there’s a huge benefit in establishing a common vocabulary within an industry or community, where our “domain” is the industry itself. But that’s a whole new blog post… 😉

Do you like it dirty?


When I arrive at work after my morning tube commute, the first thing I do is use some antibacterial hand gel before I check my emails and tasks for the day. I’d rather kill any cold or virus germs asap – but not badly enough to walk from the ground floor up to the third-floor bathroom first.

After I’ve checked emails, tasks and anything else urgent, I’ll head upstairs to the second floor to make a coffee, and en route I’ll pop up one more floor to the bathroom and wash my hands so that they’re clean as well as germ-free.

I’m a clean kind of chap; not OCD clean but perhaps cleaner than I need to be.

With my testing I like my environments to also be as clean as necessary for the system under test:

  • Websites (including mobile sites) – No unnecessary programs running in the background, and a cleared-down browser with no add-ons enabled that may affect the site.
  • Mobile apps – A factory-reset device with no unnecessary apps installed and no accounts that aren’t essential to the OS. No backgrounded apps, and no push notifications enabled other than for the app under test (except for deliberate testing).

The idea behind this is to reduce false positives – any issues I discover will almost certainly be a problem with the system under test, and I won’t be incorrectly reporting issues centred on interaction with other software, much of which could be unique to my own preference of apps.
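As a sketch of what “clean” can mean in practice for website testing, here’s one hypothetical way to launch a browser with a throwaway profile. The browser binary and flags assume desktop Chrome on Linux, and the whole thing is illustrative rather than a prescribed setup:

```python
import tempfile

# Hypothetical sketch: build a Chrome launch command that uses a fresh,
# empty profile so no add-ons, cookies or cache from earlier sessions
# can cause false positives.
profile_dir = tempfile.mkdtemp(prefix="clean-test-profile-")

cmd = [
    "google-chrome",
    f"--user-data-dir={profile_dir}",  # brand-new, empty profile directory
    "--disable-extensions",            # no add-ons interfering with the site
    "--no-first-run",                  # skip first-run setup dialogs
]

# Print rather than launch, so the sketch runs anywhere
print(" ".join(cmd))
```

Deleting `profile_dir` after the session returns the machine to its base state for the next round of testing.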

For instance, if I were testing software on a system that had an IDE installed, and I was unaware that the software relied on runtimes the IDE had previously installed that were not native to the operating system, I’d completely miss the fact that the software may not even run for many users.

Similarly, certain setups from previous testing sessions may not have been returned to their base state, so I could end up spending unnecessary time troubleshooting proxy/DNS settings, environment variables that should be changed, etc.

I’ve spoken to quite a few people about their test system preferences, and generally there seem to be two camps:

  1. The Angelically Clean Crew – Their machine/device is a freshly imaged desktop machine or mobile device, irrespective of any previous testing. Even if the only other testing done on that environment was website testing, with an almost non-existent chance of false positives, it doesn’t matter – “Image all the things!”
  2. The Dirty Dozen – A more “real world” approach. Machines/devices are more often than not their development machines or personal mobiles, filled with other apps and accounts, push notifications enabled for lots of apps, backgrounded apps, etc. – “But real users don’t have perfect environments!”

For some reason there don’t seem to be many people with a “good enough” approach to test environments. That may genuinely be the case, but it may also be that the people I’ve spoken to have tacit knowledge of what they would accept, and it’s not entirely clean or dirty but a happy Goldilocks place.

I completely see the benefit of all levels of dirtiness in testing setups. What I struggle with is the lack of time on our many “one hit” (waterfall) projects, and prioritising those levels of dirtiness in terms of time spent setting up versus bugs encountered versus time spent investigating and ruling out other system interactions as the cause.

I think that until we have the time to perform mirrored testing on both a clean and a dirty device simultaneously, across many projects, to prove one way or the other how much time device cleaning costs versus false positives, sticking to an “everything immaculately clean” approach is the best option, even if it’s overkill.

What are your thoughts? How dirty do you like it? Do you have any real world experience or statistics of the value between clean/dirty?

Tracking manual compatibility testing coverage – How do you do it?


I’m hoping this blog post will actually spark a good discussion on the various methods and techniques you guys may use to track your coverage when performing manual compatibility testing on a project.

In the interest of not ‘seeding’ any specific ideas or techniques, I’m going to avoid discussing the current process we use at work in this blog post, but I’ll happily share it after you guys have talked in the comments below.

Let’s say a project lands in your lap and you’re told: “Here’s our website redesign. We need it tested. The baseline environment is Windows 10 on the latest Chrome release, but we want it tested on a total of 20 mixed Windows, Android and iOS environments.”

How do you track your coverage while manual compatibility testing those 20 environments?
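To seed discussion without prescribing anything, here’s one purely illustrative sketch of the kind of thing I mean by “tracking coverage”: a matrix of environments against test areas, exportable to CSV for a shared spreadsheet. All environment and area names below are invented:

```python
import csv
from io import StringIO

# Hypothetical coverage matrix: environments (rows) x test areas (columns).
# "" = not yet visited, "P" = passed, "F" = failed.
environments = ["Win10/Chrome", "Win10/Firefox", "Android 13/Chrome", "iOS 17/Safari"]
areas = ["Header", "Search", "Checkout"]
coverage = {(env, area): "" for env in environments for area in areas}

def record(env, area, status):
    coverage[(env, area)] = status

def untested():
    """Every environment/area pair not yet visited."""
    return [pair for pair, status in coverage.items() if status == ""]

record("Win10/Chrome", "Search", "P")
record("iOS 17/Safari", "Header", "F")

# Export the matrix as CSV so it can live in a shared spreadsheet
out = StringIO()
writer = csv.writer(out)
writer.writerow(["Environment"] + areas)
for env in environments:
    writer.writerow([env] + [coverage[(env, area)] for area in areas])

print(out.getvalue())
print(f"{len(untested())} of {len(coverage)} cells still untested")
```

The empty cells are the interesting part: at a glance they show which environments you haven’t visited yet for a given area, which is exactly the question the 20-environment brief above forces you to answer.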