
Let’s say I gave you a giant box to explore


Let’s say I gave you a giant box to explore.

In the box was a machine that performed a number of different functions, along with an instruction manual. The functions the machine could perform were complicated, but ultimately observable and open to being identified more thoroughly than the user guide had done.

The machine, however, had lots of unknowns when it came to feeding in spurious data and connecting with a wide array of third-party apps. The results of your actions were output to a simple user interface. It wasn’t pretty, but it did the job. The above was a test that a company I worked for would give during job interviews.

(Image from subsetsum on Flickr.)

They provided a laptop with this “test machine” running on it (a Java application), and they provided the instructions (a simple two-page HTML document).

When I sat the test I hated it.

I didn’t feel it dug deep enough into some of the diverse skills I, and others, could offer.

In hindsight, though, I think this type of test was actually very insightful – although maybe the insights weren’t always used appropriately.

I got the job and soon found myself on the other side of the process giving this test to other testers hoping to come and work with us.

When I became the interviewer, I was handed a checklist to accompany the test. This checklist was used to measure candidates against a pre-defined set of expected outcomes.

It was a “checklist” that told the hiring manager whether or not to hire someone.

The checklist contained a list of known bugs in the “test machine”. There were about 25 “expected” bugs followed by a list of about 15 “bonus” bugs. There were two columns next to these bug lists, one headed junior and one headed senior.

If we were interviewing a ‘junior’ (i.e. less than 3 years’ experience) then the company expected this person to find all of the “expected” bugs.

If we were interviewing a ‘senior’ (i.e. more than 3 years’ experience) then the company expected this person to find all of the “expected” bugs AND some or all of the “bonus” bugs.

There was also space to record other “observations”, like comments about usability, fitness for purpose, etc.

This interview test was aimed at weeding out weaker candidates in both the junior and senior categories.
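To make that concrete, here’s a rough sketch of the checklist logic as the stricter managers applied it – a minimal model, not the real artefact. The real thing was a paper checklist, the bug names below are invented, and only the rough counts (about 25 “expected”, about 15 “bonus”) and the 3-year junior/senior cut-off come from the process described here.

    # Purely illustrative Python model of the paper checklist.
    # Bug names are invented; only the rough counts are from the post.
    EXPECTED_BUGS = {f"expected-{i}" for i in range(1, 26)}  # ~25 known bugs
    BONUS_BUGS = {f"bonus-{i}" for i in range(1, 16)}        # ~15 harder bugs

    def strict_verdict(bugs_found: set, years_experience: int) -> bool:
        """The 'religious' reading of the checklist: a pure pass/fail gate."""
        if years_experience < 3:  # treated as 'junior'
            return EXPECTED_BUGS <= bugs_found  # must find every expected bug
        # 'Senior': every expected bug, plus at least one bonus bug.
        return EXPECTED_BUGS <= bugs_found and len(BONUS_BUGS & bugs_found) > 0

As the rest of this post argues, a gate like this throws away the richest data in the room: how the candidate actually went about their testing.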

I’m assuming many of my readers are jumping up and down in fits of rage about how fundamentally flawed this process is…right?

I immediately spotted the fundamental flaws in interviewing like this, so I questioned the use of the checklists (not necessarily the test itself), and I stumbled across two interesting points.

Firstly, some hiring managers stuck to this list religiously (much as a tester might stick to a test case). If someone didn’t find the bugs they were expected to, then they didn’t get the job. Simple as that.

Other hiring managers were more pragmatic and instead weighed performance across the whole exercise. For example, I didn’t find some of the things I was expected to find, but I still got the job. (It turned out a few managers were ticking the boxes next to bugs people didn’t find, to “prove” to senior management (auditors) that candidates were finding the bugs they should have.)

Secondly, this test was remarkably good at exploring the activities, approach and thinking that most testers go through when they test.

Whether someone found any or all of the bugs they were expected to was beside the point for me. I was more interested in the thinking and process the tester went through to find those bugs and assess the “test machine”.

It just so happened that most hiring managers were not measuring/observing/using this rich data (i.e. the tester’s approach, thoughts, ideas, paths, etc.) to form their opinion.

Junior or Senior – it made little difference

It was clear from doing these interviews that a candidate’s years of experience made little difference to their ability to find the bugs they were expected to find.

I actually spotted a very clear trend when doing these interviews.

Those that followed the user guide found most of the “expected” bugs no matter what their perceived experience level: junior, senior, guru grade – it made no difference.

If they followed the user guide (and often wrote test cases based on the guide) then they found these “expected” bugs.

Those that skimmed the user guide but went off exploring the system typically (although not always) found most of the expected bugs AND some or all of the “bonus” bugs – again, irrespective of their experience (or perceived experience).

This second point highlighted to me that the approach a tester took influenced the types of bugs they found. In this interview setting it also challenged our assumptions about experience and bug count, and it made a few people question whether we had categorised the known bugs correctly – were the bonus bugs too easy to find?

I observed very junior testers finding the “bonus” bugs and senior testers missing them, purely because of their approach and mindset. (Although not a scientific finding, it gave me enough insight to set me on my journey towards exploration and discovery.)

It wasn’t even as simple as all seniors taking a more exploratory approach and all juniors taking a more scripted one. That simply didn’t happen.

It seemed to run deeper than that. It seemed to hinge more on a curious mindset than on years of experience.

Obsessed with finding out more

Those that took an exploration and discovery approach appeared to develop an obsession with exploring the “machine”.

They seemed more compelled to discover interesting information about the “test machine” than those who followed a script based on a partial user guide.

Following the guide simply wasn’t enough; they needed to find out the information for themselves. In a sense, they needed “primary” information: information they had obtained themselves, straight from the source (the product). Other testers were happy to use a “secondary” source of information (the user guide) to formulate conclusions. It obviously wasn’t as clear-cut as that, as many straddled the middle ground, but the trends did show an interesting difference in approach.

Note Taking & Story Telling

Another dimension to this process – the one that kick-started my obsession with note taking – was that those who went off exploring appeared to tell more compelling stories about their testing, often through detailed and interesting notes.

The notes they took enabled them to relate their exploration and discovery story back to others with an astounding level of detail – something most people who followed a script didn’t do (I can’t say whether they “could” do it, though).

As these people got stuck in and delved around in the product they started to tell a really interesting story of their progress. Through narration of their approach and interesting notes they got me and the other hiring managers interested, even though we’d seen dozens of people perform this very same test.

They started to pull us along with them as they explored and journeyed through the product. They wrote notes, they drew pictures, they made doodles and wrote keywords (to trigger memory) as they went.

They built up a story of the product, using the user guide as a basis but using their own discoveries to fill in the gaps, or even to challenge the statements made in the user guide (of course… the user guide had bugs in it too).

They wrote the story of their exploration as it developed and they embraced the discovery of new information. They kept notes of other interesting areas to explore if they were given more time.

They told stories (and asked lots of questions) about how users might use the product, or some of the issues users may encounter that warranted further investigation.

They involved others in this testing process too: they described their approach and made sure they could report back on this compelling story once they’d finished the task.

What intrigued me the most though was that each tester would tell a different story. They would see many of the same things, but also very different things within the product and the user guide.

They built up different stories and different insights.

They built their testing stories in different orders and through different paths, but fundamentally they all concluded very similar things. The endings of the stories were similar but the plots were different, so to speak.

What they explored and what they found also said a lot about themselves and their outlook on testing.

Their actions and outputs told stories of who they were and what their contribution to our company/team would be. They let their interview guard (and nerves) down and showed the real them.

Everyone who sat that test told us something about themselves and their testing; however, we (the hiring managers) were pulled along more by those who built their testing stories as they went than by those who had pre-defined the story and stuck to it.

The explorers appeared to tell more insightful and compelling stories and they also revealed deeper insights about themselves and the product.

As our hiring improved (better job adverts, better phone screens, better recruitment consultants) we started to interview more people who would explore the product and tell us these compelling stories; these were the people we wanted to seek out.

As a result, even those hiring managers who were obsessed with the checklist found themselves being pulled along with the exploration, ignoring their precious checklists and focusing on the wider picture.

It didn’t take long for senior management to see the problems with measuring people against lists too.

Did I learn anything?

Oh yes indeed.

We should never hire purely on binary results (i.e. found this bug, didn’t find that one; is this type of person, isn’t that type of person).

Only by testing the system do we find out the real truth about the product (specs are useful, but they are not the system). Given a choice between primary and secondary information, I will gravitate to primary.

When we explore we have the opportunity to tell compelling stories about our exploration, the product, the testing and, more importantly, ourselves – though not everyone has these skills.

All testers will draw different insights; sometimes they may find the same bugs as others, sometimes very different ones. It’s therefore good to pair testers and to share learnings often.

The hiring approach should be consistent across all those with the power to hire and should focus on the person, not the checklist.

Good exploratory testing relies on the ability to explain the testing, insights and findings both during the testing session and after (and often relies on exceptional note taking and other memory aids).

If you’re hiring a tester, observe them doing some testing before making an offer.

I believe an obsession with learning more (and being curious) is one of the most telling traits I can observe. Exploration and curiosity do not always come naturally to people. Not everyone is curious about the world around them or the products they test, but for testers I believe this trait alone will send them down the many paths of learning (and this is a good thing).

Someone with 10 years of experience is not always better than someone with 2 – especially if they’ve had the same year 10 times over.

Dividing people into junior and senior camps is not helpful. Where is the cut-off (years of experience? – see the point above)? We are all on a journey and our skills will change and develop over time – you are hiring the person, not the years they have worked.

The big question I now ask is: “Does this person want to be on a journey to mastery?” If they don’t, it’s very hard to encourage them to improve.

 

Note: This post may come across as me suggesting that exploratory testing approaches (and therefore exploratory testers) are better than scripted testing approaches (and hence testers who follow scripts). That’s not my intention. How you perceive this post will depend very heavily on what you perceive the role of testing, and of the tester, to be in your organisation. You know your organisation and the needs that you and your team have. Some organisations want people to follow scripts, some want people to explore the product, some want something in between, and some want other things altogether.

 

