Questions from the future

October 12, 2010

I did a guest lecture today at University of Washington’s Bothell campus.

It was for CSS 490 (“Software Testing”), taught by a colleague of mine named David Socha.

He handed out 3 x 5 cards to the students and asked them to come up with questions for me about testing.

In the interest of giving all of you fellow software professionals a glimpse into the mind of our future workforce, here is what they asked:

* I’m looking for a career in the gaming industry as a game programmer. I know software testing can help me as a game programmer but I want to learn more. Where do I start?

* Is it possible to work both as a tester and a developer?

* What certifications are industry standards?

* How do I get started as a tester, and also more interview questions / techniques?

* How much do testers make in comparison to devs?

* Is there a different approach to take when testing critical systems?

* What’s it really like to be a tester?

* Where do you get started?

* Can you tell us any interesting stories?

* What is the salary range for entry level, 5 years, 10 years?

* What are the working hours?

* Is the software testing job secure and stable?

* What is the major challenge?

* How do you know you’re a tester, not a developer?

* How much time a day does a tester spend documenting, coding, and testing?

* What are some testing career paths?

* What are the popular testing tools?

* During an interview how is a tester tested?

* What are the internship options around here?

* How do I get a tester job at entry level with no experience?

* What role does the PM have in testing?

* Is it a very stressful job?

* Can you give us some interview tips?

* What one thing can a dev do in coding to make things easier on a tester?

* Why should I be an SDET over an STE?

* What is the growth path for a tester at your company?

* What is the difference between a tester and an SDET?

* How do I write code in a way that is testable?

* How do I test my own code?

* How do I work with testers efficiently and in a friendly way to keep a positive relationship?

* What should I know to be a successful tester?

* How do you define a successful tester?

* What other college courses can help me become a tester?

* What’s it like if you have to work as a tester for a defense contractor in terms of ethics?

* Do testers work longer hours compared to other team members?

* Does paired testing result in better tests than individual testing?

* What’s a good answer to the classic “How would you test a soda machine?”

My next-to-last favorite:

* As a PM, how do I learn to like testers?

And my favorite… :

* Is testing depressing?

These questions would make uTest proud, given their skill in asking questions for their “Testing the Limits” series. In fact, I mentioned them today as an answer to “How do I get started with no experience?” [David Socha (the instructor) also mentioned mifos.org as a good place to build skill by volunteering.]

Anyway, I’m sure we can build a whole conference around answering these, and I plan to do them justice next time I’m asked to guest lecture (on Wednesday, actually). If you want to weigh in on some of these, feel free to comment below, and I’ll say to them: “This is what my colleagues had to say…”


Class, class, and class and STARWest

October 7, 2010

Just back from STARWest in San Diego.

The hotel was great and so were the attendees I ran into. Well, it was easier to run into some mainly because James was in fine form drawing a mob around his tester games in the main walkway.

Class was in session, even when it wasn’t. To me, that’s the mark of a good conference.

I had emerged from a session to find about 30 people crowded around him.  He was like a chess master, playing his dice game with 3 people at once.  He always carries three bags of about 30 dice each with him wherever he goes, and he seemed to be using every one of the dice in those bags.

Knowing the operating premise of the game, I decided to help him out.

He looked over and saw me, and told me he was glad to see me and could use my help because he had to go meet someone.  I’ve never had to follow James after he finishes a talk, but this was as close as it got — I inherited his crowd and did my best to keep the momentum.

He got back a few minutes later, glad to see the party was still happening.  But he noticed that the bags of dice were all laid out on the table — all 100 or so of them.  He seemed annoyed.

“Hey, man! My dice are all mixed up!” he said.

I had no idea that the dice had an organization to them. I felt bad, but I told him there was no way for me to have known that.

Still, this was not an impossible problem to solve. The “dice game” is about recognizing a pattern in  James’ head.  You do this by rolling dice.  After each time you roll, he says a number.  You have to figure out why he’s saying what he’s saying. It’s fast feedback from testing, and you have to form conjectures just as fast — then you roll the dice to either confirm or refute your conjectures.  You win when you can describe the operating pattern to a high degree of certainty.

Welcome to testing. Fast, fun, and authentic skill-building.
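If it helps to see that loop as code, here’s a minimal sketch. Everything in it is invented for illustration: the oracle stands in for James, and the candidate rules are sample conjectures, not his actual pattern.

```python
import random

def play_dice_game(oracle, candidate_rules, max_rolls=100):
    """Sketch of the game's feedback loop: roll, hear the number,
    and refute any conjecture that disagrees with what was said."""
    rules = list(candidate_rules)
    for _ in range(max_rolls):
        roll = [random.randint(1, 6) for _ in range(3)]  # say, three dice
        answer = oracle(roll)
        rules = [r for r in rules if r(roll) == answer]  # refutation is fast
        if len(rules) == 1:
            return rules[0]  # a pattern you can describe with high certainty
    return None  # keep rolling, or form better conjectures

# Example: suppose the secret rule were "count the even dice"...
secret = lambda roll: sum(1 for d in roll if d % 2 == 0)
guesses = [secret, lambda roll: sum(roll), lambda roll: max(roll)]
winner = play_dice_game(secret, guesses)
```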

But back to James’ annoyance.  I replied, “ok, then, don’t tell me how they were organized in the bags.  Let me figure it out with you giving me feedback as I do it.”

He seemed to lighten at the prospect of that challenge. But before we could make this into a game, he said, “Each bag uses the principle of requisite variety.”

I knew what James meant by that. It’s one of the many terms to describe tests.  In this context, it meant the bags were equivalently and sufficiently diverse.

The first thing I wanted to do was make sure the crowd knew this was spontaneous. I wanted them to know this was something new between James and me, so they could pitch in to help if they wanted. Like everyone else at conferences, he and I follow threads of ideas that pique our interest. I had no idea what was going to happen, but I knew it would be fun, interesting, and meaningful somehow, and I hoped the crowd would play along as James tested me.

So I took all the dice in one big (laid-out) pile and scanned for affinity. It quickly emerged that there were several ways to make affinities. The ISTQB folks want you to call these “equivalence classes”, and that’s fine, but they don’t tell you what to do when you suddenly realize that many tests might belong to more than one equivalence class.

For example, a red translucent die with black pips could go in five categories: red dice, black-pipped dice, lightly opaque dice, dice identical to it, and dice of approximately the same size.

My aim was to quickly take stock of the primary dimensions that stood out to me, then group them by those properties. There was background color, size, shape, pip color, pip type (dots, numerals, or symbols), opacity, albedo, elasticity, number of dice that were identical, and weight — and many fit in more than one category as I sorted them, so the notion of “equivalence partitioning” was difficult.

Good thing I remembered the context. These dice are for games. The point is to have three bags that are equally diverse, so that the odds are greater of having a similar experience no matter which bag James reached for. If one bag had too many of one kind, or not enough of another, that might stall the exercise or the lesson he wanted to teach.

So I grouped mostly by “similarity to each other.”  I grouped the white dice together, the red translucents, the small standard blues, the more-than-six sided, the “few huge” (relative to most of the others), the ones with numerals instead of pips.

Then someone noticed that in the red dice pile, some had black pips and some white.

I took the suggestion to separate those into two piles.

In the white dice pile, someone noticed a few felt like they could be “eraser” dice — they had a rubbery consistency to them — which brought another dimension to dice: “the tendency of the pips to wear off with heavy use.”

I separated those.

Then I just did a straight count of each affinity and divided by 3. If a pile was not perfectly divisible, someone suggested I make a separate pile — a kind of Holding Pattern / Issues List pile — to check with James about which bag he thought the leftovers were most appropriate for.
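For the code-minded, here’s a minimal sketch of that count-and-divide procedure in Python. The dice attributes are assumptions I’ve made up for illustration, not an inventory of James’ actual bags.

```python
from collections import defaultdict

def affinity_key(die):
    """Group by the dimensions that stood out (attribute names are illustrative)."""
    return (die["color"], die["pip_color"], die["opacity"], die["size"])

def split_into_bags(dice, n_bags=3):
    """Sort dice into affinity groups, deal each group evenly across the bags,
    and send any remainder to a Holding Pattern / Issues List pile."""
    groups = defaultdict(list)
    for die in dice:
        groups[affinity_key(die)].append(die)

    bags = [[] for _ in range(n_bags)]
    issues = []  # remainders for the "onsite product owner" to place
    for group in groups.values():
        per_bag = len(group) // n_bags
        for i in range(n_bags):
            bags[i].extend(group[i * per_bag:(i + 1) * per_bag])
        issues.extend(group[n_bags * per_bag:])  # the not-perfectly-divisible part
    return bags, issues
```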

The exercise took about 5 minutes, and with James as my “onsite product owner”, the mission was done when he grabbed the last 5 “issues list” dice and threw them into two bags, as if to say “good enough” (translation: further sorting into deeper affinities would be a waste of time and resources).

Too cool. Mission accomplished with a potential blog idea (mission accomplished there, too).

That was my best class that day, and a classy way to have an impromptu experience about class.

A New Thread

August 27, 2010

(Mirrored from http://www.quardev.com/blog/2010-08-26-551545950)

Some of you know me and my brother James as exploratory testing practitioners.

You also know us as firmly aligned with the principles of the Context-Driven School. But sometimes we’re more known for being the guys who created Session-Based Test Management to solve the problem of how to manage and report effort from exploratory testing.

Ten years ago, SBTM was a pilot project with promise. Dreamt up on a napkin at a Denny’s in Boise, Idaho in March 2000, it was already proving itself: by this day (August 26) of that year, we had done 150 sessions’ worth of experimentation and liked where it was going.

I was struck by how powerful and efficient it was. For the first time I had a method for creating sheet music for jazz – a way to describe creative, improvisational work and make it understandable and measurable. We didn’t invent exploratory testing; we designed words and structures for it, to give testers and managers a new way to describe their work to stakeholders other than to say they were “just playing around.”

Well, James and I have a new context-driven idea to experiment with. Like SBTM, it is meant to help you describe activities you’re already doing.

I picked James up in Seattle yesterday after he returned from a difficult project. A lot was going on in a small amount of time – a project with typical daily chaos. We went to my favorite sushi place and talked about it. He couldn’t disclose much because of NDA, so he talked in patterns and generalities, but he said something simple that stuck:

“Seemed like I did a lot of my work in threads.”

“Threads?” I asked.

He explained there are situations where we might leave a task and come back to it many times.

I agreed. That was a typical day for me at Quardev and as a tester on most projects I’ve worked.

He said that since the nature of the task could change over time, you wind up pursuing it like you pursue a thread in a conversation.

I agreed there, too.

“When our work is so tentative and exploratory,” he said, “the word ‘task’ seems too narrow a word to capture it. It’s really a thread.”

We talked about what it means to follow a thread, work in threads, drop a thread.

An artifact-based approach to test management usually aims to create documents like test cases. But the concept of working in threads de-emphasizes artifacts. It focuses on what you actually do: it’s an activity-based approach, not an artifact-based one. It’s meant to emphasize the learning that happens along the way toward solving an authentic problem.

James and I agreed that a session charter is an example of a thread. But where a charter seemed to differ from a thread is that a charter is a commitment, an agreement to accomplish a task (or set of tasks) in a session. A thread, on the other hand, has no such agreement or commitment, and no timebox as sessions do. It’s more general.

In short order, we had created a new sibling for SBTM, and we called it Thread-Based Test Management.

TTM is a generalized form of Session-Based Test Management.

While SBTM seeks to manage exploratory testing in timeboxes toward a commitment or agreement to execute a charter, TTM is more general – with threads focused on emerging problems and objectives that you need to solve.

You choose an activity that needs to get done, and you follow that thread. You follow it hither and yon, up and down, back and forth, round and round, over hill and dale until you decide it’s over – that further time spent on it is not worth it (for now).

Think of threads as the notes you take in a session report — the list of activities that tell the testing story of that session.

With threads, James and I are suggesting a new pattern of testing activity we think all knowledge workers already do, and we offer a way to think about managing activities you might do while on a thread.

The premise of TTM is this:

There are kinds of projects where we cope with a great many interruptions as we pursue objectives. How could we manage testing if we embraced those interruptions?

TTM is a test management approach that organizes testing around patterns of activity (“threads”). By identifying and organizing threads, we might be able to keep testing on track and provide a credible report of the ongoing story of the test project.

James and I left the sushi place after fleshing this out a bit more and continued our “Thread Theory” discussion back at Quardev. We set up shop in a conference room, where we were quickly interrupted by Quardev’s Enterprise Manager, Scott, who needed status from me on a project proposal. I excused myself from the room and let James hash some stuff out on the whiteboard about what TTM could be, knowing I’d return soon.

15 minutes later, I returned, picking up that thread with him – the one about exploring what Thread Theory could mean. We resumed a brainstorm about the mechanics and philosophy of why it could have value to label the interruptible courses of action we take when we test.

We discussed what might happen in the course of a day of following threads.

You could:

– focus on one thread or many;

– drop threads;

– create new threads;

– pick up dropped threads;

– “comb” threads by creating a structure to organize them;

– “knot” threads by declaring a meaningful checkpoint in your exploration;

– “untangle” threads by uncovering new context or seeing a pattern that’s valuable to know;

– spawn child threads;

– realize an overarching parent thread… (a speculative sketch of these moves in code follows this list)
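To make those moves concrete, here’s one speculative way to model a thread in code. None of this is canonical TTM; it’s just an illustration of the operations above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Thread:
    """A speculative model of a TTM thread (my sketch, not a spec)."""
    name: str
    notes: List[str] = field(default_factory=list)    # the story so far
    knots: List[str] = field(default_factory=list)    # meaningful checkpoints
    children: List["Thread"] = field(default_factory=list)
    parent: Optional["Thread"] = None
    dropped: bool = False

    def knot(self, checkpoint: str):
        """Declare a meaningful checkpoint in your exploration."""
        self.knots.append(checkpoint)

    def drop(self):
        self.dropped = True

    def pick_up(self):
        self.dropped = False

    def spawn(self, name: str) -> "Thread":
        """Spawn a child thread; realizing a parent works the other way up."""
        child = Thread(name=name, parent=self)
        self.children.append(child)
        return child

# The two threads from that afternoon, dropped and picked up as the phone rang:
ttm = Thread("Thread Theory discussion")
proposal = Thread("Scott/proposal")
proposal.drop()      # back to the whiteboard
proposal.pick_up()   # the conference room phone rings again
```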

Fifteen minutes into these bullet points on the whiteboard, the conference room phone rang again. It was Scott, apologizing for the interruption but wanting clarity about insurance for a subcontractor as part of the proposal we were working on earlier.

Ah, the “Scott/proposal” thread I dropped earlier had resumed.

We let it interrupt us because it was important (and a self-referential example of embracing emerging context on another thread), and after a few minutes, James and I resumed (again) our other thread of talking about Thread Theory on the whiteboard.

Another 15 minutes later, Scott came into the conference room, reporting that the idea we had given him earlier about the subcontractor insurance had some merit in a call he had just made in the other room.

When you’re following threads, it doesn’t matter how many times they get interrupted; what matters is being aware that the number and types of threads you’re following might help you tell the story of your day of testing and solving problems.

Some people report their day around the stories they develop or test.

Some people report sessions.

Some people report the test cases they ran.

James and I suggest a thread paradigm because it may be more useful on chaotic projects to get to the heart of what we actually do – follow the flow of thinking and activities around solving ambiguous problems.

So, yes, he and I had just two threads going on for a few hours yesterday. If we were taking notes about our day or making a report to some stakeholder that mattered, these two threads would be included in what we learned or what value we created. If there were an Agile-style standup the next day, I may decide to talk not in stories or session charters, but in threads-followed.

Whenever you think, “I need to get something done” or “I need to solve a problem”, TTM is a way to manage that.

Since it’s about maximizing opportunities and following the flow of problems, there are two main questions that are useful to ask:

1) Primary: “What thread is most important right now?”

2) Secondary: “On what thread can we make the most progress right now?”

Whether or not you call them threads, these patterns of activity are already there in our day. Knowledge workers (like testers) already know this.

When we find a bug and start our investigation into how bad it is, that’s a thread.

When the Triage Team has a question and we test because we think we can give them more context to make a better decision, that’s a thread.

When we need to know if testing in the cloud is a solution for us, and start researching VMs, Hyper-V, Azure, and different notions of virtualization, that’s a thread.

And instead of lamenting the interruptions to those missions, we hope you take them in stride, asking the two questions above. Maybe it makes sense to cut your current thread for now because of something more urgent. Maybe make a knot or two along the way, comb through some tangles, discover a child thread lurking, or drop a thread entirely and pick it up later. Maybe devote a thread to tie up loose threads. That’s the spirit of TTM.

Either way, try it. When someone asks you how your day went, talk in terms of threads-followed or threads-in-progress, not artifacts-produced.

As you’re reading this, James has written a simultaneous blog on this topic that goes into more detail.

The “testing moment” with Jon Bach

July 3, 2010

Recently, I met a so-called “testing expert”, and it was a profound experience. 

I was on a project at a client site in the Bay Area.  I was doing an exploratory session, modeling a product, but sloppily so.  I was not using the session template.  My notes were rough and unpolished. I was not using James’ HTSM or GFS or even SFDPOT.  I was not using any of the skills I teach others, it seemed. 

I was winging it because I was overwhelmed. With all there was to know about a new product to test, and only two days to learn it and deliver a test plan for the crew back in the lab, I let myself take on everything at once. I got scattered, lost my focus, became disoriented, and the pressure fed on itself the more aware of it I became.

There were 20 documents open on my screen, there were interruptions from the client as they gave me new pieces of info, there was email coming in to answer, there were crucial things installing on my laptop, there were IT issues for me to attend to as I got set up. Everything was happening at once, but I was making no progress.

A thought came to me right away:  “Jon Bach would not be proud.”

I felt he was watching me — this expert famous tester Jon Bach guy — co-inventor of SBTM, conference keynote speaker, article author, blogger, blah blah blah… He was watching a person I’ll just call “me” — a struggling student tester for 15 years, not as technical as he should be, certainly no programmer, scatter-brained, impatient, too self-critical, easily overwhelmed at times as he tried to live up to Jon Bach’s reputation.

I was in the “testing moment” and I was not doing well. My habits were awful. I started things and left them unfinished and moved to something else. It was like I forgot everything I had learned in 15 years of testing.

As I got up-to-speed on the product given to me to test, I found that habits were driving me more than my skill. I felt compelled to dive in and know it all right away — the readme, the build notes, the FAQ, the software, the test plan, the spec, the strategy, the existing bug database.  I had no compass. I had lost track of my charter. I read things and didn’t know what they meant, and tried to cure that by reading it again, slowly. I read things 5 times and it bounced right off. Nothing got in.  I asked the same question to a stakeholder 3 times that day. It was not good. I felt out of shape and not good enough.

Recently, I traded Tweets with Pradeep Soundararajan, one of India’s most famous testers. We have never met, but his stock has been rising over the past few years because of his soulful dedication, practice, leadership, and writing about our craft. More and more, he is someone I want to meet. When I commented on his rising renown, he said: “I think I am doing what every other tester is actually supposed to do. [Others] are just offering an opportunity to me to make me look good.”

I agreed.  Others have made me look good, and I, too, am always trying to do what I think every tester is supposed to do.  But why did I feel like I was failing (and flailing)?

It occurs to me now that we can take all the training in the world, and have all the experience in the world, but what often drives us in that “testing moment” is our own sense of self-worth. We are only as good as our last great test idea, our last bug found, our last oral report to stakeholders; and if we have none of those to immediately recall, we can feel pretty lousy. Worse, that attitude tends to feed on itself: we expect more from ourselves, adding to the pressure and compounding the problem.

This blog has always been about the humanity in software testing, and I never felt more human than in those recent overwhelming moments at that client site. I should have stopped testing and taken a break.  I should have written an issues list with every concern and question I had. I should have not dismissed my confusion so easily.  I should have remembered what I tell people — “confusion is a powerful tool — use it.”

I did not do any of that until the plane ride home the next day.  That’s when I had a coaching session with Jon Bach, the expert. He schooled me there on the plane.  He gave me a robust critique and reminded me of some key points to practice.  Outside of the pressure cooker, he systematically called out my mistakes. Better, he helped me point them out for myself, and he was fair about it. He also called attention to the things I did well (even though I disagreed with him on some of those). 

More importantly, he and I decided to write this blog together, and we wanted to say that we agree on one major point: we should find ways to rehearse our testing so that when we’re in the “testing moment”, we have a better chance of feeling worthy.

If testing is a performance (as some say), then when do we get a chance to play badly before the curtain goes up? When do we practice scales and read new sheet music, and study other musicians to see how they play?

One answer is why Pradeep is so renowned — he inspired and mentored the “Weekend Testing” phenomenon into what seems to be a reliable and powerful culture of learning and testing practice that offers everyone a chance to try new things and fail safely. I’ve been a part of just three of them, but there have been over 30 as of this writing. It is a culture of “learning moments” born of many participants’ “testing moments” on projects.

So with this blog, I dedicate myself (and challenge so-called “testing expert” Jon Bach) to do more in his training to create more “learning moments” — epiphanies and discoveries of one’s skills through hands-on exercises that give testers a chance to rehearse.

If we both do that well enough, we may hear an inner voice in those testing moments that says things like “slow down, remember your charter, ask for help, one thing at a time, remember this tool, remember this technique, remember that this is normal, and I know just what to do…”

Else, we may feel lost when those testing moments come, and beat ourselves up about it.

But Jon Bach reminds me in this last sentence to say there is another choice — we can realize that the failure to be brilliant and proud of ourselves in any given “testing moment” could be considered an important “learning moment”. (With that, I agree, and I hope he reminds me of it from time to time.)

The Right Combination

June 8, 2010

Once upon a time, I was in a meeting in a conference room with a projector on the table, chained to it with a combination lock — the kind that has four dials with numbers from 0 to 9. The bulb on the projector was burned out, so it was a big interfering brick. Word was, the admin had forgotten the combination, so there it stood.

It occurred to me that I had nothing to lose by trying to crack it. I did some math in my head: four dials of ten digits each means 10,000 combinations, and at about a second and a half per try, it would take just four uninterrupted hours to try everything from 0000 to 9999.

This meeting was a two-hour requirements meeting, so there was half of it right there…

Now I wouldn’t be worth my reputation if I just started at 0000 and tried every combination until I hit 9999.  No, no, no.  I am a rapid tester, schooled in the use of risk-based methods and heuristics – especially when stakeholders want to ship as soon as possible.

So before the meeting started, I asked a few questions of the people around me.

“Does anyone know the combination to this thing?” I asked. (It’s always good to check the major assumptions that can haunt you later.)

Nobody knew.

The conference room was 1215.

No luck.

Our address started with 2445.

No love there either.

“What’s Tracy’s birthday?” I asked, referring to our admin.

No one knew.

I had already taken stock of the spinners — four spinners, 0–9. I spun each to see if any were sticky; a sticky number might be a clue as to how the tumblers were seated.

I quickly tried 0000, 1111, 2222, and up.

Nope.  Just like a good password, this one wasn’t immediately obvious.

Some people offered ideas (welcome to paired testing).

“Try the phone extension — #16787.”

I tried 1678 and 6787.  Nope.

“2468,” someone else offered.

Why that?!?

“Random testing.”

Didn’t sound too random to me, but it was easy and quick to try anyway.

I blindly spun the spinners to get something more random.

3374.

Nope.

Other ideas from people around me:

0101 — binary — it was the room closest to the developers. I tried that and 1010, 1011, 1101, 1001, 0001, etc.

Then 2525 (that was how old someone thought Tracy was).

Then I remembered to write down my ideas. This was a testing problem, after all — just like a bug repro.

I wrote down one combination, then changed the last dial through 0–9, pulling on the lock after each change to see if I could feel it getting looser. Then I moved to the next spinner over and spun it through 0–9 (the heuristic “one factor at a time”, OFAT, as opposed to MFAT for “many factors at a time”).

Then it opened.

Just like a good math exam, it doesn’t matter that I got the answer right; it matters how I GOT the answer. Show your work. And thank goodness I had a little diagram to show my last few tests.
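To make the OFAT tactic concrete, here’s a minimal sketch of that dial-by-dial search. The `looseness` oracle is hypothetical; it stands in for the tug I gave the shackle after each change.

```python
def ofat_search(start, looseness, num_dials=4, digits=10):
    """One factor at a time (OFAT): spin a single dial through all of its
    values while holding the others fixed, keep the setting with the best
    feedback, then move on to the next dial.

    `looseness(combo)` is a hypothetical oracle for the tug on the shackle."""
    combo = list(start)
    for dial in reversed(range(num_dials)):  # I started with the last dial
        scores = {}
        for value in range(digits):
            combo[dial] = value
            scores[value] = looseness(combo)
        combo[dial] = max(scores, key=scores.get)  # keep the loosest setting
    return combo
```

If a lock leaks feedback like that, OFAT needs at most 4 × 10 = 40 tugs instead of up to 10,000 blind tries, which is why it beat my four-hour estimate.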

For a few happy seconds, I was a hero, a magician, a Svengali, as a few people laughed in astonishment at what they had just seen. It kind of sucked that it opened right there in the meeting. It was a bit distracting, and hard not to celebrate my accomplishment even a little bit.

Since then, I have always carried with me a combination lock to present to testers and take notes on what I observe as they try to discern the combination.  The exercise is very much like reproducing a bug. You try things on your own and you try to elicit data from sources.

A day came when I was in Walgreens and found a combination lock with letters instead of numbers.  You could also set the lock to a custom word to make it easier to remember.  I thought that was a great idea.

I have tried this on professional colleagues like James, Rob Sabourin, Shmuel Gershon, and Justin Hunter from Hexawise – a company that makes a combinatorial testing tool! I wrote down their questions, recorded their tries, watched their techniques, and asked them questions as they tested. I also invited them to turn the tables on me by setting their own combination and giving it to me to discern.

Then, about two weeks ago, I met a 12-year-old at a peer conference about testing. He was the son of one of the attendees. His name was Steven and he seemed bored, of course. It wasn’t even a testing conference, after all; it was a conference about *writing* about testing. Snoozefest for him, for sure.

Never one to shy away from new perspectives on testing no matter what age, I asked if he would be interested in helping me with a problem.

I handed him the letter lock.

“I forgot the combination to this lock,” I said. “What if I said, ‘If you find it, there’s a half-million-dollar prize’? What would you do to open it?”

I said he could ask me anything he wanted.  I took out my notebook ready to observe what he tried.

The following is a report from Steven about what happened next.

I present it free and without commercial interruption…

“I went to a conference in Colorado with my mom.  It was a group of software testers.  This guy, Jon, gave me a lock and told me to try to open it.  He said I would get a half million dollar prize.  So, I tried to lift the metal part of the lock because when it got stuck, that’s probably where the combination was.  That’s what I have done on some other combination locks.

I tried doing that but it didn’t work.

Then I started asking him questions, like “Was it something you forgot in the office?”

I tried the combination “DOOR “, but that didn’t work.

Then I asked him “Were you trying to remember something?” and he said “Yes.” So then I tried “BUGS ” and that didn’t work either.

Finally, he gave me a hint and he said the first letter was T and the last letter was S.    When I heard that I tried TESTS because it was the first thing that came to my mind when he said that and it popped open!

Then I asked him “Where is my half million bucks?”

After that, we went back inside, and I asked him to give me another problem.  I kept trying and trying and trying and this time he gave me a few hints and then I finally got it — the combination was “WRITE”.

Then he said I could set the lock, so I wanted to try to give him one.  I gave him one where I put the letters into the default code of WORDS and then I picked another set of letters from another side of the lock.  They didn’t make a real word.  But eventually he got that.

Then I tried giving him a combination of “DALLAS”, which was too long, so I used “DALAS”.  But as I was setting it to that combination, after I turned the key to set it, I tried to open it with that combination and it wouldn’t open.  I mixed it up and tried “DALAS” again.  But that didn’t work.  I realized that as I was setting it to that combination, I think I mixed up some letters.

I tried doing the letters just before and after the ones in DALAS, but that still didn’t work. Then I didn’t know what to do, and then I told Jon and he tried to open it but he couldn’t either. I tried working on this for many hours, but I couldn’t get it open.

I remembered that you can open a lock with a soda can.  I’ve seen on the internet where you cut it in like little hill shapes, and then it’s like a rectangle on the bottom and you fold it in half and then slide the hill part where the button is inside the combination lock and then when you slide the hill part over you can push on the lock and then it should open.  But I cut a whole can apart and I did it wrong and I messed it up.

I was really upset because I really wanted to get this lock opened.  I got so frustrated that I went downstairs to play video games on the computer.

Jon came down and said that I actually had given him a new challenge. He told me that this was like when testers couldn’t reproduce a new bug.  So he told me that this would be a really great challenge for him and next time he brings this to some students, they could try to get it open.

================================

Steven working on the lock as I record his ideas (photo by Lisa Crispin)

Here’s what I noticed in his report:

1) Authenticity: he was upset and frustrated at his mistake of forgetting the combination once he set it.

2) Obsession and drive: he said he worked on the lock for many hours. He never asked for a hint, but I felt bad and gave him one, which he accepted and improvised ideas around.

3) Cognition and recall: he remembered seeing a video about hacking a lock (be careful, Mom).

4) Integrity: he admits he messed up the soda can hack.

5) Curiosity: he tried things.

6) Inquiry: he asked questions.

7) Humor: “where is my half million?”

He also gave me a web reference to where he had seen the soda can lock hack, but for security and ethical reasons, I’m choosing not to disclose it.

This is a wonderful combination of skills and traits that make me think this kid has a bright future.

Yes, I had rolled my eyes when Steven mentioned something about needing a Coke can to open the lock. He explained, but I could not visualize what he was talking about. It was no different than working with someone overseas through a thick accent: I needed him to show me.

The next day, he did.  He found a can, cut it up into little pieces and used one of those little pieces to insert around the hasp in an attempt to make a shunt for where the tumbler met the lock.  I thought that was pretty cool.  I obsessed about it with him for several minutes.

I think he came up with this idea because of his guilt for forgetting the combination he had set. Call it peer respect, but it seemed important for Steven to make sure I could get this thing open before I left.

That blew me away.  Unlike my teenage nephews who could not care less what I do for a living, this young man was engaged and engaging.  They call these kinds of kids “problem children” in school, but give them a real problem to solve that interests them and the “problem” goes away.

His hack didn’t work, but that wasn’t the point.  When his mother said he felt awful about forgetting the combination, I knew it would take a mere 3 seconds to think of what I would say to him. I told him I saw more heart and energy in his idea than I see in professional testers sometimes, and even though it didn’t open, I would have a challenge for the plane ride home *and* a story to tell about it at the next conference.  At the very least, I said, I’d have a blog entry about it.  And here it is.

I’ll have to take his Mom’s word for it that I made a difference to him somehow, but even then that’s not my aim.  Steven confirmed for me that the spirit behind the lock exercise is a good one, no matter the age.  Maybe we’re all just grown-up 12-year-olds looking to apply ourselves to something that needs our skill and insight.

Well done, Steven.  Given more time, you would have gotten that lock open, but that’s beside the point. There’s a future in this business if you want one, and all you have to do is show up and try different combinations. I can’t think of a better life metaphor than that, so thanks for the life lesson.

NOTE: Steven tried some four-letter words above despite it being a 5-letter lock, but he forgot to mention (and so did I) that there was a blank space on the 5th spinner, so the ideas were legitimate tests.

Context: male, female, or N/A

June 6, 2010

If you’re reading this, it’s a safe bet that you are either male or female.

But maybe you’re reading this at work right now and feel gender neutral.  After all, gender is irrelevant in testing.

Or is it?

You could be testing a website tailored to women or acting out a male persona as you test an e-survey about your last prostate exam.

But let’s say you’re testing a password login for an e-commerce site with the standard bag of tricks for test ideas: cross-site scripting, SQL injection, embedded HTML, super long passwords to exploit a buffer overflow.  In that case, you may be asexual, gender neutral, and think that ideas are ideas regardless of sex.

But let’s say you do this testing thing very well and have garnered a bit of a name for yourself.  You get an award for it, public recognition, accolades, blog mentions, a testimonial dinner in your honor.  Oh, and to qualify for this great honor, it was required that you be female.

Now how does it feel? Your ideas were great, but better that you’re a woman!

I doubt a condescending tone was the intent of the organizers of “Women In Agile”. I’m sure they felt there are not enough women in testing — though it’s unclear how they calculate such a thing — and this is their way of promoting diversity, or in their words: “give a voice to this group and promote the empowerment of women in agile teams.”

I hadn’t realized women were “underpowered” and voiceless, but maybe I’m naive (wouldn’t be the first time). Regardless, they’re going to find ways to empower women. I’m not included in that just because of my gender. Apparently, my gender already makes me empowered enough not to need outside help. In fact, if there were a Men in Agile org, there would be an outcry, right?

I would be mad about this female bias if I felt I needed empowerment from an outside source.  Maybe I have it made because I’m a man, but I prefer to believe it’s because I have found ways to develop my own power. Just because I’m a man doesn’t mean the way has been paved for me.  If it was, I must have missed the secret meeting.

But what really struck me about the Women in Agile program was this: “[Women’s] stories will describe how embracing the diverse opinions, experiences and special perspectives of women can and does make agile teams and projects better.”

I felt that was not only sexist but would even be condescending to some women testers I know. I asked some and they confirmed it. And that’s why I felt justified in reacting strongly on Twitter. The WIA says that just because you are female, you have a “special perspective”. But special in what context? Any context? I suppose women would have a special perspective about prostate cancer, but wouldn’t I have a special perspective about uterine cancer?

With this, I tweeted about the WIA Friday night and it kicked off a conversation between Lanette Creamer and James. (Lanette has since posted about this topic).  Marlena Compton joined in and it escalated. After a few tweets, she suddenly (and oddly) condemned the Context-Driven philosophy.

A follower supported her, tweeting “Context-Driven School implodes”, referencing the debate between Marlena and James, tagging Marlena’s tweet that the Context-Driven School was “sexist bullshit.”

Not sure how she made that leap. I’ve met Marlena, and I’ve read her smart and thoughtful posts about data visualization and other technical topics. She’s never been one to like Twitter debates, but I was disappointed at how much anger she showed so fast, condemning an entire testing philosophy after a few tweets with one of its founders — especially on a subject that was all about context, in this case the context of gender in testing.

Though Marlena might have imploded that night, the Context-Driven School did not. It felt stronger and more affirming to me, because gender may indeed be an important context in testing.

Marlena has since written a blog saying: “So if you are among those who think we all ought to be wearing badges announcing how great it is that we fit some cultural stereotype/straightjacket, I hope you take some time to rethink that stance.”

I was going to agree, but then I remembered Louann Brizendine’s two books: “The Male Brain” and “The Female Brain.”  In the latter, she writes: “scientists have documented an astonishing array of structural, chemical, genetic, hormonal, and functional brain differences between women and men.  We’ve learned that men and women have different brain sensitivities to stress and conflict. They use different brain areas and circuits to solve problems, process language, experience and store the same strong emotion.  Women may remember the smallest details of their first dates, and their biggest fights, while their husbands barely remember that these things happened.  Brain structure and chemistry have everything to do why this is so.”

With that, maybe we do need a “Women in Agile” organization.  Maybe women do have “special perspectives” by virtue of having something Brizendine calls a “female brain.” Should they be rewarded for that perspective, though? I still don’t think so.

Just when I was confused about which side of the issue I was on, Context came in to clarify it. Actually, Context and Maura van der Linden, to be exact. I’ve known Maura for years, and I had forgotten how much I respect her judgment. Forget my gender-neutral password security testing example above — Maura happens to *be* a security testing expert (author of the extremely useful “Testing Code Security”)! But it was her most recent blog post that clarified it for me:

“When I think of any group called “Women in X”, I immediately try to figure out what the purpose of the group is. I am never a fan of any type of diversity quotas or rules. But I consider that there are HUGE numbers of ways to be different from another person. Things like skillsets, experience, interest, hobbies, etc. Being a female is a part of my makeup but it’s only a small part of the puzzle. I’m more likely to consider myself an Agile tester or a security tester than I am a female tester because I don’t think being female is a major point I bring to the table.”

I don’t think being male is a major piece I bring to the table, but in the right context, it could be meaningful.  I just don’t want that meaning to qualify me in any way for rewards or recognition.

Automated Baseball?

June 3, 2010

Something big happened in a baseball game last night that is causing a buzz in the sports world today.  I think it’s related to a buzz in the world of software testing.

Armando Galarraga, a pitcher for the Detroit Tigers, was on the verge of pitching a “perfect game” — a game not only in which no batter on the opposing team gets a hit (a “no-hitter”), but in which no batter even makes it to first base. That meant Galarraga had to retire all 27 batters trying to smack the ball into play. That’s some great pitching on his part, along with some exceptional defensive support from his teammates.

Perfect games are rare. In the 134-year history of Major League Baseball, there have been only 20 perfect games. Two of them, amazingly, happened last month, something that had never before occurred in a single season.

And last night at 6 pm Pacific Standard Time, Armando Galarraga was set to be the 21st.

In the 9th and last inning, Galarraga faced one last batter: Jason Donald. Galarraga delivered a pitch and Donald connected. The ball was fielded by Tigers first baseman Miguel Cabrera, who was way off the base, so Galarraga ran to cover the bag Donald was running for. Cabrera threw the ball to Galarraga, who caught it and touched first base in mid-stride, beating Donald by a full step.

But to everyone’s astonishment, first base umpire Jim Joyce called Donald safe! The call meant Donald had made it to first base before the ball reached Galarraga’s glove, spoiling the perfect game.

As the crowd booed, Tigers manager Jim Leyland came out and argued with Joyce, but the call stood. The crowd then watched the instant replay, which showed the Indians batter Donald out by a full step. Donald had not beaten the throw. He should have been out. Jim Joyce got the call wrong and everybody saw it.

But in baseball, even though umpire judgment calls can be argued, those calls rarely get reversed, and then only by another umpire who saw the play. It was hopeless. Furthermore, it was time to move on to the next batter, which Galarraga did — and he got that batter out to end the game.

It didn’t matter that the Tigers won the game. The “perfect game” — a game in which Galarraga technically allowed no batters to reach first base — was spoiled, even though the objective truth (according to the camera footage) showed that Galarraga did not allow Donald to safely reach first base.

Unlike in other sports, the camera has no say in how baseball games are decided. In baseball, the umpires decide. It’s purely human judgment in the moment. Other sports allow appeals to officials if the camera shows a different story than their ruling indicated. Not baseball. At least, not *yet*. After last night, that might change, because this particular game had a bearing on some historical statistics that make baseball much more interesting for a lot of people to follow.

That judgment call by umpire Jim Joyce is now the topic of sports radio call-in shows, newspaper sports sections, and online blogs and articles all across the country today – how he got the call wrong, what the camera showed, if baseball should allow instant replay to influence the game, even how the call was handled by the pitcher, the umpire, the manager, and soon, the Commissioner of Baseball, who oversees everything in the sport.

How is this important to software testing?

There is a balance in baseball between what the camera sees and what the umpire sees.  In testing, there is a balance between what the tester can test and what the computer can test.

In software, testers use their judgment.  Machines have no judgment other than what they are programmed to do.  They are programmed to execute and record, to render and calculate.

As it happened, about an hour before that game, I was talking with Michael Bolton and Ben Simo online about the term “exploratory test automation.” I had retweeted Elisabeth Hendrickson’s post about a class she was hosting at Agilistry (called “Exploring Automated Exploratory Testing”).

Bolton, Simo, and I were discussing that title, trying to see if we could come up with something more accurate, because Elisabeth’s title seemed to be a contradiction in terms. How do you automate exploration, when exploration is inherently human judgment and skill (reacting to what we learn in the moment) and automation is not? We were pretty sure we knew what she meant by the class, but how best to describe the interaction between machine and human?

It’s important to know that Michael, Ben, my brother, and I are people who believe in the power of language to convey ideas and meaning. We argue over precision and semantics because they communicate more than just words. We believe it is important to debate these kinds of things, openly and publicly, because it propels and provokes conversation about meaningful ideas that are meant to help all testers everywhere win more credibility and respect, much in the same way arguing baseball calls can evolve the sport.

So we traded ideas of how to describe the computer’s role in exploration.  Since it was a public discussion on Twitter, people following that thread could chime in:

Michael Bolton’s idea was to call it “Tool-Supported Exploratory Testing” (proving to be a humorous, dyslexic TSET)

James wanted to flip the words and call it “Exploratory Automated Testing”

Oliver Erlewein liked “ETX” (and so did I) but doesn’t yet know what the X could be — it’s just cool.

Zeger van Hese suggested “Hybrid Exploratory Testing”

I offered the playful “Bionic Testing” after the Six-Million-Dollar Man.

Alan Page said it could simply be called “exploratory testing” and leave it at that because no matter whether your exploration was computer-assisted, it’s still exploration.  James liked that and so did I.

But isn’t there a term or a phrase or a word that can more accurately and precisely describe the computer’s role in assisting testing?

Is it automation when you use a tool to help reveal a bug?

Is it automation when a machine executes a programmed test procedure?

Is it automation when you use Task Manager to see the list of processes in memory?

Is it automation when you execute Cucumber or FitNesse (keyword-driven) tests?

What do you call it when you click a button on a test harness and it clicks on the objects on the screen for you and delivers a report at the end of the script?

If it’s all “automation”, doesn’t that imply that it needs no human intervention?

I think we can find a better term.

Everyone can agree that computers help exploration.  Call them “probe droids” or “bots” or “tools” — they inform a human about things that are notoriously hard for humans to know on our own.  They do things that are hard or slow or tedious or expensive or impossible for a human to do.

But we also know that it’s impossible for software to test itself in all the ways we can test it — just as it’s impossible for a camera to replace umpires at baseball games. Computers and humans enhance each other.

Today in baseball, there’s a lot of energy and debate because of that game last night.  Galarraga’s near-perfect game may lead to a major change in using replay in baseball games.  The Commissioner of Baseball may even overturn Joyce’s ruling, meaning that the official record books would reflect a perfect game last night in Detroit.

Today in software testing, there’s energy and debate around the word “automation”, especially with more classes like Elisabeth’s and the more we talk about Test-Driven Design and tools on projects.

While baseball debates whether to use instant replay to help decide close plays, I’ll bet you that if they decide to use it, they will not call it “automated baseball.” We testers *know* we use technology to help us with testing; I just think we can do better than “automated testing”.

What is AU2H? (and why I cared)

May 27, 2010

Agile Up to Here:  an experience report

If you haven’t heard the term “Agilistry”, don’t worry: it’s not a new development methodology you have to learn in order to be current. But there is a good chance you will be hearing more about it.

Agilistry is the name for a training space in Pleasanton, CA, opened by Agile luminary and long-time software development consultant Elisabeth Hendrickson.  Known for her immersive and practical software development exercises, Elisabeth has opened a space for software professionals to learn the “true spirit of Agile software development.”

Last week, I had a chance to see if her studio lived up to her claim of “a place where Agile software development professionals come to sharpen their saws and practice their craft.”

I’ve known Elisabeth since 2000 when she came to Satisfice, the company (and training space) my brother created in 1999.  He created it to give testers a chance to practice their craft.  Ten years later (and partly inspired by her experience at Satisfice) she has turned the tables and invited me to see it in action.  Actually, I was just one of 11 guests summoned to Pleasanton to see what she had in mind for her workshop idea called “Agile Up to Here.” (search #au2h on Twitter for threads)  

As Manager for Corporate Intellect here at Quardev, part of my job is to put myself in places that maximize my ability to learn new things about software so we can stay competitive. Principles and practices related to Agile Development are things that continue to emerge for us on more and more projects we are asked to bid on.

When she invited me, my main concern was what value I would add to an Agile workshop. In my experience, Agile was about programmers doing all of the testing, and I’m not a programmer. Also, Agile proponents always seemed to imply that there were no defined roles for testers, because developers did all the testing through unit and acceptance tests.

I expressed this concern to Elisabeth and she was adamant.  “Not only is exploratory testing part of Agile, it is a crucial component of it. You are required to be here.” That made me feel better.  I trusted Elisabeth because she had demonstrated that although a very fervent fan of Agile, she hadn’t lost her passion for testing.

I’m not a newbie to Agile, but there are tons of people who know a lot more about it than me. Sure, I’m familiar with the Agile Manifesto and know about story cards, backlogs, refactoring, sprints, Scrumboards, big visible charts, Test-Driven Design.  I was also a stage producer at the Agile2008 Conference in Toronto, hosting the “Questioning Agile” track, and I have worked as a test manager on projects that used facets of Agile. 

At Agilistry last week, I was first to arrive (a bag of Seattle coffee in hand to brew for the crew) and found Elisabeth setting up. There were 7 pairing stations, a big rolling whiteboard, index cards of every color everywhere, a few small couches to sit on, a monitor on the wall for the Hudson continuous integration system to advertise its results, a small fridge and sink area, a printer, a wireless network… and that was about it. A pleasant space in Pleasanton, not over-complicated, but resembling what the Agile conventions suggested – no cubes, no walls, maximized for pairing, transparency, and communication.


Leading up to the workshop, there had been a wiki for us to get to know each other, post our bios and expectations, take advantage of the Twitter hashtag (#au2h), etc., but as people arrived, it wasn’t clear to me what our mission was. 

We had our first stand-up: introductions. Everybody was a programmer except for me. Just as I had thought, I was sure I was going to be made obsolete. But I trusted what Elisabeth told me: that I was a required component, that I would add value by being there.


Alan Cooper from Cooper Interaction Design, author of The Inmates Are Running the Asylum and About Face, told us the mission. He was a word nut. For years he had collected homophones – words that sound alike but are spelled differently and mean different things (e.g. ere, air, and heir). He had a website that listed some of his collection, but a lot of it was tucked away on his hard drive. Furthermore, his site was old – vintage 1997, web .5 (not even 1.0) – and the list was hardcoded HTML.

As Product Owner (not designer), his main objective for us was “Get me out of 1997!”

He didn’t elaborate more than telling us what homophones were, but he did make enough of an introduction for me to get the gist that we would be building a site for him from scratch in these 5 days. I love challenges like that, especially when they are authentic – a real problem for a real person.  Abstraction lessons can be fun, too, but I’d much rather provide value to some person.

Part facilitator, part host, and part programmer, Elisabeth announced that she would need some help configuring the machines. In seconds, she got two of the programmer-types to volunteer — Pat Maddox and BJ Clark helped her configure the pairing stations with the tools we needed: Hudson CI, GitHub, RSpec, Ruby on Rails, and Cucumber.

BJ Clark and Pat Maddox

Jeff Patton, an independent consultant and Agile coach, was also in attendance and emerged as a natural ScrumMaster, suggesting that the rest of us meet with Alan to get an idea of the kinds of things we wanted to see in a new site.

Jeff Patton and Alan Cooper

And just like that, without fanfare or ceremony, we broke from our huddle like a team taking the field. 

It felt weird.  No specs, no design docs, no budget, no buy-in, no high-level meetings, no executives, no paperwork to fill out.  Just go and DO.

So as Jeff Patton took the lead to interview Alan Cooper about his ideas for the new site, Dale Emery, Matt Barcomb, Katrina Owen, and I gathered around to listen. Index cards were plentiful and Jeff used them like a sculptor uses clay.

Two hours later, the machines were set up and my group was done talking with Alan – we had enough to get an idea of what he wanted and the board was full of Backlog.

Storyboard

The standup we had after that was simple. After a quick status report, Alan did a brief chalk talk on design, then we set to work, picking the few stories we’d do the rest of that day – no bickering, no dissension, no turmoil. It just flowed. There was no confusion, no chaos, no tension. It reminded me of that scene in Apollo 13 where the ground crew had to build a filter out of spare parts. Yes, there was urgency and energy around the mission, but there was no clumsiness. People worked together, and all anyone had to do was say or suggest something and a natural affinity formed among those who agreed. Those who wanted to do something different did, and found someone to pair with.

Elisabeth, Alan, me (in hat), and Matt Barcomb

What struck me when I paired with Elisabeth was that TDD seemed like hacking. She would write code, and then tests around that code, and the tests would fail. That was a good thing, she said. Then she did trial-and-error fixing until the tests passed. She admitted when she was stuck or didn’t know how to do something; she’d just ask the pair next to her for advice, or look it up online or in the API help docs, and a solution would emerge. But I rolled my eyes, because this was just hacking. She was trying different things, not knowing if they would work. That was TDD?!? Come on, really?!?

When I questioned Elisabeth about this, she said something that instantly hit me.

Yes, experimentation is ok with TDD, but it’s not just trying *anything* – it’s thoughtful experimentation.  In one phrase, Elisabeth caught me judging TDD the same way people attack exploratory testing as reckless “banging on the keys.”  There was a method to her trials, and I didn’t see it because I didn’t know what to look for. It’s much the same way test managers and execs aren’t hip to the language of skills and tactics that testers use when they explore – modeling, conjecturing, observing, branching, backtracking, questioning.  These words describe what many people walking by would call “playing around”, but when the right language is used to describe what exploration really is, it’s more apt to be understood and taken seriously.
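
For anyone who hasn’t watched that rhythm up close, here’s a minimal, self-contained sketch of it in RSpec – the HomophoneSet class and its behavior are my invention for illustration, not the code we actually wrote, and I’m using today’s expect syntax rather than the “should” style of the time:

    # homophone_spec.rb – run with: rspec homophone_spec.rb
    require "rspec"

    class HomophoneSet
      # Step 2 (green): written only after the example below had failed,
      # and only enough of it to make that example pass.
      def initialize(*words)
        @words = words
      end

      def include?(word)
        @words.include?(word)
      end
    end

    RSpec.describe HomophoneSet do
      # Step 1 (red): this expectation was written first and failed
      # before the class above existed.
      it "knows which spellings belong to the same set" do
        set = HomophoneSet.new("ere", "air", "heir")
        expect(set.include?("heir")).to be(true)
      end
    end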

Elisabeth, working out code

Another thing I chided Elisabeth about was how she found a bug and fixed it in about 30 seconds. The finding and fixing part was cool, but then she took 30 minutes to write TDD tests around it!  I thought that was a waste of effort.  The bug was found and fixed – why spend all that time writing a regression test for such a little thing?!?  Then she explained it to me: it’s not just about regression, it’s the *process* of creating the test that’s important.  The lessons learned in building that test may come in handy later.
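
Here’s the shape of what she meant – the bug and every name here are invented for illustration (imagine the 30-second fix was downcasing a search query):

    # search_spec.rb – an invented illustration; run with: rspec search_spec.rb
    require "rspec"

    class HomophoneSearch
      def initialize(sets)
        @sets = sets
      end

      # The hypothetical 30-second fix: downcase both sides so that
      # "Heir" still finds the ere/air/heir set.
      def find(word)
        @sets.find { |set| set.map(&:downcase).include?(word.downcase) }
      end
    end

    RSpec.describe HomophoneSearch do
      # The 30-minute part: a test that keeps the bug from quietly coming
      # back, and whose writing taught us how the search path behaves.
      it "matches homophone sets regardless of case" do
        search = HomophoneSearch.new([%w[ere air heir]])
        expect(search.find("Heir")).to eq(%w[ere air heir])
      end
    end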

Again, I felt sheepish.  Sometimes I go down a rat hole with a test and it may seem like a waste of time to a stakeholder, but it’s what I learn from that “wasteful” test that stays with me.  That seemed to me to be a big part of Agile development – learning.  In fact, I was surprised (happily so) to learn that when developers do a spate of programming in this trial-and-error way, they call it a learning “spike”.  I liked that.  I have a word for it, too – a “session” – but I didn’t have a word for a smaller period of time, so “spike” is what I’ll borrow from them.

Dale Emery, BJ, and Kat Owen

The first three days, I did not feel that the site or any of its functions were ready for me to test using my favorite testing approach.  I didn’t feel that I would have added *value* by testing what was there.  The components were simple, they worked, and to test it in the ways I had in mind did not seem to suit anyone, even me.  The risk was low and it was still under construction anyway.

When developers finished a story and some TDD tests, they would ring a bell and everyone would want to know what was implemented. That turned out to be an important component of feeling we were providing value — a mini-celebration.  The bell rang more frequently than I had expected.  Progress was very fast, but not sloppy. The confirmatory tests we wrote were passing, but I was ready to try something more sinister to expose risks.

By Wednesday, enough of the pieces were coming together that I felt it would be worth it to the team to see what could be wrong with the site. So I started pairing.

First, I paired with Matt on a session to explore risks in the homophone search feature. Then I paired with Pat on a session to explore risks in how homophone sets were presented.

I got to show exploratory testing in action — questioning, adapting, chartering, note-taking, and learning *outside* of TDD test-creation. And the programmers were open and receptive.  I bounced ideas off them, and they bounced ideas off me.  When we found bugs, I was happy, but instead of ringing a bell, all the celebration I needed was to write each one on a red card and put it on the board, making the point Elisabeth knew all along — exploratory testing has an important place in Agile development. And no one complained about that. On the contrary, they reacted with purpose and curiosity to what I found.

Work-in-progress board (kanban)

I learned that the synergy of Agile programming and testing was not meant to make testers extinct after all.  It was a means of learning both sides of development – programming and testing – as two important components of the same effort.  In fact, I’d say it was the fun part of the studio environment. It was, as Elisabeth might say, “Agilistry in action.”

Most importantly, in 5 days, we turned this old, 1997 site:

http://www.cooper.com/alan/homonym_list.html

Into this:

http://homophones.heroku.com

“You just have to try it for yourself” is a conversation-stopper. It’s usually said when the person trying to persuade you of something has given up on you.  But if you set aside that freight and take them up on the invitation, it might be a profound experience.

After what I went through at #au2h, I was honored to have been invited. I wanted the chance to see if Elisabeth’s studio was indeed a place where “Agile software development professionals come to sharpen their saws and practice their craft” and I left convinced that she had hit a home run in designing the perfect space to emphasize these experiences.

Oh, by the way… did you remember that Alan Cooper was Product Owner? If you want to read his lessons on what happened for him, here they are: http://www.cooper.com/journal/2010/05/agile_up_to_here.html

The Truth about Testing?

May 19, 2010

It takes a lot for me to get riled up, but here I am.

Stuart Reid is doing a keynote at EuroSTAR titled “When Passion Obscures The Facts: The Case for Evidence-Based Testing.”

Here are three things he intends to show:

* How testing ‘evangelists’ use their apparent passion to conceal a lack of evidence supporting their claims
* Which claims are supported by evidence, which are just plain wrong, and which lack real evidence.
* How we should collect metrics to provide evidence to support testing improvements.

To me, these are not articles of scientific inquiry for an honest presentation about the origins and intricacies of controversies in our craft; they are weak opening arguments in a frivolous lawsuit he is bringing against it.

His argument is that there are rival philosophies of testing (called “schools”) that are misleading you about testing. (Though for what purpose, he does not say).  This talk seems to be about how he will drag these rival, passionate evangelist ne’er-do-wells before the High Council so that he can show how they are obscuring the truth as represented by what he calls “facts” & “evidence”.
 
First, I identify myself as one of the “passionate evangelists” from one of the schools he is taking to task (the Context-Driven School). Second, I consider myself an advocate for the craft and science called “software testing”, and I believe that questions like “is exploratory testing more effective than scripted testing?” need to take a lot of context into account before they can be answered to someone’s satisfaction.  But to claim that I hold the “facts” about controversial testing topics like this, framed as “evidence” that can transcend years of controversy, would be not only ridiculous but arrogant and insulting.

But he goes on…

“This presentation will identify which claims are supported by valid evidence, which claims disagree with the available evidence, and those claims where there is currently insufficient evidence to reasonably support a claim one way or the other.”

Did you notice which words he chose to accompany the word “evidence”? — “real”, “valid”, “available”, and “insufficient”.

According to whom?  You, dear reader? 

Of course not.  You can’t use these words because you don’t know any better.  You’ve been manipulated.  He hasn’t, thank goodness.

His case depends on convincing you that his evidence — obscured from you by people like me [see his title] — finally allows you to sort out six specific software testing controversies that have persisted for years.  How else, other than by showing you his briefcase full of facts, will I and the other Svengali evangelists from rival testing schools be exposed for misleading you about these issues? How else, other than by seeing his evidence, will you be free once and for all from the polarizing debates we Svengalis perpetuate?

I see Reid as a misguided politician-lawyer who needs a big case to get noticed.  He’s hoping you will not be smart enough to see that any premises (and promises) of “evidence” are subjective.  In other words, they need context — the theme of one of the very schools he says is swaying you. 

Is he really the crime-fighting hero, armed with a briefcase that, once opened, would settle these testing debates between the rival schools that have been misleading and plaguing gentle, innocent, unsuspecting tester-folk for years?

I think it’s more likely that you’re the jury in this case, knowing that software testing is a challenging intellectual process, not a set of absolute truths held in someone’s briefcase waiting to be laid out for you — especially by someone who doesn’t think it is. 

At least, that’s what the “evidence” of his title and abstract shows me.  The main difference between me and Reid is that my School has taught me that evidence, as in a court of law, can be circumstantial.

20 things I’ve done to inspire testers

May 15, 2010

Clarification: I don’t know if these *actually* inspired the testers who have worked for me, but I have indications that they at least built good will.

1) Help them midwife their ideas.

2) Catch them doing something cool.

3) Be an example (as in Parimala’s blog about looking for a book).

4) Pretend you’re the new guy and ask them for tips and advice.

5) Tell them your failures and invite them to suggest what you could have done differently.

6) Find a way to “dogfood” the app you’re testing — don’t just pretend to be a user; find a way they would actually use it themselves.

7) Ask them what movies or actors inspire them, then care about the answer.

8) Solve one problem for them OR allow them to solve one problem for you.

9) Back them up in a conflict they had with management.

10) Demonstrate testing to them but show your thinking (mistakes, assumptions, etc.) as you test.

11) Have them email someone in our business like Michael Bolton or Lanette Creamer, who have good ideas and love responding to honest questions from colleagues.

12) Pretend that the developer forbade them to test something and see what they would do about that.

13) Have a friendly competition to see who can find the best bug, or create a flash mob for them to share their ideas and borrow from others (the #parkcalc one last month is a good example).

14) Have them go to a testing conference, but be sure they hang out with other testers AFTER the conference day is done.

15) Allow them a safe space to fail, but also to show their smarts.

16) Invite them to give YOU a brainteaser or puzzle that YOU have to solve in front of them.

17) Pretend that you are shipping tomorrow and see what they would do about it at a time when management may think “we found everything”.

18) Encourage them to participate in a Weekend Tester session, and if they’re shy, just have them lurk.

19) Take an existing bug and don’t tell them what it is, but have them try to reproduce it by pairing up with someone.

20) Remind them of other metaphors — like how testers are heroes like the Secret Service, Men in Black, bodyguards, or crime scene detectives.