I’m reviewing transcripts from interviews we did with customers last year and came across a nice example of interview technique.
The hardest thing about customer interviews is knowing where to dig. An effective interview is more like a friendly interrogation. We don’t want to learn what customers think about the product, or what they like or dislike — we want to know what happened and how they chose. What was the chain of cause and effect that made them decide to use Basecamp? To get those answers we can’t just ask surface questions, we have to keep digging back behind the answers to find out what really happened.
Here’s a small example.
Doris (name changed) works in the office at a construction company. She had “looked for a way to have everything [about their projects] compiled in one area” for a long time. All the construction-specific software she tried was too “in depth.” She gave up her search. Some months later, she and her co-worker tried some software outside the construction domain: Monday and ClickUp.
I asked: How did you get to these options?
She said she and her co-worker did Google searches.
I asked: Why start searching again? You tried looking for software a few months before. What happened to make you look again?
She said they had quite a few new employees. And “needed a place for everything to be.”
That sounds reasonable enough. We could have moved on and started talking about how she got to Basecamp. But instead of accepting that answer, I kept digging.
Ok so you hired more employees. Why not just keep doing things the same way?
“It was an outdated system. It’s all paper based. And this isn’t a paper world.”
We have our answer, right? “Paper based” is the problem. No, not yet. That answer doesn’t tell us enough about what to do.
As designers, that’s what we need to know. We need to understand the problem well enough to decide what, specifically, to build. “Paper based” sounds like a problem, but what does it tell us? If we had to design a software fix for her right now, where would we start? What would we leave out? How would we know when we had improved the situation enough to stop and move on to something else?
So I asked one of my favorite questions:
What was happening that showed you the way you were doing things wasn’t working anymore?
This question is extremely targeted and causal. It’s a very simple question that invites her to describe the problem in a way that is hard, factual, time-bound, contextual, and specific — without any analysis, interpretation, speculation, or rationalization. Just: What happened. What did you see. What was wrong.
“The guys would just come ask for the same information over and over again. And it was taking up time for me. . . . They shouldn’t have to ask me some of these questions. You get asked 20, 30 stupid questions and try to go back to something you have to pay attention to . . . you’re working numbers and money you need to be paying attention to what you’re doing.”
Aha. Now we’re getting somewhere. She holds all the information. The guys in the field need the information. She needs to give them access to the information so they can look it up themselves. Then she’ll stop getting interrupted and she can focus on her own work.
This dramatically narrows down the supply side and begins to paint the outlines of actionable design requirements.
I check to see if I understand the causality here.
Was the number of interruptions worse after you hired more people?
“Oh yeah, absolutely.”
Because we kept digging for causality, we got to an understanding of the situation — what caused it, what went wrong, what progress meant to her, and why she was saying “yes” and “no” to different options as she evaluated them.
For more on this interview approach I recommend checking out Competing Against Luck by Clayton Christensen.