Small data, big impact: Making the case for qualitative research in libraries

Artefacto · 4 min read
Surveys are undoubtedly a popular way to collect data from service users. And while they can seem easy, designing a good questionnaire is harder than it looks. You're not just selecting what you want to ask, but thinking about the right question format, sample size and how you'll interpret the results.

But even a well-designed survey has its limits. Most surveys lean heavily on quantitative data for their ease of analysis, but this can be deceptive. As design researcher Erika Hall warns, "Surveys are the most dangerous research tool — misunderstood and misused." The ease of creating and distributing them, and tallying up the results, can make findings feel true and valid even when they're misleading.

There's also the burden on your users to consider. Survey fatigue is real. Because surveys are easy to put together, they're often a first resort. But your users are already busy, and another form to fill in might not be especially welcome.

So what's the alternative? Qualitative data is often overlooked, which is a shame, because it gives you much deeper insight into users and their experiences. Better still, it lets people describe things in their own words, rather than choosing from options you've already decided for them. That's why, when it comes to surveys, we advocate for a single, well-chosen open-ended question.

Satisfaction and impact are not the same thing

Before you can ask the right question, it helps to be clear about what you're actually trying to measure. Most library evaluation sits in one of four categories: activity (what happened and when), attendance (who came and how many), satisfaction (did people enjoy it or find it useful), and behavioural change (what people actually did differently as a result). The first three are relatively easy to capture with closed questions and tick-boxes. Capturing real impact, the actual outcomes, is where things get harder.

It's easy to confuse satisfaction with impact, but they're not the same thing. Knowing that 85% of attendees rated a workshop “good” or “excellent” tells you they left feeling positive. It doesn't tell you whether they applied anything they learned, changed how they work or felt more confident.

Why open questions work better for capturing impact

When it comes to measuring impact, we always advocate for a single open-ended question. No ratings. No multiple choice. Just THE question you really want to ask your users about the difference your library, your services or a particular intervention has made.

Designing a question that captures real outcomes means resisting the urge to ask about your service and instead asking about your user. Rather than “how would you rate today's session?”, try something like “what will you do differently as a result of today?” or “what's one thing you learned that you'll actually use?” These questions ask people to reflect, and reflection is where meaningful responses come from.

As well as being kinder on your participants, this approach has some real qualitative research benefits. Open-ended questions reveal things you didn't know to ask about - scaled questions can only measure what you already thought mattered. They capture the “why” and “how” behind people's experiences, which is crucial for understanding your impact. They avoid leading respondents towards answers you've already assumed. And the real quotes and stories they generate are genuinely powerful when you're making the case to funders or decision makers.

Scaled questions are often easier to analyse, and they produce neat, tidy data. But that tidiness can give you false confidence - you might be measuring the wrong thing entirely. Open questions are messier, but often more truthful.

Qualitative doesn’t mean unmeasurable

It's also worth saying: qualitative doesn't mean unmeasurable. That's a common misconception. Gathering rich, open responses doesn't stop you identifying patterns, drawing conclusions or demonstrating impact in a meaningful way. And keeping your survey to the one question you really want answered can make the analysis more manageable than you might expect.
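To make that concrete, here's a minimal sketch of how patterns in open responses might be tallied once you've read through them and identified recurring themes. Everything here is invented for illustration: the responses, the theme names and the keywords are all hypothetical, and in real thematic analysis you would derive themes from reading the responses, not by guessing keywords up front.

```python
from collections import Counter

# Hypothetical answers to "What will you do differently as a result of today?"
responses = [
    "I'll use advanced search filters when looking for journal articles",
    "Set up citation alerts so I stop missing new papers",
    "Try the reference manager the librarian demonstrated",
    "Use search filters and alerts for my literature review",
]

# Illustrative themes mapped to keywords, chosen after reading the responses.
themes = {
    "search skills": ["search", "filter"],
    "staying current": ["alert"],
    "tools": ["reference manager"],
}

# Count how many responses touch each theme (a response can match several).
counts = Counter()
for text in responses:
    lowered = text.lower()
    for theme, keywords in themes.items():
        if any(keyword in lowered for keyword in keywords):
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: mentioned in {n} of {len(responses)} responses")
```

Even a rough tally like this turns a pile of free-text answers into something you can report ("half of attendees said they'd change how they search"), while the original quotes stay available as evidence.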

A single, well-crafted open-ended question won't give you a dashboard of statistics. But it will give you something arguably more valuable: evidence of change, in people's own words.