Kath Straub—Is it really plain? A case for content testing (PLAIN 2013)

Kath Straub of Usability.org showed attendees at PLAIN 2013 how important—and easy—user testing is for plain language projects.

She began with an example: the Donate My Data brochure was supposed to inform veterans about a program through which they could donate their health records to test health software. She and her team identified ten “must-know” facts that readers should glean from the brochure and hoped to hit a target of 80 percent recall. They tested the brochure using Amazon’s crowdsourcing marketplace, Mechanical Turk (MTurk), and found that reader recall didn’t meet their expectations. Some of the key facts they wanted to emphasize weren’t clear enough, and, as a result, the brochure wasn’t as persuasive as they’d hoped.

This example highlights the importance of testing, said Straub. “Here we were, plain language people thinking we were good at what we do—yet we were surprised with the results.” In the age of content, she explained, there are no guides, and we have to stop blaming the victim. Usability experts and content experts have to come together to create effective documents and tools.

Fortunately, content testing sounds harder than it is. Straub described three types:

1. “Simple” comprehension testing

Did the users get the key facts? To see if they did, the user testing team should

  • agree on the facts
  • decide which are the most important
  • create a question for each fact
  • agree on the answers

Pre-test your questions, and expect to revise them several times. Good questions are hard to write: test takers remember strategies for answering multiple-choice questions from school (e.g., that the longer, more specific answer is usually the right one), so offer participants an alternative to guessing (e.g., “The brochure didn’t say”).

Test multiple versions of your comprehension test to narrow down which version might work best for which audiences.

When reporting results, it’s important to note not only how many people got a question right but also which answers those who got it wrong chose instead.
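Straub didn’t prescribe any particular tooling for this analysis, but as a rough sketch, here is how that kind of report might be tallied once responses are collected. The question names, answer key, and responses below are hypothetical placeholders, and the 80 percent threshold echoes the recall target from the Donate My Data example.

```python
from collections import Counter

# Hypothetical answer key and responses; in practice these would come
# from your survey export (e.g., an MTurk results file).
answer_key = {
    "q1_who_can_donate": "B",
    "q2_what_is_donated": "A",
    "q3_how_to_enroll": "D",
}

responses = [
    {"q1_who_can_donate": "B", "q2_what_is_donated": "A", "q3_how_to_enroll": "C"},
    {"q1_who_can_donate": "B", "q2_what_is_donated": "C", "q3_how_to_enroll": "D"},
    {"q1_who_can_donate": "A", "q2_what_is_donated": "A", "q3_how_to_enroll": "The brochure didn't say"},
]

target = 0.80  # the 80 percent recall target from the example above

for question, correct in answer_key.items():
    answers = [r[question] for r in responses]
    n_correct = sum(a == correct for a in answers)
    recall = n_correct / len(answers)
    # Tally which wrong answers were chosen, not just how many missed it.
    wrong = Counter(a for a in answers if a != correct)
    flag = "OK" if recall >= target else "below target"
    print(f"{question}: {recall:.0%} correct ({flag}); wrong answers chosen: {dict(wrong)}")
```

Reporting the wrong-answer tallies alongside the recall rate shows not just that a question failed but which misreading drew readers away.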

2. Confidence testing

Could users explain what they’ve just read to a family member or friend?

3. Persuasiveness testing

Users may understand the content, but will they change their behaviour accordingly? Understand their motivators, their concerns, and their barriers.

***

Straub has used MTurk for a lot of her user testing: participants get paid a small amount to answer an online survey. The advantage is that MTurk has a wide reach across the U.S., which translates to a large pool of participants. The disadvantage is that you don’t have much control over your testing population. As such, your test should start with a filter, such as a comprehension check and “catch” questions (e.g., “Answer A even if you know that’s not the right answer”), to narrow your pool to testers who are genuinely reading the questions. Over time, you build a “panel” of people who return for your studies. “You get what you invest and what you pay for,” said Straub.
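MTurk results can be exported and filtered programmatically before analysis. The sketch below is a hypothetical illustration of that filtering step, not something from Straub’s talk: the file name, column names, and pass mark are assumptions, and the catch-question check simply drops workers who didn’t follow the instructed answer.

```python
import csv

# Hypothetical column names for an exported results file; adjust to
# match your own survey's fields.
CATCH_QUESTIONS = {
    "catch_1": "A",   # "Answer A even if you know that's not the right answer"
    "catch_2": "D",
}
MIN_SCREENER_SCORE = 2   # assumed pass mark on a short comprehension screener

def passes_filter(row):
    """Keep only workers who answered the catch questions as instructed
    and cleared the screener threshold."""
    caught = all(row[q] == expected for q, expected in CATCH_QUESTIONS.items())
    screened = int(row["screener_score"]) >= MIN_SCREENER_SCORE
    return caught and screened

with open("mturk_results.csv", newline="") as f:
    rows = list(csv.DictReader(f))

genuine = [r for r in rows if passes_filter(r)]
print(f"Kept {len(genuine)} of {len(rows)} responses after filtering.")
```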

Each testing session takes about a week, including setup and analysis.

With tools like MTurk, Straub reiterated, crowdsourced testing can be quick, inexpensive, and effective. It doesn’t have to be complicated to be robust. Most importantly, she said, you don’t know that something is plain language to your target audience unless you’ve tested it with your target audience.
