Blog

Why open access proponents should care about plain language

Virtually all academic research in Canada receives support from one of three federal funding agencies—the Social Sciences and Humanities Research Council (SSHRC), Natural Sciences and Engineering Research Council (NSERC), or Canadian Institutes of Health Research (CIHR)—and researchers in other countries similarly depend to some extent on federal funding, whether from the National Institutes of Health in the U.S. or Research Councils UK. Advocates of open access (OA) have long argued that all of us contribute to this research as taxpayers and so should have access to its results—namely, the scholarly books and journals that report on the research—without having to pay for them. In fact, in 2007 CIHR became the first North American public research funder to mandate that all publications stemming from research it has funded must be published in an OA journal (known as “gold open access”) or archived for free use in an institutional repository (“green open access”). In fall 2013, Canada’s other two funding agencies followed suit.

There’s no question that the idea of OA is democratic and altruistic. (Whether OA can flourish given its financial constraints is another discussion.) Making peer-reviewed scholarly work available for free helps researchers broaden their reach and makes it easier for them to collaborate and build on the work of others. For the general public, free access to the latest research means that

  • people with health conditions can read up on the newest treatments
  • professionals who have left academia but still work in a related field can keep up to date
  • citizen scientists—such as hobbyist astronomers, bird watchers, mycologists, and the like—can learn from and contribute to our collective body of knowledge.

Making publications free, however, doesn’t go far enough. The way I see it, there are four levels of access, and if researchers fail to meet any one of them, they haven’t really met the overarching goal of OA, and we taxpayers are still getting shortchanged.

First level: Can readers find the work?

Discoverability, in this case, is an information science problem and falls mainly under the purview of librarians (major OA supporters), who have to make sure that their library’s catalogue links to all available OA journals and that they make their users aware of free, author-archived versions of papers published in traditional for-profit journals. However, plain language and clear communication have a role to play even at this level: authors are responsible for the title, abstract, and keywords of their articles. Journal publishers are usually reluctant to make substantive changes to the title and keywords in particular and rely on the authors to supply appropriate ones. All authors could benefit from plain language training that helps them craft succinct, unambiguous, and descriptive titles and distill their research into a handful of clear key terms that readers will likely search for.

Second level: Can readers get to the work?

Open access proponents focus mainly on this level, arguing that eliminating the price barrier would allow everyone to read and benefit from scholarly publications.

Third level: Can readers read the work?

Beyond promoting standard concerns about design for readability—appropriate fonts, effective use of white space, and so on—this level should accommodate alternative formats that promote universal accessibility, including access for people with print disabilities, a demographic that will grow as the academic community continues to become more inclusive.

Fourth level: Can readers understand the work?

For the most part, language and comprehension don’t concern the OA movement, but I think they are key: who cares if your paper is available for free if it’s impenetrable? Specialized disciplines will use their own specialized language, to be sure, but scholarly writing could undoubtedly do with fewer nominalizations and less convoluted language. Researchers who publish for OA also have to understand that their audience isn’t the same as it was twenty years ago. Academics today are overloaded with information and simply don’t have the time to decipher dense writing. What’s more, OA has opened up the readership to people outside of their field and to ordinary (taxpaying) citizens who want to become more informed.

OA has an important presence in developing nations, too, where researchers often don’t have the means to pay for journal subscriptions, and clear communication is doubly important in this case. Although OA journals can be based anywhere, they’re largely published in English, and many authors in developing nations are writing in English as a foreign language. If they all model their writing on stilted, confounding academese, the problem of impenetrable scholarly language becomes self-perpetuating.

Sadly, clarity in language can be among the lowest priorities for OA publishers. As many people in scholarly publishing have pointed out, including Laraine Coates of UBC Press, free for readers doesn’t mean free to produce. Publications—particularly book-length monographs—still cost (quite a bit of) money to make, and that money has to come from somewhere. Unfortunately, without being able to collect subscription fees from libraries and individual users, OA journals and publishers face more limited budgets, and many of them choose to forgo copy editing, leaving their articles riddled with stylistic and grammatical infelicities that can make a publication effectively unreadable. Peer review alone isn’t enough to ensure clarity and resolve ambiguities.

Ultimately, both open access and plain language movements have the same aim—to democratize information—and each would benefit from forging a stronger alliance with the other. I’m inspired by the story of Jack Andraka, who, at age fifteen and using resources he found online through Wikipedia, YouTube, and Google—including OA journal articles—developed a low-cost, accurate, paper-based test for a marker for pancreatic cancer. Not to take away from Andraka’s insight and resourcefulness, but I can’t help wondering how much more we could collectively accomplish if we all had access—on all levels—to the latest scholarly literature.

Katherine Barber’s PLAIN 2013 banquet talk

I was incredibly privileged to get to see Word Lady Katherine Barber’s speech at the PLAIN 2013 banquet. Because it was a banquet, I wasn’t rigorously taking notes—and even if I had been, I know I couldn’t do justice to her humour (short of reproducing a full transcript). Despite the casual levity of her talk, though, some of her points are very much worth discussing, so here is an extremely brief recap.

***

“We ideally and naively believe that language is for communication,” said Barber. In fact, language has always been used to impress others or to make the speaker feel superior. In the sixteenth century, people used to borrow fancy words from Latin (Shall we ebulliate some water for tea?), and in the eighteenth century, they borrowed fancy words from French. A secondary function of language, beyond simple communication, is to create an in-group and an out-group (hence teen slang).

Geographical variance also creates an in-group and an out-group, whether consciously or unconsciously. Barber gave some examples of how the English that Canadians use might baffle our visitors. What must they think about our morals, for example, when they walk down the street and see a sign that says “Bachelor for rent”? Or when they go to buy a newspaper and see “Loonies only”?

Language varies even within Canada, of course: in Thunder Bay, a shag is a kind of party—a cross between a shower and a stag. And in Manitoba, people would understand that if you promise to bring dainties, you’ll be bringing assorted sweets rather than frilly underwear.

From our old fort cheddar to our midget basketball teams, we use Canadianisms all the time in our writing and speech without a second thought, but we should bear in mind that what might be plain to us may not at all be plain to outsiders.

Kath Straub—Is it really plain? A case for content testing (PLAIN 2013)

Kath Straub of Usability.org showed attendees at PLAIN 2013 how important—and easy—user testing is for plain language projects.

She began with an example: the Donate My Data brochure was supposed to inform veterans about a program through which they could donate their health records to test health software. She and her team identified ten “must-know” facts that readers should glean from the brochure and hoped to hit a target of 80 percent recall. They tested the brochure using MTurk (Amazon Mechanical Turk), Amazon’s crowdsourcing marketplace, and found that reader recall didn’t meet their expectations. Some of the key facts they wanted to emphasize weren’t clear enough, and, as a result, the brochure wasn’t as persuasive as they’d hoped.

This example highlights the importance of testing, said Straub. “Here we were, plain language people thinking we were good at what we do—yet we were surprised with the results.” In the age of content, she explained, there are no guides, and we have to stop blaming the victim. Usability experts and content experts have to come together to create effective documents and tools.

Fortunately, comprehension testing sounds harder than it is. There are three types:

1. “Simple” comprehension testing

Did the users get the key facts? To see if they did, the user testing team should

  • agree on the facts
  • decide which are the most important
  • create a question for each fact
  • agree on the answers

Pre-test your questions, and expect to revise them several times. Good questions are hard to write—test takers remember strategies for answering multiple choice questions from school (e.g., the longer, specific answer is the right one)—so offer participants an alternative to guessing (e.g., “The brochure didn’t say”).

Test multiple versions of your comprehension test to narrow down which version might work best for which audiences.

When reporting results, it’s important to note not only how many people got a question right but also what those who got it wrong chose as answers.
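As a rough illustration of that kind of reporting (the questions, answer key, and responses here are hypothetical, not from Straub’s talk), a scorer might tally, for each question, both the share of correct answers and what the wrong answers were:

```python
from collections import Counter

# Hypothetical answer key: question ID -> correct choice.
# "E" stands in for the "The brochure didn't say" option
# offered as an alternative to guessing.
answer_key = {"q1": "B", "q2": "D", "q3": "A"}

# Hypothetical participant responses.
responses = [
    {"q1": "B", "q2": "D", "q3": "C"},
    {"q1": "B", "q2": "A", "q3": "A"},
    {"q1": "E", "q2": "D", "q3": "A"},
]

def score(answer_key, responses):
    """For each question, report the share who answered correctly
    and what those who got it wrong chose instead."""
    report = {}
    for q, correct in answer_key.items():
        choices = Counter(r[q] for r in responses)
        n = sum(choices.values())
        report[q] = {
            "percent_correct": 100 * choices[correct] / n,
            "wrong_answers": {c: k for c, k in choices.items() if c != correct},
        }
    return report
```

With the sample data above, each question comes out at about 67 percent correct, and the wrong-answer tallies show which distractor (or the “didn’t say” option) drew the rest—exactly the detail Straub says a results report should include.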

2. Confidence testing

Could users explain what they’ve just read to a family member or friend?

3. Persuasiveness testing

Users may understand the content, but will they change their behaviour accordingly? Understand their motivators, their concerns, and their barriers.

***

Straub has used MTurk for a lot of her user testing: participants get paid a small amount to answer an online survey. The advantage is that MTurk has a wide reach across the U.S., which translates to a lot of participants. The disadvantage is that you don’t have much control over your testing population. As such, your test should start with a filter—a comprehension test and “catch” questions (e.g., “Answer A even if you know that’s not the right answer”)—that can help narrow your pool to testers who are genuinely reading the questions. Over time, you create a “panel” of people who return to your studies. “You get what you invest and what you pay for,” said Straub.
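A minimal sketch of such a catch-question filter (the field names and data are my own assumptions, not Straub’s):

```python
# Catch questions have a known required response, e.g.,
# "Answer A even if you know that's not the right answer."
catch_key = {"catch1": "A", "catch2": "C"}  # hypothetical

# Hypothetical submissions from crowdsourced participants.
submissions = [
    {"id": "w1", "catch1": "A", "catch2": "C"},  # passed both catches
    {"id": "w2", "catch1": "B", "catch2": "C"},  # failed catch1
    {"id": "w3", "catch1": "A", "catch2": "D"},  # failed catch2
]

def genuine_readers(submissions, catch_key):
    """Keep only participants who answered every catch question as
    instructed -- i.e., who are actually reading the questions."""
    return [
        s for s in submissions
        if all(s.get(q) == required for q, required in catch_key.items())
    ]
```

Only submissions that pass every catch question survive the filter; the rest are excluded before the comprehension results are analyzed.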

Each testing session takes about a week, including setup and analysis.

Using tools like MTurk, Straub reiterated, crowdsourced testing can be quick, inexpensive, and effective. It doesn’t have to be complicated to be robust. Most importantly, she said, you don’t know something is plain language to your target audience unless you’ve tested it in your target audience.

Neil James and Ginny Redish—Writing for the web and mobiles (PLAIN 2013)

Veteran plain language advocates Neil James and Ginny Redish shared some eye-opening statistics about web and mobile use at the PLAIN 2013 conference that may prompt some organizations to reprioritize how they deliver their content. In 2013, for example, there were 6.8 billion mobile phones in use—almost one for every person on the planet. Half of those users were using their mobiles to go online. In 2014, mobiles are expected to overtake PCs for Internet use. Surprisingly, however, 44% of Fortune 100 companies have no mobile site at all, and only 14% of consumers were happy with their mobile experience. Mobile users are 67% more likely to purchase from a mobile-friendly site, and 79% will go elsewhere if the site is poor.

People don’t go to a website just to use the web, explained Redish. Every use of a website is to achieve a goal. When writing for the web, always consider

  • purpose: why is the content being created?
  • personas: who are the users?
  • conversations: what do users have to do to complete their task?

Always write to a persona, said Redish, and walk those personas through their conversations. Remember to repeat this exercise on mobile, too.

Consider the following areas when creating content:

  1. Audience
  2. Physical context
  3. Channels
  4. Navigation
  5. Page structure
  6. Design
  7. Expression

Words, noted the presenters, are only one element out of seven.

Some basic guidelines

Build everything for user needs

Again, think of who your users are and what they are trying to accomplish. Consider their characteristics when they use your site. Are they anxious? Relaxed? Aggressive? Reluctant? Keep those characteristics in mind when creating your content.

Consider the physical context

Mobiles present a different physical environment from tablets or PCs. The screens are smaller, and type and links on a typical website are too small to read comfortably. Maybe soon we’ll have sites with responsive design that change how content is wrapped depending on the device being used to read it, but for now, creating a dedicated mobile version of a site may be the best way to ensure that all users have an optimal experience regardless of the device they use.

Select the best channels

Smartphones, equipped with cameras, geolocators, accelerometers, and so on, are capable of a lot. We need to be creative and consider whether any of these functions could help us deliver content.

Simplify the navigation

Minimize the number of actions—clicks and swipes—that a user needs to do before they get to what they want. “People will tolerate scrolling if they’re confident they’ll get to what they want,” said James.

Prioritize the content on every page

Put the information users want at the top, and be aware that, for a given line length, a heading with more words will have smaller type, which can affect its perceived hierarchy.

Design for the small screen

Pay attention in particular to information in tables. Do users have to scroll to read the whole table? Do they need to see the whole table at once to get the information they need?

Cut every word you can

The amount of information you can put on a website may seem infinite, but for mobile sites, it’s best to be as succinct as possible. Pare the content down to only what users need.

What the heck’s happening in book publishing? (EAC-BC meeting)

Freelance writer, editor, indexer, and teacher Lana Okerlund moderated a lively panel discussion at the November EAC-BC meeting that featured Nancy Flight, associate publisher at Greystone Books; Barbara Pulling, freelance editor; and Laraine Coates, marketing manager at UBC Press. “There are lots of pronouncements about book publishing,” Okerlund began, “with some saying, ‘Oh, it’s doomed,’ and others saying that it’s undergoing a renaissance. What’s the state of publishing now, and what’s the role of the editor?”

Flight named some of the challenges in trade publishing today: publishers have had to scramble to get resources to publish ebooks, even though sales of ebooks are flattening out and in some cases even declining. Print books are also struggling: unit sales are up slightly, but because of the pressure to keep list prices low, revenues are down. Independent bookstores are gone, so there are fewer places to sell books, and Chapters-Indigo is devoting much less space to books. Review pages in the newspaper are being cut as well, leaving fewer places to publicize books. The environment is hugely challenging for publishers, explained Flight, and it led to the bankruptcy just over a year ago of D&M Publishers, of which Greystone was a part. “We’ve all risen from the ashes, miraculously,” she said, “but in scattered form.” Greystone joined the Heritage Group, while Douglas & McIntyre was purchased by Harbour Publishing, and many of the D&M staff started their own publishing ventures based on different publishing models.

The landscape “is so fluid right now,” said Pulling. “It changes from week to week.” There are a lot of prognosticators talking about the end of the traditional model of publishing, said Pulling. The rise of self-publishing—from its accessibility to its cachet—has led to a lot of hype and empty promises, she warned. “Everybody’s a publisher, everybody’s a consultant. It raises a lot of ethical issues.”

The scholarly environment faces some different challenges, said Coates. It can be quick to accept new things but sometimes moves very slowly. Because the main market of scholarly presses has been research libraries, the ebook issue is just now emerging, and the push is coming from the authors, who want to present their research in new ways that a book can’t really accommodate. She gave as examples researchers who want to release large amounts of their data or authors of Aboriginal studies titles who want to make dozens of audio files available. “Is confining ourselves to the book our mandate?” she asked. “And who has editorial control?”

Okerlund asked the panel if, given the rise in ebooks and related media, editors are now expected to be more like TV producers. Beyond a core of editorial skills, what other skills are editors expected to have?

“I’m still pretty old-fashioned,” answered Flight. “The same old skills are still going to be important in this new landscape.” She noted an interesting statistic: ebook sales are generally down, and ebooks for kids in particular fell 45% in the first half of 2013. As for other ebook bells and whistles, Greystone has done precisely one enhanced ebook, and that was years ago. They didn’t find the effort of that project worth their while. Coates agreed, saying, “Can’t we just call it [the enhanced ebook] a website at this point? Because that’s what it really is.” Where editorial skills are going to be vital, she said, is in the realm of discoverability. Publishers need editors to help with metadata tagging and identifying important themes and information. Scholarly presses are now being called upon to provide abstracts not just for a book but also for each chapter, and editors have the skills to help with these kinds of tasks.

Pulling mentioned a growing interest in digital narratives, such as Kate Pullinger’s Inanimate Alice and Flight Paths, interactive online novels that have readers contribute threads to the stories. Inanimate Alice was picked up by schools as a teaching tool and is considered one of the early examples of transmedia storytelling. “Who is playing an editor’s role in the digital narrative?” asked Pulling. “Well, nobody. That role will emerge.”

Okerlund asked if authors are expected to bring more to the table. Flight replied, “Authors have to have a profile. If they don’t, they are really at a huge disadvantage. We’re not as willing to take a chance on a first-time author or someone without a profile.” Pulling expressed concern for the authors, particularly in the “Wild West” of self-publishing. “What happens to the writers?” she asked. In the traditional publishing model, if you put together a successful proposal, the publisher will edit your book. But now “Writers are paying for editing. Writers are being asked to write for free. They need to be able to market; they need to know social media. It’s very difficult for writers right now. Everybody’s trying to get something for nothing.” She also said that although self-publishing offers opportunity in some ways, “there’s so much propaganda out there about self-publishing.” Outfits like Smashwords and Amazon, she explained, have “done so much damage. It’s like throwing stuff to the wall and seeing what sticks, and they’re just making money on volume.”

Pulling sees ethical issues not only in those business practices but also in the whole idea of editing a work to be self-published, without context. “It’s very difficult to edit a book in a vacuum,” she said. “You have to find a way to create a context for each book,” which can be hard when “you have people come to you with things that aren’t really books.” She added, “Writers are getting the message that they need an editor, but some writers have gotten terrible advice from people who claim to be editors. Book editing is a specialized skill, and you have to know about certain book conventions. Whether it’s an ebook or a print book, if something is 300,000 words long, and it’s a novel, who’s going to read that?” A good, conscientious book editor can help an author see a larger context for their writing and tailor their book to that, with a strong overall narrative arc. “It’s incumbent upon you as a freelancer to educate clients about self-publishing,” said Pulling. Coates added, “We have a real PR problem now in publishing and editing. We’ve gotten behind in being out there publicly and talking about what we do. The people pushing self-publishing are way ahead of us. I think it’s sad that writers can’t just be writers. I can’t imagine how writing must suffer because of that.”

Both Flight and Pulling noted that a chief complaint of published authors was that their publishers didn’t do enough marketing. But, as Pulling explained, “unless it’s somebody who is set up to promote themselves all the time, it’s not as easy as it looks.” Coates said that when it comes to marketing, UBC Press tries everything. “Our audiences are all over the place,” she explained. “We have everyone from readers and authors who aren’t on email to people who DM on Twitter. It’s subject specific: some have huge online communities.” Books built around associations and societies are great, she explained, because they can get excerpts and other promotional content to their existing audiences. She’s also found Twitter to be a great tool: “It’s so immediate. Otherwise it’s hard to make that immediate connection with readers.”

Okerlund asked the panel about some of the new publishing models that have cropped up, from LifeTree Media to Figure 1 Publishing and Page Two Strategies. Figure 1 (started by D&M alums Chris Labonté, Peter Cocking, and Richard Nadeau), Pulling explained, does custom publishing—mostly business books, art books, cookbooks, and books commissioned by the client. Page Two, said Pulling, is “doing everything.” Former D&Mers Trena White and Jesse Finkelstein bring their clients a depth of experience in publishing. They have a partnership with a literary agency but also consult with authors about self-publishing. They will also help companies get set up with their own publishing programs. Another company with an interesting model is OR Books, which offers its socially and politically progressive titles directly through their website, either as ebooks or print-on-demand books.

The scholarly model, said Coates, has had to respond to calls from scholars and readers to make books available for free as open-access titles. The push does have its merits, she explained: “Our authors and we are funded by SSHRC [the Social Sciences and Humanities Research Council of Canada]. So it makes sense for people to say, ‘If we’re giving all this money to researchers and publishers, why are they selling the books?’” The answer, she said, lies simply in the fact that the people issuing the call for open access don’t realize how many resources go into producing a book.

So where do we go from here? According to Pulling, “Small publishers will be okay, as long as the funding holds.” Flight elaborated: “There used to be a lot of mid-sized publishers in Canada, but one after another has been swallowed up or gone out of business.” About Greystone since its rebirth, Flight explained, “We’re smaller now. We’re just doing everything we’ve always done, but more so. We put a lot more energy into identifying our market.” She added, “It’s a good time to be a small publisher, if you know your niche. There’s not a lot of overhead, and there’s collegiality. At Greystone we’ve been very happy in our smaller configuration, and things are going very well.”

Pulling encouraged us to be more vocal and active politically. “One of the things we should do in Vancouver is write to the government and get them to do something about the rent in this city. We don’t have independent bookstores, beyond the specialty stores like Banyen or Kidsbooks. And at the same time Gregor Robertson is celebrating Amazon’s new warehouse here?” She also urged us to make it clear to our elected representatives how much we value arts funding. One opportunity to make our voices heard is coming up at the Canada Council’s National Forum on the Literary Arts, happening in February 2014.

Mark Hochhauser—How do our readers really think, understand, and decide—despite what they know? (PLAIN 2013)

Mark Hochhauser, who holds a PhD in psychology from the University of Pittsburgh, is a readability consultant based in Minnesota. Writing, reading, judging, and deciding, he explained at his PLAIN 2013 plenary session, are neurobiological processes that take place in different parts of the brain. Plain language can benefit some of them, but not all.

What can affect reading comprehension?

Word knowledge is critical for good comprehension. You need to know 85–90 percent of words to understand a document; to fully understand, you need to know 98–99 percent. Hochhauser was quick to add, “Common understanding of legal words is not the same thing as legal understanding of legal words.”

Vocabulary does not correlate with language comprehension or verbal fluency in adults with low literacy. Poor readers tend to recognize individual words but have not made the shift to stringing them together into sentences.

“All readers are not the same,” said Hochhauser. Reading, comprehension, and cognition are affected by

  • the aging brain, learning difficulties, and disorders like ADD/ADHD
  • how reading comprehension is measured: True/false questions, for example, are not good tests of comprehension, and some reading tests use only a few hundred words.
  • health problems: Acute coronary syndrome, intensive care, chemotherapy, metabolic syndrome, type 2 diabetes, drug addiction, traumatic brain injury, and menopausal transition can all affect how well we think.

How do we make decisions?

Daniel Kahneman, author of Thinking, Fast and Slow, noted a “law of least effort” in thinking and decision making. Hochhauser explained, “If there are several ways to achieve the same goal, readers will take the least demanding route.” We have two systems of thinking: logical and emotional. Decisions are emotional first, logical second. “We often feel a decision before we can verbalize it,” he said. Whereas logical decisions are slow, controlled, and require a lot of effort, emotional decisions are fast, automatic, and require little effort. Our brains can retain only so much information, said Hochhauser. Miller’s Law refers to 7 ± 2—the number of items we can retain in our short-term memory, but more recent research suggests we can retain only about 4 to 7, depending on age (peaking at 25–35).

When we make quick decisions, we rely on intuition, which Hochhauser defined as “knowledge without reasoning” and “knowledge without awareness.” We are influenced by heuristics—shortcuts to making decisions. “Affect heuristics” are tied to our emotional responses to previous experiences, and “effort heuristics” make us assign value to work based on the perceived effort that went into it. Our decisions are also strongly influenced by how information is framed: Would you prefer “75 percent lean” or “25 percent fat”? “Ninety-one percent employment” or “9 percent unemployment”?

Hochhauser concluded with an anecdote about an amusing bit of legalese in a letter he received that read, “Please read and understand the enclosed document.” The problem is, of course, as Hochhauser put it, “You cannot compel understanding.”

Stefan Dollinger on the changing expectations of the Oxford English Dictionary

Until December 24, 2013, Rare Books and Special Collections at the UBC Library is running an exhibition, The Road to the Oxford English Dictionary, that traces the history of English lexicography and the work that eventually led to the OED. To kick off this exhibition, Stefan Dollinger, assistant professor in the English department at UBC and editor-in-chief of The Dictionary of Canadianisms on Historical Principles, gave a free public lecture titled “Oxford English Dictionary, the Grimm Brothers, and Miley Cyrus: On the Changing Expectations of the OED—Past, Present, and (Possible) Futures.”

The OED, said Dollinger, bills itself as “the definitive record of the English language.” So what happens when you try to look up a recently coined word like “twerk”? The Oxford English Dictionary itself returns

No dictionary entries found for ‘twerk’.

but oxforddictionaries.com, the contemporary dictionary, gives this definition:

twerk
Pronunciation: /twəːk/
verb
[no object] informal

dance to popular music in a sexually provocative manner involving thrusting hip movements and a low, squatting stance:

just wait till they catch their daughters twerking to this song

twerk it girl, work it girl

Will words like “twerk” and “bootylicious” eventually make their way into the OED? We don’t usually expect these kinds of neologisms to become accepted by the dictionary so quickly, but earlier this year, the OED quietly added the social media sense of the word “tweet,” breaking its rule that a word has to be current for ten years before it’s considered for inclusion—a move that possibly signals a change in our expectations of the dictionary.

Dollinger took a step back to the roots of the OED. As much as Oxford University Press would like to claim that the dictionary was a pioneering publication, a lot of the groundwork for the kind of lexicography used to put it together had been laid by Jacob and Wilhelm Grimm a few years earlier when they published the first volume of their German language dictionary (Deutsches Wörterbuch von Jacob Grimm und Wilhelm Grimm). Nor is the OED the world’s largest monolingual dictionary; that distinction belongs to the Woordenboek der Nederlandsche Taal (Dictionary of the Dutch language), with over 430,000 entries running to almost 50,000 pages. Is the OED the most historically important dictionary? Dollinger offered the contrasting example of the Dictionary of American Regional English, a project of the American Dialect Society, which used detailed questionnaires to collect rigorous regional, social, and historical data about words used in American English. Although its number of entries pales in comparison with the OED’s, the level of detail is unparalleled and probably more important to researchers of the English language.

Still, there’s no denying that the OED has been extremely influential and is still considered an authoritative resource. Dollinger gave us a run-down of the dictionary’s history.

In November 1857, Richard Chenevix Trench, Dean of Westminster Abbey, addressed the Philological Society in London in a talk later published as On Some Deficiencies in Our English Dictionaries. In this publication, which planted the seeds of the OED, Trench outlined seven problems with existing dictionaries:

I. Obsolete words are incompletely registered; some inserted, some not; with no reasonable rule adduced for the omission of these, the insertion of those other.

II. Families or groups of words are often imperfect, some members of a family inserted, while others are omitted.

III. Oftentimes much earlier examples of the employment of words exist than any which our Dictionaries have cited; indicating that they were earlier introduced into the language than these examples would imply; and in case of words now obsolete, much later, frequently marking their currency at a period long after that when we are left to suppose that they passed out of use.

IV. Important meanings and uses of words are passed over; sometimes the later alone given, while the earlier, without which the history of words will be often maimed and incomplete, or even unintelligible, are unnoticed.

V. Comparatively little attention is paid to the distinguishing of synonymous words.

VI. Many passages in our literature are passed by, which might be usefully adduced in illustration of the first introduction, etymology, and meaning of words.

VII. And lastly, our Dictionaries err in redundancy as well as in defect, in the too much as well as the too little; all of them inserting some things, and some of them many things, which have properly no claim to find room in their pages.

Trench’s recommendations included using quotations to show usage, a practice now known as the “OED method” but that should, according to Dollinger, perhaps more accurately be termed the “Grimm method,” seeing as the Grimms used the same approach for their Wörterbuch. Trench also wrote

A Dictionary, then, according to that idea of it which seems to me alone capable of being logically maintained, is an inventory of the language… It is no task of the maker of it to select the good words of a language. If he fancies that it is so, and begins to pick and choose, to leave this and to take that, he will at once go astray. The business which he has undertaken is to collect and arrange all the words, whether good or bad, whether they commend themselves to his judgment or otherwise, which, with certain exceptions hereafter to be specified, those writing in the language have employed.

This most progressive thought of Trench’s echoes the Grimms, who, three years earlier, in their 1854 Wörterbuch, had written

“And here the difference between adorned language and vulgar (raw) language comes into effect… Should the dictionary list the indecent words or should they be left out?… The dictionary, if it is supposed to be worth its salt, is not here to hide words, but to show them… one must not try to eradicate such words and expressions.”

Trench, incidentally, never acknowledged any of the Grimms’ innovations, many of which the OED’s lexicographers (consciously or unconsciously) borrowed.

In 1879, Oxford University Press appointed James A.H. Murray as editor-in-chief of the OED, and he edited more than half of the entries in the first edition. In 1928, the dictionary was published in twelve volumes, at which point it already needed updating. William Craigie and C.T. Onions edited a supplement, published in 1933; the thirteen volumes together are referred to collectively as OED1. Edmund Weiner and John Simpson co-edited the dictionary’s second edition, OED2, which was published in print in 1989 and on CD-ROM in 1992.

Did these editors follow Trench’s suggestion that the OED be a comprehensive inventory of the language? Dollinger noted that colonial bias was pervasive in Victorian times and, consequently, in the OED, and despite the editors’ best intentions of keeping the dictionary up to date, likely more than 50 percent of the original entries remain unchanged. Dollinger argued that perhaps the tagline “The definitive record of the English language” should more accurately read “The definitive record of the English language (as seen by Oxford [mostly] men largely of the [upper] middle class).” For instance, the dictionary has long been criticized for relying on literary texts for examples of usage. Dollinger offered the example of “sea-dingle,” whose OED entry reads as follows:

sea-dingle n. (now only arch.) an abyss or deep in the sea.

a1240 Sowles Warde in Cott. Hom. 263 His runes ant his domes þe derne beoð ant deopre þen eni sea dingle [= abyss of the sea: cf. Ps. xxxv. 6 Vulg. Judicia tua abyssus multa].

c1931 W.H. Auden in M. Roberts New Signatures (1932) 30 Doom is dark and deeper than any sea-dingle.

Yet, as Seth Lerer has noted, W.H. Auden (an Oxford man) “mined the OED for archaic, pungent words.” Does his use of the word really reflect common usage? Not, said Dollinger, if you look at the Urban Dictionary entry for the term:

1. sea-dingle

A sex act involving two people in which salmon roe is used as lubrication facilitating anal penetration by a penis.

Yeah, I was out camping with my wife. I got lucky when we went fishing and then again when we went back to the tent. She was totally down for a sea-dingle.

(This practice of recycling old terms in a “reification of literary writers” brought to my mind this XKCD cartoon on citogenesis.)

Dollinger pointed out a problem with the way the OED describes itself:

the Oxford English Dictionary is an irreplaceable part of English culture. It not only provides an important record of the evolution of our language, but also documents the continuing development of our society.

What is “English culture,” and who is “our”? In other words, who owns English? As early as the late 1960s, linguist David Crystal noted that, in order to be a comprehensive record of English, the OED would have to include World Englishes. Today, people who speak English as a second language outnumber native speakers five to one, and they use a kind of global English for trade and other interactions. Who are native speakers to say that their terms—handy for “cell phone” in Euro-English, prepone for “rescheduling to an earlier time” in Indo-English, and batchmate for “cohort member” in Philippine English—aren’t proper English usage?

As far as Dollinger is concerned, the OED is at a crossroads and can go down one of three paths:

  1. Take an Inner Circle focus (i.e., UK, Australia, New Zealand, North America, South Africa).
  2. Retreat to focus on British English only (which would in itself be a challenging task, owing to the variations of English spoken across the country).
  3. Include all World Englishes, in which case the dictionary should treat the Inner, Outer, and Expanding circles on an equal footing. If its aim is truly to be the “principal dictionary of record for the English language throughout the lifetime of all current users of the language,” as the preface to the third edition of the OED claims, this path is the only logical choice.

Dollinger closed by encouraging all of us to check out the Road to the OED exhibition at Rare Books and Special Collections.

Karen Schriver—Plain by design: Evidence-based plain language (PLAIN 2013)

We may be good at the how of plain language, but the why can be more elusive. To fill in that missing chunk of the puzzle, information design expert Karen Schriver has scoured the empirical research on writing and design published between 1980 and 2010. She gave the PLAIN 2013 audience an eye-opening overview of her extensive, cross-disciplinary review, debunking long-held myths in some instances and reaffirming our practices in others.

Audiences, readers, and users

In the 1980s, we classified readers and users as experts versus novices, a distinction that continues to haunt the plain language community because some people assume that we “dumb down” content for lower-level readers. Later on we added a category of intermediate readers, but Schriver noted that we have to refine our audience models.

What we thought

A good reader is always a good reader.

What the research shows

Reading ability depends on a huge number of variables, including task, context, and motivation. Someone’s tech savvy, physical ability, and even assumptions, feelings, and beliefs can influence how well they read.

Nominalizations

What we thought

Processing nominalizations (versus their equivalent verbs or adjectives) takes extra time.

What the research shows

It’s true, in general, that most nominalizations do “chew up working memory,” as Schriver described, because readers have to backtrack and reanalyze them. However, readers have little trouble when nominalizations appear in the subject position of a sentence and refer to an idea in the previous sentence.

Conditionals

What we thought

Conditionals (if, then; unless, then; when, then) break up text and help readers understand.

What the research shows

A sentence with several conditionals is hard for people to process, particularly if they appear at the start. Leave them till the end or, better yet, use a table.

Lists

What we thought

Lists help readers understand and remember, and we should use as many lists as possible.

What the research shows

Lists can be unhelpful if they’re not semantically grouped. If an entire document consists of lists, we can lose important hierarchical cues that tell us what content to prioritize.

Text density

What we thought

A dense text is hard to understand.

What the research shows

It’s true! But there’s a nuance: we’re used to thinking about verbal density, which turns readers off after they begin reading. Text that is dense visually can make people disengage before reading even begins.

Serif versus sans-serif

What we thought

For print materials, serif type is better than sans-serif. Sans-serif is better for on-screen reading.

What the research shows

When resolution is excellent, as it is on most screens and devices nowadays, serif and sans-serif are equally legible and easy to read. Factors that are more important to readability include line length, contrast, and leading.

Layout and design

What we thought

Layouts that people prefer are better.

What the research shows

We prefer what we’re used to, not necessarily what makes us perform better. This point highlights why user testing is so important.

Impressions and opinions

What we thought

It takes sustained reading to get an impression of the content.

What the research shows

It takes only 50 milliseconds for a reader to form an opinion, and that first impression tends to persist.

Technology

What we thought

Content is content, regardless of medium.

What the research shows

Reader engagement is mediated by the technologies used to display the content.

Teamwork in writing and design

What we thought

Writing and design are largely solitary pursuits.

What the research shows

Today, both are highly collaborative. We now have an emphasis on editing and revision rather than on creation.

***

Evidence-based plain language helps us understand the reasons behind our principles and practices, allowing us to go beyond intuition in improving our work and developing expertise. We can also offer up this body of research to support our arguments for plain language and convince clients that our work is important and effective. What Schriver would like to see (and what the plain language community clearly needs) is a repository for this invaluable research.