Blog

Craig Morrison—10 fixes for improving your product’s UX (webinar)

UserTesting.com hosted a free webinar featuring usability consultant Craig Morrison of Usability Hour. Morrison began as a web designer, focusing on visual design, but he soon discovered that aesthetics alone aren’t enough to ensure a good user experience. Freelancers often get into the habit of satisfying only their clients’ demands; once they finish one project, they move on to the next, which means they never get a chance to refine the user experience. But positive user experiences translate into user recommendations and business growth, so it’s a good idea to help clients see the importance of placing user needs ahead of their own.

Morrison outlined ten of the most common UX mistakes and how to fix them:

1. Focusing on impressive design instead of usable architecture

It’s tempting to want to make a site that will wow people with its visuals, but aesthetics alone don’t provide value. Morrison offered Craigslist as an example of a plain-looking site that remains popular because of its great functionality. He recommends bringing in a UX consultant first to plan a usable content structure, then focusing on visual design.

2. Not removing unvalidated features

If your site has features that nobody is using, all it’s doing is cluttering up the site and making it harder for users to find what they really want from you.

3. Listening to user ideas

This is not to say that you shouldn’t listen to your users at all; listening to their problems is valuable, but often what users suggest as solutions wouldn’t work well. Morrison suggests that you start user testing and watch how people use the product. Seeing where they falter will highlight what you need to work on.

Polling your audience is also a good way to get feedback, particularly for new features, but phrase your questions carefully. You’re looking more for users’ motivations for using a particular feature, as opposed to their opinions about which option they’d prefer.

4. Forcing people to sign up without offering any value

Your landing page can’t be just a logo and a sign-up form. People aren’t willing to exchange their information for nothing. Instead, show why your product is valuable before they sign up. This also goes for credit card numbers: asking for that information during a free trial will turn people off before they’ve even tried your product.

5. Taking user feedback personally

If you dismiss negative feedback by saying “they just don’t get it” or “users are dumb,” you’re sabotaging your business. Complaints are opportunities to improve UX.

6. Poorly designed search function

Half of web users are search oriented and won’t browse. Morrison admits that this advice may sound like a cop-out, but “follow proper guidelines for designing a usable search function.” There are best practices out there, and he’s written about some of them on his blog.

7. Not optimizing for mobile

“Mobile traffic on the web is 20% and rising,” said Morrison, and you’re driving that traffic away if your site isn’t optimized. People won’t voluntarily spend time zooming and panning through a website meant for larger screens. Invest time and money in a simple mobile site. Morrison says the solution you choose is up to you, but he’s found CSS media queries to be a simple way to ensure your content displays the way you want it to, and he prefers them over responsive design.

8. Not offering users help

Despite your best efforts to design a user-friendly site, some people will inevitably get lost or confused and, out of frustration, won’t come back. Morrison suggests buttressing good content architecture with a searchable wiki and an FAQ page. How-to videos are great, as is live support, if you can offer it.

9. No emotional connection between brand and users

People who feel emotionally connected to your brand will have a better experience. If your users aren’t familiar and comfortable with your brand, they’ll be quick to dislike you for even the smallest flaws. Focus on building your brand early, and get buy-in from all of your employees. For example, if part of what you offer is excellent customer service, ensure that all of your employees live up to that expectation.

10. Not including user onboarding

A user’s first impression is key, and if they get frustrated with your product, they’ll quit and never come back. You’ve sunk a lot of effort into attracting a new user, and you’ll lose it all if you can’t activate them into a long-term user. User onboarding is a way of teaching users how to use your product while demonstrating its value.

At the same time, Morrison recognizes that not everybody loves onboarding. Always offer users the ability to skip it if they’re confident using your product, and make sure they can return to the onboarding whenever they need to brush up.

According to Morrison, real business growth through UX comes from

  1. getting traffic to the landing page
  2. converting that traffic
  3. activating new users to become long-lasting users

Morrison will be offering an online course through his website to teach people how to meet those goals using great UX. He’s also written an ebook, 5-minute UX Quick Fixes, available free on his site. The webinar I attended will be posted in a couple of weeks at UserTesting.com.

***

I liked that although Morrison’s advice is obviously more geared toward websites or apps, a lot of it applies to other kinds of documents as well. I saw the following parallel mistakes for plain language documents (numbering corresponds to list above):

1. Focusing on aesthetics over functionality. Aesthetic design is important, but usability is paramount: do your choices regarding type, graphics, headings, and white space make the document easier to read and understand?

2. Including too much “nice to know” information. In most plain language documents, you should give readers only what they need to know.

3. Listening to users? This point of Morrison’s gave me pause, but his advice to pay attention to users’ problems rather than their suggested solutions makes sense. For instance, users who consistently fill in part of a form incorrectly may not pinpoint poor layout as the reason, but a plain language expert might.

5. Taking user feedback personally. This problem probably applies to the client more than to the plain language writer or editor, but the editor may have to go to bat for a user and convince a reluctant client that certain changes have to be made.

6. Poorly designed search function. A good search function is a must-have for websites and apps. The print analogue is an excellent table of contents, descriptive and logical headings and subheadings, and a thorough index.

Have I missed other parallels? Let me know in the comments.

Informed-consent documents: Where legalese meets academic jargon

Ever since the Nuremberg Trials put on display the atrocities of human experimentation at the hands of Nazi doctors, the concept of informed consent has been a cornerstone of both medical treatment and biomedical research. [1] Although no country has adopted the Nuremberg Code in its entirety, most Western nations have acknowledged the importance of informed consent as a pillar of research and medical ethics. But if study participants or patients don’t understand the documents that describe the study protocol or treatment plan, are they truly informed?

For human research subjects, the U.S.’s Code of Federal Regulations states:

46.116 General requirements for informed consent

Except as provided elsewhere in this policy, no investigator may involve a human being as a subject in research covered by this policy unless the investigator has obtained the legally effective informed consent of the subject or the subject’s legally authorized representative. An investigator shall seek such consent only under circumstances that provide the prospective subject or the representative sufficient opportunity to consider whether or not to participate and that minimize the possibility of coercion or undue influence. The information that is given to the subject or the representative shall be in language understandable to the subject or the representative. [Emphasis added]

In “Public Health Literacy in America: An Ethical Imperative,” Julie Gazmararian and her co-authors note that one of the “Characteristics of a health-literate America” is that “Informed consent documents used in health care are written in a way that allow people to give or withhold consent based on information they need and understand.” [2]

Unfortunately, actual informed-consent materials fall far short of promoting understanding. Informed-consent documents are used both for legal reasons and to explain research or treatment protocols; as a result, they’re susceptible to being filled with legalese and medicalese—the worst of both worlds. Giuseppina Terranova and her team reviewed consent forms used in various imaging procedures and found that

At qualitative assessment by consensus of the expert panel, the informed consent forms were complex and poorly organized, were written in a jargon style, and contained incomplete content (not including information about treatment options, long-term radiation risk and doses); for outcome probabilities, relevant information was not properly highlighted and easy to find. [3]

Having a complex informed-consent form rife with legalese only stirs distrust among participants and patients. In “Improvement of Informed Consent and the Quality of Consent Documents,” Michael Jefford and Rosemary Moore write:

Informed consent has two main aims: first, to respect and promote participants’ autonomy; and second, to protect them from potential harm. Provision of information in an understandable way lends support to both these aims…

The written informed-consent document (ie, consent form) is an important part of the requirement to disclose and advise participants of the details of a proposed trial. Although the form has been said to give “legal and symbolic documentation of an agreement to participate,” the length and complexity of informed-consent documents hinder participant understanding. Viewing the consent form mainly as a legal document tends to hinder attempts to create reader-friendly documents: “many sponsors and institutions appear to view them primarily as a legal instrument to protect them against litigation.” [4]

Ironically, “The high reading levels of most such forms precludes this understanding, increasing rather than limiting legal liability.” [5] What’s more, if a consent document is hard to understand, research participants will believe researchers are merely covering their own asses rather than prioritizing the participants’ well-being.

The obvious solution to this problem is to use plain language in informed-consent documents. In a test of a standard versus modified (plain language) pediatric consent form for parents, Alan R. Tait and co-investigators found that

Understanding of the protocol, study duration, risks, and direct benefits, together with overall understanding, was greater among parents who received the modified form (P<.001). Additionally, parents reported that the modified form had greater clarity (P = .009) and improved layout compared with the standard form (P<.001). When parents were shown both forms, 81.2% preferred the modified version. [6]

Further, not only do plain language statements (PLS) protect research subjects and patients, but they also benefit researchers:

In the most practical sense, a commitment to producing good quality PLS leads to faster ethics approval—an outcome that will delight researchers. However, the real reward that comes with commitments to high quality PLS is the knowledge that parents and participants are properly informed and that researchers are contributing to a positive change in meeting the information requirements of parents and young research participants.

Plain language information statements need to be clearly understood by research subjects if the ethics process for research approval is to fulfil its objective. [7]

I see an opportunity for plain language experts to advocate for informed consent by promoting clear communication principles at research institutions and health authorities. Although most institutional research ethics boards (REBs) have guidelines for consent forms that recommend using lay language, I would guess that most REB members are unfamiliar with the plain language process. Institutional REBs, such as the one at Simon Fraser University, consist of not only faculty members and students but also members of the wider community, so even if you are unaffiliated with the institution, you may still be able to join an REB and advocate for plain language from the inside. If you’d rather not commit to sitting on an REB, you might want to see if you could give a presentation at an REB meeting about plain language and clear communication principles.

In my ideal world, a plain language review of consent documents would be mandatory for ethics approval, but biostatistician and current chair of SFU’s REB, Charlie Goldsmith, warns that adding a further administrative hurdle to ethics approval probably wouldn’t fly. Most researchers already see the ethics review process as burdensome and a hindrance to their work. But if you could convince researchers that a plain language review before submission to the REB could accelerate approval, as Green and co-investigators had found, you might help open up opportunities for plain language advocates to work with researchers directly to develop understandable consent documents from the outset.

That said, plain language informed-consent forms address only one facet of the interaction and relationship between researcher and study participant, or between clinician and patient. Jefford and Moore write:

There are reasons for putting effort into the production of plain-language participant information and consent forms. However, evidence suggests that these forms should not be relied on solely to ensure that a person understands details about a trial. Plain-language forms should be seen as part of the process that aims to achieve meaningful informed consent. [8]

In other words, clear communication initiatives should extend beyond written materials to in-person interactions: researchers and clinicians should receive training in plain language debriefing and in techniques such as “teach-back” (asking someone to repeat the information they’ve just been given in their own words) to ensure that they are fulfilling their ethical obligations and are doing all they can to help patients and study participants become truly informed.

To learn more about research ethics, including informed consent, take the Course on Research Ethics, developed by Canada’s Panel on Research Ethics.

Sources

[1] JB Green et al., “Putting the ‘Informed’ into ‘Consent’: A Matter of Plain Language,” Journal of Paediatrics and Child Health 39, no. 9 (December 2003): 700–703, doi:10.1046/j.1440-1754.2003.00273.x.

[2] Julie A Gazmararian et al., “Public Health Literacy in America: An Ethical Imperative,” American Journal of Preventive Medicine 28, no. 3 (April 2005): 317–22, doi:10.1016/j.amepre.2004.11.004.

[3] Giuseppina Terranova et al., “Low Quality and Lack of Clarity of Current Informed Consent Forms in Cardiology: How to Improve Them,” JACC: Cardiovascular Imaging 5, no. 6 (June 1, 2012): 649–55, doi:10.1016/j.jcmg.2012.03.007.

[4] Michael Jefford and Rosemary Moore, “Improvement of Informed Consent and the Quality of Consent Documents,” The Lancet Oncology 9, no. 5 (May 2008): 485–93, doi:10.1016/S1470-2045(08)70128-1.

[5] Sue Stableford and Wendy Mettger, “Plain Language: A Strategic Response to the Health Literacy Challenge,” Journal of Public Health Policy 28, no. 1 (January 1, 2007): 71–93, doi:10.1057/palgrave.jphp.3200102.

[6] Alan R Tait et al., “Improving the Readability and Processability of a Pediatric Informed Consent Document: Effects on Parents’ Understanding,” Archives of Pediatrics & Adolescent Medicine 159, no. 4 (April 1, 2005): 347–52, doi:10.1001/archpedi.159.4.347.

[7] JB Green et al., 2003.

[8] Michael Jefford and Rosemary Moore, 2008.

***

This post is an excerpt (heavily edited to provide context) of a paper I wrote for one of my courses about the role of plain language in health literacy. Plain language experts might find some of the references useful in their advocacy work.

Access to information: The role of editors (EAC-BC meeting)

At the November EAC-BC meeting, Shana Johnstone, principal of Uncover Editorial + Design, moderated a panel discussion that offered rich and diverse perspectives on accessibility. (She deftly kept the conversation flowing with thematic questions, so although her words don’t show up much in my summary here, she was critical to the evening’s success.)

Introductions

Panel members included Nygard, from UBC’s Crane Library; Gray, who has a background in recreational therapy; and Booth, who works with literacy groups in the Downtown Eastside.

The Crane Library, Nygard explained, is named after Charles Crane, who in 1931 became the first deafblind student to attend university in Canada. Over his life he accumulated ten thousand volumes of works in Braille, and when he died, his family donated the collection to the Vancouver Public Library, which then donated it to UBC. Paul Thiele, a visually impaired doctoral student, and his wife, Judith, who was the first blind library student (and later the first blind librarian) in Canada, helped set up the space for the Crane Library, including a Braille card catalogue and Braille spine labels so that students could find materials on their own. Today the Crane Library is part of Access and Diversity at UBC and offers exam accommodations, narration services (it has an eight-booth recording studio to record readings of print materials), and materials in a variety of formats, including PDF, e-text, and Braille.

Gray, who has a background in recreational therapy, used to work with people who had brain injuries, and for her, it was “a trial-and-error process to communicate with them just to do my job,” she said. Through that work she developed communication strategies that take into account not only the language but also the formats that will most likely appeal to her audience. To reach a community, Gray said, it’s important to understand its language and conventions. “It’s about getting off on the right foot with people. If you turn people off with a phrase that is outside their community, they stop reading.” It’s also important to know who in a community is doing the reading. In the Down syndrome community, she said, “people are still writing as if the caregivers are the ones reading” even though more people with developmental disabilities are now reading for themselves.

Booth works with forty-five groups (such as the Writers’ Exchange) that provide literacy support in the Downtown Eastside, which he emphasized is “a neighbourhood, not a pejorative.” He defined literacy as the “knowledge, skills, and confidence to participate fully in life,” and he told us that “There is more stigma around illiteracy than there is around addiction.”

Busting misconceptions

Within the Downtown Eastside, said Booth, there are “multiple populations with multiple challenges and multiple experiences—sometimes bad—with learning.” Residents may be reluctant to get involved with structured educational opportunities, and so they rely on community organizations to reach out to them. The media does the Downtown Eastside a disservice by portraying it as the “poorest postal code in Canada,” Booth said. To him, all of his clients, regardless of their background, bring skills and experience to the table.

Gray agreed, adding that it’s easy to make judgments based on appearance. She knows that her three-year-old son, who has Down syndrome, is taking in more than he’s putting back out. The same holds for people who have had strokes or people with cerebral palsy. Some people may not speak well, but they may read and understand well. She acknowledges that we all bring preconceptions to every interaction, but it’s important to set them aside and ask questions to get to know your audience.

“What do we think of, when we think of a person with a disability?” said Nygard. “Not all disabilities are visible.” People assume that text-to-speech services are just for the visually impaired, but often they are for students with learning disabilities who prefer human voice narration. The students who use the Crane Library’s services are simply university students who need a little more support to be able to do certain academic activities. They are people with access to resources and technology that will help them get a university education.

People also assume that technology has solved the accessibility problem. Although a lot of accessibility features are now built into our technology, like VoiceOver for Macs and Ease of Access on Windows, computers aren’t the answer for everyone. For some people, technology hasn’t obviated Braille.

Their work—The specifics

Gray said that although she works primarily with print materials, she’s started writing as though the text were destined for the web. “I’m no longer assuming that people are reading entire chunks of material. I’m not assuming they’re following along from beginning to end or reading the whole thing. I’m using a lot more headings to break up the material and am continually giving people context. I’m not assuming people remember the topic, so I’m constantly reintroducing it.” People with Down syndrome have poor short-term memory, she said, so she never assumes that a reader will refer to earlier text where a concept was first introduced. “Don’t dumb it down,” she said, “but use plain language. Keep it simple and to the point.” Some writers enjoy adding variety to their writing to spice things up, she said. “Take the spice out. Keep to the facts.”

That said, editors also have to keep in mind that when people read, they’re not just absorbing facts; they’re approaching the material with a host of emotions. For people who have children with Down syndrome, she said, “everything they’re reading is judging them as a parent.”

“We don’t know where people are at and where their heads are when they’re taking the materials in,” Gray said.

To connect with the audience, said Booth, listening is a vital skill to develop. “Storytelling is a really important art form. Everybody has a story, and everybody will tell you their story if you give them the opportunity.”

Nygard compares her work to directing traffic—making sure resources flow to the people who need them. She explained the process of creating alternate formats: students have to buy a new textbook and give Nygard the receipt, at which point she can request a PDF from the publisher. But is it fair, she asked, to make these students buy the book at full price when their classmates can get a used copy at a discount? Another inequity lies in the license agreements, which often allow students to use the PDF only for the duration of the course, whereas other students can keep their books for future reference. Image-only or locked PDFs are problematic because text-to-speech software like JAWS can’t read them.

For books that exist only in print, the conversion process involves cutting out the pages and manually scanning them to PDF, then running them through an OCR program to create a rough Word document. These documents then get sent to student assistants who clean them up for text-to-speech software. Otherwise, columns, running heads, footnotes, and other design features can lead to confusing results. We get a lot of context from the way text is laid out and organized on the page, said Nygard, but that context is lost when the text is read aloud.
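For those curious about what that conversion pipeline might look like in practice, here is a minimal sketch in Python. The pdf2image, pytesseract, and python-docx libraries (and the file names) are my own choices for illustration, not necessarily the Crane Library’s tools, and the poppler and tesseract binaries need to be installed for it to run.

```python
# A rough sketch of the scan-to-Word workflow described above, using the
# pdf2image, pytesseract, and python-docx libraries (my own tool choices,
# not necessarily the Crane Library's). Requires the poppler and tesseract
# binaries; file names are hypothetical.
from pdf2image import convert_from_path
import pytesseract
from docx import Document

def pdf_to_rough_docx(pdf_path: str, docx_path: str) -> None:
    """OCR each scanned page and dump the raw text into a Word file for cleanup."""
    doc = Document()
    for page_image in convert_from_path(pdf_path, dpi=300):
        text = pytesseract.image_to_string(page_image)  # OCR one page at a time
        doc.add_paragraph(text)
        doc.add_page_break()
    doc.save(docx_path)

pdf_to_rough_docx("scanned-textbook.pdf", "rough-textbook.docx")
```

The output is deliberately rough: as Nygard described, student assistants still have to clean up columns, running heads, footnotes, and other design features before the text is ready for text-to-speech software.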

Editors as advocates

Gray said she’d never considered herself an advocate per se. “I do think it’s part of my role to advise clients about the level of content and the way it’s presented. We need to make sure we can reach the audience.”

When we make decisions, said Nygard, we have to look out for people in the margins that we might not be addressing.

Booth said, “We’re all very privileged in this room. We have a responsibility to be advocates. Our tool is language.” As he spoke, he passed out copies of the Decoda Literacy Manifesto to the audience.

Resources on accessibility

Nygard suggested we check out the Accessibility for Ontarians with Disabilities Act. Ontario has been a leader in this arena. She also mentioned the National Network for Equitable Library Service (NNELS), which allows collection sharing between various libraries. Many public libraries don’t know about the Crane Library’s services because it’s at an academic institution, but its collection is available to the general public. The NNELS site also has a section of tutorials for creating alternate-format materials. SNOW, the Inclusive Design Centre at OCAD, also has some excellent resources.

Compared with Ontario, said Nygard, BC lags behind in its commitment to accessibility. The BC government released Accessibility 2024, a ten-year plan to make the province the most progressive within Canada. But both Nygard and Booth call it “embarrassing.” “How they’ve set their priorities is a horror show,” said Nygard. One of the benchmarks for success in this accessibility plan, for example, is to have government websites be accessible by 2016, without addressing whether people with disabilities have the skills, literacy, or access to technology to use that information. Meanwhile, disability assistance rates haven’t gone up since 2007.

Booth agreed. The province has cut funding for high-school equivalency programs (GED), ESL, literacy, and adult basic education, choosing instead to focus on “job creation in extractive industries and training people to do specific jobs. What’s going to happen a decade from now for people who don’t have education?”

In response to a question from the audience, Nygard acknowledged that Project Gutenberg and Project Gutenberg Canada are great for accessible text of works in the public domain. She also mentioned that LibriVox has public domain audiobooks.

Grey matters: Why NGOs should start thinking like self-publishers

Among my favourite clients are nonprofit advocacy groups that champion causes I care about. These organizations pour an astounding amount of effort into their research, which usually culminates in reports destined for media, key policy makers, and the general public. These reports, which can contain a wealth of information that researchers would find valuable, are generally available for free on the NGO’s website. Unfortunately, more often than not, there they languish.

Policy reports and other NGO publications that don’t have ISBNs inhabit the murky world of grey literature—written research material that’s not formally published and hence not catalogued. As a result, they’re almost impossible to discover. Sometimes even Google won’t find them unless you know exactly what you’re looking for and use very specific search terms.

The Canadian Public Policy Collection and Canadian Health Research Collection are two databases that aggregate grey literature and are decent places for researchers to start looking, but these curated archives are far from exhaustive. For instance, one advocacy group I work with, Pivot Legal Society, has more than twenty publications, but only four are listed in the CPPC.

Grey literature’s poor discoverability means that these important publications don’t have the reach or longevity that they could have. A source of the problem is that most NGOs don’t consider themselves publishers. Abby Deshman, at the Canadian Civil Liberties Association, says, “We just don’t have publishers on staff… so that expertise about what we could do and what we could gain by doing it is not generally available.” Both Deshman and Tracy Torchetti at the Canadian Cancer Society told me that they’d love to increase their publications’ reach.

So what can these organizations do? The first step is to take advantage of the infrastructure built to accommodate the legions of self-publishers:

1. Get an ISBN (or ISSN) for your report

ISBNs cost up to $125 per title in the U.S. but are free in Canada for all publishers, self-publishers included. (If your publication is a serial, consider getting an ISSN.) Assigning an ISBN to your title automatically plucks it out of the realm of grey literature and allows you to…

2. Submit metadata to Bowker’s Books in Print

Once you have your ISBN, you can fill in a form to have your bibliographic information listed for free in Bowker’s Books in Print—one of the major databases that libraries consult for their acquisitions.

3. Upload your metadata to a print-on-demand (POD) provider with wide distribution

When I rebuilt the Government of Canada’s plain language guides, I made them available through CreateSpace, Amazon’s POD platform. I set the list prices at the lowest possible level, which would cover printing, binding, and Amazon’s cut of any sale. That said, I never expected to sell any copies through CreateSpace; in fact, in my descriptive copy, I included the URL of a site where people could download a free PDF. By putting the guides on CreateSpace, though, I made their metadata discoverable through Amazon’s network, and Amazon’s listing would in turn come up more readily in Google searches.

Another POD provider for independent publishers is IngramSpark, which will also make metadata available on its worldwide network, but, unlike CreateSpace, it has some modest upfront set-up and market access costs.

4. Send copies of your reports to Library and Archives Canada (LAC) for legal deposit

Legal deposit probably doesn’t enhance discoverability, but (perhaps for idealistic, sentimental reasons) I do kind of like that what you send them “becomes the record of the nation’s published heritage.” Once a publication has been added to the LAC collection, its metadata is entered into LAC’s database and can be retrieved through a search. LAC also accepts digital-only publications.

5. Use SEO techniques for your content

Most NGO publications that I’ve worked on end up as PDFs, which search engines can be reluctant to index (compared with HTML). Applying PDF optimization tips, such as giving the file a descriptive document title and metadata and making sure the text is selectable rather than image-only, increases the chances that your reports will show up in Google searches.
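As an illustration of the metadata piece, here is a minimal sketch using the pypdf library (my choice for illustration, not a prescribed tool); the file names and metadata values are hypothetical.

```python
# Minimal sketch: give a report PDF a descriptive title, author, and keywords
# so search engines and library catalogues have something to index.
# Uses the pypdf library (an assumption); file names and metadata are hypothetical.
from pypdf import PdfReader, PdfWriter

reader = PdfReader("policy-report.pdf")
writer = PdfWriter()

for page in reader.pages:  # copy the pages unchanged
    writer.add_page(page)

writer.add_metadata({
    "/Title": "Housing First: A Policy Review",
    "/Author": "Example Advocacy Society",
    "/Subject": "Plain-language review of housing policy options",
    "/Keywords": "housing, policy, grey literature",
})

with open("policy-report-optimized.pdf", "wb") as out_file:
    writer.write(out_file)
```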

***

Finally, take advantage of the databases available to advocacy groups:

6. Submit your publication to the Canadian Public Policy Collection…

…and, if your publication is health related, the Canadian Health Research Collection. Despite their lack of comprehensiveness, the Canadian Public Policy Collection and the Canadian Health Research Collection are “still way better than anything else out there,” says academic librarian Franklin Sayre. These collections are home to a lot of grey literature, but they also house publications that have ISBNs and ISSNs. They are open to suggestions for publications to add to their databases. Although the full search and retrieval functions for these databases are available only to libraries that have paid for access, you can download Excel files with the list of titles available in each database. These files (one for the CPPC and one for the CHRC) list URLs for the full text.
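If you want a quick way to check whether your organization’s titles already appear in one of those lists, a few lines of Python will do it. This sketch assumes the pandas library, and the file name and the “Publisher,” “Title,” and “URL” column names are hypothetical, so adjust them to match the actual spreadsheet.

```python
# Quick check of a downloaded CPPC title list for an organization's publications.
# The file name and the "Publisher", "Title", and "URL" column names are
# hypothetical; inspect the real spreadsheet and adjust. Requires pandas + openpyxl.
import pandas as pd

titles = pd.read_excel("cppc-title-list.xlsx")
ours = titles[titles["Publisher"].str.contains("Pivot Legal", case=False, na=False)]
print(ours[["Title", "URL"]].to_string(index=False))
```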

***

Those of us who work with these organizations on the publishing end could help our clients add value to their publications by letting them know about these options. Given what Deshman and Torchetti have told me, NGOs may not be aware of some of the steps they could take to maximize the lifespan and reach of their painstaking research.

Huge thanks to

  • Abby Deshman, for giving me the scoop on the Canadian Civil Liberties Association’s publishing practices;
  • Frank Sayre, for invaluable insights into grey literature, the CPPC, and the CHRC;
  • Tracy Torchetti, for canvassing her colleagues at the Canadian Cancer Society about their publishing practices; and
  • Trena White of Page Two, for confirming details about Books in Print.

Stefan Dollinger—Forks in the road: Dictionaries and the radically changing English-language ecosystem (EAC-BC meeting)

Stefan Dollinger, a faculty member in the English and linguistics departments at the University of British Columbia, is editor-in-chief of the Dictionary of Canadianisms on Historical Principles (DCHP), and he spoke to the EAC-BC crowd about the role of dictionaries in the global English landscape.

His fascinating talk covered some of the same territory that I wrote about when I first saw him speak last year, so I’ll focus on his new content here.

English, said Dollinger, is unique in that it is the only language in the world with more second-language speakers than native speakers, the former outnumbering the latter by five to one. This ratio will only grow as more people in China, Russia, continental Europe, and South America use English for trade and diplomacy. Until recently, the study of English—particularly for dictionaries—had focused on native speakers, but scholars such as Barbara Seidlhofer, of the University of Vienna, have argued that English as a lingua franca (ELF) is the “real” English.

This shifting view influences how we approach dictionary making, which has generally used one of two methods:

  • In the literary tradition, lexicographers collect works from the best authors and compile excerpts showing usage.
  • In the linguistic method, lexicographers empirically study language users.

One of the best examples of dictionaries compiled using the linguistic method is the Dictionary of American Regional English (DARE), which Dollinger said is based on superb empirical data, including historical sources as well as a national survey of about three thousand users. The dictionary includes only “non-standard” regional words that are not used nationally in the United States and hence isn’t a comprehensive compilation of English words, but for researchers like Dollinger, the detail on regional, social, and historical uses is more important than the number of entries.

In contrast, the first edition of the Oxford English Dictionary (OED) used the literary tradition, and, as the preface to the third edition admits,

The Dictionary has in the past been criticized for its apparent reliance on literary texts to illustrate the development of the vocabulary of English over the centuries. A closer examination of earlier editions shows that this view has been overstated, though it is not entirely without foundation.

Although the OED has become more linguistic in its methodology, residues of the literary tradition persist: Dollinger said that about 50 percent of the entries in the current edition, OED-3, are unchanged from the original edition, and although the OED employs a New Word Unit, a group of lexicographers who read content on the web and compile new words and senses, such a reading program is still not empirical and will fail to capture the usage of everyday speakers.

Going completely online, however, has allowed the OED to respond more nimbly to changes in the language: corrections to existing entries can now be made immediately, and the dictionary issues quarterly updates, adding a few hundred new words, phrases, and senses each time.

Dollinger feels that if the OED wants to keep claiming to be the “definitive record of the English language,” though, it will have to reorient its approach to include more fieldwork to study linguistic variation across the globe, focusing not only on what linguist Braj Kachru defined as the “inner circle,” where the majority of people are native English speakers (e.g., the U.S., U.K., Canada, Australia, New Zealand) but also on the “outer circle” of former British colonies like India, Singapore, etc., and especially on the “expanding circle” of countries, like Russia and China, with no historical ties to England—not to mention English-based pidgins and creoles. Although some native speakers may consider this shift threatening, Dollinger quoted H.G. Widdowson, who in 1993 wrote:

How English develops in the world is no business whatever of native speakers in England, the United States, or anywhere else. They have no say in the matter, no right to intervene or pass judgement. They are irrelevant. The very fact that English is an international language means that no nation can have custody over it. To grant such custody of the language is necessarily to arrest its development and so undermine its international status.

How, then, do lexicographers distinguish innovations from errors? World Englishes are replete with words that are unfamiliar to the native speaker, like

  • stingko, meaning “smelly” in Singapore English;
  • teacheress, a female teacher, in Indian English;
  • peelhead, a bald-headed person, in Jamaican English; or
  • high hat, a snob, in Philippine English.

Whether these are right depends only on the variety of English in question. Linguist Ayo Bamgbose suggested using the following criteria to judge whether a word or phrase is an error or innovation:

  • The demographic factor: How many acrolectal speakers speak it?
  • The geographical factor: Where is it used?
  • The authoritative factor: Who sanctions its use?
  • Codification: Does it appear in dictionaries and reference books?
  • The acceptability factor: What are the attitudes of users and non-users toward the word?

Dollinger is applying some of these principles to his work on DCHP, the first edition of which (now known as DCHP-1) began as a bit of a pet project for American lexicographer Charles Lovell. As a researcher for A Dictionary of Americanisms, published in 1951, Lovell began collecting Canadianisms. In 1958, Gage Educational Publishing asked Lovell to compile a dictionary for the Canadian Linguistic Association. After Lovell’s sudden death in 1960, Gage approached Walter S. Avis, known as “the pioneer of the study of Canadian English,” and Matthew H. Scargill to continue his work. Together they finished and edited the dictionary and published it in 1967. That dictionary became the basis of Gage’s Canadian dictionary.

The 1990s saw a “Canadian Dictionary War,” with too many publishers—Gage Canadian, ITP Nelson, and the Canadian Oxford—competing in one market. Backed by a fierce marketing campaign, the Canadian Oxford won out.

In March 2006, Dollinger became editor-in-chief of the second edition of the Dictionary of Canadianisms on Historical Principles (DCHP-2), with Nelson Education providing seed funding. In 2013, DCHP-1 was released online, and Dollinger expects DCHP-2 to be complete in early 2016. Owing to time constraints, some entries from DCHP-1, which dug deep into the history of the fur trade for much of its content, will persist in DCHP-2, but these will be clearly marked as being from the original edition and annotated if necessary.

In compiling DCHP-2, Dollinger has noticed that some terms show considerable regional variation, and he wonders whether we should be thinking in terms of national isoglosses at all, given that the U.S. and Canada share the world’s longest undefended border. As an example, he showed that whereas Western Canadians prefer the term “running shoes” or “runners,” those in Eastern Canada prefer “sneakers,” which mirrors the regional variation across the northern United States. He also noted that these kinds of variations would be much harder to identify through the literary method of dictionary making.

Another interesting feature of the entries in DCHP-2 is that 70 percent of the entries are compound nouns. “Butter isn’t uniquely Canadian, tart isn’t Canadian, but butter tart is,” said Dollinger. “Cube isn’t Canadian, and van isn’t Canadian, but cube van is.”

Dollinger wondered too if it was time for lexicographers to get even more granular and consider the variation within regional Englishes. In what ways, for example, might English spoken by a Chinese Canadian be unique?

As part of his research, Dollinger is asking British Columbians to complete a twenty-minute survey to help him and his students understand how they use English.