Accessible documents for people with print disabilities

In prepping a PubPro 2015 talk about editorial and production considerations when creating accessible documents, I ran into information about both the Centre for Equitable Library Access (CELA) and the National Network for Equitable Library Service (NNELS). Confused about the differences between them, I emailed NNELS for clarification, and librarian Sabina Iseli-Otto wrote back: “Would it be alright to call you? I know it’s getting late in the day but 5 minutes on the phone would save 20 minutes of typing (seriously).”

That five-minute chat turned into an impromptu phone interview, and Iseli-Otto gave me permission to share what I learned. (Most of the information in this post comes from her, but I’m also including a bit of what I found through my own research for my talk.)

Print disabilities and copyright

Print disabilities include:

  • blindness or visual impairments,
  • physical impairments that prevent a person from holding or manipulating print materials, and
  • cognitive impairments, like ADHD, dyslexia, or learning or memory problems due to a brain injury, that impede reading and understanding.

Although colourblindness isn’t considered a print disability, documents should be created with colourblindness in mind.

About 10 percent (a conservative estimate) of Canadians have a print disability, but only about 5 percent of published works are accessible. Most people with print disabilities aren’t using public libraries.

Section 32(1) of Canada’s Copyright Act spells out an exception to copyright that lets people with print disabilities, and those acting on their behalf, create and use alternate formats of copyrighted print materials (with the exception of large-print books and commercially available titles).

Accessible formats

The following are some of the accessible formats for people with print disabilities:

  • E-text: plain text (.txt), rich text (.rtf), Word (.docx)
  • EPUB 2 & 3
  • Accessible PDFs
  • DAISY
  • MP3s
  • Large print
  • Braille

E-text, EPUB, and accessible PDFs can be read by screen readers such as JAWS and VoiceOver. Not all PDFs are accessible—Adobe offers a way to check a document’s accessibility and has guidelines for creating accessible PDFs.

CELA

CELA formed about a year ago following a change to the funding structure at CNIB (formerly the Canadian National Institute for the Blind). CNIB had, over the past hundred years, amassed Canada’s largest collection of alternate-format books in its library, and CELA, with the support of the Canadian Urban Libraries Council, took over administering this collection. The CNIB library still offers services to existing clients but will refer new clients to their local public library to access CELA’s services.

The shift of oversight from CNIB to CELA will hopefully allow more people to discover and use this extensive collection. Although it was always available to everyone with print disabilities, given that it was under the purview of CNIB, people who didn’t have visual impairments may not have realized that they could access it.

CELA has also partnered with Bookshare, an American online library for people with print disabilities. Rather than owning its content, Bookshare operates on more of a licensing model, controlling pricing and licensing fees.

NNELS

NNELS is also about a year old, with a lean staff of only four people, and, unlike CELA and Bookshare, is funded exclusively by provincial governments, which makes its operations more transparent. It has a much smaller collection but owns perpetual rights to everything in it. NNELS takes patron requests and works directly with publishers to add to its collection. Nova Scotia helped negotiate a fixed rate for NNELS with publishers in the Atlantic provinces, and Saskatchewan has funded an initiative to create accessible EPUBs for all Saskatchewan books, which will be added to the NNELS collection. Whereas CELA focuses on partnerships with public libraries, NNELS also works with public schools and universities—for example, it has a content-exchange agreement with the Crane Library at UBC.

Recent policy changes relevant to people with print disabilities

Accessibility for Ontarians with Disabilities Act

According to the Accessibility for Ontarians with Disabilities Act (AODA),

Organizations will have to…provide accessible formats and communications supports as quickly as possible and at no additional cost when a person with a disability asks for them.

The law was enacted in 2005, but the regulations for information and communications didn’t come into effect until 2012, when all sectors had to make all emergency procedures and public safety information accessible upon request. For other types of communications, the AODA requirements were phased in beginning in 2013 for the public sector and in 2013 and 2015 for the private and non-profit sectors. (Respectively, I think? The website doesn’t make that bit clear.) If you work with Ontario businesses, you may be called on to provide accessible communications.

The Marrakesh Treaty

The Marrakesh Treaty to Facilitate Access to Published Works by Visually Impaired Persons and Persons with Print Disabilities laid out exceptions to copyright so that signatories could freely import and export accessible content, obviating the need to duplicate efforts to convert works to accessible formats in different countries. Although Canada was instrumental in writing the treaty, it has neither signed nor ratified it. However, in its 2015 budget, unveiled last week, the Government of Canada announced that it would accede to the treaty, meaning that people with print disabilities could soon have access to a lot more content.

Publishers and accessible content

I asked Sabina Iseli-Otto how publishers can make her job easier.

“We’d prefer to get EPUB files or accessible PDFs directly from the publisher. Actually, I’ve been really, pleasantly surprised at how often publishers will say yes when we ask for them. I mean, they can always say no—they’re doing it out of the goodness of their hearts—but it saves public funds if they send us those files directly.”

If a publisher refuses to provide accessible files, the copyright exception still applies, which means that NNELS would still be able to create an accessible format, but it would have to:

  1. acquire a hard copy,
  2. scan in the pages,
  3. run optical character recognition (OCR) on the scans,
  4. clean up the text file (e.g., deleting running headers and footers), and
  5. proof the text.

“More than anything,” Iseli-Otto said, “we want to hear back quickly” from publishers, regardless of what they decide.

I asked if the files NNELS provides to patrons have digital rights management (DRM) on them. “No,” she said, “but we make it very clear to them that if they abuse them that they’re putting our whole operation in jeopardy. Some of them appreciate having the access so much that they’re actually quite protective of their files.”

Our conversation had focused on books. What about periodicals and grey literature? “There’s certainly demand for it,” said Iseli-Otto. “We’d love to do more of that. And I’d like to turn your question around: what can we do for publishers to make it easier to collaborate with us? I’m not sure how to build those relationships.”

(Can you guess who I’ve invited to PubPro 2016?)

Publishers who’ve been in business for longer than a decade will recognize the steps NNELS has to take to create accessible formats from a print-only book: they’re identical to what publishers have to do if they want to reissue a backlist title that has no retrievable digital files. Could Canadian publishers partner with an organization dedicated to creating accessible formats so that, in exchange for digitizing the backlist for publishers, the organization could add those files to its collection at no additional cost?

Editorial, design, and production considerations for creating accessible files

In my PubPro 2015 talk, I mentioned a few things publishers should keep in mind through the editorial and production process so that the output will be accessible—especially since having to retrofit an existing document to adhere to accessibility standards is more labour intensive and expensive than producing an accessible file from the outset. I focused mostly on the effect of editing and production on screen readers.

Style considerations

Screen readers will not always read all symbols. The Deque Blog has a summary of how three of the most popular screen readers interpret different symbols. (It’s a bit out of date but still a good place to start; thanks to Ashley Bischoff for that link.) Testing on VoiceOver, I found that although the screen reader is smart enough to read “Henry VIII” as “Henry the eighth,” “Chapter VIII” as “chapter eight,” and “World War II” as “World War two,” it reads each letter in “WWII” as if it were an initialism. And it reads 12,000 as “twelve thousand” but “12 000” as “twelve zero zero zero.” I also found that it doesn’t read the en dash before a numeral if the dash is used as a minus sign, saying “thirty-four degrees” for “–34°.” It’s best to use the actual minus sign symbol − (U+2212), which my version of VoiceOver reads as “minus sign.” The same goes for the letter x used in place of the real multiplication symbol × (U+00D7). My version of VoiceOver doesn’t read a tilde before a numeral, so ~8 mL would be “eight millilitres” instead of the intended “approximately eight millilitres.”
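If you work with manuscript files programmatically, you can spot-check whether they contain the true symbols or their ASCII look-alikes by inspecting the characters’ Unicode names. A quick sketch using Python’s standard unicodedata module (any scripting language with Unicode support would do):

```python
import unicodedata

# Compare the real minus and multiplication signs with the
# hyphen and letter x that often stand in for them.
for char in "−×-x":
    print(f"U+{ord(char):04X}  {unicodedata.name(char)}")

# U+2212  MINUS SIGN
# U+00D7  MULTIPLICATION SIGN
# U+002D  HYPHEN-MINUS
# U+0078  LATIN SMALL LETTER X
```

Searching a file for U+002D directly before a digit, for instance, would flag places where a hyphen may be doing duty as a minus sign.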

In any case, if you’re editing and deciding between styles, why not choose the most accessible?

Language considerations

Plain language best practices apply here:

  • chunk text and use heading styles,
  • break up long, complex sentences, and
  • aim for a natural, conversational style.

Headings and short chunks of text offer context and digestible content to the listener. Screen readers are already quite adept at putting the stress on the right syllables depending on whether a word like reject is used as a verb or noun—when the word is in a short sentence. They can get confused in longer sentences.

Image considerations

For images:

  • Offer alt text—text that is rendered if the image cannot be seen—for substantive images but not decorative ones. (For decorative images, still add an alt attribute in the code, but leave it blank—i.e., alt=""—or the screen reader will read the filename. You can add alt text directly in InDesign.)
  • Don’t use colour as the only way to convey information. Make sure the colours you choose to distinguish between two lines on a graph, say, will not map to the same shade of grey when converted to greyscale. Alternatively, use different styles for those lines or label them clearly directly on the graph.
  • Don’t turn text into an image to fix its appearance. We often see this practice with equations. Screen readers do not read LaTeX. If you have equations or mathematical expressions, convert them to MathML or offer alt text using the Nemeth MathSpeak system.
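In HTML terms, the alt-text advice above looks something like this (the filenames and descriptions here are invented for the example):

```html
<!-- Decorative image: the empty alt tells screen readers to skip it.
     Omitting the alt attribute entirely can make them read the filename. -->
<img src="flourish.png" alt="">

<!-- Substantive image: the alt text carries the information in the graphic. -->
<img src="sales-chart.png"
     alt="Line graph showing print sales falling and ebook sales rising, 2010 to 2015">
```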

In essence, because ebooks are like websites, applying the Web Content Accessibility Guidelines 2.0 will ensure that your ebook will be accessible. The BC Open Textbook Accessibility Toolkit also has useful guidelines for publishers. I would recommend at least spot checking a document with a screen reader to uncover ambiguities or potential misunderstandings.

***

Huge thanks to Sabina Iseli-Otto for her eye-opening insights!

Kelly Maxwell—Transcription, captioning, and subtitling (EAC-BC meeting)

Kelly Maxwell gave us a peek into the fascinating world of captioning and subtitling at April’s EAC-BC meeting. Maxwell, along with Carolyn Vetter Hicks, founded Vancouver-based Line 21 Media Services in 1994 to provide captioning, subtitling, and transcription services for movies, television, and digital media.

Not very many people knew what captioning was in the 1980s and ’90s, Maxwell said. But the Americans with Disabilities Act, passed in 1990, required all televisions distributed in the U.S. to have decoders for closed-captioning built in, and Canada, as a close trading partner, reaped the benefits. Captioning became ubiquitous and is now a CRTC requirement.

Line 21 works with post-production coordinators—those who see a movie or TV show through editing and colour correction. Captioning is often the last thing that has to be done before these coordinators get paid, so the deadlines are tight. Maxwell and her colleagues may receive a script from the client, in which case they load it into their CaptionMaker software and clean it up, or they may have to do their own transcription using Inqscribe, a simple, free transcription program. They aim to transcribe verbatim, and they rely on Google (in the ’90s, they depended on reference librarians) to fact check and get the correct spelling for everything. Punctuation, too, is very important, and Maxwell uses it to maximize clarity: “People have to understand instantaneously when they see a caption,” she said. “I won’t ever give up the Oxford comma. We’re sticklers for old-fashioned, fairly heavy comma use. It can make a difference to someone understanding on the first pass.” She also edits for reading rate so that people with a range of literacy levels will understand. “Hearing people are the number-one users of captioning,” she said.

Although HD televisions now accommodate a 40-character line, Line 21 continues to caption in 32-character lines. “Captioners like to think of the lowest common denominator,” Maxwell said. They need to consider all of the people who still have older technology. Her company doesn’t do live captioning, which is done by court reporters taking one-hour shifts and is still characterized by a three-line block of all-caps text rolling on the screen. Today the captioning can pop onto the screen and be positioned to show who’s talking. The timing is done by ear but is also timecoded to the frame. Maxwell and her colleagues format captions into readable chunks—for example, whole clauses—to make them comprehensible. Once the captions have all been input, she watches the program the whole way through to make sure nothing has been missed, including descriptions of sound effects or music.

Subtitling is similar to closed captioning, but in this case, “You assume people can hear.” Maxwell first creates a timed transcript in English and relies on the filmmakers to forge relationships with translators they can trust. Knowing the timings, translators can match up word counts and create a set of subtitles that line up with the original script. Maxwell then swaps in these subtitles for the English ones and, after proofing the video, sends it back to the translators for a final look. How do you proofread in a language you don’t know? “You can actually do a lot of proofing and find a lot of mistakes just by watching the punctuation,” said Maxwell. “You can hear the periods,” she added. “Sometimes they [translators] change or reorder the lines.”

Before the proliferation of digital video, Maxwell told us, they couldn’t do subtitling, which had to be done directly on the film. Today, they have a massive set of tools at their disposal to do their work. “In the early ’90s,” she said, “there were two kinds of captioning.” In contrast, today “we have 80 different delivery formats,” and each broadcaster has its own requirements for formats and sizes. “People ask me if I’m worried about the ubiquity of the tools,” said Maxwell. “No. Just because I have a pencil doesn’t mean I’m a Picasso.”

As for voice-recognition software, such as YouTube’s automatic captioning feature, Maxwell says it just isn’t sophisticated enough and can produce captions riddled with errors. “You do need a human for captioning, I’m afraid.”

Maxwell prides herself on her company’s focus on providing quality captioning. One of her projects was captioning a four-part choral performance of a mass in Latin. According to CRTC regulations, all she had to do was add musical notes (♪♫), but she wanted to do better. She bought the score and figured out who was singing what.

In another project, she captioned a speech by the Dalai Lama. “Do you change people’s grammar, change people’s words?” The Dalai Lama probably didn’t say some of the articles or some of the verbs (like to be) that appear in the final captions, Maxwell said, but captioners sometimes will make quiet changes to clarify meaning without changing the intent of the message.

Captioning involves “a lot, a lot, a lot of googling,” she said, “and a lot of random problem solving.” She’s well practiced in the “micro-discernment of phonemes.” Sometimes when she’s unable to tell what someone has said, all it takes is to get someone else to listen to it and say what they hear. Over the years, Maxwell and her team have developed tricks like these to help them help their clients reach as wide an audience as possible.

Writers on editors: an evening of eavesdropping (EAC-BC meeting)

What do writers really think of editors? Journalist and editor Jenny Lee moderated a discussion on that topic with authors Margo Bates and Daniel Francis at last week’s EAC-BC meeting. Bates, self-published author of P.S. Don’t Tell Your Mother and The Queen of a Gated Community, is president of the Vancouver branch of the Canadian Authors Association. Francis is a columnist for Geist magazine and a prolific author of two dozen books, including the Encyclopedia of British Columbia and the Connections Canada social studies textbook.

Francis told us that in the 1980s, he’d had one of his books published by a major Toronto-based publisher, who asked him about his next project. Francis pitched the concept for what became Imaginary Indian: the image of the Indian in Canadian culture back to 1850. His Toronto publisher turned it down, concerned about appropriation of voice. “I took the idea to friends in Vancouver,” said Francis, “and in some ways it’s my most successful book.” He learned from the experience that he’d rather work with smaller publishers close to home, many of which were run by people he considered friends. He thought his book with the larger publisher would be the ticket, but it was among his worst-selling titles, and he was particularly dismayed that the editor didn’t seem to have paid much attention to his text. “To me, this is a collaborative process, working with an editor,” said Francis. “I’m aware that I’m no genius and that this is not a work of genius,” but his editor “barely even read the thing.” He found the necessary depth in editing when he worked with his friends at smaller presses. “Friends can be frank,” Francis said.

Bates, whose P.S. Don’t Tell Your Mother has sold more than 7,500 copies, became familiar with how much editors can do when she hired them through her work in public relations. For her own writing, Bates knew she could take care of most of the copy editing and proofreading but wanted an objective but understanding professional who would advise her about structure and subject matter. She looked for someone who would tighten up her book and make it saleable. “I’m not that smart a writer that I can go without help,” she said. “I wouldn’t do anything without an editor.” In fact, she allocated the largest portion of her publishing budget to editing. After speaking with several candidates, Bates selected an editor who understood the social context of her book and helped her “tell the story of prejudice in a humorous way.”

Frances Peck mentioned an article she read about a possible future where self-publishers would have editors’ imprints on their books—in other words, editors’ reputations would lend marketability to a book. “Is that a dream?” she asked. “The sooner, the better, as far as I’m concerned,” Bates said. “There’s a lot of crap out there,” she added, referring to story lines, point of view, grammar, spelling, and other dimensions of writing that an editor could help authors improve.

What sets good editors apart from the rest? Francis said that he most appreciates those who have good judgment about when to correct something and when to query. Some strategies for querying suggested by the audience included referring often to the reader (“Will your reader understand?”) and referring to the text as something separate from the author (i.e., using “it says on page 26” rather than “you say on page 26”). Bates said that she really appreciated when her editor expressed genuine enthusiasm for her story. Her editor had told her, “I’m rooting for the characters, and so are your fans.”

Lee asked whether the popular strategy of the sandwich—beginning and ending an editorial letter with compliments, with the potentially ego-deflating critique in the middle—was effective. Francis said, “I hope I’m beyond the need for coddling. I guess you have to know who you’re dealing with, when you’re an editor.” Some editors in the room said that the sandwich is a reliable template for corresponding with someone with whom you haven’t yet established trust. We have to be encouraging as well as critical.

Both Bates and Francis urged editors to stop beating around the bush. Francis said, “You get insulted all the time as a textbook writer. You have to grow a pretty thick skin.” That said, Francis wasn’t a big fan of textbook publishing’s editing-by-committee process and said it’s one reason he stopped writing textbooks. In addition to producing a coherent text, the textbook’s author and editors had to adhere to strict representation guidelines (e.g., the balance of males to females depicted in photographs had to be exactly 1:1).

Lee asked the two authors how they found their editors. Francis said that his publishers always assign his editors, and “I get the editor that I get.” So far his editors have worked out for him, but if he’d had any profound differences, he’d have approached the publisher about it or, in extreme cases, parted ways with the publisher.

Bates said that for self-published authors, the onus is on them to do their research and look at publications an editor has previously worked on. “There will always be inexperienced writers who don’t see the need for editors,” she said, but at meetings of the Federation of BC Writers and the Canadian Authors Association, she always advocates that authors get an editor. Bates suggested that the Editors’ Association of Canada forge closer ties with writers’ organizations so that we could readily educate authors about what editors do.

Time to leave academic writing to communications experts?

In the Lancet’s 2014 series about preventing waste in biomedical research, Paul Glasziou et al. pointed to “poorly written text” as a major reason a staggering 50% of biomedical reports are unusable [1], effectively squandering the research behind them. According to psycholinguist Steven Pinker [2], bad academic writing persists partly because there aren’t many incentives for scholars to change their ways:

Few academic journals stipulate clarity among their criteria for acceptance, and few reviewers and editors enforce it. While no academic would confess to shoddy methodology or slapdash reading, many are blasé about their incompetence at writing.

He adds:

Enough already. Our indifference to how we share the fruits of our intellectual labors is a betrayal of our calling to enhance the spread of knowledge. In writing badly, we are wasting each other’s time, sowing confusion and error, and turning our profession into a laughingstock.

The problem of impenetrable academese is undeniable. How do we fix it?

In “Writing Intelligible English Prose for Biomedical Journals,” John Ludbrook proposes seven strategies [3]:

  • greater emphasis on good writing by students in schools and universities,
  • making use of university service courses and workshops on writing plain and scientific English,
  • consulting books on science writing,
  • one-on-one mentoring,
  • using “scientific” measures to reveal lexical poverty (i.e., readability metrics),
  • making use of freelance science editors, and
  • encouraging the editors of biomedical journals to pay more attention to the problem.
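Ludbrook’s readability-metric suggestion is straightforward to approximate. Here is a rough Python sketch of the Flesch Reading Ease score, where higher scores mean easier text (the syllable counter is a crude vowel-group heuristic, and dedicated tools are more careful; the two sample sentences are my own invention):

```python
import re

def count_syllables(word):
    """Rough heuristic: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015(words/sentences)
    - 84.6(syllables/words). Dense academic prose often scores below 30."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

plain = "We tested the drug. It worked in most patients."
dense = ("The pharmacological intervention demonstrated statistically "
         "significant efficacy across heterogeneous patient populations.")

# The plain version scores far higher (easier) than the dense one.
print(round(flesch_reading_ease(plain), 1))
print(round(flesch_reading_ease(dense), 1))
```

Metrics like this can’t judge whether prose is accurate or well organized, but they do cheaply flag the “lexical poverty” Ludbrook describes.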

Many institutions have implemented at least some of these strategies. For instance, SFU’s graduate student orientation in summer 2014 introduced incoming students to the library’s writing facilitators and open writing commons. And at UBC, Eric Jandciu, strategist for teaching and learning initiatives in the Faculty of Science, has developed communication courses and resources specifically for science students, training them early in their careers “to stop thinking of communication as separate from their science.” [4]

Although improving scholars’ writing is a fine enough goal, the growth in the past fifteen years of research interdisciplinarity [5], where experts from different fields contribute their strengths to a project, has me wondering whether we would be more productive if we took the responsibility of writing entirely away from researchers. Rather than forcing academics to hone a weak skill, maybe we’d be better off bringing in communications professionals whose writing is already sharp.

This model is already a reality in several ways (though not all of them aboveboard):

  • Many journals encourage authors to have their papers professionally edited before submission [6]. From personal experience, I can confirm that this “editing” can involve heavy rewriting.
  • The pharmaceutical industry has long used ghostwriters to craft journal articles on a researcher’s behalf, turning biomedical journals into marketing vehicles [7]. We could avoid the ethical problems this arrangement poses—including plagiarism and conflict of interest—with a more transparent process that reveals a writer’s identity and affiliations.
  • Funding bodies such as CIHR have begun emphasizing the importance of integrated knowledge translation (KT) [8], to ensure knowledge users have timely access to research findings. Although much of KT focuses on disseminating research knowledge to stakeholders outside of academia, including patients, practitioners, and policy makers, reaching fellow researchers is also an important objective.

To ensure high-quality publications, Glasziou et al. suggest the following:

Many research institutions already employ grants officers to increase research input, but few employ a publication officer to improve research outputs, including attention to publication ethics and research integrity, use of reporting guidelines, and development of different publication models such as open access. Ethics committees and publication officers could also help to ensure that all research methods and results are completely and transparently reported and published.

Such a publication officer would effectively serve as an in-house editor and production manager. Another possibility is for each group or department to hire an in-house technical communicator. Technical communicators are trained in interviewing subject matter experts and using that information to draft documents for diverse audiences. In the age of big data, one could also make a convincing case for hiring a person who specializes in data visualization to create images and animations that complement the text.

That said, liberating scientists from writing should not absolve them of the responsibility of learning how to communicate. At a minimum, they would still need to understand the publication process enough to effectively convey their ideas to the writers.

Separating out the communication function within research would also raise questions about whether we should abolish the research–teaching–service paradigm on which academic tenure is based. If we leave the writing to strong writers, perhaps only strong teachers should teach and only strong administrators should administrate.

Universities’ increasing dependence on sessional and adjunct faculty is a hint that this fragmentation is already happening [9], though in a way that reinforces institutional hierarchies and keeps these contract workers from being fairly compensated. If these institutions continue to define ever more specialized roles, whether for dedicated instructors, publication officers, or research communicators, they’ll have to reconsider how best to acknowledge these experts’ contributions so that they feel their skills are appropriately valued.

Sources

[1] Paul Glasziou et al., “Reducing Waste from Incomplete or Unusable Reports of Biomedical Research,” Lancet 383, no. 9913 (January 18, 2014): 267–76, doi:10.1016/S0140-6736(13)62228-X.

[2] Steven Pinker, “Why Academics Stink at Writing,” The Chronicle of Higher Education, September 26, 2014, http://chronicle.com/article/Why-Academics-Writing-Stinks/148989/.

[3] John Ludbrook, “Writing Intelligible English Prose for Biomedical Journals,” Clinical and Experimental Pharmacology & Physiology 34, no. 5–6 (2007): 508–14, doi:10.1111/j.1440-1681.2007.04603.x.

[4] Iva Cheung, “Communication Convergence 2014,” Iva Cheung [blog], October 8, 2014, https://ivacheung.com/2014/10/communication-convergence-2014/.

[5] B.C. Choi and A.W. Pak, “Multidisciplinarity, Interdisciplinarity, and Transdisciplinarity in Health Research, Services, Education and Policy: 1. Definitions, Objectives, and Evidence of Effectiveness,” Clinical and Investigative Medicine 29 (2006): 351–64.

[6] “Author FAQs,” Wiley Open Access, http://www.wileyopenaccess.com/details/content/12f25e4f1aa/Author-FAQs.html.

[7] Katie Moisse, “Ghostbusters: Authors of a New Study Propose a Strict Ban on Medical Ghostwriting,” Scientific American, February 4, 2010, http://www.scientificamerican.com/article/ghostwriter-science-industry/.

[8] “Guide to Knowledge Translation Planning at CIHR: Integrated and End-of-Grant Approaches,” Canadian Institutes of Health Research, Modified June 12, 2012, http://www.cihr-irsc.gc.ca/e/45321.html.

[9] “Most University Undergrads Now Taught by Poorly Paid Part-Timers,” CBC.ca, September 7, 2014, http://www.cbc.ca/news/canada/most-university-undergrads-now-taught-by-poorly-paid-part-timers-1.2756024.

***

This post was adapted from a paper I wrote for one of my courses. I don’t necessarily believe that a technical communication–type workflow is the way to go, but the object of the assignment was to explore a few “what-if” situations, and I thought this topic was close enough to editing and publishing to share here.

Access to information: The role of editors (EAC-BC meeting)

At the November EAC-BC meeting, Shana Johnstone, principal of Uncover Editorial + Design, moderated a panel discussion that offered rich and diverse perspectives on accessibility. (She deftly kept the conversation flowing with thematic questions, so although her words don’t show up much in my summary here, she was critical to the evening’s success.)

Introductions

Panel members included:

The Crane Library, Nygard explained, is named after Charles Crane, who in 1931 became the first deafblind student to attend university in Canada. Over his life he accumulated ten thousand volumes of works in Braille, and when he died, his family donated the collection to the Vancouver Public Library, which then donated it to UBC. Paul Thiele, a visually impaired doctoral student, and his wife, Judith, who was the first blind library student (and later the first blind librarian) in Canada, helped set up the space for the Crane Library, including a Braille card catalogue and Braille spine labels so that students could find materials on their own. Today the Crane Library is part of Access and Diversity at UBC and offers exam accommodations, narration services (it has an eight-booth recording studio to record readings of print materials), and materials in a variety of formats, including PDF, e-text, and Braille.

Gray, who has a background in recreational therapy, used to work with people who had brain injuries, and for her, it was “a trial-and-error process to communicate with them just to do my job,” she said. Through that work she developed communication strategies that take into account not only the language but also formats that will most likely appeal to her audience. To reach a community, Gray said, it’s important to understand its language and conventions. “It’s about getting off on the right foot with people. If you turn people off with a phrase that is outside their community, they stop reading.” It’s also important to know who in a community is doing the reading. In the Down syndrome community, she said, “people are still writing as if the caregivers are the ones reading” even though more people with developmental disability are now reading for themselves.

Booth works with forty-five groups (such as the Writers’ Exchange) that provide literacy support in the Downtown Eastside, which he emphasized is “a neighbourhood, not a pejorative.” He defined literacy as the “knowledge, skills, and confidence to participate fully in life,” and he told us that “There is more stigma around illiteracy than there is around addiction.”

Busting misconceptions

Within the Downtown Eastside, said Booth, there are “multiple populations with multiple challenges and multiple experiences—sometimes bad—with learning.” Residents may be reluctant to get involved with structured educational opportunities, and so they rely on community organizations to reach out to them. The media does the Downtown Eastside a disservice by portraying it as the “poorest postal code in Canada,” said Booth. To him, all of his clients, regardless of their background, bring skills and experience to the table.

Gray agreed, adding that it’s easy to make judgments based on appearance. She knows that her three-year-old son, who has Down syndrome, is taking in more than he’s putting back out. The same holds for people who have had strokes or people with cerebral palsy. Some people may not speak well, but they may read and understand well. She acknowledges that we all bring preconceptions to every interaction, but it’s important to set them aside and ask questions to get to know your audience.

“What do we think of, when we think of a person with a disability?” said Nygard. “Not all disabilities are visible.” People assume that text-to-speech services are just for the visually impaired, but often they are for students with learning disabilities who prefer human voice narration. The students who use the Crane Library’s services are simply university students who need a little more support to be able to do certain academic activities. They are people with access to resources and technology that will help them get a university education.

People also assume that technology has solved the accessibility problem. Although a lot of accessibility features are now built into our technology, like VoiceOver for Macs and Ease of Access on Windows, computers aren’t the answer for everyone. For some people, technology hasn’t obviated the need for Braille.

Their work: the specifics

Gray said that although she works primarily with print materials, she’s started writing as though the text were destined for the web. “I’m no longer assuming that people are reading entire chunks of material. I’m not assuming they’re following along from beginning to end or reading the whole thing. I’m using a lot more headings to break up the material and am continually giving people context. I’m not assuming people remember the topic, so I’m constantly reintroducing it.” People with Down syndrome have poor short-term memory, she said, so she never assumes that a reader will refer to earlier text where a concept was first introduced. “Don’t dumb it down,” she said, “but use plain language. Keep it simple and to the point.” Some writers enjoy adding variety to their writing to spice things up, she said. “Take the spice out. Keep to the facts.”

That said, editors also have to keep in mind that when people read, they’re not just absorbing facts; they’re approaching the material with a host of emotions. For people who have children with Down syndrome, she said, “everything they’re reading is judging them as a parent.”

“We don’t know where people are at and where their heads are when they’re taking the materials in,” Gray said.

To connect with the audience, said Booth, listening is a vital skill to develop. “Storytelling is a really important art form. Everybody has a story, and everybody will tell you their story if you give them the opportunity.”

Nygard compares her work to directing traffic—making sure resources flow to the people who need them. She explained the process of creating alternate formats: students have to buy a new textbook and give Nygard the receipt, at which point she can request a PDF from the publisher. But is it fair, she asked, to make these students buy the book at full price when their classmates can get a used copy for a discount? Another inequity is in the license agreements: they often allow students to use the PDF for the duration of the course only, whereas other students can keep their books for future reference. Image-only or locked PDFs are problematic because text-to-speech software like JAWS can’t read them.

For books that exist only in print, the conversion process involves cutting out the pages and manually scanning them to PDF, then running them through an OCR program to create a rough Word document. These documents then get sent to student assistants who clean them up for text-to-speech software. Otherwise, columns, running heads, footnotes, and other design features can lead to confusing results. We get a lot of context from the way text is laid out and organized on the page, said Nygard, but that context is lost when the text is read aloud.
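The cleanup step Nygard described, stripping out running heads and rejoining words broken across lines so that text-to-speech software reads the prose in order, can be sketched in a few lines of Python. This is only a hypothetical illustration of the idea, not the Crane Library’s actual workflow; the `clean_ocr_text` function and its heuristics are my own assumptions:

```python
import re
from collections import Counter

def clean_ocr_text(pages):
    """Prepare raw OCR output for text-to-speech.

    `pages` is a list of strings, one per OCR'd page. Lines that repeat
    at the top of most pages are treated as running heads and removed,
    and words hyphenated across line breaks are rejoined.
    """
    # A line that opens more than half the pages is likely a running head.
    first_lines = Counter(
        p.splitlines()[0].strip() for p in pages if p.strip()
    )
    running_heads = {
        line for line, n in first_lines.items() if n > len(pages) // 2
    }

    cleaned = []
    for page in pages:
        lines = [l.strip() for l in page.splitlines() if l.strip()]
        lines = [l for l in lines if l not in running_heads]
        text = " ".join(lines)
        # Rejoin hyphenated line breaks: "conver- sion" -> "conversion"
        text = re.sub(r"(\w)- (\w)", r"\1\2", text)
        cleaned.append(text)
    return " ".join(cleaned)
```

A real cleanup pass would also have to handle columns, footnotes, and captions, which is why, as Nygard noted, human assistants still do this work.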

Editors as advocates

Gray said she’d never considered herself an advocate per se. “I do think it’s part of my role to advise clients about the level of content and the way it’s presented. We need to make sure we can reach the audience.”

When we make decisions, said Nygard, we have to look out for people in the margins that we might not be addressing.

Booth said, “We’re all very privileged in this room. We have a responsibility to be advocates. Our tool is language.” As he spoke he passed out copies of the Decoda Literacy Manifesto to each member of the audience.

Resources on accessibility

Nygard suggested we check out the Accessibility for Ontarians with Disabilities Act; Ontario has been a leader in this arena. She also mentioned the National Network for Equitable Library Service (NNELS), which allows collection sharing among libraries. Many public libraries don’t know about the Crane Library’s services, because it’s at an academic institution, but its collection is available to the general public. The NNELS site also has a section of tutorials for creating alternate-format materials. SNOW, the Inclusive Design Centre at OCAD, also has some excellent resources.

Compared with Ontario, said Nygard, BC lags behind in its commitment to accessibility. The BC government released Accessibility 2024, a ten-year plan to make the province the most progressive within Canada. But both Nygard and Booth called it “embarrassing.” “How they’ve set their priorities is a horror show,” said Nygard. One of the benchmarks for success in this accessibility plan, for example, is to have government websites be accessible by 2016, without addressing whether people with disabilities have the skills, literacy, or access to technology to use that information. Meanwhile, disability assistance rates haven’t gone up since 2007.

Booth agreed. The province has cut funding for high-school equivalency programs (GED), ESL, literacy, and adult basic education, choosing instead to focus on “job creation in extractive industries and training people to do specific jobs. What’s going to happen in a decade from now for people who don’t have education?”

In response to a question from the audience, Nygard acknowledged that Project Gutenberg and Project Gutenberg Canada are great sources of accessible texts of works in the public domain. She also mentioned that LibriVox offers public domain audiobooks.

Stefan Dollinger—Forks in the road: Dictionaries and the radically changing English-language ecosystem (EAC-BC meeting)

Stefan Dollinger, a faculty member in the English and linguistics departments at the University of British Columbia, is editor-in-chief of the Dictionary of Canadianisms on Historical Principles (DCHP), and he spoke to the EAC-BC crowd about the role of dictionaries in the global English landscape.

His fascinating talk covered some of the same territory that I wrote about when I first saw him speak last year, so I’ll focus on his new content here.

English, said Dollinger, is unique in that it is the only language in the world with more second-language speakers than native speakers, the former outnumbering the latter by five to one. This ratio will only grow as more people in China, Russia, continental Europe, and South America use English for trade and diplomacy. Until recently, the study of English—particularly for dictionaries—had focused on native speakers, but scholars such as Barbara Seidlhofer, of the University of Vienna, have argued that English as a lingua franca (ELF) is the “real” English.

This shifting view influences how we approach dictionary making, which has generally used one of two methods:

  • In the literary tradition, lexicographers collect works from the best authors and compile excerpts showing usage.
  • In the linguistic method, lexicographers empirically study language users.

One of the best examples of dictionaries compiled using the linguistic method is the Dictionary of American Regional English (DARE), which Dollinger said is based on superb empirical data, including historical sources as well as a national survey of about three thousand users. The dictionary includes only “non-standard” regional words that are not used nationally in the United States and hence isn’t a comprehensive compilation of English words, but for researchers like Dollinger, the detail on regional, social, and historical uses is more important than the number of entries.

In contrast, the first edition of the Oxford English Dictionary (OED) used the literary tradition, and, as the preface to the third edition admits,

The Dictionary has in the past been criticized for its apparent reliance on literary texts to illustrate the development of the vocabulary of English over the centuries. A closer examination of earlier editions shows that this view has been overstated, though it is not entirely without foundation.

Although the OED has become more linguistic in its methodology, residues of the literary tradition persist: Dollinger said that about 50 percent of the entries in the current edition, OED-3, are unchanged from the original edition, and although the OED employs a New Word Unit, a group of lexicographers who read content on the web and compile new words and senses, such a reading program is still not empirical and will fail to capture the usage of everyday speakers.

Going completely online, however, has allowed the OED to respond more nimbly to changes in the language: corrections to existing entries can now be made immediately, and the dictionary issues quarterly updates, adding a few hundred new words, phrases, and senses each time.

Dollinger feels that if the OED wants to keep claiming to be the “definitive record of the English language,” though, it will have to reorient its approach to include more fieldwork to study linguistic variation across the globe, focusing not only on what linguist Braj Kachru defined as the “inner circle,” where the majority of people are native English speakers (e.g., the U.S., U.K., Canada, Australia, New Zealand) but also on the “outer circle” of former British colonies like India, Singapore, etc., and especially on the “expanding circle” of countries, like Russia and China, with no historical ties to England—not to mention English-based pidgins and creoles. Although some native speakers may consider this shift threatening, Dollinger quoted H.G. Widdowson, who in 1993 wrote:

How English develops in the world is no business whatever of native speakers in England, the United States, or anywhere else. They have no say in the matter, no right to intervene or pass judgement. They are irrelevant. The very fact that English is an international language means that no nation can have custody over it. To grant such custody of the language is necessarily to arrest its development and so undermine its international status.

How, then, do lexicographers distinguish innovations from errors? World Englishes are replete with words that are unfamiliar to the native speaker, like

  • stingko, meaning “smelly” in Singapore English;
  • teacheress, a female teacher, in Indian English;
  • peelhead, a bald-headed person, in Jamaican English; or
  • high hat, a snob, in Philippine English.

Whether these are right depends only on the variety of English in question. Linguist Ayo Bamgbose suggested using the following criteria to judge whether a word or phrase is an error or innovation:

  • The demographic factor: How many acrolectal speakers speak it?
  • The geographical factor: Where is it used?
  • The authoritative factor: Who sanctions its use?
  • Codification: Does it appear in dictionaries and reference books?
  • The acceptability factor: What are the attitudes of users and non-users toward the word?

Dollinger is applying some of these principles to his work on the DCHP, the first edition of which (now known as DCHP-1) began as a bit of a pet project for American lexicographer Charles Lovell. As a researcher for A Dictionary of Americanisms, published in 1951, Lovell began collecting Canadianisms. In 1958, Gage Educational Publishing asked Lovell to compile a dictionary for the Canadian Linguistic Association. After Lovell’s sudden death in 1960, Gage approached Walter S. Avis, known as “the pioneer of the study of Canadian English,” and Matthew H. Scargill to continue his work. Together they finished and edited the dictionary and published it in 1967. That dictionary became the basis of Gage’s Canadian dictionary.

The 1990s saw a “Canadian Dictionary War,” with too many publishers—Gage Canadian, ITP Nelson, and the Canadian Oxford—competing in one market. Backed by a fierce marketing campaign, the Canadian Oxford won out.

In March 2006, Dollinger became editor-in-chief of the second edition of the Dictionary of Canadianisms on Historical Principles (DCHP-2), with Nelson Education providing seed funding. In 2013, DCHP-1 was released online, and Dollinger expects DCHP-2 to be complete in early 2016. Owing to time constraints, some entries from DCHP-1, which dug deep into the history of the fur trade for much of its content, will persist in DCHP-2, but these will be clearly marked as being from the original edition and annotated if necessary.

In compiling DCHP-2, Dollinger has noticed that some terms show considerable regional variation, and he wonders whether we should be drawing national isoglosses at all, given that the U.S. and Canada share the world’s longest undefended border. As an example, he showed that whereas Western Canadians prefer the term “running shoes” or “runners,” those in Eastern Canada prefer “sneakers,” which mirrors the regional variation across the northern United States. He also noted that these kinds of variations would be much harder to identify through the literary method of dictionary making.

Another interesting feature of the entries in DCHP-2 is that 70 percent of the entries are compound nouns. “Butter isn’t uniquely Canadian, tart isn’t Canadian, but butter tart is,” said Dollinger. “Cube isn’t Canadian, and van isn’t Canadian, but cube van is.”

Dollinger wondered too if it was time for lexicographers to get even more granular and consider the variation within regional Englishes. In what ways, for example, might English spoken by a Chinese Canadian be unique?

As part of his research, Dollinger is asking British Columbians to complete a twenty-minute survey to help him and his students understand how they use English.