Inspired by Yvonne Van Ruskenveld
Author: Iva Cheung
Accessible documents for people with print disabilities
In prepping a PubPro 2015 talk about editorial and production considerations when creating accessible documents, I ran into information about both the Centre for Equitable Library Access (CELA) and the National Network for Equitable Library Service (NNELS). Confused about the differences between them, I emailed NNELS for clarification, and librarian Sabina Iseli-Otto wrote back: “Would it be alright to call you? I know it’s getting late in the day but 5 minutes on the phone would save 20 minutes of typing (seriously).”
That five-minute chat turned into an impromptu phone interview, and Iseli-Otto gave me permission to share with you what I learned. (Most of the information in this post came from her, but I’m also including a bit of what I found through my own research for my talk.)
Print disabilities and copyright
Print disabilities include:
- blindness or visual impairments,
- physical impairments that prevent a person from holding or manipulating print materials, and
- cognitive impairments, like ADHD, dyslexia, or learning or memory problems due to a brain injury, that impede reading and understanding.
Although colourblindness isn’t considered a print disability, documents should be created with colourblindness in mind.
About 10 percent (a conservative estimate) of Canadians have a print disability, but only about 5 percent of published works are accessible. Most people with print disabilities aren’t using public libraries.
Section 32(1) of Canada’s Copyright Act spells out an exception to copyright that lets people with print disabilities, and those acting on their behalf, create and use alternate formats of copyrighted print materials (with the exception of large-print books and commercially available titles).
Accessible formats
The following are some of the accessible formats for people with print disabilities:
- E-text: plain text (.txt), rich text (.rtf), Word (.docx)
- EPUB 2 & 3
- Accessible PDFs
- DAISY
- MP3s
- Large print
- Braille
E-text, EPUB, and accessible PDFs can be read by screen readers such as JAWS and VoiceOver. Not all PDFs are accessible—Adobe offers a way to check a document’s accessibility and has guidelines for creating accessible PDFs.
CELA
CELA formed about a year ago following a change to the funding structure at CNIB (formerly the Canadian National Institute for the Blind). CNIB had, over the past hundred years, amassed Canada’s largest collection of alternate-format books in its library, and CELA, with the support of the Canadian Urban Libraries Council, took over administering this collection. The CNIB library still offers services to existing clients but will refer new clients to their local public library to access CELA’s services.
The shift of oversight from CNIB to CELA will hopefully allow more people to discover and use this extensive collection. Although it was always available to everyone with print disabilities, given that it was under the purview of CNIB, people who didn’t have visual impairments may not have realized that they could access it.
CELA has also partnered with Bookshare, an American online library for people with print disabilities. Rather than owning its content, Bookshare operates on a licensing model, controlling pricing and licensing fees.
NNELS
NNELS is also about a year old, with a lean staff of only four people, and, unlike CELA and Bookshare, is funded exclusively by provincial governments, which gives it more transparency. It has a much smaller collection but owns perpetual rights to everything in it. NNELS takes patron requests and works directly with publishers to add to its collection. Nova Scotia helped negotiate a fixed rate for NNELS with publishers in the Atlantic provinces, and Saskatchewan has funded an initiative to create accessible EPUBs for all Saskatchewan books, which will be added to the NNELS collection. Whereas CELA focuses on partnerships with public libraries, NNELS also works with public schools and universities—for example, it has a content-exchange agreement with the Crane Library at UBC.
Recent policy changes relevant to people with print disabilities
Accessibility for Ontarians with Disabilities Act
According to the Accessibility for Ontarians with Disabilities Act (AODA),
Organizations will have to…provide accessible formats and communications supports as quickly as possible and at no additional cost when a person with a disability asks for them.
The law was enacted in 2005, but the regulations for information and communications didn’t come into effect until 2012, when all sectors had to make all emergency procedures and public safety information accessible upon request. For other types of communications, the AODA requirements were phased in beginning in 2013 for the public sector and in 2013 and 2015 for the private and non-profit sectors. (Respectively, I think? The website doesn’t make that bit clear.) If you work with Ontario businesses, you may be called on to provide accessible communications.
The Marrakesh Treaty
The Marrakesh Treaty to Facilitate Access to Published Works by Visually Impaired Persons and Persons with Print Disabilities laid out exceptions to copyright so that signatories could freely import and export accessible content, obviating the need to duplicate efforts to convert works to accessible formats in different countries. Although Canada was instrumental in writing the treaty, it hasn’t ratified or signed it. However, in its 2015 budget, unveiled last week, the Government of Canada announced that it would accede to the treaty, meaning that people with print disabilities could soon have access to a lot more content.
Publishers and accessible content
I asked Sabina Iseli-Otto how publishers can make her job easier.
“We’d prefer to get EPUB files or accessible PDFs directly from the publisher. Actually, I’ve been really pleasantly surprised at how often publishers will say yes when we ask for them. I mean, they can always say no—they’re doing it out of the goodness of their hearts—but it saves public funds if they send us those files directly.”
If a publisher refuses to provide accessible files, the copyright exception still applies, which means that NNELS would still be able to create an accessible format, but it would have to:
- acquire a hard copy,
- scan in the pages,
- run optical character recognition (OCR) on the scans,
- clean up the text file (e.g., deleting running headers and footers), and
- proof the text.
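The cleanup step can be sketched in a few lines of Python. This is purely my own illustration, not NNELS’s actual tooling: it strips lines that recur verbatim at the top or bottom of most pages, which is how running headers and footers typically survive OCR.

```python
from collections import Counter

def strip_running_lines(pages):
    """Remove running headers/footers: lines that recur verbatim at
    the top or bottom of more than half the pages.  (Page numbers
    vary from page to page, so they'd need separate handling—this
    sketch covers only the simplest case.)"""
    def recurring(edge_lines):
        counts = Counter(edge_lines)
        return {line for line, n in counts.items() if n > len(pages) / 2}

    headers = recurring([p.splitlines()[0] for p in pages if p.splitlines()])
    footers = recurring([p.splitlines()[-1] for p in pages if p.splitlines()])

    cleaned = []
    for page in pages:
        lines = page.splitlines()
        if lines and lines[0] in headers:
            lines = lines[1:]
        if lines and lines[-1] in footers:
            lines = lines[:-1]
        cleaned.append("\n".join(lines))
    return cleaned

# Three OCR'd "pages" sharing a running header
pages = [
    "THE GREAT BOOK\nIt was a dark and stormy night.\n1",
    "THE GREAT BOOK\nSuddenly, a shot rang out.\n2",
    "THE GREAT BOOK\nThe maid screamed.\n3",
]
print(strip_running_lines(pages)[0])
```

Even this toy version shows why the cleanup stage takes real editorial time: anything that isn’t perfectly repetitive, like page numbers or chapter-specific headers, needs judgment, not just pattern matching.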
“More than anything,” Iseli-Otto said, “we want to hear back quickly” from publishers, regardless of what they decide.
I asked if the files NNELS provides to patrons have digital rights management (DRM) on them. “No,” she said, “but we make it very clear to them that if they abuse them they’re putting our whole operation in jeopardy. Some of them appreciate having the access so much that they’re actually quite protective of their files.”
Our conversation had focused on books. What about periodicals and grey literature? “There’s certainly demand for it,” said Iseli-Otto. “We’d love to do more of that. And I’d like to turn your question around: what can we do for publishers to make it easier to collaborate with us? I’m not sure how to build those relationships.”
(Can you guess who I’ve invited to PubPro 2016?)
Publishers who’ve been in business for longer than a decade will recognize the steps NNELS has to take to create accessible formats from a print-only book: they’re identical to what publishers have to do if they want to reissue a backlist title that has no retrievable digital files. Could Canadian publishers partner with an organization dedicated to creating accessible formats so that, in exchange for digitizing the backlist for publishers, the organization could add those files to its collection at no additional cost?
Editorial, design, and production considerations for creating accessible files
In my PubPro 2015 talk, I mentioned a few things publishers should keep in mind through the editorial and production process so that the output will be accessible—especially since having to retrofit an existing document to adhere to accessibility standards is more labour intensive and expensive than producing an accessible file from the outset. I focused mostly on the effect of editing and production on screen readers.
Style considerations
Screen readers will not always read all symbols. The Deque Blog has a summary of how three of the most popular screen readers interpret different symbols. (It’s a bit out of date but still a good place to start; thanks to Ashley Bischoff for that link.) Testing with VoiceOver, I found that although the screen reader is smart enough to read “Henry VIII” as “Henry the eighth,” “Chapter VIII” as “chapter eight,” and “World War II” as “World War two,” it reads each letter in “WWII” as if it were an initialism. And it reads “12,000” as “twelve thousand” but “12 000” as “twelve zero zero zero.” I also found that it doesn’t read the en dash before a numeral if the dash is used as a minus sign, saying “thirty-four degrees” for “–34°.” It’s best to use the actual minus sign symbol − (U+2212), which my version of VoiceOver reads as “minus sign.” The same goes for the letter x used in place of the real multiplication sign × (U+00D7). My version of VoiceOver doesn’t read a tilde before a numeral, so ~8 mL would be “eight millilitres” instead of the intended “approximately eight millilitres.”
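One quick way to confirm you’ve used the real symbol rather than a look-alike is to ask for each character’s canonical Unicode name. A minimal sketch using Python’s standard unicodedata module (my own illustration, not part of any screen-reader toolchain):

```python
import unicodedata

# Look-alike pairs: what's often typed vs. what should be used.
# A screen reader that announces symbols will only name the
# correct character "minus sign" or "multiplication sign."
pairs = {
    "\u2013": "\u2212",  # en dash vs. minus sign
    "x": "\u00d7",       # letter x vs. multiplication sign
}

for typed, correct in pairs.items():
    print(f"{typed!r} is {unicodedata.name(typed)}; "
          f"use {correct!r} ({unicodedata.name(correct)})")
```

A find-and-replace pass built on checks like this can catch stray hyphens and x’s before the file ever reaches a screen reader.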
In any case, if you’re editing and deciding between styles, why not choose the most accessible?
Language considerations
Plain language best practices apply here:
- chunk text and use heading styles,
- break up long, complex sentences, and
- aim for a natural, conversational style.
Headings and short chunks of text offer context and digestible content to the listener. Screen readers are actually already quite adept at putting the stress on the right syllables depending on whether a word like reject is used as a verb or a noun—when the word is in a short sentence. They can get confused in longer sentences.
Image considerations
For images:
- Offer alt text—text that is rendered if the image cannot be seen—for substantive images but not decorative ones. (For decorative images, add an alt attribute in the code but leave it blank—i.e., alt=""—or the screen reader will read the filename. You can add alt text directly in InDesign.)
- Don’t use colour as the only way to convey information. Make sure the colours you choose to distinguish between two lines on a graph, say, will not become the same shade of grey when converted to greyscale. Alternatively, use different styles for those lines or label them directly on the graph.
- Don’t turn text into an image to fix its appearance. We often see this practice with equations. Screen readers do not read LaTeX. If you have equations or mathematical expressions, convert them to MathML or offer alt text using the Nemeth MathSpeak system.
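Spot checks for missing alt attributes can be automated. Here’s a minimal sketch using Python’s built-in html.parser (the class and logic are my own, purely illustrative): it flags img tags with no alt attribute at all, while accepting an empty alt="" as the correct marking for a decorative image.

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Flag <img> tags with no alt attribute at all.

    A missing alt attribute makes many screen readers fall back to
    reading the filename; decorative images should instead carry an
    empty alt="", which this checker treats as fine.
    """
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "(no src)"))

checker = AltChecker()
checker.feed('<p><img src="chart.png"><img src="rule.png" alt=""></p>')
print(checker.missing)  # → ['chart.png']
```

Run against an EPUB’s XHTML files, a checker like this won’t tell you whether the alt text is any good—only a human can—but it finds the images that have none.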
In essence, because ebooks are like websites, applying the Web Content Accessibility Guidelines 2.0 will ensure that your ebook will be accessible. The BC Open Textbook Accessibility Toolkit also has useful guidelines for publishers. I would recommend at least spot checking a document with a screen reader to uncover possible ambiguities or sources of misapprehension.
***
Huge thanks to Sabina Iseli-Otto for her eye-opening insights!
Kelly Maxwell—Transcription, captioning, and subtitling (EAC-BC meeting)
Kelly Maxwell gave us a peek into the fascinating world of captioning and subtitling at April’s EAC-BC meeting. Maxwell, along with Carolyn Vetter Hicks, founded Vancouver-based Line 21 Media Services in 1994 to provide captioning, subtitling, and transcription services for movies, television, and digital media.
Not very many people knew what captioning was in the 1980s and ’90s, Maxwell said. But the Americans with Disabilities Act, passed in 1990, required all televisions distributed in the U.S. to have decoders for closed captioning built in, and Canada, as a close trading partner, reaped the benefits. Captioning became ubiquitous and is now a CRTC requirement.
Line 21 works with post-production coordinators—those who see a movie or TV show through editing and colour correction. Captioning is often the last thing that has to be done before these coordinators get paid, so the deadlines are tight. Maxwell and her colleagues may receive a script from the client, in which case they load it into their CaptionMaker software and clean it up, or they may have to do their own transcription using Inqscribe, a simple, free transcription program. They aim to transcribe verbatim, and they rely on Google (in the ’90s, they depended on reference librarians) to fact check and get the correct spelling for everything. Punctuation, too, is very important, and Maxwell uses it to maximize clarity: “People have to understand instantaneously when they see a caption,” she said. “I won’t ever give up the Oxford comma. We’re sticklers for old-fashioned, fairly heavy comma use. It can make a difference to someone understanding on the first pass.” She also edits for reading rate so that people with a range of literacy levels will understand. “Hearing people are the number-one users of captioning,” she said.
Although HD televisions now accommodate a 40-character line, Line 21 continues to caption in 32-character lines. “Captioners like to think of the lowest common denominator,” Maxwell said. They need to consider all of the people who still have older technology. Her company doesn’t do live captioning, which is done by court reporters taking one-hour shifts and is still characterized by a three-line block of all-caps text rolling on the screen. Today the captioning can pop onto the screen and be positioned to show who’s talking. The timing is done by ear but is also timecoded to the frame. Maxwell and her colleagues format captions into readable chunks—for example, whole clauses—to make them comprehensible. Once the captions have all been input, she watches the program the whole way through to make sure nothing has been missed, including descriptions of sound effects or music.
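As a toy illustration of that 32-character constraint—nothing like Line 21’s actual CaptionMaker workflow—Python’s textwrap module can chunk a sentence into caption-width lines:

```python
import textwrap

def caption_lines(text, width=32):
    """Wrap text to caption-width lines on word boundaries.

    Real captioners also break at clause boundaries, balance line
    lengths, and pace captions to the reading rate; this sketch
    handles only the line-length limit.
    """
    return textwrap.wrap(text, width=width)

for line in caption_lines(
    "People have to understand instantaneously when they see a caption."
):
    print(f"{len(line):2d} | {line}")
```

The gap between this naive word-wrapping and captions chunked into whole clauses, positioned to show who’s talking, is exactly the editorial judgment Maxwell describes.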
Subtitling is similar to closed captioning, but in this case, “You assume people can hear.” Maxwell first creates a timed transcript in English and relies on the filmmakers to forge relationships with translators they can trust. Knowing the timelines, translators can match up word counts and create a set of subtitles that line up with the original script. Maxwell then swaps in these subtitles for the English ones and, after proofing the video, sends it back to the translators for a final look. How do you proofread in a language you don’t know? “You can actually do a lot of proofing and find a lot of mistakes just by watching the punctuation,” said Maxwell. “You can hear the periods,” she added. “Sometimes they [translators] change or reorder the lines.”
Before the proliferation of digital video, Maxwell told us, they couldn’t do subtitling, which had to be done directly on the film. Today, they have a massive set of tools at their disposal to do their work. “In the early ’90s,” she said, “there were two kinds of captioning.” In contrast, today “we have 80 different delivery formats,” and each broadcaster has its own requirements for formats and sizes. “People ask me if I’m worried about the ubiquity of the tools,” said Maxwell. “No. Just because I have a pencil doesn’t mean I’m a Picasso.”
As for voice-recognition software, such as YouTube’s automatic captioning feature, Maxwell says it just isn’t sophisticated enough and can produce captions riddled with errors. “You do need a human for captioning, I’m afraid.”
Maxwell prides herself on her company’s focus on providing quality captioning. One of her projects was captioning a four-part choral performance of a mass in Latin. According to CRTC regulations, all she had to do was add musical notes (♪♫), but she wanted to do better. She bought the score and figured out who was singing what.
In another project, she captioned a speech by the Dalai Lama. “Do you change people’s grammar, change people’s words?” The Dalai Lama probably didn’t say some of the articles or some of the verbs (like to be) that appear in the final captions, Maxwell said, but captioners sometimes will make quiet changes to clarify meaning without changing the intent of the message.
Captioning involves “a lot, a lot, a lot of googling,” she said, “and a lot of random problem solving.” She’s well practiced in the “micro-discernment of phonemes.” Sometimes when she’s unable to tell what someone has said, all it takes is to get someone else to listen to it and say what they hear. Over the years, Maxwell and her team have developed tricks like these to help them help their clients reach as wide an audience as possible.
Authoring and delivery platforms for open educational resources (webinar)
The Community College Consortium for Open Educational Resources (CCCOER) hosted a webinar about a few platforms for authoring and delivering open educational resources (OER). CCCOER was founded almost eight years ago to expand access to openly licensed material, support faculty choice, and improve student success. It has more than 250 member colleges in twenty-one states. The organization understands that faculty need user-friendly authoring tools, institutions need to integrate OER into their existing course management infrastructures, and students need to be able to easily search and use OER. Representatives from three OER platforms explained their tools in this webinar. I’ll cover all three, but my focus will be on Clint Lalonde’s presentation about PressBooks Textbook, because it’s the most relevant to publishing in BC. (Slides of the session are on Slideshare.)
Courseload Engage, presented by Etienne Pelaprat, User Experience Director at Courseload Inc.
Courseload is a platform that offers students access to text-based OER, video, audio, journal articles, library content and catalogues, proprietary content, and other uploaded content through a single application that can be integrated into existing learning management systems. Courseload has the flexibility of allowing institutions to curate their own content based on learning objectives, and it manages all of the metadata (including library catalogue data and ONIX feeds). This metadata allows institutions to generate custom catalogues and course packs, and the system tracks content use via analytics that may help institutions optimize discoverability and respond to student demand to improve their learning outcomes.
PressBooks Textbook, presented by Clint Lalonde, Open Education Manager at BCcampus
BCcampus’s Open Textbook Project was launched to provide BC post-secondary students with access to free textbooks in the forty subject areas with the highest enrolment. Rather than start from scratch, said Lalonde, BCcampus wanted to take advantage of existing textbook content already in the commons. The focus would be on adaptation, although they would also create some new content.
For students, the open textbooks had to be free to use and retain and available in several formats. For faculty, they had to be high-quality material that would be easy to find and adapt.
Hugh McGuire had predicted that the book would merge with the web and that books would be created web first; he founded PressBooks with that idea in mind. PressBooks is an open source WordPress plugin that allows authors to write once but output in many different formats, including HTML, EPUB, and PDF.
BCcampus worked with a programmer to customize PressBooks for easy textbook authoring, and the result is the PressBooks Textbook plugin. It works together with Hypothes.is to allow students and faculty to annotate content. Lalonde and his team also added an application programming interface (API) that facilitates searching and sharing with others on different platforms and allows the textbooks to become more than just static content. Unfortunately, Lalonde explained, PressBooks Textbook isn’t fully open source at the moment, because it relies on a proprietary PDF output engine, the license for which institutions would have to pay.
BCcampus’s next steps with this plugin include
- integrating accessibility features via the FLOE Project,
- finding an open source PDF engine to replace Prince XML, and
- expanding the output formats to include Word-compatible ODT files.
Lalonde has blogged about PressBooks Textbook’s architecture.
Open Assembly, presented by founder and CEO Domi Enders
Non-traditional students and adjunct instructors are less likely to be reached by OER initiatives because they may work remotely much of the time and are poorly integrated into an institution. As a result, they have limited access to their peer communities. Domi Enders wanted to develop an open learning system that would not only give users access to OER but also give students and adjunct faculty the continuity and agency they need to remain engaged with their learning and teaching. Open Assembly can be integrated into existing learning management systems and allows users to collaborate in content curation. By offering users a space to meet and create new knowledge, it facilitates peer-to-peer learning in a way that helps remote students and faculty stay connected.
Writing about First Nations (Read Local BC)
As part of the Association of Book Publishers of British Columbia’s Read Local BC campaign, Laraine Coates of UBC Press hosted a panel discussion on writing about First Nations, featuring:
- Paige Raibmon, an associate professor in UBC’s history department and co-author, with Elsie Paul and Harmony Johnson, of Written As I Remember It: Teachings from the Life of a Sliammon Elder;
- Jean Barman, a nationally recognized historian, professor emeritus at UBC, and author of French Canadians, Furs, and Indigenous Women in the Making of the Pacific Northwest; and
- Jennifer Kramer, associate professor of anthropology, Pacific Northwest curator at the Museum of Anthropology, and co-editor of Native Art of the Northwest Coast: A History of Changing Ideas.
After Coates acknowledged that the evening’s event was taking place on unceded Coast Salish territories, she launched into the program by asking each panellist to describe their books.
Written as I Remember It was Elsie Paul’s idea, said Raibmon, and consists primarily of teachings and historical stories from Paul’s life. Paul, one of the last remaining mother-tongue speakers of Sliammon, wanted to create a booklet of teachings to share with her family. Raibmon thought Paul’s stories would interest a wider audience, and they decided to work together, along with Paul’s granddaughter, Harmony Johnson, to turn the booklet into a UBC Press book, which was organized into chapters based on key themes, including grief, education, spirituality, and pregnancy. “All of these stories were told and lived in a completely different language,” said Raibmon. “Elsie has lived a fascinating life, and she has a lot of interesting stories to tell.”
Jean Barman has written about BC history before, but “I’d always acted as if French Canadians didn’t exist in the province,” she said. She wanted to redress this deficiency and find out more about them. “That’s the nice thing about being an academic,” she said. “I get paid to find out!” As she did research for the book, her focus expanded from the French Canadians themselves to the fur trade that brought them to the province and the indigenous women who kept them here.
Jennifer Kramer co-edited Native Art of the Northwest Coast with art historian Charlotte Townsend-Gault and Nuuchaanulth historian Ḳi-ḳe-in. They wanted to challenge the “one monolithic idea of what native Northwest Coast art is”—the red, black, and white ovoids and formlines we so often see. The book unearths 250 years’ worth of commentary about Northwest Coast art from multiple perspectives, beginning chronologically with writings by Captain James Cook and including contemporary native artist–authors, to show the heterogeneity and richness of the region’s artistic past and present.
Coates noted that although the three books are different, they all deal with Aboriginal lives and legacy. She asked the panellists what they learned in their research.
Barman said that although over 90 percent of the men and all of the women she researched for the book were illiterate, she could still find traces of them in fur trade records or in the work of other people who had written about them. Barman looked at the relationships Aboriginal groups forged with the newcomers—particularly the way indigenous men encouraged their daughters to interact with the fur traders so that they could get access to trade goods—as well as the motivations French Canadian men had to stay rather than return to Quebec.
Raibmon said that unlike Barman’s project, hers “came with a workaround of the problem of finding traces.” Elsie Paul invited Raibmon to pull together audio material to create a book and allowed her to learn from the inside out, interconnecting teachings with history.
Kramer’s goal with her book was to consciously and actively address the problem that the majority of writing about Northwest Coast art has been by non-native authors. She wanted to bring in as many voices as possible to undermine the narratives repeated by Western, non-Aboriginal authors. “As an anthropologist, my number-one concern is, ‘Who am I to write about someone who isn’t me?’ We have this chronic problem or paradox: museums represent people who want to represent themselves. How do we get around that power imbalance?”
Kramer described the critical shift in the 1990s toward reflexivity, making the research process open to reflection and collaboration. “First Nations don’t have just one perspective, either,” said Kramer. “They’ll have many opinions. There’s no one way to write this. It’s not about correcting an incorrect history—it’s about acknowledging all the ways of knowing.” Kramer saw the draft of the book as a living, breathing archive, and she expressed apprehension about taking it to press and fixing it to a page. “It might have been better as an online blog, like Wikipedia, with many people engaging. We’re in this engagement together, and we’re co-creating these products of representation.” She also mentioned the discomfort that some of the artists felt, having the huge responsibility of representing not only their own artwork but also their culture, by extension.
Raibmon’s experience uncovered a bit of that tension as well. “Elsie did not get permission from the Sliammon people to write the book. She didn’t want to be seen as taking authority or speaking for her community.” She added that the university set up procedures requiring researchers who work with First Nations communities to get band approval, but “that’s not always appropriate. Elsie found it offensive that UBC wanted to get band council agreement so that she could tell her story.”
As a historian, said Barman, “I carefully document where all the bits and pieces come from so that others can add to them or challenge them.” She wants to make it clear that she’s telling a story, not the story, and there will always be pieces that are right to some and wrong to others. But if we don’t risk criticism and put our work out there, we’ll never learn, and our knowledge will never grow. “You’re doing something, but at least you’re doing something.”
Barman described a perennial difficulty that comes with historical research and writing: what to do about names. “What do we mean by the Northwest Coast?” To Americans, it includes Alaska and Washington but sometimes also Oregon and northern California. “What do you do before we had borders? What was something named in the past, and how have names changed? These issues can get you into conflict.”
Kramer agreed that names carry a lot of weight, and people can react strongly to them. She wanted her book to take an unconventional look at Northwest Coast art, which would naturally entail unconventional names and terms, yet still be discoverable to people using more familiar search terms. “That one would be accused of cultural appropriation is always a fear,” she said. Many First Nations groups have a very real fear of theft, given the historical theft of their land, their children, their sovereignty. But she had to grapple with the reality that no one member of the community could tell her that what she was doing was acceptable or give her a blank cheque. “You have to know you’re doing it with a good heart, that your intentions are clean.”
Kramer asked Raibmon if she had a voice in her book or if she felt as though she had to keep quiet and let Paul take the lead. The approach to narrative was different from her usual approaches, said Raibmon, but “the goal was to get Elsie’s voice on the page.” She still made a historical argument, but in an engaging way that foregrounds Paul’s voice. “I hope people who read the book will still see the historical connections, the connecting themes.” She added that she didn’t consider herself to be the historian and Paul to be her subject. “We were two historians working together, from different historical traditions. Personally I didn’t feel any tension from letting Elsie decide what topics would go in.”
“I didn’t actually understand why certain topics were off limits,” Raibmon continued. “Why are certain stories so important? There were chapters that were super important to them, but I didn’t understand it at the time. I learned how long it can take to let go of our assumptions that block our understanding… I understand now. But if my authority had trumped Elsie’s, I wouldn’t even have remembered the question, let alone learned what I’ve learned.
“Elsie had stories of other families, but she didn’t feel that was appropriate to have in the book. She didn’t want to assume the stories would offend them. Cultural difference is understanding human difference.”
Writers on editors: an evening of eavesdropping (EAC-BC meeting)
What do writers really think of editors? Journalist and editor Jenny Lee moderated a discussion on that topic with authors Margo Bates and Daniel Francis at last week’s EAC-BC meeting. Bates, self-published author of P.S. Don’t Tell Your Mother and The Queen of a Gated Community, is president of the Vancouver branch of the Canadian Authors Association. Francis is a columnist for Geist magazine and a prolific author of two dozen books, including the Encyclopedia of British Columbia and the Connections Canada social studies textbook.
Francis told us that in the 1980s, he’d had one of his books published by a major Toronto-based publisher, who asked him about his next project. Francis pitched the concept for what became Imaginary Indian: the image of the Indian in Canadian culture back to 1850. His Toronto publisher turned it down, concerned about appropriation of voice. “I took the idea to friends in Vancouver,” said Francis, “and in some ways it’s my most successful book.” He learned from the experience that he’d rather work with smaller publishers close to home, many of which were run by people he considered friends. He thought his book with the larger publisher would be the ticket, but it was among his worst-selling titles, and he was particularly dismayed that the editor didn’t seem to have paid much attention to his text. “To me, this is a collaborative process, working with an editor,” said Francis. “I’m aware that I’m no genius and that this is not a work of genius,” but his editor “barely even read the thing.” He found the necessary depth in editing when he worked with his friends at smaller presses. “Friends can be frank,” Francis said.
Bates, whose P.S. Don’t Tell Your Mother has sold more than 7,500 copies, became familiar with how much editors can do when she hired them through her work in public relations. For her own writing, Bates knew she could take care of most of the copy editing and proofreading but wanted an objective but understanding professional who would advise her about structure and subject matter. She looked for someone who would tighten up her book and make it saleable. “I’m not that smart a writer that I can go without help,” she said. “I wouldn’t do anything without an editor.” In fact, she allocated the largest portion of her publishing budget to editing. After speaking with several candidates, Bates selected an editor who understood the social context of her book and helped her “tell the story of prejudice in a humorous way.”
Frances Peck mentioned an article she read about a possible future where self-publishers would have editors’ imprints on their books—in other words, editors’ reputations would lend marketability to a book. “Is that a dream?” she asked. “The sooner, the better, as far as I’m concerned,” Bates said. “There’s a lot of crap out there,” she added, referring to story lines, point of view, grammar, spelling and other dimensions of writing that an editor could help authors improve.
What sets good editors apart from the rest? Francis said that he most appreciates those who have good judgment about when to correct something and when to query. Some strategies for querying suggested by the audience include referring often to the reader (“Will your reader understand?”) and referring to the text as something separate from the author (i.e., using “it says on page 26” rather than “you say on page 26”). Bates said that she really appreciated when her editor expressed genuine enthusiasm for her story. Her editor had told her, “I’m rooting for the characters, and so are your fans.”
Lee asked whether the popular strategy of the sandwich—beginning and ending an editorial letter with compliments, with the potentially ego-deflating critique in the middle—was effective. Francis said, “I hope I’m beyond the need for coddling. I guess you have to know who you’re dealing with, when you’re an editor.” Some editors in the room said that the sandwich is a reliable template for corresponding with someone with whom you haven’t yet established trust. We have to be encouraging as well as critical.
Both Bates and Francis urged editors to stop beating around the bush. Francis said, “You get insulted all the time as a textbook writer. You have to grow a pretty thick skin.” That said, Francis wasn’t a big fan of the textbook process of editing by committee and said it’s one reason he stopped writing textbooks. In addition to producing a coherent text, the textbook’s author and editors had to adhere to strict representation guidelines (e.g., the balance of males to females depicted in photographs had to be exactly 1:1).
Lee asked the two authors how they found their editors. Francis said that his publishers always assign his editors, and “I get the editor that I get.” So far his editors have worked out for him, but if he’d had any profound differences, he’d have approached the publisher about it or, in extreme cases, parted ways with the publisher.
Bates said that for self-published authors, the onus is on them to do their research and look at publications an editor has previously worked on. “There will always be inexperienced writers who don’t see the need for editors,” she said, but at meetings of the Federation of BC Writers and the Canadian Authors Association, she always advocates that authors get an editor. Bates suggested that the Editors’ Association of Canada forge closer ties with writers’ organizations so that we could readily educate authors about what editors do.
Open textbooks and the BC Open Textbook Accessibility Toolkit (webinar)
In fall 2012, the BC Open Textbook Project was launched to reduce the financial burden on post-secondary students, who spend an average of $1,200 per year on textbooks. As part of Open Education Week, BCcampus hosted a webinar about the project as well as the associated BC Open Textbook Accessibility Toolkit, created to help people who develop learning resources to make them as accessible as possible from the outset.
Open Textbook Project (presented by Amanda Coolidge)
In 2012, the BC Open Textbook Project received a grant of $1 million to develop open textbooks for the top-forty enrolled subject areas. It received another $1 million in 2014 to create resources for skills and trades training. BC has now committed to working together with Alberta and Saskatchewan to develop and share open textbooks.
Many people think open textbooks are e-textbooks, but what makes them open is their Creative Commons (CC) license: they can be copied, modified, and redistributed for no charge. Instructors can therefore change open textbooks to suit their courses, and students are able to get these books for free. In two years the project has saved more than five thousand students over $700,000 in textbook costs.
BCcampus carried out the Open Textbook Project in three phases:
- First, they collected existing textbooks with CC licenses and asked faculty to review them.
- Second, they modified these books based on faculty reviews. At the end of this process, they had covered thirty-six of the top-forty subject areas.
- Finally, they funded the creation of four textbooks from scratch.
Open textbooks are now being used in fourteen post-secondary institutions across the province, and BCcampus has eighty-one textbooks in its collection. To create these materials, they use Pressbooks, a WordPress plugin that lets you write once and publish to many different formats.
Accessibility testing (presented by Tara Robertson)
Tara Robertson helps run CAPER-BC, which provides alternate formats of learning materials to twenty institutions across the province. They specialize in accommodations, including remediating textbooks for people with print disabilities. One reason the Open Textbook Project is exciting, said Robertson, is that instead of taking something broken and fixing it, she now has the opportunity to make the textbooks accessible from the start.
Seven students with special needs volunteered to test the open textbook resources for accessibility, reading selected chapters from textbooks in five subject areas and offering feedback on their usability. Robertson also ran a focus group with five students. She found recruiting testers challenging, and she acknowledged that the students who participated in the focus group, all of whom had visual impairments, were not representative of the many students who have other print disabilities. Still, the testers offered a lot of constructive feedback.
The chapters the students reviewed each had features that might interfere with assistive technology like text-to-speech software: formatted poetry, tables, images, quizzes, and so on. Testing revealed that the software would skip over embedded YouTube videos, so the textbooks would have to include URLs; formatted poems were problematic when enlarged because readers would have to scroll to read each line; and layout sometimes led to a confused reading order.
Robertson sees the accessibility consultation with students as an ongoing process to refine accessibility best practices.
BC Open Textbook Accessibility Toolkit (presented by Sue Doner)
BCcampus has just launched an accessibility toolkit for faculty, content creators, instructional designers, and others who “don’t know what they don’t know about accessible design.” Their aim is to build faculty capacity for universal design and to highlight the distinctions between accommodations and accessibility. Accommodations involve individualizing resources and providing alternative learning options for students who identify as having a disability. If we were proactive about creating materials that were accessible from day one, we’d have no need for accommodations.
Universal design recognizes that different students learn differently—some prefer visual materials, whereas others prefer text, for example. It offers students multiple access points to the content, and it’s better for all students, not just those who register with their disability resource centre. For example, aging students may appreciate being able to enlarge text, and international students may benefit from captions to visual material.
The toolkit offers plain language guidelines for creating different types of textbook content with a student-centred focus, using user personas to inform key design concepts and best practices. It asks content developers to think about what assumptions they’re making of the end users and how those assumptions might affect the way they present the material.
It might take a bit of time for creators of some types of content to catch up with all accessibility features—for example, video and audio should, as a rule, come with transcripts, but a lot of YouTube content doesn’t, and you may run into copyright issues if you try to offer material in different formats.
The next steps for BCcampus are to incorporate the toolkit into the development process for all new open textbooks they create, to modify existing textbooks for accessibility, and to encourage the province’s post-secondary community to formally adopt these guidelines. The toolkit, like the open textbooks, is available under a CC license and can be thought of as a living document that will change and grow as different types of content (e.g., math) become amenable to accessible design.
Doner sees these steps as “an opportunity to create a community of practice—a new literacy skill.”
***
This webinar (along with others offered during Open Education Week) is archived on the BCcampus site.
Lorna Fadden—Language Detectives II (EAC-BC meeting)
After speaking at a well-attended EAC-BC meeting in 2012, forensic linguist Lorna Fadden returned to the stage last week for a highly anticipated follow-up. “I hope I don’t disappoint you,” she said. “You know when a sequel comes out, and it sucks?”
With an opening like that, Fadden had no cause for concern.
Fadden lectures in the department of linguistics at SFU, where she studies sociolinguistics and discourse analysis, as well as First Nations languages. She also runs a consulting practice in forensic discourse analysis, examining language evidence for investigations or trials in criminal and civil cases. These cases may involve hate speech, defamation, bribery, internet luring, plagiarism, and extortion, among other types of language-related crimes. She analyzes both linguistic form—grammatical structure, word choice, and prosodics—and linguistic function—meaning, social context, and pragmatics.
Fadden presented a historical case in which forensic linguistics played a starring role: in 1989, the Exxon Valdez, captained by Joseph Hazelwood, struck Prince William Sound’s Bligh Reef and spilled its crude oil cargo, resulting in one of the worst environmental disasters in history. There was wide speculation that Hazelwood was intoxicated at the time, and forensic linguists analyzed his recorded exchanges with the Coast Guard to find evidence that he was impaired.
Alcohol depresses the central nervous system, Fadden explained, impairing coordination, reflexes, and nerve transmission—basically everything you use when you talk. It also impedes your ability to recall words, as well as your ability to utter words in the correct sequence. Intoxication leads to misarticulation of certain speech segments: r and l sounds can become blended, and s and ts sounds can be palatalized to become sh. Suprasegmental effects of intoxication include slower speech, lower mean pitch, a wider pitch range, vowel lengthening, and the lengthening of consonants in unstressed syllables.
According to the Coast Guard’s recordings, Hazelwood’s speech had all of these characteristics 1 hour before, immediately after, and 1 hour after the Exxon Valdez ran aground but was normal 33 hours before and 9 hours after the accident.
At the time, forensic linguistics as a social science was relatively new. Hazelwood’s trial was the first time this kind of evidence was used in court, but because no witnesses could remember seeing Hazelwood drink and the jury may have been uncomfortable with this means of demonstrating drunkenness, he was acquitted.
Fadden then told us about some of her cases, one of which involved a series of menacing and highly critical letters being sent to a large company’s board of directors. These letters were sent anonymously, but the writer claimed to be a member of the company’s front-line staff or a mid-level manager. The language in the letters, accusing the directors of having “zero business acumen” and referring to the company’s “value proposition,” as well as referring to “our managers”—unlikely for a low-ranking staff member to do—betrayed the writer’s higher rank. With Fadden’s help, the investigation uncovered that the writer was a high-ranking executive who’d been fired, and he was sent a cease-and-desist letter.
In another case, the mother in a custody dispute received a series of letters, supposedly from her kids, telling her they wanted nothing to do with her. Fadden’s role was to determine whether the children genuinely wrote the letters themselves. One letter, in her 7-year-old’s handwriting, mentioned that the kids did not “fully trust” their mother and agreed that they would spend time with her only on supervised visits. “Kids that age don’t use adverbs like ‘fully,’” said Fadden, and she doesn’t believe that kids have the meta-awareness implied by the letter. Occasionally we write something addressed to one person, knowing it will have a larger audience. In this case, the letters were written in a style that suggested the writer realized that others—lawyers, psychologists, and so on—may read them. Fadden’s analysis, along with a social worker’s assessment and psychologists’ assessments, led to a favourable outcome for the mother, who’d been accused of nefarious things that hadn’t been proven. “You have to be careful asking kids questions, because the questions we ask them often already suggest the answers,” said Fadden. “We rarely ask children information-seeking questions.”
Fadden’s third case was a more complex one: a woman had accused a man of drugging and sexually assaulting her, but eyewitness accounts, video surveillance, and toxicology suggested that her allegation was false. She faced a charge of public mischief, but she claimed she didn’t understand what happened during the police interview. Fadden had to assess whether she was legally competent by comparing her linguistic performance with what we’d expect from a native speaker in the same context. In her doctoral dissertation, Fadden had characterized a series of police interviews of first-time suspects, so she had a robust set of measures as benchmarks.
Cognitive deficiency is correlated with a slow speech rate, but the suspect had a relatively high speech rate, and it didn’t drop significantly from the beginning of the interview to the end (so not much of a fatigue effect). Fadden also looked at her turn latency (how much time elapses between the end of the interviewer’s question and her answer) and her pause ratio (how much she pauses compared with how much she speaks). All of these temporal elements were within normal ranges; nothing suggested that she was incompetent.
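The temporal measures Fadden describes are easy to picture as simple calculations over timestamped turns. Here is a minimal sketch, assuming a hypothetical transcript format with start and end times in seconds; the data and function names are invented for illustration, not her actual method or tooling:

```python
# Sketch of two temporal measures: turn latency (gap between the end of the
# interviewer's question and the start of the answer) and pause ratio
# (pausing time relative to speaking time). Transcript format is hypothetical.

def turn_latencies(turns):
    """Latency before each suspect answer, in seconds (rounded)."""
    latencies = []
    # Assumes strictly alternating interviewer/suspect turns.
    for question, answer in zip(turns[::2], turns[1::2]):
        latencies.append(round(answer["start"] - question["end"], 2))
    return latencies

def pause_ratio(speech_seconds, pause_seconds):
    """How much the speaker pauses compared with how much she speaks."""
    return pause_seconds / speech_seconds

# Invented example data: alternating turns with start/end times in seconds.
turns = [
    {"speaker": "interviewer", "start": 0.0,  "end": 4.2},
    {"speaker": "suspect",     "start": 4.9,  "end": 12.0},
    {"speaker": "interviewer", "start": 12.5, "end": 15.0},
    {"speaker": "suspect",     "start": 15.4, "end": 30.1},
]

print(turn_latencies(turns))   # → [0.7, 0.4]
print(pause_ratio(21.8, 3.5))  # a low ratio: mostly speech, little pausing
```

In an analysis like the one described, values such as these would be compared against benchmarks from comparable interviews (as Fadden did with the police-interview data from her dissertation) rather than interpreted in isolation.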
A stronger indicator of the suspect’s competence was in the way she manipulated specificity associated with details. Take, for instance, “this talk,” from most generic to most specific:
- Fadden’s giving a talk on Wednesday (type identifiable, not specific)
- There’s this talk on forensic linguistics on Wednesday (referential)
- The talk on forensic linguistics will be on Wednesday (uniquely identifiable)
- This/that talk on forensic linguistics is on Wednesday (familiar)
- I’ll be at that on Wednesday (activated)
- It’s on Wednesday (in focus—you don’t even have to name it)
(Fadden made sure we noted the distinction in specificity between “this talk” and “this talk.”) Through 2 hours of interviews, the suspect was adept at adjusting the level of specificity based on context, using generic language to describe what she claimed to have witnessed when she allegedly found herself in an unfamiliar environment but specific language when talking about details that the police officer had told her. Fadden concluded that she had normal cognitive status. The suspect eventually confessed to fabricating the story because she didn’t want her husband to find out she’d willingly slept with another man.
To end the evening Fadden challenged us to an exercise of authorship analysis. She gave us two writing samples from different blogs with similar topics and writing styles. We had to figure out who’d authored a third sample. From a superficial reading, most people in the room guessed that the first blogger was responsible, but Fadden showed that by comparing features like
- the number of words per sentence,
- the length of words,
- the use of adjectives and adverbs,
- the use of parentheticals,
- the use of discourse markers,
- the use of conjoined phrases, and
- the use of independent clauses,
the second blogger was in fact the likely author. Authorship analysis is a contentious field now because its effectiveness and accuracy aren’t completely understood, and there’s no standard method for carrying it out. As a result, it’s not admissible in court. But, like a polygraph, authorship analysis may help steer the direction of an investigation.
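To make the exercise concrete, here is a minimal sketch of the kind of surface-feature comparison involved, using just two of the features listed above (words per sentence and mean word length). The text samples, the distance measure, and the decision rule are all invented for illustration; real authorship analysis, as the post notes, has no standard method:

```python
# Toy stylometric comparison: extract surface features from two known samples
# and one disputed sample, then see which known author the disputed text
# patterns with. Samples and distance metric are invented for illustration.
import re

def features(text):
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "words_per_sentence": len(words) / len(sentences),
        "mean_word_length": sum(len(w) for w in words) / len(words),
    }

def distance(f, g):
    # Simple sum of absolute differences across features.
    return sum(abs(f[k] - g[k]) for k in f)

known_a = "Short claims. Plain words. Nothing fancy here."
known_b = ("Meanwhile, the investigation proceeded deliberately, "
           "accumulating circumstantial documentation.")
disputed = ("Subsequently, the correspondence demonstrated "
            "considerable sophistication throughout.")

fa, fb, fd = features(known_a), features(known_b), features(disputed)
closer = "A" if distance(fa, fd) < distance(fb, fd) else "B"
print(closer)  # → B: the disputed sample patterns with author B
```

A fuller version would add the other features Fadden mentioned (parentheticals, discourse markers, conjoined phrases, independent clauses) and normalize each feature before comparing, but even this toy version shows why a superficial reading and a feature-by-feature comparison can point to different authors.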