Blog
Writers on editors: an evening of eavesdropping (EAC-BC meeting)
What do writers really think of editors? Journalist and editor Jenny Lee moderated a discussion on that topic with authors Margo Bates and Daniel Francis at last week’s EAC-BC meeting. Bates, self-published author of P.S. Don’t Tell Your Mother and The Queen of a Gated Community, is president of the Vancouver branch of the Canadian Authors Association. Francis is a columnist for Geist magazine and a prolific author of two dozen books, including the Encyclopedia of British Columbia and the Connections Canada social studies textbook.
Francis told us that in the 1980s, he’d had one of his books published by a major Toronto-based publisher, who asked him about his next project. Francis pitched the concept for what became Imaginary Indian: the image of the Indian in Canadian culture back to 1850. His Toronto publisher turned it down, concerned about appropriation of voice. “I took the idea to friends in Vancouver,” said Francis, “and in some ways it’s my most successful book.” He learned from the experience that he’d rather work with smaller publishers close to home, many of which were run by people he considered friends. He thought his book with the larger publisher would be the ticket, but it was among his worst-selling titles, and he was particularly dismayed that the editor didn’t seem to have paid much attention to his text. “To me, this is a collaborative process, working with an editor,” said Francis. “I’m aware that I’m no genius and that this is not a work of genius,” but his editor “barely even read the thing.” He found the necessary depth in editing when he worked with his friends at smaller presses. “Friends can be frank,” Francis said.
Bates, whose P.S. Don’t Tell Your Mother has sold more than 7,500 copies, became familiar with how much editors can do when she hired them through her work in public relations. For her own writing, Bates knew she could take care of most of the copy editing and proofreading but wanted an objective yet understanding professional who would advise her about structure and subject matter. She looked for someone who would tighten up her book and make it saleable. “I’m not that smart a writer that I can go without help,” she said. “I wouldn’t do anything without an editor.” In fact, she allocated the largest portion of her publishing budget to editing. After speaking with several candidates, Bates selected an editor who understood the social context of her book and helped her “tell the story of prejudice in a humorous way.”
Frances Peck mentioned an article she read about a possible future where self-publishers would have editors’ imprints on their books—in other words, editors’ reputations would lend marketability to a book. “Is that a dream?” she asked. “The sooner, the better, as far as I’m concerned,” Bates said. “There’s a lot of crap out there,” she added, referring to story lines, point of view, grammar, spelling and other dimensions of writing that an editor could help authors improve.
What sets good editors apart from the rest? Francis said that he most appreciates editors who have good judgment about when to correct something and when to query. Strategies for querying suggested by the audience included referring often to the reader (“Will your reader understand?”) and referring to the text as something separate from the author (i.e., using “it says on page 26” rather than “you say on page 26”). Bates said that she really appreciated it when her editor expressed genuine enthusiasm for her story. Her editor had told her, “I’m rooting for the characters, and so are your fans.”
Lee asked whether the popular strategy of the sandwich—beginning and ending an editorial letter with compliments, with the potentially ego-deflating critique in the middle—was effective. Francis said, “I hope I’m beyond the need for coddling. I guess you have to know who you’re dealing with, when you’re an editor.” Some editors in the room said that the sandwich is a reliable template for corresponding with someone with whom you haven’t yet established trust. We have to be encouraging as well as critical.
Both Bates and Francis urged editors to stop beating around the bush. Francis said, “You get insulted all the time as a textbook writer. You have to grow a pretty thick skin.” That said, Francis wasn’t a big fan of the editing-by-committee process typical of textbook publishing and said it’s one reason he stopped writing textbooks. In addition to producing a coherent text, the textbook’s author and editors had to adhere to strict representation guidelines (e.g., the balance of males to females depicted in photographs had to be exactly 1:1).
Lee asked the two authors how they found their editors. Francis said that his publishers always assign his editors, and “I get the editor that I get.” So far his editors have worked out for him, but if he’d had any profound differences, he’d have approached the publisher about it or, in extreme cases, parted ways with the publisher.
Bates said that for self-published authors, the onus is on them to do their research and look at publications an editor has previously worked on. “There will always be inexperienced writers who don’t see the need for editors,” she said, but at meetings of the Federation of BC Writers and the Canadian Authors Association, she always advocates that authors get an editor. Bates suggested that the Editors’ Association of Canada forge closer ties with writers’ organizations so that we could readily educate authors about what editors do.
Open textbooks and the BC Open Textbook Accessibility Toolkit (webinar)
In fall 2012, the BC Open Textbook Project was launched to reduce the financial burden on post-secondary students, who spend an average of $1,200 per year on textbooks. As part of Open Education Week, BCcampus hosted a webinar about the project as well as the associated BC Open Textbook Accessibility Toolkit, created to help people who develop learning resources to make them as accessible as possible from the outset.
Open Textbook Project (presented by Amanda Coolidge)
In 2012, the BC Open Textbook Project received a grant of $1 million to develop open textbooks for the top-forty enrolled subject areas. It received another $1 million in 2014 to create resources for skills and trades training. BC has now committed to working together with Alberta and Saskatchewan to develop and share open textbooks.
Many people think open textbooks are e-textbooks, but what makes them open is their Creative Commons (CC) license: they can be copied, modified, and redistributed for no charge. Instructors can therefore change open textbooks to suit their courses, and students are able to get these books for free. In two years the project has saved more than five thousand students over $700,000 in textbook costs.
BCcampus carried out the Open Textbook Project in three phases:
- First, they collected existing textbooks with CC licenses and asked faculty to review them.
- Second, they modified these books based on faculty reviews. At the end of this process, they had covered thirty-six of the top-forty subject areas.
- Finally, they funded the creation of four textbooks from scratch.
Open textbooks are now being used in fourteen post-secondary institutions across the province, and BCcampus has eighty-one textbooks in its collection. To create these materials, they use Pressbooks, a WordPress plugin that lets you write once and publish to many different formats.
Accessibility testing (presented by Tara Robertson)
Tara Robertson helps run CAPER-BC, which provides alternate formats of learning materials to twenty institutions across the province. They specialize in accommodations, including remediating textbooks for people with print disabilities. One reason the Open Textbook Project is exciting, said Robertson, is that instead of taking something broken and fixing it, she now has the opportunity to make the textbooks accessible from the start.
Seven students with special needs volunteered to test the open textbook resources for accessibility, reading selected chapters from textbooks in five subject areas and offering feedback on their usability. Robertson also ran a focus group with five students. She found recruiting testers challenging, and she acknowledged that the students who participated in the focus group, all of whom had visual impairments, were not representative of the many students who have other print disabilities. Still, the testers offered a lot of constructive feedback.
The chapters the students reviewed each had features that might interfere with assistive technology like text-to-speech software: formatted poetry, tables, images, quizzes, and so on. Testing revealed that the software would skip over embedded YouTube videos, so the textbooks would have to include URLs; formatted poems were problematic when enlarged because readers would have to scroll to read each line; and layout sometimes led to a confused reading order.
Robertson sees the accessibility consultation with students as an ongoing process to refine accessibility best practices.
BC Open Textbook Accessibility Toolkit (presented by Sue Doner)
BCcampus has just launched an accessibility toolkit for faculty, content creators, instructional designers, and others who “don’t know what they don’t know about accessible design.” Their aim is to build faculty capacity for universal design and to highlight the distinctions between accommodations and accessibility. Accommodations involve individualizing resources and providing alternative learning options for students who identify as having a disability. If we were proactive about creating materials that were accessible from day one, we’d have no need for accommodations.
Universal design recognizes that different students learn differently—some prefer visual materials, whereas others prefer text, for example. It offers students multiple access points to the content, and it’s better for all students, not just those who register with their disability resource centre. For example, aging students may appreciate being able to enlarge text, and international students may benefit from captions to visual material.
The toolkit offers plain language guidelines for creating different types of textbook content with a student-centred focus, using user personas to inform key design concepts and best practices. It asks content developers to think about what assumptions they’re making of the end users and how those assumptions might affect the way they present the material.
It might take a bit of time for creators of some types of content to catch up with all accessibility features—for example, video and audio should, as a rule, come with transcripts, but a lot of YouTube content doesn’t, and you may run into copyright issues if you try to offer material in different formats.
The next steps for BCcampus are to incorporate the toolkit into the development process for all new open textbooks they create, to modify existing textbooks for accessibility, and to encourage the province’s post-secondary community to formally adopt these guidelines. The toolkit, like the open textbooks, is available under a CC license and can be thought of as a living document that will change and grow as different types of content (e.g., math) become amenable to accessible design.
Doner sees these steps as “an opportunity to create a community of practice—a new literacy skill.”
***
This webinar (along with others offered during Open Education Week) is archived on the BCcampus site.
Ghost of editor past
Inspired by @Mededitor and Jonathon Owen
Lorna Fadden—Language Detectives II (EAC-BC meeting)
After speaking at a well-attended EAC-BC meeting in 2012, forensic linguist Lorna Fadden returned to the stage last week for a highly anticipated follow-up. “I hope I don’t disappoint you,” she said. “You know when a sequel comes out, and it sucks?”
With an opening like that, Fadden had no cause for concern.
Fadden lectures in the department of linguistics at SFU, where she studies sociolinguistics and discourse analysis, as well as First Nations languages. She also runs a consulting practice in forensic discourse analysis, examining language evidence for investigations or trials in criminal and civil cases. These cases may involve hate speech, defamation, bribery, internet luring, plagiarism, and extortion, among other types of language-related crimes. She analyzes both linguistic form—grammatical structure, word choice, and prosodics—and linguistic function—meaning, social context, and pragmatics.
Fadden presented a historical case in which forensic linguistics played a starring role: in 1989, the Exxon Valdez, captained by Joseph Hazelwood, struck Prince William Sound’s Bligh Reef and spilled its crude oil cargo, resulting in one of the worst environmental disasters in history. There was wide speculation that Hazelwood was intoxicated at the time, and forensic linguists analyzed his recorded exchanges with the Coast Guard to find evidence that he was impaired.
Alcohol depresses the central nervous system, Fadden explained, impairing coordination, reflexes, and nerve transmission—basically everything you use when you talk. It also impedes your ability to recall words, as well as your ability to utter words in the correct sequence. Intoxication leads to misarticulation of certain speech segments: r and l sounds can become blended, and s and ts sounds can be palatalized to become sh. Suprasegmental effects of intoxication include slower speech, lower mean pitch, a wider pitch range, vowel lengthening, and the lengthening of consonants in unstressed syllables.
According to the Coast Guard’s recordings, Hazelwood’s speech had all of these characteristics 1 hour before, immediately after, and 1 hour after the Exxon Valdez ran aground but was normal 33 hours before and 9 hours after the accident.
At the time, forensic linguistics as a social science was relatively new. Hazelwood’s trial was the first time this kind of evidence was used in court, but because no witnesses could remember seeing Hazelwood drink and the jury may have been uncomfortable with this means of demonstrating drunkenness, he was acquitted.
Fadden then told us about some of her cases, one of which involved a series of menacing and highly critical letters sent to a large company’s board of directors. These letters were sent anonymously, but the writer claimed to be a member of the company’s front-line staff or a mid-level manager. The language in the letters, accusing the directors of having “zero business acumen” and referring to the company’s “value proposition,” as well as referring to “our managers”—unlikely for a low-ranking staff member to do—betrayed the writer’s higher rank. With Fadden’s help, the investigation uncovered that the writer was a high-ranking executive who’d been fired, and he was sent a cease-and-desist letter.
In another case, the mother in a custody dispute received a series of letters, supposedly from her kids, telling her they wanted nothing to do with her. Fadden’s role was to determine whether the children genuinely wrote the letters themselves. One letter, in her 7-year-old’s handwriting, mentioned that the kids did not “fully trust” their mother and agreed that they would spend time with her only on supervised visits. “Kids that age don’t use adverbs like ‘fully,’” said Fadden, and she doesn’t believe that kids have the meta-awareness implied by the letter. Occasionally we write something addressed to one person, knowing it will have a larger audience. In this case, the letters were written in a style that suggested the writer realized that others—lawyers, psychologists, and so on—may read them. Fadden’s analysis, along with a social worker’s assessment and psychologists’ assessments, led to a favourable outcome for the mother, who’d been accused of nefarious things that hadn’t been proven. “You have to be careful asking kids questions, because the questions we ask them often already suggest the answers,” said Fadden. “We rarely ask children information-seeking questions.”
Fadden’s third case was a more complex one: a woman had accused a man of drugging and sexually assaulting her, but eyewitness accounts, video surveillance, and toxicology suggested that her allegation was false. She faced a charge of public mischief, but she claimed she didn’t understand what happened during the police interview. Fadden had to assess whether she was legally competent by comparing her linguistic performance with what we’d expect from a native speaker in the same context. In her doctoral dissertation, Fadden had characterized a series of police interviews of first-time suspects, so she had a robust set of measures as benchmarks.
Cognitive deficiency is correlated with a slow speech rate, but the suspect had a relatively high speech rate, and it didn’t drop significantly from the beginning of the interview to the end (so not much of a fatigue effect). Fadden also looked at her turn latency (how much time elapses between the end of the interviewer’s question and her answer) and her pause ratio (how much she pauses compared with how much she speaks). All of these temporal elements were within normal ranges; nothing suggested that she was incompetent.
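To make those two temporal measures concrete, here is a minimal sketch with invented numbers (these are not Fadden’s data, and the definitions are simplified for illustration):

```python
# Invented, simplified numbers for illustration only; times are in seconds.

# Gaps between the end of each officer question and the start of the suspect's answer:
turn_latencies = [0.8, 0.3, 1.1, 0.6]

# Silent pauses within the suspect's own turns, and her total speaking time:
within_turn_pauses = [0.4, 0.2, 0.9, 0.3, 0.5]
speaking_time = 95.0

mean_latency = sum(turn_latencies) / len(turn_latencies)
pause_ratio = sum(within_turn_pauses) / speaking_time

print(f"mean turn latency: {mean_latency:.2f} s")
print(f"pause ratio: {pause_ratio:.2%} of speaking time spent pausing")
```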
A stronger indicator of the suspect’s competence was in the way she manipulated specificity associated with details. Take, for instance, “this talk,” from most generic to most specific:
- Fadden’s giving a talk on Wednesday (type identifiable, not specific)
- There’s this talk on forensic linguistics on Wednesday (referential)
- The talk on forensic linguistics will be on Wednesday (uniquely identifiable)
- This/that talk on forensic linguistics is on Wednesday (familiar)
- I’ll be at that on Wednesday (activated)
- It’s on Wednesday (in focus—you don’t even have to name it)
(Fadden made sure we noted the distinction in specificity between the referential “this talk” and the familiar “this/that talk.”) Through 2 hours of interviews, the suspect was adept at adjusting the level of specificity based on context, using generic language to describe what she claimed to have witnessed when she allegedly found herself in an unfamiliar environment but specific language when talking about details that the police officer had told her. Fadden concluded that she had normal cognitive status. The suspect eventually confessed to fabricating the story because she didn’t want her husband to find out she’d willingly slept with another man.
To end the evening, Fadden challenged us to an exercise in authorship analysis. She gave us two writing samples from different blogs with similar topics and writing styles; we had to figure out who’d authored a third sample. From a superficial reading, most people in the room guessed that the first blogger was responsible. But by comparing features like
- the number of words per sentence,
- the length of words,
- the use of adjectives and adverbs,
- the use of parentheticals,
- the use of discourse markers,
- the use of conjoined phrases, and
- the use of independent clauses,
Fadden showed that the second blogger was the more likely author. Authorship analysis is a contentious field now because its effectiveness and accuracy aren’t completely understood, and there’s no standard method for carrying it out. As a result, it’s not admissible in court. But, like a polygraph, authorship analysis may help steer the direction of an investigation.
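None of those features requires exotic tools to measure. As a rough illustration (my own sketch, not Fadden’s method or data), a few of them can be counted with simple text processing:

```python
import re

def style_features(text):
    """Crude counts of a few of the stylistic features listed above."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "words_per_sentence": round(len(words) / max(len(sentences), 1), 1),
        "mean_word_length": round(sum(len(w) for w in words) / max(len(words), 1), 1),
        "parentheticals": text.count("("),
        # Very rough proxy for a handful of discourse markers:
        "discourse_markers": sum(text.lower().count(m) for m in ("however", "of course", "in fact")),
    }

sample_a = "Of course, the weather was fine. We walked (slowly) to the market."
sample_b = "The weather was fine and we walked to the market and we bought bread."
print(style_features(sample_a))
print(style_features(sample_b))
```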
Biomedical research reports as structured data: Toward greater efficiency and interoperability
I’ve been working on this paper since September, and I was hoping to publish it in a journal, but I learned today I’ve been scooped. So I see no harm now in publishing it here. I want to thank Frank Sayre and Charlie Goldsmith for their advice on it, which I clearly took too long to act on. I’m posting it as is for now but will probably refine it in the weeks to come.
Apologies to my regular readers for this extra-long and esoteric post.
Comments welcome!
***
Introduction
Reporting guidelines such as CONSORT,[1] PRISMA,[2] STARD,[3] and others on the EQUATOR Network [4] set out the minimum standards for what a biomedical research report must include to be usable. Each guideline has an associated checklist, and the implication is that every item in the checklist should appear in a paragraph or section of the final report text.
But what if, rather than a paragraph, each item could be a datum in a database?
Moving to a model of research reports as structured or semi-structured data would mean that, instead of writing reports as narrative prose, researchers could submit their research findings by answering an online questionnaire. Checklist items would be required fields, and incomplete reports would not be accepted by the journal’s system. For some items—such as participant inclusion and exclusion criteria—the data collection could be even more granular: each criterion, including sex, the lower and upper limits of the age range, medical condition, and so on, could be its own field. Once the journals receive a completed online form, they would simply generate a report of the fields in a specified order to create a paper suitable for peer review.
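As a rough sketch of what that could look like in practice (the field names, types, and required-item list below are my own illustration, not an existing journal system or guideline schema):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EligibilityCriteria:
    # Each criterion is its own field rather than a sentence in a paragraph.
    sexes: list[str] = field(default_factory=list)        # e.g., ["female", "male"]
    min_age_years: Optional[int] = None
    max_age_years: Optional[int] = None
    conditions: list[str] = field(default_factory=list)

@dataclass
class TrialReport:
    title: str
    registration_id: str
    eligibility: EligibilityCriteria
    randomization_method: Optional[str] = None
    primary_outcome: Optional[str] = None

# Checklist items treated as required fields (an illustrative subset only).
REQUIRED_FIELDS = ("randomization_method", "primary_outcome")

def missing_items(report: TrialReport) -> list[str]:
    """Return the required items still missing; an empty list means the
    submission could be accepted."""
    return [name for name in REQUIRED_FIELDS if getattr(report, name) in (None, "")]

report = TrialReport(
    title="A hypothetical trial",
    registration_id="NCT00000000",  # placeholder ID
    eligibility=EligibilityCriteria(sexes=["female"], min_age_years=18, max_age_years=65),
)
print(missing_items(report))  # ['randomization_method', 'primary_outcome'] (incomplete)
```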
The benefits of structured reporting have long been acknowledged: Andrew’s 1994 proposal[5] for structured reporting of clinical trials formed the basis of the CONSORT guidelines. However, although in 2006 Wager suggested electronic templates for reports and urged researchers to openly share their research results as datasets,[6] to date neither researchers nor publishers have made the leap to structuring the components of a research article as data.
Structured data reporting is already becoming a reality for practitioners: radiologists, for example, have explored the best practices for structured reporting, including using a standardized lexicon for easy translation.[7] A study involving a focus group of radiologists discussing structured reporting versus free text found that the practitioners were open to the idea of reporting templates as long as they could be involved in their development.[8] They also wanted to retain expressive power and the ability to personalize their reports, suggesting that a hybrid model of structured and unstructured reporting may work best. In other scientific fields, including chemistry, researchers are recognizing the advantage of structured reporting to share models and data and have proposed possible formats for these “datuments.”[9] The biomedical research community is in an excellent position to learn from these studies to develop its own structured data reporting system.
Reports as structured data, submitted through a user-friendly, flexible interface, coupled with a robust database, could solve or mitigate many of the problems threatening the efficiency and interoperability of the existing research publication system.
Problems with biomedical research reporting and benefits of a structured data alternative
Non-compliance with reporting guidelines
Although reporting guidelines do improve the quality of research reports,[10],[11] Glasziou et al. maintain that they “remain much less adhered to than they should be”[12] and recommend that journal reviewers and editors actively enforce the guidelines. Many researchers may still not be aware that these guidelines exist, a situation that motivated the 2013 work of Christensen et al. to promote them among rheumatology researchers.[13] Research reports as online forms based on the reporting guidelines would raise awareness of reporting guidelines and reduce the need for human enforcement: a report missing any required fields would not be accepted by the system.
Inefficiency of systematic reviews
As the PRISMA flowchart attests, performing a systematic review is a painstaking, multi-step process that involves scouring the research literature for records that may be relevant, sorting through those records to select articles, then reading and selecting among those articles for studies that meet the criteria of the topic being reviewed before data analysis can even begin. Often researchers isolate records based on eligibility criteria and intervention. If that information were stored as discrete data rather than buried in a narrative paragraph, relevant articles could be isolated much more efficiently. Such a system would also facilitate other types of literature reviews, including rapid reviews.[14]
What’s more, the richness of the data would open up avenues of additional research. For example, a researcher interested in studying the effectiveness of recruitment techniques in pediatric trials could easily narrow a search by the age and size of the study population and by recruitment method.
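With eligibility and recruitment stored as discrete fields, that kind of search reduces to a simple filter over the records. A toy sketch, with invented records:

```python
# Hypothetical records with eligibility and recruitment stored as discrete fields.
records = [
    {"id": "A", "max_age_years": 17, "recruitment": "school flyers", "n_enrolled": 120},
    {"id": "B", "max_age_years": 65, "recruitment": "clinic referral", "n_enrolled": 300},
    {"id": "C", "max_age_years": 16, "recruitment": "social media", "n_enrolled": 45},
]

# Isolate pediatric trials (participants under 18) that enrolled at least 100 participants.
pediatric = [r for r in records if r["max_age_years"] < 18 and r["n_enrolled"] >= 100]
print([(r["id"], r["recruitment"]) for r in pediatric])  # [('A', 'school flyers')]
```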
Poorly written text
Glasziou et al. point to poorly written text as one of the reasons a biomedical research report may become unusable. Although certain parts of the report—the abstract, for instance, and the discussion—should always be prose, information design research has long challenged the primacy of the narrative paragraph as the optimal way to convey certain types of information.[15],[16],[17] Data such as inclusion and exclusion criteria are best presented as a table; a procedure, such as a method or protocol, would be easiest for readers to follow as a numbered list of discrete steps. Asking researchers to enter much of that information as structured data would minimize the amount of prose they would have to write (and that editors would have to read), and the presentation of that information as blocks of lists or tables would in fact accelerate information retrieval and comprehension.
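As a small illustration of that information-design point (my own sketch, not part of any reporting guideline), criteria captured as structured fields could be rendered automatically as a table rather than written out as prose:

```python
# Hypothetical structured criteria rendered as a simple two-column table.
criteria = {
    "Sex": "Female or male",
    "Age": "18–65 years",
    "Condition": "Type 2 diabetes",
    "Exclusions": "Pregnancy; current insulin therapy",
}

rows = ["| Criterion | Value |", "| --- | --- |"]
rows += [f"| {key} | {value} |" for key, value in criteria.items()]
print("\n".join(rows))
```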
Growth of journals in languages other than English
According to Chan et al.,[18] more than 2,500 biomedical journals are published in Chinese. The growth of these and other publications in languages other than English means that systematic reviews done using English-language articles alone will not capture the full story.[19] Reports that use structured data will be easier to translate: not only will the text itself—and thus its translation—be kept to a minimum, but, assuming journals in other languages adopt the same reporting guidelines and database structure, the data fields can easily be mapped between them, improving interoperability between languages. Further interoperability would be possible if the questionnaires restricted users to controlled vocabularies, such as the International Classification of Diseases (ICD) and the International Classification of Health Interventions (ICHI) being developed.
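A brief sketch of that interoperability point: if journals in different languages store the condition field as the same controlled-vocabulary code, records can be matched without translating free text. (The toy records below are my own illustration; E11 is the ICD-10 code for type 2 diabetes mellitus.)

```python
# Toy records from two hypothetical journals. Only the display label differs,
# because the condition is stored as an ICD-10 code rather than as free text.
english_record = {"condition_code": "E11", "condition_label": "Type 2 diabetes mellitus"}
chinese_record = {"condition_code": "E11", "condition_label": "2型糖尿病"}

# Matching records for a systematic review needs only the shared code.
print(english_record["condition_code"] == chinese_record["condition_code"])  # True
```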
Resistance to change among publishers and researchers
Smith noted in 2004 that the scientific article has barely changed in the past five decades.[20] Two years later Wager called on the research community to embrace the opportunity that technology offered and publish results on publicly funded websites, effectively transforming the role of for-profit publishers to one of “producing lively and informative reviews and critiques of the latest findings” or “providing information and interpretation for different audiences.” Almost a decade after Wager’s proposals, journals are still the de facto publishers of primary reports, and, without a momentous shift in the academic reward system, that scenario is unlikely to change.
Moving to structured data reporting would change the interface between researchers and journals, as well as the journal’s archival infrastructure, but it wouldn’t alter the fundamental role of journals as gatekeepers and arbiters of research quality; they would still mediate the article selection and peer review processes and provide important context and forums for discussion.
The ubiquity of online forms may help researchers overcome their reluctance to adapt to a new, structured system of research reporting. Many national funding agencies already require grant applications to be submitted online,[21],[22] so researchers are becoming familiar with this kind of interface and process.
A model interface
To offer a sense of how a reporting questionnaire might look, I present mock-ups of select portions of a form for a randomized trial. I do not submit that they are the only—or even the best—way to gather reporting details from researchers; these minimalist mock-ups are merely the first step toward a proof of concept. The final design would have to be developed and tested in consultation with users.
In the figures that follow, the blue letters are labels for annotations and would not appear on the interface.
Other considerations
Archives
When journals moved from print to online dissemination, publishers recognized the value of digitizing their archives so that older articles could also be searched and accessed. Analogously, if publishers not only accepted new articles as structured data but also committed to converting their archives, the benefits would be enormous. First, achieving the eventual goal of completely converting all existing biomedical articles would help researchers perform accelerated systematic reviews on a comprehensive set of data. Second, the conversion process would favour published articles that already comply with the reporting guidelines; after conversion, researchers would be able to search a curated dataset of high-quality articles.
I recognize that the resources needed for this conversion would be considerable, and I foresee the development of a new class of professionals trained in assessing and converting existing articles. For articles that meet almost but not quite all reporting guidelines, particularly more recent publications, these professionals may succeed in acquiring missing data from some authors.[24] Advances in automating the systematic review process[25] may also help expedite conversion.
Software development for the database and interface
In “Reducing waste from incomplete or unusable reports of biomedical research,” Glasziou et al. call on the international community to find ways to decrease the time and financial burden of systematic reviews and urge funders to take responsibility for developing infrastructure that would improve reporting and archiving. To ensure interoperability and encourage widespread adoption of health reports as structured data, I urge the international biomedical research community to develop and agree to a common set of standards for the report databases, in analogy to the effort to create standards for trial registration that culminated in the World Health Organization’s International Standards for Clinical Trial Registries.[26] An international consortium dedicated to developing a robust database and flexible interface to accommodate reporting structured data would also be more likely to secure the necessary license to use a copyrighted controlled vocabulary such as the ICD.
Implementation
Any new system with wide-ranging effects must be developed in consultation with a representative sample of users and adequately piloted. The users of the report submission interface will largely be researchers, but the report generated by the journal could be consulted by a diverse group of stakeholders—not only researchers but also clinicians, patient groups, advocacy groups, and policy makers, among others. A parallel critical review of the format of this report would provide an opportunity to assess how best to reach audiences that are invested in discovering new research.
Although reporting guidelines exist for many different types of reports and can each serve as the basis of a questionnaire, I recommend reviewing all existing biomedical reporting guidelines together to harmonize them as much as possible before a database for reports is designed, perhaps in collaboration with the BioSharing initiative[27] and in an effort similar to the MIBBI Foundry project to “synthesize reporting guidelines from various communities into a suite of orthogonal standards” in the biological sciences.[28] For example, whereas recruitment methods are required under the STARD guidelines, they are not under CONSORT. Ensuring that all guidelines have similar basic requirements would improve interoperability among article types and bring more homogeneity to the richness of the data.
Conclusions
Structuring biomedical research reports as data will improve report quality, decrease the time and effort it takes to perform systematic reviews, and facilitate translation and interoperability with existing data-driven systems in health care. The technology exists to realize this shift, and I, like Glasziou et al., urge funders and publishers to collaborate on the development, in consultation with users, of a robust reporting database system and flexible interface. The next logical step for research in this area would be to build a prototype for researchers to use while running a usability study.
Reports as structured data aren’t a mere luxury—they’re an imperative; without them, biomedical research is unlikely to become well integrated into existing health informatics infrastructure clinicians use to make decisions about their practice and about patient care.
Sources
[1] “CONSORT Statement,” accessed October 04, 2014, http://www.consort-statement.org/.
[2] “PRISMA Statement,” accessed October 04, 2014, http://www.prisma-statement.org/index.htm.
[3] “STARD Statement,” n.d., http://www.stard-statement.org/.
[4] “The EQUATOR Network | Enhancing the QUAlity and Transparency Of Health Research,” accessed September 26, 2014, http://www.equator-network.org/.
[5] Erik Andrew, “A Proposal for Structured Reporting of Randomized Controlled Trials,” JAMA: The Journal of the American Medical Association 272, no. 24 (December 28, 1994): 1926, doi:10.1001/jama.1994.03520240054041.
[6] Elizabeth Wager, “Publishing Clinical Trial Results: The Future Beckons.,” PLoS Clinical Trials 1, no. 6 (January 27, 2006): e31, doi:10.1371/journal.pctr.0010031.
[7] Roberto Stramare et al., “Structured Reporting Using a Shared Indexed Multilingual Radiology Lexicon.,” International Journal of Computer Assisted Radiology and Surgery 7, no. 4 (July 2012): 621–33, doi:10.1007/s11548-011-0663-4.
[8] J M L Bosmans et al., “Structured Reporting: If, Why, When, How-and at What Expense? Results of a Focus Group Meeting of Radiology Professionals from Eight Countries.,” Insights into Imaging 3, no. 3 (June 2012): 295–302, doi:10.1007/s13244-012-0148-1.
[9] Henry S Rzepa, “Chemical Datuments as Scientific Enablers.,” Journal of Cheminformatics 5, no. 1 (January 2013): 6, doi:10.1186/1758-2946-5-6.
[10] Robert L Kane, Jye Wang, and Judith Garrard, “Reporting in Randomized Clinical Trials Improved after Adoption of the CONSORT Statement.,” Journal of Clinical Epidemiology 60, no. 3 (March 2007): 241–49, doi:10.1016/j.jclinepi.2006.06.016.
[11] N Smidt et al., “The Quality of Diagnostic Accuracy Studies since the STARD Statement: Has It Improved?,” Neurology 67, no. 5 (September 12, 2006): 792–97, doi:10.1212/01.wnl.0000238386.41398.30.
[12] Paul Glasziou et al., “Reducing Waste from Incomplete or Unusable Reports of Biomedical Research.,” Lancet 383, no. 9913 (January 18, 2014): 267–76, doi:10.1016/S0140-6736(13)62228-X.
[13] Robin Christensen, Henning Bliddal, and Marius Henriksen, “Enhancing the Reporting and Transparency of Rheumatology Research: A Guide to Reporting Guidelines.,” Arthritis Research & Therapy 15, no. 1 (January 2013): 109, doi:10.1186/ar4145.
[14] Sara Khangura et al., “Evidence Summaries: The Evolution of a Rapid Review Approach.,” Systematic Reviews 1, no. 1 (January 10, 2012): 10, doi:10.1186/2046-4053-1-10.
[15] Patricia Wright and Fraser Reid, “Written Information: Some Alternatives to Prose for Expressing the Outcomes of Complex Contingencies.,” Journal of Applied Psychology 57, no. 2 (1973).
[16] Karen A. Schriver, Dynamics in Document Design: Creating Text for Readers (New York: Wiley, 1997).
[17] Robert E. Horn, Mapping Hypertext: The Analysis, Organization, and Display of Knowledge for the Next Generation of On-Line Text and Graphics (Lexington Institute, 1989).
[18] An-Wen Chan et al., “Increasing Value and Reducing Waste: Addressing Inaccessible Research.,” Lancet 383, no. 9913 (January 18, 2014): 257–66, doi:10.1016/S0140-6736(13)62296-5.
[19] Andra Morrison et al., “The Effect of English-Language Restriction on Systematic Review-Based Meta-Analyses: A Systematic Review of Empirical Studies.,” International Journal of Technology Assessment in Health Care 28, no. 2 (April 2012): 138–44, doi:10.1017/S0266462312000086.
[20] R. Smith, “Scientific Articles Have Hardly Changed in 50 Years,” BMJ 328, no. 7455 (June 26, 2004): 1533–1533, doi:10.1136/bmj.328.7455.1533.
[21] Australian Research Council, “Grant Application Management System (GAMS) Information,” accessed October 04, 2014, http://www.arc.gov.au/applicants/rms_info.htm.
[22] Canadian Institutes for Health Research, “Acceptable Application Formats and Attachments—CIHR,” November 10, 2005, http://www.cihr-irsc.gc.ca/e/29300.html.
[23] “Structured Abstracts in MEDLINE®,” accessed January 14, 2015, http://structuredabstracts.nlm.nih.gov/.
[24] Shelley S Selph, Alexander D Ginsburg, and Roger Chou, “Impact of Contacting Study Authors to Obtain Additional Data for Systematic Reviews: Diagnostic Accuracy Studies for Hepatic Fibrosis.,” Systematic Reviews 3, no. 1 (September 19, 2014): 107, doi:10.1186/2046-4053-3-107.
[25] Guy Tsafnat et al., “Systematic Review Automation Technologies.,” Systematic Reviews 3, no. 1 (January 09, 2014): 74, doi:10.1186/2046-4053-3-74.
[26] World Health Organization, International Standards for Clinical Trial Registries (Geneva, Switzerland: World Health Organization, 2012), www.who.int/iris/bitstream/10665/76705/1/9789241504294_eng.pdf.
[27] “BioSharing,” accessed October 12, 2014, http://www.biosharing.org/.
[28] “MIBBI: Minimum Information for Biological and Biomedical Investigations,” accessed October 12, 2014, http://mibbi.sourceforge.net/portal.shtml.
Indi Young—Practical empathy: For collaboration and creativity in your work (webinar)
Empathy for your end users can help you create and design something that truly suits their needs, and it’s the basis of usability design and plain language writing. Putting yourself in someone else’s shoes is an example of applying empathy, but UX consultant Indi Young, author of Practical Empathy, says that you first have to develop empathy, and she led a UserTesting.com webinar to show us how.
Empathy, said Young, is usually associated with emotion: it makes you think about sensitivity and warmth or about sympathy and understanding a person’s perspective, sometimes so that you can excuse their behaviours or forgive their actions. As it turns out, that definition describes empathy rather poorly. Dr. Brené Brown created a short animation to explain the differences between sympathy and empathy.
True emotional empathy, Young explained, is when another person’s emotion infects you. “It strikes like lightning,” she said, and “it’s how movies and books work”—you’re struck with the same emotions as the characters. This kind of emotional empathy can be incredibly powerful, but you can’t force it or will it to happen. In our work, we need something more reliable.
Enter cognitive empathy, which can include emotions but focuses on understanding another person’s thinking and reactions. In creative work, we often end up concentrating too much on ideas and neglect the people. By listening to people and deepening our understanding of them, we can develop and apply ideas that support their patterns. This listen » deepen » apply process is iterative.
How is empathy important in our work? Empathy has a lot of uses, said Young, and one she saw a lot was using it to persuade or manipulate, which could be well intentioned but might also be problematic. She’d rather focus on using empathy to support the intents and purposes of others—to collaborate and create. “Others” is purposely vague here—it can refer to people in your organization or external to it.
To truly collaborate with someone, you have to listen to them, one on one. “When someone realizes you are really listening to them and you don’t have an ulterior motive, they really open up.” These listening sessions allow you to generate respect for another person’s perspectives and can be the basis for creativity. When a user issues a request, ask about the thinking behind it. Knowing the motivation behind a request might allow you to come up with an even better idea to support your users. You can’t establish empathy based only on a user’s opinions or preferences.
In a listening session, be neutral and let go of any judgments; you can’t properly support someone you’re judging. Purposeful listening can also let you discover what you’re missing—what you don’t know you don’t know. The intent of a listening session isn’t to solve any problems—don’t go into a session with an agenda or a set list of questions, and don’t use the session as a forum to show others how much you know. Become aware of your assumptions and don’t be afraid to ask about them.
Let the other person set boundaries of what to talk about. Don’t bring something up if they don’t bring it up. If they’re not comfortable talking, excuse them. Don’t set a time limit or watch the clock. Finally, don’t take notes. “The act of writing things down in a notebook takes up so much of your brain that you can’t listen as well,” said Young.
What you’re trying to uncover in the listening sessions is the person’s reasoning, intent, and guiding principles. What passes through their mind as they move toward their intent? Instead of asking “How do you go about X?” ask “What went through your mind as you X?”
These guidelines seem simple, said Young, but mastering listening skills takes a lot of practice. Once people start opening up and you see how your ideas can better serve their needs, you’ll see how powerful developing cognitive empathy can be.
***
Indi Young’s webinar will be available on UserTesting.com in a couple of weeks.
Time to leave academic writing to communications experts?
In the Lancet’s 2014 series about preventing waste in biomedical research, Paul Glasziou et al. pointed to “poorly written text” as a major reason a staggering 50% of biomedical reports are unusable [1], effectively squandering the research behind them. According to psycholinguist Steven Pinker [2], bad academic writing persists partly because there aren’t many incentives for scholars to change their ways:
Few academic journals stipulate clarity among their criteria for acceptance, and few reviewers and editors enforce it. While no academic would confess to shoddy methodology or slapdash reading, many are blasé about their incompetence at writing.
He adds:
Enough already. Our indifference to how we share the fruits of our intellectual labors is a betrayal of our calling to enhance the spread of knowledge. In writing badly, we are wasting each other’s time, sowing confusion and error, and turning our profession into a laughingstock.
The problem of impenetrable academese is undeniable. How do we fix it?
In “Writing Intelligible English Prose for Biomedical Journals,” John Ludbrook proposes seven strategies [3]:
- greater emphasis on good writing by students in schools and by university schools,
- making use of university service courses and workshops on writing plain and scientific English,
- consulting books on science writing,
- one-on-one mentoring,
- using “scientific” measures to reveal lexical poverty (i.e., readability metrics; see the sketch after this list),
- making use of freelance science editors, and
- encouraging the editors of biomedical journals to pay more attention to the problem.
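As an aside on the readability-metrics item, such measures are easy to automate. Here is a hedged sketch of one common measure, the Flesch reading ease score, using a crude vowel-group heuristic to estimate syllables (real tools use better syllable counts):

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores mean easier text. Syllables are estimated by counting vowel groups,
    which is only a rough heuristic."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()] or [text]
    words = re.findall(r"[A-Za-z']+", text) or [text]
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat. It purred."), 1))
print(round(flesch_reading_ease(
    "Notwithstanding methodological heterogeneity, interdisciplinary collaboration "
    "necessitates epistemological recalibration of extant paradigms."
), 1))
```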
Many institutions have implemented at least some of these strategies. For instance, SFU’s graduate student orientation in summer 2014 introduced incoming students to the library’s writing facilitators and open writing commons. And at UBC, Eric Jandciu, strategist for teaching and learning initiatives in the Faculty of Science, has developed communication courses and resources specifically for science students, training them early in their careers “to stop thinking of communication as separate from their science.” [4]
Although improving scholars’ writing is a fine enough goal, the growth in the past fifteen years of research interdisciplinarity [5], where experts from different fields contribute their strengths to a project, has me wondering whether we would be more productive if we took the responsibility of writing entirely away from researchers. Rather than forcing academics to hone a weak skill, maybe we’d be better off bringing in communications professionals whose writing is already sharp.
This model is already a reality in several ways (though not all of them aboveboard):
- Many journals encourage authors to have their papers professionally edited before submission [6]. From personal experience, I can confirm that this “editing” can involve heavy rewriting.
- The pharmaceutical industry has long used ghostwriters to craft journal articles on a researcher’s behalf, turning biomedical journals into marketing vehicles [7]. We could avoid the ethical problems this arrangement poses—including plagiarism and conflict of interest—with a more transparent process that reveals a writer’s identity and affiliations.
- Funding bodies such as CIHR have begun emphasizing the importance of integrated knowledge translation (KT) [8], to ensure knowledge users have timely access to research findings. Although much of KT focuses on disseminating research knowledge to stakeholders outside of academia, including patients, practitioners, and policy makers, reaching fellow researchers is also an important objective.
To ensure high-quality publications, Glasziou et al. suggest the following:
Many research institutions already employ grants officers to increase research input, but few employ a publication officer to improve research outputs, including attention to publication ethics and research integrity, use of reporting guidelines, and development of different publication models such as open access. Ethics committees and publication officers could also help to ensure that all research methods and results are completely and transparently reported and published.
Such a publication officer would effectively serve as an in-house editor and production manager. Another possibility is for each group or department to hire an in-house technical communicator. Technical communicators are trained in interviewing subject matter experts and using that information to draft documents for diverse audiences. In the age of big data, one could also make a convincing case for hiring a person who specializes in data visualization to create images and animations that complement the text.
That said, liberating scientists from writing should not absolve them of the responsibility of learning how to communicate. At a minimum, they would still need to understand the publication process enough to effectively convey their ideas to the writers.
Separating out the communication function within research would also raise questions about whether we should also abolish the research–teaching–service paradigm on which academic tenure is based. If we leave the writing to strong writers, perhaps only strong teachers should teach and only strong administrators should administrate.
Universities’ increasing dependence on sessional and adjunct faculty is a hint that this fragmentation is already happening [9], though in a way that reinforces institutional hierarchies and keeps these contract workers from being fairly compensated. If these institutions continue to define ever more specialized roles, whether for dedicated instructors, publication officers, or research communicators, they’ll have to reconsider how best to acknowledge these experts’ contributions so that they feel their skills are appropriately valued.
Sources
[1] Paul Glasziou et al., “Reducing Waste from Incomplete or Unusable Reports of Biomedical Research,” Lancet 383, no. 9913 (January 18, 2014): 267–76, doi:10.1016/S0140-6736(13)62228-X.
[2] Steven Pinker, “Why Academics Stink at Writing,” The Chronicle of Higher Education, September 26, 2014, http://chronicle.com/article/Why-Academics-Writing-Stinks/148989/.
[3] John Ludbrook, “Writing Intelligible English Prose for Biomedical Journals,” Clinical and Experimental Pharmacology & Physiology 34, no. 5–6 (2007): 508–14, doi:10.1111/j.1440-1681.2007.04603.x.
[4] Iva Cheung, “Communication Convergence 2014,” Iva Cheung [blog], October 8, 2014, https://ivacheung.com/2014/10/communication-convergence-2014/.
[5] B.C. Choi and A.W. Pak, “Multidisciplinarity, Interdisciplinarity, and Transdisciplinarity in Health Research, Services, Education and Policy: 1. definitions, objectives, and evidence of effectiveness.” Clinical and Investigative Medicine 29 (2006): 351–64.
[6] “Author FAQs,” Wiley Open Access, http://www.wileyopenaccess.com/details/content/12f25e4f1aa/Author-FAQs.html.
[7] Katie Moisse, “Ghostbusters: Authors of a New Study Propose a Strict Ban on Medical Ghostwriting,” Scientific American, February 4, 2010, http://www.scientificamerican.com/article/ghostwriter-science-industry/.
[8] “Guide to Knowledge Translation Planning at CIHR: Integrated and End-of-Grant Approaches,” Canadian Institutes of Health Research, Modified June 12, 2012, http://www.cihr-irsc.gc.ca/e/45321.html.
[9] “Most University Undergrads Now Taught by Poorly Paid Part-Timers,” CBC.ca, September 7, 2014, http://www.cbc.ca/news/canada/most-university-undergrads-now-taught-by-poorly-paid-part-timers-1.2756024.
***
This post was adapted from a paper I wrote for one of my courses. I don’t necessarily believe that a technical communication–type workflow is the way to go, but the object of the assignment was to explore a few “what-if” situations, and I thought this topic was close enough to editing and publishing to share here.
Crisis
Colin Moorhouse—Editing for the ear (EAC-BC meeting)
Colin Moorhouse has been a freelance speech writer for twenty-five years and has written for clients in government, at NGOs, and in the private sector. “I get to put words in people’s mouths,” he said, “which is a very nice thing.” He also enjoys that speech writing exposes him to a huge variety of topics (much like editing). Some are more interesting than others, but even the boring ones aren’t boring, because Moorhouse needs to devote only a short burst of attention to them. “I can be interested for the three days it takes me to write the speech,” he said.
The key difference between speech writing and other kinds of writing is that it’s all about writing for the ear, not the eye. Even if you’re a skilled writer, what you write may not sound natural for someone to say out loud. “Who didn’t say this?” he asked:
Don’t think about all the services you would like to receive from this great nation; think about how you can make your own contribution to a better society.
That’s, of course, a paraphrase of the famous line in John F. Kennedy’s inaugural speech: “ask not what your country can do for you; ask what you can do for your country.” You can tell which would have a bigger impact spoken aloud.
When words are written for the ear, said Moorhouse, they cater to the imagination. We filter those words through our own experience. The kinds of written materials that most closely resemble speech are letters and diaries, and he’ll sometimes use these to help him write a person’s voice into a speech. “People say that to write great speeches, you should read great speeches. I don’t think so. You should listen to great speeches.”
Speeches are not a great way to share information, Moorhouse told us, because we forget what we hear. Instead, speeches are about engaging an audience so that they’ll associate the speaker with that event or topic. Moorhouse listed six considerations when he writes speeches.
1. Oratory
“If your speaker’s a great orator,” he said, “they can almost read the phone book, because there’s something about their voice.” But most speakers aren’t like that. “Ninety-nine percent of my speakers aren’t good. That’s not to criticize them; they’re not trained.” Moorhouse has to find ways to write words that make them better speakers.
2. Event
The nature of the event factors largely into Moorhouse’s speech writing. Does the audience want to be there, or did they have to be? Is the speaker delivering good news or bad? Will there be 30 or 300 people? Will there be a mic or no mic? PowerPoint or no PowerPoint? Will people be live-tweeting or recording the speech? How knowledgeable will the audience be, and what mood will they be in? Each of these elements can change the speech entirely.
3. Story
“Storytelling is an incredibly valuable part of speech writing,” said Moorhouse. “Stories ground all of us to our common humanity.” If you’re editing a speech and you find it boring, ask the writer if the speaker has told them any stories.
4. Humour
“Humour is not joke telling. Jokes never work.” Moorhouse added, “I can make people cry a lot more easily than laugh. We grieve at the same things, but humour is very localized.” Self-effacing humour tends to work well, especially if it’s embedded in a story.
5. Language
“This is where you and I have our strengths,” he said. He advocates using simple language and declarative sentences.
6. Interest
“I believe we should be able to walk into almost any presentation and find it interesting,” Moorhouse said. “All speeches are potentially interesting.” If you’re editing a speech, the best litmus test for whether a speech is interesting is to ask yourself, “Would I want to sit through it?”
The great thing about speech writing is that you don’t need all six of these elements to make a good speech. Maybe the speaker’s not a great orator, for example. You can use the other elements to compensate.
Make sure you home in on a speech’s intended message, he said. If you find that your speech seems to be rambling, you haven’t nailed the message. If you need to, go back to the speaker and ask them to complete the sentence, “Today I want to talk to you about ______.”
In terms of process, Moorhouse suggests asking the client for the invite letter and event agenda, so that you know when the speech will be. He doesn’t always get to meet the speakers, but if he does, he’ll tape his interview with them so that he can listen to their voice and catch the intonation and the words they use. He’ll also interview people who know the speaker or the organization’s front-line staff to glean information and stories. He doesn’t use outlines, although other writers do. The danger with presenting an outline to the client, though, is that the client might want to circulate it to a bunch of people, which would hold up the writing process. Moorhouse spends about an hour on every minute of the speech—20 hours working on a 20-minute keynote.
Moorhouse offers a full online course about speech writing on his website, which includes a 170-page manual and four live webinars. He also offers a short course, with a 50-page manual, a speech-writing checklist, a webinar, and a 20-minute consult.