Biomedical research reports as structured data: Toward greater efficiency and interoperability

I’ve been working on this paper since September, and I was hoping to publish it in a journal, but I learned today I’ve been scooped. So I see no harm now in publishing it here. I want to thank Frank Sayre and Charlie Goldsmith for their advice on it, which I clearly took too long to act on. I’m posting it as is for now but will probably refine it in the weeks to come.

Apologies to my regular readers for this extra-long and esoteric post.

Comments welcome!

***

Introduction

Reporting guidelines such as CONSORT,[1] PRISMA,[2] STARD,[3] and others on the EQUATOR Network [4] set out the minimum standards for what a biomedical research report must include to be usable. Each guideline has an associated checklist, and the implication is that every item in the checklist should appear in a paragraph or section of the final report text.

But what if, rather than a paragraph, each item could be a datum in a database?

Moving to a model of research reports as structured or semi-structured data would mean that, instead of writing reports as narrative prose, researchers could submit their research findings by answering an online questionnaire. Checklist items would be required fields, and incomplete reports would not be accepted by the journal’s system. For some items—such as participant inclusion and exclusion criteria—the data collection could be even more granular: each criterion, including sex, the lower and upper limits of the age range, medical condition, and so on, could be its own field. Once the journals receive a completed online form, they would simply generate a report of the fields in a specified order to create a paper suitable for peer review.
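
To make this concrete, here is a minimal sketch, in Python, of how a few such checklist items might be stored as structured fields rather than prose; the field names and types are my own illustration, not a proposed standard.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AgeRange:
    minimum: int            # in years
    maximum: Optional[int]  # None = no upper limit

@dataclass
class InclusionCriterion:
    category: str  # e.g., "condition", drawn from a controlled vocabulary
    value: str     # e.g., an ICD code

@dataclass
class TrialReport:
    title: str
    trial_design: str     # from a controlled vocabulary of design types
    eligibility_sex: str  # "male", "female", "both", or "other"
    eligibility_age: AgeRange
    inclusion_criteria: list[InclusionCriterion] = field(default_factory=list)
    # ...one field (or group of fields) per checklist item
```

A record stored this way can be queried, validated, and rendered as a table, a list, or prose, which no narrative Methods paragraph allows.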

The benefits of structured reporting have long been acknowledged: Andrew’s 1994 proposal for structured reporting of clinical trials[5] formed the basis of the CONSORT guidelines. However, although Wager suggested electronic templates for reports in 2006 and urged researchers to openly share their research results as datasets,[6] to date neither researchers nor publishers have made the leap to structuring the components of a research article as data.

Structured data reporting is already becoming a reality for practitioners: radiologists, for example, have explored the best practices for structured reporting, including using a standardized lexicon for easy translation.[7] A study involving a focus group of radiologists discussing structured reporting versus free text found that the practitioners were open to the idea of reporting templates as long as they could be involved in their development.[8] They also wanted to retain expressive power and the ability to personalize their reports, suggesting that a hybrid model of structured and unstructured reporting may work best. In other scientific fields, including chemistry, researchers are recognizing the advantage of structured reporting to share models and data and have proposed possible formats for these “datuments.”[9] The biomedical research community is in an excellent position to learn from these studies to develop its own structured data reporting system.

Reports as structured data, submitted through a user-friendly, flexible interface, coupled with a robust database, could solve or mitigate many of the problems threatening the efficiency and interoperability of the existing research publication system.

Problems with biomedical research reporting and benefits of a structured data alternative

Non-compliance with reporting guidelines

Although reporting guidelines do improve the quality of research reports,[10],[11] Glasziou et al. maintain that they “remain much less adhered to than they should be”[12] and recommend that journal reviewers and editors actively enforce the guidelines. Many researchers may still not be aware that these guidelines exist, a situation that motivated the 2013 work of Christensen et al. to promote them among rheumatology researchers.[13] Research reports as online forms based on the reporting guidelines would raise awareness of reporting guidelines and reduce the need for human enforcement: a report missing any required fields would not be accepted by the system.
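
A minimal sketch of that automatic enforcement, assuming a hypothetical list of required field names:

```python
REQUIRED_FIELDS = ["title", "trial_design", "eligibility_age", "primary_outcome"]

def missing_required(report: dict) -> list[str]:
    """Return the names of required fields left empty in a submission."""
    return [name for name in REQUIRED_FIELDS
            if report.get(name) in (None, "", [])]

submission = {"title": "Example trial", "trial_design": "parallel"}
problems = missing_required(submission)
if problems:
    print("Submission rejected; missing:", ", ".join(problems))
```

No reviewer or editor needs to notice the omission; the checklist enforces itself.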

Inefficiency of systematic reviews

As the PRISMA flowchart attests, performing a systematic review is a painstaking, multi-step process that involves scouring the research literature for records that may be relevant, sorting through those records to select articles, then reading and selecting among those articles for studies that meet the criteria of the topic being reviewed before data analysis can even begin. Often researchers isolate records based on eligibility criteria and intervention. If that information were stored as discrete data rather than buried in a narrative paragraph, relevant articles could be isolated much more efficiently. Such a system would also facilitate other types of literature reviews, including rapid reviews.[14]

What’s more, the richness of the data would open up avenues of additional research. For example, a researcher interested in studying the effectiveness of recruitment techniques in pediatric trials could easily restrict a search by the age and size of the study population and by recruitment method.
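
As a rough sketch of such a search, again with illustrative field names, isolating pediatric trials that used a particular recruitment method becomes a filter over structured records rather than a reading exercise:

```python
reports = [
    {"id": 1, "eligibility_age": {"minimum": 6, "maximum": 17},
     "recruitment_methods": ["clinic referral", "school flyers"]},
    {"id": 2, "eligibility_age": {"minimum": 18, "maximum": None},
     "recruitment_methods": ["newspaper ad"]},
]

def is_pediatric(report: dict, max_age: int = 18) -> bool:
    """True if the study population's upper age limit is at or below max_age."""
    upper = report["eligibility_age"]["maximum"]
    return upper is not None and upper <= max_age

matches = [r for r in reports
           if is_pediatric(r) and "school flyers" in r["recruitment_methods"]]
print([r["id"] for r in matches])  # [1]
```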

Poorly written text

Glasziou et al. point to poorly written text as one of the reasons a biomedical research report may become unusable. Although certain parts of the report—the abstract, for instance, and the discussion—should always be prose, information design research has long challenged the primacy of the narrative paragraph as the optimal way to convey certain types of information.[15],[16],[17] Data such as inclusion and exclusion criteria are best presented as a table; a procedure, such as a method or protocol, would be easiest for readers to follow as a numbered list of discrete steps. Asking researchers to enter much of that information as structured data would minimize the amount of prose they would have to write (and that editors would have to read), and the presentation of that information as blocks of lists or tables would in fact accelerate information retrieval and comprehension.
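
For instance, a protocol stored as discrete steps could be rendered as a numbered list automatically; a minimal sketch:

```python
def render_procedure(steps: list[str]) -> str:
    """Render stored protocol steps as a numbered list rather than a paragraph."""
    return "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))

protocol = [
    "Obtain written informed consent.",
    "Randomize the participant using the allocation sequence.",
    "Administer the assigned intervention.",
]
print(render_procedure(protocol))
```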

Growth of journals in languages other than English

According to Chan et al.,[18] more than 2,500 biomedical journals are published in Chinese. The growth of these and other publications in languages other than English means that systematic reviews done using English-language articles alone will not capture the full story.[19] Reports that use structured data will be easier to translate: not only will the text itself—and thus its translation—be kept to a minimum, but, assuming journals in other languages adopt the same reporting guidelines and database structure, the data fields can easily be mapped between them, improving interoperability between languages. Further interoperability would be possible if the questionnaires restricted users to controlled vocabularies, such as the International Classification of Diseases (ICD) and the International Classification of Health Interventions (ICHI) being developed.
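
A sketch of that mapping: if reports store language-neutral keys and vocabulary codes, translating the structured portion reduces to a lookup. The labels and translations below are for illustration only.

```python
record = {"condition": "E11"}  # ICD-10 code for type 2 diabetes mellitus

field_labels = {
    "en": {"condition": "Condition"},
    "zh": {"condition": "病症"},
}
icd_terms = {
    "en": {"E11": "Type 2 diabetes mellitus"},
    "zh": {"E11": "2型糖尿病"},
}

def render(record: dict, lang: str) -> dict:
    """Render a language-neutral record with labels and terms in one language."""
    return {field_labels[lang][k]: icd_terms[lang][v] for k, v in record.items()}

print(render(record, "zh"))
```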

Resistance to change among publishers and researchers

Smith noted in 2004 that the scientific article has barely changed in the past five decades.[20] Two years later Wager called on the research community to embrace the opportunity that technology offered and publish results on publicly funded websites, effectively transforming the role of for-profit publishers to one of “producing lively and informative reviews and critiques of the latest findings” or “providing information and interpretation for different audiences.” Almost a decade after Wager’s proposals, journals are still the de facto publishers of primary reports, and, without a momentous shift in the academic reward system, that scenario is unlikely to change.

Moving to structured data reporting would change the interface between researchers and journals, as well as the journal’s archival infrastructure, but it wouldn’t alter the fundamental role of journals as gatekeepers and arbiters of research quality; they would still mediate the article selection and peer review processes and provide important context and forums for discussion.

The ubiquity of online forms may help researchers overcome their reluctance to adapt to a new, structured system of research reporting. Many national funding agencies now require grant applications to be submitted online,[21],[22] so researchers are already becoming familiar with the interface and process.

A model interface

To offer a sense of how a reporting questionnaire might look, I present mock-ups of select portions of a form for a randomized trial. I do not submit that they are the only—or even the best—way to gather reporting details from researchers; these minimalist mock-ups are merely the first step toward a proof of concept. The final design would have to be developed and tested in consultation with users.

In the figures that follow, the blue letters are labels for annotations and would not appear on the interface.

Figure 1: The first screen an author will see after logging in. (A) Each author will have an account and profile, including affiliations; many journals already have author accounts as part of their online submission infrastructure. (B) An autocomplete field with a controlled vocabulary of the types of study supported by the system. (C) Many types of articles either have no associated reporting guidelines or are unlikely to have a set structure (such as commentaries and letters). This button allows authors to submit those articles in the traditional way.
Figure 2: Once the author selects the type of study, the appropriate associated form will load. If the author had chosen “randomised trial (cluster)” in Figure 1, for example, the CONSORT form with the cluster extension would load.
Figure 3: First page of the CONSORT questionnaire. (A) Because reporting guidelines and checklists vary in length, only after the form loads can the interface indicate progress through the questionnaire. A user-friendly system would also include a way for users to jump to a specific question or page. (B) The help button to the right of each field could bring up the associated section of the CONSORT Explanation and Elaboration document. (C) An autocomplete field with a controlled vocabulary of the possible sections in a structured abstract, such as the one from the National Library of Medicine.[23] (D) Required fields are indicated by an asterisk. (E) Users should be able to navigate to the next page without filling in the required fields on a given page; only at the end of the questionnaire would the system flag empty required fields. (F) Users should be able to save their progress at any time. Better yet, the system could autosave at regular intervals. (G) Users should also be able to exit at any point.
Figure 4: Item 3a on the CONSORT checklist. (A) The trial design field could autocomplete with the trial design types in a controlled vocabulary. If the study design is novel, users may click on the “Design type not listed” button to submit their articles traditionally. (B) Each checklist item should allow authors to elaborate if necessary. This box could support free-flowing text with formatting (e.g., Markdown) and LaTeX or MathML capabilities. (C) If a statement needs a citation, users could click on the “Cite” button, which would allow them to input structured bibliographic data. An ideal system would let them import that information from a reference management system. (D) The “+” button generates another “Additional information” box. Content from multiple boxes would be printed in sequence in the final report.
Figure 5: The user has chosen a full factorial design, and the system automatically brings up a box asking the user to fill in the number of variables and levels.
Figure 6: Completing the number of variables and levels generates a table that the user can use to fill in the allocation ratios.
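
The table generation in Figures 5 and 6 is mechanical once the design is data: the groups of a full factorial design are the cross-product of the variables’ levels. A minimal sketch, with illustrative variable names:

```python
from itertools import product

def factorial_groups(levels_per_variable: dict[str, list[str]]):
    """Enumerate the treatment groups of a full factorial design."""
    names = list(levels_per_variable)
    for combo in product(*levels_per_variable.values()):
        yield dict(zip(names, combo))

design = {"drug": ["placebo", "active"], "schedule": ["daily", "weekly"]}
for group in factorial_groups(design):
    print(group)  # 2 variables x 2 levels each -> 4 rows for allocation ratios
```
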
Figure 7: An example showing how structured inclusion and exclusion criteria might be collected. Sex and age are required fields; researchers may select Male, Female, Both, or Other (for example, if the study population is intersex). (A) Users may fill in other inclusion criteria below. The field to the left is a controlled vocabulary with an “” option. Selecting “condition” will allow the user to select from a controlled vocabulary, for example, the ICD, in the field to the right. (B) The “+” button allows the user to add as many criteria as necessary. The subsequent screen, for exclusion criteria, could be similarly structured (minus the age and sex fields).
Figure 8: Item 5 on the CONSORT checklist, which asks for “The interventions for each group with sufficient details to allow replication, including how and when they were actually administered.” Subsequent screens would let the user fill in details for Groups a, b, and ab. Because interventions are often procedural, journals may wish to encourage users to enter this information as a numbered list, which would help readability and reproducibility.
Figure 9: Participant flow diagram, generated based on study type. The participant flow in each group would have the same fields as the one shown for Group 1. They are collapsed in the figure to save space.
Figure 10: Item 15 on the CONSORT checklist asks for “a table showing baseline demographic and clinical characteristics for each group.” Once again, the number of groups is based on the trial design specified earlier. Users could generate their own tables in the system, upload tabular text (.dat, .csv, .tsv) or spreadsheets (e.g., .xlsx, .ods), or link to data-sharing sites. For analysis and discussion sections, the interface would also accommodate uploading figures, much as online journal submission systems already do.
Figure 11: Final page of the CONSORT questionnaire. (A) A user should be able to preview the paper before submitting. The preview would be generated as a report in the same way as the versions for peer review and for eventual publication—a compilation, in a specific order, of the data entered. (B) Button for final article submission. Once users clicked on this button, they would be alerted to any required fields left empty.
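
The compilation step behind that preview is simple once every item lives in its own field. A minimal sketch, assuming a section order defined by the reporting guideline:

```python
SECTION_ORDER = ["title", "abstract", "trial_design", "eligibility", "interventions"]

def compile_report(fields: dict) -> str:
    """Assemble a readable report by printing stored fields in a fixed order."""
    sections = [f"{key.replace('_', ' ').title()}\n{fields[key]}"
                for key in SECTION_ORDER if key in fields]
    return "\n\n".join(sections)

print(compile_report({"title": "Example trial", "trial_design": "Parallel"}))
```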

Other considerations

Archives

When journals moved from print to online dissemination, publishers recognized the value of digitizing their archives so that older articles could also be searched and accessed. Analogously, if publishers not only accepted new articles as structured data but also committed to converting their archives, the benefits would be enormous. First, achieving the eventual goal of completely converting all existing biomedical articles would help researchers perform accelerated systematic reviews on a comprehensive set of data. Second, the conversion process would favour published articles that already comply with the reporting guidelines; after conversion, researchers would be able to search a curated dataset of high-quality articles.

I recognize that the resources needed for this conversion would be considerable, and I envision the development of a new class of professionals trained in assessing and converting existing articles. For articles that meet almost but not quite all reporting guidelines, particularly more recent publications, these professionals may succeed in acquiring missing data from some authors.[24] Advances in automating the systematic review process[25] may also help expedite conversion.

Software development for the database and interface

In “Reducing waste from incomplete or unusable reports of biomedical research,” Glasziou et al. call on the international community to find ways to decrease the time and financial burden of systematic reviews and urge funders to take responsibility for developing infrastructure that would improve reporting and archiving. To ensure interoperability and encourage widespread adoption of health reports as structured data, I urge the international biomedical research community to develop and agree to a common set of standards for the report databases, in analogy to the effort to create standards for trial registration that culminated in the World Health Organization’s International Standards for Clinical Trial Registries.[26] An international consortium dedicated to developing a robust database and flexible interface to accommodate reporting structured data would also be more likely to secure the necessary license to use a copyrighted controlled vocabulary such as the ICD.

Implementation

Any new system with wide-ranging effects must be developed in consultation with a representative sample of users and adequately piloted. The users of the report submission interface will largely be researchers, but the report generated by the journal could be consulted by a diverse group of stakeholders—not only researchers but also clinicians, patient groups, advocacy groups, and policy makers, among others. A parallel critical review of the format of this report would provide an opportunity to assess how best to reach audiences invested in discovering new research.

Although reporting guidelines exist for many different types of reports and each could serve as the basis of a questionnaire, I recommend reviewing all existing biomedical reporting guidelines together to harmonize them as much as possible before a database for reports is designed, perhaps in collaboration with the BioSharing initiative[27] and in an effort similar to the MIBBI Foundry project to “synthesize reporting guidelines from various communities into a suite of orthogonal standards” in the biological sciences.[28] For example, whereas recruitment methods are required according to the STARD guidelines, they are not in CONSORT. Ensuring that all guidelines have similar basic requirements would improve interoperability among article types and yield more homogeneity in the richness of the data.
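
Such a harmonization review could begin by comparing required items across guidelines directly. A sketch, with deliberately abbreviated, illustrative item sets:

```python
# Hypothetical, much-abbreviated required-item sets for two guidelines
CONSORT_REQUIRED = {"trial_design", "eligibility", "interventions", "outcomes"}
STARD_REQUIRED = {"eligibility", "recruitment_methods", "test_methods", "outcomes"}

print(STARD_REQUIRED - CONSORT_REQUIRED)  # items STARD asks for that CONSORT doesn't
print(CONSORT_REQUIRED & STARD_REQUIRED)  # the shared core that aids interoperability
```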

Conclusions

Structuring biomedical research reports as data will improve report quality, decrease the time and effort it takes to perform systematic reviews, and facilitate translations and interoperability with existing data-driven systems in health care. The technology exists to realize this shift, and I, like Glasziou et al., urge funders and publishers to collaborate on the development, in consultation with users, of a robust reporting database system and flexible interface. The next logical step for research in this area would be to build a prototype for researchers to use while running a usability study.

Reports as structured data aren’t a mere luxury—they’re an imperative; without them, biomedical research is unlikely to become well integrated into existing health informatics infrastructure clinicians use to make decisions about their practice and about patient care.

Sources

[1] “CONSORT Statement,” accessed October 04, 2014, http://www.consort-statement.org/.

[2] “PRISMA Statement,” accessed October 04, 2014, http://www.prisma-statement.org/index.htm.

[3] “STARD Statement,” n.d., http://www.stard-statement.org/.

[4] “The EQUATOR Network | Enhancing the QUAlity and Transparency Of Health Research,” accessed September 26, 2014, http://www.equator-network.org/.

[5] Erik Andrew, “A Proposal for Structured Reporting of Randomized Controlled Trials,” JAMA: The Journal of the American Medical Association 272, no. 24 (December 28, 1994): 1926, doi:10.1001/jama.1994.03520240054041.

[6] Elizabeth Wager, “Publishing Clinical Trial Results: The Future Beckons.,” PLoS Clinical Trials 1, no. 6 (January 27, 2006): e31, doi:10.1371/journal.pctr.0010031.

[7] Roberto Stramare et al., “Structured Reporting Using a Shared Indexed Multilingual Radiology Lexicon.,” International Journal of Computer Assisted Radiology and Surgery 7, no. 4 (July 2012): 621–33, doi:10.1007/s11548-011-0663-4.

[8] J M L Bosmans et al., “Structured Reporting: If, Why, When, How-and at What Expense? Results of a Focus Group Meeting of Radiology Professionals from Eight Countries.,” Insights into Imaging 3, no. 3 (June 2012): 295–302, doi:10.1007/s13244-012-0148-1.

[9] Henry S Rzepa, “Chemical Datuments as Scientific Enablers.,” Journal of Cheminformatics 5, no. 1 (January 2013): 6, doi:10.1186/1758-2946-5-6.

[10] Robert L Kane, Jye Wang, and Judith Garrard, “Reporting in Randomized Clinical Trials Improved after Adoption of the CONSORT Statement.,” Journal of Clinical Epidemiology 60, no. 3 (March 2007): 241–49, doi:10.1016/j.jclinepi.2006.06.016.

[11] N Smidt et al., “The Quality of Diagnostic Accuracy Studies since the STARD Statement: Has It Improved?,” Neurology 67, no. 5 (September 12, 2006): 792–97, doi:10.1212/01.wnl.0000238386.41398.30.

[12] Paul Glasziou et al., “Reducing Waste from Incomplete or Unusable Reports of Biomedical Research.,” Lancet 383, no. 9913 (January 18, 2014): 267–76, doi:10.1016/S0140-6736(13)62228-X.

[13] Robin Christensen, Henning Bliddal, and Marius Henriksen, “Enhancing the Reporting and Transparency of Rheumatology Research: A Guide to Reporting Guidelines.,” Arthritis Research & Therapy 15, no. 1 (January 2013): 109, doi:10.1186/ar4145.

[14] Sara Khangura et al., “Evidence Summaries: The Evolution of a Rapid Review Approach.,” Systematic Reviews 1, no. 1 (January 10, 2012): 10, doi:10.1186/2046-4053-1-10.

[15] Patricia Wright and Fraser Reid, “Written Information: Some Alternatives to Prose for Expressing the Outcomes of Complex Contingencies.,” Journal of Applied Psychology 57, no. 2 (1973).

[16] Karen A. Schriver, Dynamics in Document Design: Creating Text for Readers (New York: Wiley, 1997).

[17] Robert E. Horn, Mapping Hypertext: The Analysis, Organization, and Display of Knowledge for the Next Generation of On-Line Text and Graphics (Lexington Institute, 1989).

[18] An-Wen Chan et al., “Increasing Value and Reducing Waste: Addressing Inaccessible Research.,” Lancet 383, no. 9913 (January 18, 2014): 257–66, doi:10.1016/S0140-6736(13)62296-5.

[19] Andra Morrison et al., “The Effect of English-Language Restriction on Systematic Review-Based Meta-Analyses: A Systematic Review of Empirical Studies.,” International Journal of Technology Assessment in Health Care 28, no. 2 (April 2012): 138–44, doi:10.1017/S0266462312000086.

[20] R. Smith, “Scientific Articles Have Hardly Changed in 50 Years,” BMJ 328, no. 7455 (June 26, 2004): 1533–1533, doi:10.1136/bmj.328.7455.1533.

[21] Australian Research Council, “Grant Application Management System (GAMS) Information” (corporateName=The Australian Research Council; jurisdiction=Commonwealth of Australia), accessed October 04, 2014, http://www.arc.gov.au/applicants/rms_info.htm.

[22] Canadian Institutes for Health Research, “Acceptable Application Formats and Attachments—CIHR,” November 10, 2005, http://www.cihr-irsc.gc.ca/e/29300.html.

[23] “Structured Abstracts in MEDLINE®,” accessed January 14, 2015, http://structuredabstracts.nlm.nih.gov/.

[24] Shelley S Selph, Alexander D Ginsburg, and Roger Chou, “Impact of Contacting Study Authors to Obtain Additional Data for Systematic Reviews: Diagnostic Accuracy Studies for Hepatic Fibrosis.,” Systematic Reviews 3, no. 1 (September 19, 2014): 107, doi:10.1186/2046-4053-3-107.

[25] Guy Tsafnat et al., “Systematic Review Automation Technologies.,” Systematic Reviews 3, no. 1 (January 09, 2014): 74, doi:10.1186/2046-4053-3-74.

[26] World Health Organization, International Standards for Clinical Trial Registries (Geneva, Switzerland: World Health Organization, 2012), www.who.int/iris/bitstream/10665/76705/1/9789241504294_eng.pdf.

[27] “BioSharing,” accessed October 12, 2014, http://www.biosharing.org/.

[28] “MIBBI: Minimum Information for Biological and Biomedical Investigations,” accessed October 12, 2014, http://mibbi.sourceforge.net/portal.shtml.

Indi Young—Practical empathy: For collaboration and creativity in your work (webinar)

Empathy for your end users can help you create and design something that truly suits their needs, and it’s the basis of usability design and plain language writing. Putting yourself in someone else’s shoes is an example of applying empathy, but UX consultant Indi Young, author of Practical Empathy, says that you first have to develop empathy, and she led a UserTesting.com webinar to show us how.

Empathy, said Young, is usually associated with emotion: it makes you think about sensitivity and warmth or about sympathy and understanding a person’s perspective, sometimes so that you can excuse their behaviours or forgive their actions. As it turns out, that definition describes empathy rather poorly. Dr. Brené Brown created a short animation to explain the differences between sympathy and empathy.

True emotional empathy, Young explained, is when another person’s emotion infects you. “It strikes like lightning,” she said, and “it’s how movies and books work”—you’re struck with the same emotions as the characters. This kind of emotional empathy can be incredibly powerful, but you can’t force it or will it to happen. In our work, we need something more reliable.

Enter cognitive empathy, which can include emotions but focuses on understanding another person’s thinking and reactions. In creative work, we often end up concentrating too much on ideas and neglect the people. By listening to people and deepening our understanding of them, we can develop and apply ideas that support their patterns. This listen » deepen » apply process is iterative.

How is empathy important in our work? Empathy has many uses, said Young, and one she saw a lot was using it to persuade or manipulate, which could be well intentioned but might also be problematic. She’d rather focus on using empathy to support the intents and purposes of others—to collaborate and create. “Others” is purposely vague here—it can refer to people in your organization or external to it.

To truly collaborate with someone, you have to listen to them, one on one. “When someone realizes you are really listening to them and you don’t have an ulterior motive, they really open up.” These listening sessions allow you to generate respect for another person’s perspectives and can be the basis for creativity. When a user issues a request, ask about the thinking behind it. Knowing the motivation behind a request might allow you to come up with an even better idea to support your users. You can’t establish empathy based only on a user’s opinions or preferences.

In a listening session, be neutral and let go of any judgments; you can’t properly support someone you’re judging. Purposeful listening can also let you discover what you’re missing—what you don’t know you don’t know. The intent of a listening session isn’t to solve any problems—don’t go into a session with an agenda or a set list of questions, and don’t use the session as a forum to show others how much you know. Become aware of your assumptions and don’t be afraid to ask about them.

Let the other person set boundaries of what to talk about. Don’t bring something up if they don’t bring it up. If they’re not comfortable talking, excuse them. Don’t set a time limit or watch the clock. Finally, don’t take notes. “The act of writing things down in a notebook takes up so much of your brain that you can’t listen as well,” said Young.

What you’re trying to uncover in the listening sessions is the person’s reasoning, intent, and guiding principles. What passes through their mind as they move toward their intent? Instead of asking “How do you go about X?” ask “What went through your mind as you X?”

These guidelines seem simple, said Young, but mastering listening skills takes a lot of practice. Once people start opening up and you see how your ideas can better serve their needs, you’ll see how powerful developing cognitive empathy can be.

***

Indi Young’s webinar will be available on UserTesting.com in a couple of weeks.

Time to leave academic writing to communications experts?

In the Lancet’s 2014 series about preventing waste in biomedical research, Paul Glasziou et al. pointed to “poorly written text” as a major reason a staggering 50% of biomedical reports are unusable [1], effectively squandering the research behind them. According to psycholinguist Steven Pinker [2], bad academic writing persists partly because there aren’t many incentives for scholars to change their ways:

Few academic journals stipulate clarity among their criteria for acceptance, and few reviewers and editors enforce it. While no academic would confess to shoddy methodology or slapdash reading, many are blasé about their incompetence at writing.

He adds:

Enough already. Our indifference to how we share the fruits of our intellectual labors is a betrayal of our calling to enhance the spread of knowledge. In writing badly, we are wasting each other’s time, sowing confusion and error, and turning our profession into a laughingstock.

The problem of impenetrable academese is undeniable. How do we fix it?

In “Writing Intelligible English Prose for Biomedical Journals,” John Ludbrook proposes seven strategies [3]:

  • greater emphasis on good writing in schools and universities,
  • making use of university service courses and workshops on writing plain and scientific English,
  • consulting books on science writing,
  • one-on-one mentoring,
  • using “scientific” measures to reveal lexical poverty (i.e., readability metrics; a sample calculation follows this list),
  • making use of freelance science editors, and
  • encouraging the editors of biomedical journals to pay more attention to the problem.
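
As an aside on the readability-metrics item above: these measures are simple formulas over word, sentence, and syllable counts. Here is a rough sketch of the Flesch Reading Ease score, with a crude vowel-group heuristic standing in for a proper syllable counter:

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat."), 1))  # higher = easier
```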

Many institutions have implemented at least some of these strategies. For instance, SFU’s graduate student orientation in summer 2014 introduced incoming students to the library’s writing facilitators and open writing commons. And at UBC, Eric Jandciu, strategist for teaching and learning initiatives in the Faculty of Science, has developed communication courses and resources specifically for science students, training them early in their careers “to stop thinking of communication as separate from their science.” [4]

Although improving scholars’ writing is a fine enough goal, the growth in the past fifteen years of research interdisciplinarity [5], where experts from different fields contribute their strengths to a project, has me wondering whether we would be more productive if we took the responsibility of writing entirely away from researchers. Rather than forcing academics to hone a weak skill, maybe we’d be better off bringing in communications professionals whose writing is already sharp.

This model is already a reality in several ways (though not all of them aboveboard):

  • Many journals encourage authors to have their papers professionally edited before submission [6]. From personal experience, I can confirm that this “editing” can involve heavy rewriting.
  • The pharmaceutical industry has long used ghostwriters to craft journal articles on a researcher’s behalf, turning biomedical journals into marketing vehicles [7]. We could avoid the ethical problems this arrangement poses—including plagiarism and conflict of interest—with a more transparent process that reveals a writer’s identity and affiliations.
  • Funding bodies such as CIHR have begun emphasizing the importance of integrated knowledge translation (KT) [8], to ensure knowledge users have timely access to research findings. Although much of KT focuses on disseminating research knowledge to stakeholders outside of academia, including patients, practitioners, and policy makers, reaching fellow researchers is also an important objective.

To ensure high-quality publications, Glasziou et al. suggest the following:

Many research institutions already employ grants officers to increase research input, but few employ a publication officer to improve research outputs, including attention to publication ethics and research integrity, use of reporting guidelines, and development of different publication models such as open access. Ethics committees and publication officers could also help to ensure that all research methods and results are completely and transparently reported and published.

Such a publication officer would effectively serve as an in-house editor and production manager. Another possibility is for each group or department to hire an in-house technical communicator. Technical communicators are trained in interviewing subject matter experts and using that information to draft documents for diverse audiences. In the age of big data, one could also make a convincing case for hiring a person who specializes in data visualization to create images and animations that complement the text.

That said, liberating scientists from writing should not absolve them of the responsibility of learning how to communicate. At a minimum, they would still need to understand the publication process enough to effectively convey their ideas to the writers.

Separating out the communication function within research would also raise questions about whether we should also abolish the research–teaching–service paradigm on which academic tenure is based. If we leave the writing to strong writers, perhaps only strong teachers should teach and only strong administrators should administrate.

Universities’ increasing dependence on sessional and adjunct faculty is a hint that this fragmentation is already happening [9], though in a way that reinforces institutional hierarchies and keeps these contract workers from being fairly compensated. If these institutions continue to define ever more specialized roles, whether for dedicated instructors, publication officers, or research communicators, they’ll have to reconsider how best to acknowledge these experts’ contributions so that they feel their skills are appropriately valued.

Sources

[1] Paul Glasziou et al., “Reducing Waste from Incomplete or Unusable Reports of Biomedical Research,” Lancet 383, no. 9913 (January 18, 2014): 267–76, doi:10.1016/S0140-6736(13)62228-X.

[2] Steven Pinker, “Why Academics Stink at Writing,” The Chronicle of Higher Education, September 26, 2014, http://chronicle.com/article/Why-Academics-Writing-Stinks/148989/.

[3] John Ludbrook, “Writing Intelligible English Prose for Biomedical Journals,” Clinical and Experimental Pharmacology & Physiology 34, no. 5–6 (2007): 508–14, doi:10.1111/j.1440-1681.2007.04603.x.

[4] Iva Cheung, “Communication Convergence 2014,” Iva Cheung [blog], October 8, 2014, https://ivacheung.com/2014/10/communication-convergence-2014/.

[5] B.C. Choi and A.W. Pak, “Multidisciplinarity, Interdisciplinarity, and Transdisciplinarity in Health Research, Services, Education and Policy: 1. Definitions, Objectives, and Evidence of Effectiveness,” Clinical and Investigative Medicine 29 (2006): 351–64.

[6] “Author FAQs,” Wiley Open Access, http://www.wileyopenaccess.com/details/content/12f25e4f1aa/Author-FAQs.html.

[7] Katie Moisse, “Ghostbusters: Authors of a New Study Propose a Strict Ban on Medical Ghostwriting,” Scientific American, February 4, 2010, http://www.scientificamerican.com/article/ghostwriter-science-industry/.

[8] “Guide to Knowledge Translation Planning at CIHR: Integrated and End-of-Grant Approaches,” Canadian Institutes of Health Research, Modified June 12, 2012, http://www.cihr-irsc.gc.ca/e/45321.html.

[9] “Most University Undergrads Now Taught by Poorly Paid Part-Timers,” CBC.ca, September 7, 2014, http://www.cbc.ca/news/canada/most-university-undergrads-now-taught-by-poorly-paid-part-timers-1.2756024.

***

This post was adapted from a paper I wrote for one of my courses. I don’t necessarily believe that a technical communication–type workflow is the way to go, but the object of the assignment was to explore a few “what-if” situations, and I thought this topic was close enough to editing and publishing to share here.

Colin Moorhouse—Editing for the ear (EAC-BC meeting)

Colin Moorhouse has been a freelance speech writer for twenty-five years and has written for clients in government, at NGOs, and in the private sector. “I get to put words in people’s mouths,” he said, “which is a very nice thing.” He also enjoys that speech writing exposes him to a huge variety of topics (much like editing). Some are more interesting than others, but even the boring ones aren’t boring, because Moorhouse needs to devote only a short burst of attention to each one. “I can be interested for the three days it takes me to write the speech,” he said.

The key difference between speech writing and other kinds of writing is that it’s all about writing for the ear, not the eye. Even if you’re a skilled writer, what you write may not sound natural when said out loud. “Who didn’t say this?” he asked:

Don’t think about all the services you would like to receive from this great nation; think about how you can make your own contribution to a better society.

That’s, of course, a paraphrase of the famous line in John F. Kennedy’s inaugural speech: “ask not what your country can do for you; ask what you can do for your country.” You can tell which would have a bigger impact spoken aloud.

When words are written for the ear, said Moorhouse, they cater to the imagination. We filter those words through our own experience. The kinds of written materials that most closely resemble speech are letters and diaries, and he’ll sometimes use these to help him write a person’s voice into a speech. “People say that to write great speeches, you should read great speeches. I don’t think so. You should listen to great speeches.”

Speeches are not a great way to share information, Moorhouse told us, because we forget what we hear. Instead, speeches are about engaging an audience so that they’ll associate the speaker with that event or topic. Moorhouse listed six considerations when he writes speeches.

1. Oratory

“If your speaker’s a great orator,” he said, “they can almost read the phone book, because there’s something about their voice.” But most speakers aren’t like that. “Ninety-nine percent of my speakers aren’t good. That’s not to criticize them; they’re not trained.” Moorhouse has to find ways to write words that make them better speakers.

2. Event

The nature of the event factors largely into Moorhouse’s speech writing. Does the audience want to be there, or did they have to be? Is the speaker delivering good news or bad? Will there be 30 or 300 people? Will there be a mic or no mic? PowerPoint or no PowerPoint? Will people be live-tweeting or recording the speech? How knowledgeable will the audience be, and what mood will they be in? Each of these elements can change the speech entirely.

3. Story

“Storytelling is an incredibly valuable part of speech writing,” said Moorhouse. “Stories ground all of us to our common humanity.” If you’re editing a speech and you find it boring, ask the writer if the speaker has told them any stories.

4. Humour

“Humour is not joke telling. Jokes never work.” Moorhouse added, “I can make people cry a lot more easily than laugh. We grieve at the same things, but humour is very localized.” Self-effacing humour tends to work well, especially if it’s embedded in a story.

5. Language

“This is where you and I have our strengths,” he said. He advocates using simple language and declarative sentences.

6. Interest

“I believe we should be able to walk into almost any presentation and find it interesting,” Moorhouse said. “All speeches are potentially interesting.” If you’re editing a speech, the best litmus test for whether a speech is interesting is to ask yourself, “Would I want to sit through it?”

The great thing about speech writing is that you don’t need all six of these elements to make a good speech. Maybe the speaker’s not a great orator, for example. You can use the other elements to compensate.

Make sure you home in on a speech’s intended message, he said. If you find that your speech seems to be rambling, you haven’t nailed the message. If you need to, go back to the speaker and ask them to complete the sentence, “Today I want to talk to you about ______.”

In terms of process, Moorhouse suggests asking the client for the invite letter and event agenda, so that you know when the speech will be. He doesn’t always get to meet the speakers, but if he does, he’ll tape his interview with them so that he can listen to their voice and catch the intonation and the words they use. He’ll also interview people who know the speaker or the organization’s front-line staff to glean information and stories. He doesn’t use outlines, although other writers do. The danger with presenting an outline to the client, though, is that the client might want to circulate it to a bunch of people, which would hold up the writing process. Moorhouse spends about an hour on every minute of the speech—20 hours working on a 20-minute keynote.

Moorhouse offers a full online course about speech writing on his website, which includes a 170-page manual and four live webinars. He also offers a short course, with a 50-page manual, a speech-writing checklist, a webinar, and a 20-minute consult.

Craig Morrison—10 fixes for improving your product’s UX (webinar)

UserTesting.com hosted a free seminar featuring usability consultant Craig Morrison of Usability Hour. Morrison began as a web designer, focusing on visual design, but he soon discovered that aesthetics alone aren’t enough to ensure a good user experience. Freelancers often get into the habit of satisfying only their clients’ demands and, once they finish one project, they move on to the next, which means that they don’t get a chance to refine user experience. But positive user experiences translate into user recommendations and business growth, so it’s a good idea to help clients see the importance of placing user needs ahead of their own.

Morrison outlined ten of the most common UX mistakes and how to fix them:

1. Focusing on impressive design instead of usable architecture

It’s tempting to want to make a site that will wow people with its visuals, but aesthetics alone don’t provide value. Morrison offered Craigslist as an example of how a plain-looking site can be popular because it has great functionality. He recommends that you consult a UX consultant first to plan a usable content structure, then focus on visual design.

2. Not removing unvalidated features

If your site has features that nobody is using, all it’s doing is cluttering up the site and making it harder for users to find what they really want from you.

3. Listening to user ideas

This is not to say that you shouldn’t listen to your users at all; listening to their problems is valuable, but often what users suggest as solutions wouldn’t work well. Morrison suggests that you start user testing and watch how people use the product. Seeing where they falter will highlight what you need to work on.

Polling your audience is also a good way to get feedback, particularly for new features, but phrase your questions carefully. You’re looking more for users’ motivations for using a particular feature, as opposed to their opinions about which option they’d prefer.

4. Forcing people to sign up without offering any value

Your landing page can’t be just a logo and a sign-up form. People aren’t willing to exchange their information for nothing. Instead, show why your product is valuable before they sign up. This also goes for credit card numbers: asking for that information during a free trial will turn people off before they’ve even tried your product.

5. Taking user feedback personally

If you dismiss negative feedback by saying “they just don’t get it” or “users are dumb,” you’re sabotaging your business. Complaints are opportunities to improve UX.

6. Poorly designed search function

Half of web users are search oriented and won’t browse. Morrison admits that this piece of advice may sound like a bit of a cop-out, but “follow proper guidelines for designing a usable search function.” There are best practices out there, and he’s written about some of them on his blog.

7. Not optimizing for mobile

“Mobile traffic on the web is 20% and rising,” said Morrison, and you’re driving that traffic away if your site isn’t optimized. People aren’t going to voluntarily spend the time to zoom and navigate through a website meant for larger screens. Invest time and money into a simple mobile site. Morrison says that whatever solution you choose is up to you, but he’s found CSS media queries to be a simple way to ensure your content displays how you want it to, and he prefers it over responsive design.

8. Not offering users help

Despite your best efforts to design a user-friendly site, inevitably some people will get lost or confused and then won’t come back, out of frustration. Morrison suggests buttressing good content architecture with a searchable wiki and an FAQ page. How-to videos are great, as is live support, if you can offer it.

9. No emotional connection between brand and users

People who feel emotionally connected to your brand will have a better experience. If your users aren’t familiar and comfortable with your brand, they’ll be quick to dislike you for even the smallest flaws. Focus on building your brand early, and get buy-in from all of your employees. For example, if part of what you offer is excellent customer service, ensure that all of your employees live up to that expectation.

10. Not including user onboarding

A user’s first impression is key, and if they get frustrated with using your product, they’ll quit and never come back. You’ve sunk a lot of effort into attracting a new user, but you’ll lose it all if you can’t activate them into a long-term user. User onboarding is a way of teaching users how to use your product while demonstrating its value.

That said, Morrison recognizes that not everybody loves onboarding. Always offer users the ability to skip it if they’re confident in using your product, but make sure they can return to the onboarding whenever they need to brush up.

According to Morrison, real business growth through UX comes from

  1. getting traffic to the landing page
  2. converting that traffic
  3. activating new users to become long-lasting users

Morrison will be offering an online course through his website to teach people how to meet those goals using great UX. He’s also written an ebook, 5-minute UX Quick Fixes, available free on his site. The webinar I attended will be posted in a couple of weeks at UserTesting.com.

***

I liked that although Morrison’s advice is obviously more geared toward websites or apps, a lot of it applies to other kinds of documents as well. I saw the following parallel mistakes for plain language documents (numbering corresponds to list above):

1. Focusing on aesthetics over functionality. Aesthetic design is important, but usability is paramount: do your choices regarding type, graphics, headings, and white space make the document easier to read and understand?

2. Including too much “nice to know” information. In most plain language documents, you should give readers what they need to know.

3. Listening to users? This point of Morrison’s gave me pause, but his advice of paying attention to the users’ problems rather than their suggested solutions makes sense. For instance, users who consistently fill in a part of a form incorrectly may not pinpoint poor layout as the reason, but a plain language expert might.

5. Taking user feedback personally. This problem probably applies to the client more than the plain language writer or editor, but the editor may have to go to bat for a user and convince a reluctant client that certain changes are necessary.

6. Poorly designed search function. A good search function is a must-have for websites and apps. The print analogue is an excellent table of contents, descriptive and logical headings and subheadings, and a thorough index.

Have I missed other parallels? Let me know in the comments.

Informed-consent documents: Where legalese meets academic jargon

Ever since the Nuremberg Trials put on display the atrocities of human experimentation at the hands of Nazi doctors, the concept of informed consent has been a cornerstone of both medical treatment and biomedical research. [1] Although no country has adopted the Nuremberg Code in its entirety, most Western nations have acknowledged the importance of informed consent as a pillar of research and medical ethics. But if study participants or patients don’t understand the documents that describe the study protocol or treatment plan, are they truly informed?

For human research subjects, the U.S.’s Code of Federal Regulations states:

46.116 General requirements for informed consent

Except as provided elsewhere in this policy, no investigator may involve a human being as a subject in research covered by this policy unless the investigator has obtained the legally effective informed consent of the subject or the subject’s legally authorized representative. An investigator shall seek such consent only under circumstances that provide the prospective subject or the representative sufficient opportunity to consider whether or not to participate and that minimize the possibility of coercion or undue influence. The information that is given to the subject or the representative shall be in language understandable to the subject or the representative. [Emphasis added]

In “Public Health Literacy in America: An Ethical Imperative,” Julie Gazmararian and her co-authors note that one of the “Characteristics of a health-literate America” is that “Informed consent documents used in health care are written in a way that allow people to give or withhold consent based on information they need and understand.” [2]

Unfortunately, actual informed-consent materials fall far short of promoting understanding. Informed-consent documents are used both for legal reasons and to explain research or treatment protocols; as a result, they’re susceptible to being filled with legalese and medicalese—the worst of both worlds. Giuseppina Terranova and her team reviewed consent forms used in various imaging procedures and found that

At qualitative assessment by consensus of the expert panel, the informed consent forms were complex and poorly organized, were written in a jargon style, and contained incomplete content (not including information about treatment options, long-term radiation risk and doses); for outcome probabilities, relevant information was not properly highlighted and easy to find. [3]

Having a complex informed-consent form rife with legalese only stirs distrust among participants and patients. In “Improvement of Informed Consent and the Quality of Consent Documents,” Michael Jefford and Rosemary Moore write:

Informed consent has two main aims: first, to respect and promote participants’ autonomy; and second, to protect them from potential harm. Provision of information in an understandable way lends support to both these aims…

The written informed-consent document (ie, consent form) is an important part of the requirement to disclose and advise participants of the details of a proposed trial. Although the form has been said to give “legal and symbolic documentation of an agreement to participate,” the length and complexity of informed-consent documents hinder participant understanding. Viewing the consent form mainly as a legal document tends to hinder attempts to create reader-friendly documents: “many sponsors and institutions appear to view them primarily as a legal instrument to protect them against litigation.” [4]

Ironically, “The high reading levels of most such forms precludes this understanding, increasing rather than limiting legal liability.” [5] What’s more, if a consent document is hard to understand, research participants will believe researchers are merely covering their own asses rather than prioritizing the participants’ well-being.

The obvious solution to this problem is to use plain language in informed-consent documents. In a test of a standard versus modified (plain language) pediatric consent form for parents, Alan R. Tait and co-investigators found that

Understanding of the protocol, study duration, risks, and direct benefits, together with overall understanding, was greater among parents who received the modified form (P<.001). Additionally, parents reported that the modified form had greater clarity (P = .009) and improved layout compared with the standard form (P<.001). When parents were shown both forms, 81.2% preferred the modified version. [6]

Further, not only do plain language statements (PLS) protect research subjects and patients, but they also benefit researchers:

In the most practical sense, a commitment to producing good quality PLS leads to faster ethics approval—an outcome that will delight researchers. However, the real reward that comes with commitments to high quality PLS is the knowledge that parents and participants are properly informed and that researchers are contributing to a positive change in meeting the information requirements of parents and young research participants.

Plain language information statements need to be clearly understood by research subjects if the ethics process for research approval is to fulfil its objective. [7]

I see an opportunity for plain language experts to advocate for informed consent by promoting clear communication principles at research institutions and health authorities. Although most institutional research ethics boards (REBs) have guidelines for consent forms that recommend using lay language, I would guess that most REB members are unfamiliar with the plain language process. Institutional REBs, such as the one at Simon Fraser University, consist of not only faculty members and students but also members of the wider community, so even if you are unaffiliated with the institution, you may still be able to join an REB and advocate for plain language from the inside. If you’d rather not commit to sitting on an REB, you might want to see if you could give a presentation at an REB meeting about plain language and clear communication principles.

In my ideal world, a plain language review of consent documents would be mandatory for ethics approval, but biostatistician and current chair of SFU’s REB, Charlie Goldsmith, warns that adding a further administrative hurdle to ethics approval probably wouldn’t fly. Most researchers already see the ethics review process as burdensome and a hindrance to their work. But if you could convince researchers that a plain language review before submission to the REB could accelerate approval, as Green and co-investigators had found, you might help open up opportunities for plain language advocates to work with researchers directly to develop understandable consent documents from the outset.

That said, plain language informed-consent forms address only one facet of the interaction and relationship between researcher and study participant, or between clinician and patient. Jefford and Moore write:

There are reasons for putting effort into the production of plain-language participant information and consent forms. However, evidence suggests that these forms should not be relied on solely to ensure that a person understands details about a trial. Plain-language forms should be seen as part of the process that aims to achieve meaningful informed consent. [8]

In other words, clear communication initiatives should extend beyond written materials to in-person interactions: researchers and clinicians should receive training in plain language debriefing and in techniques such as “teach-back” (asking someone to repeat the information they’ve just been given in their own words) to ensure that they are fulfilling their ethical obligations and are doing all they can to help patients and study participants become truly informed.

To learn more about research ethics, including informed consent, take the Course on Research Ethics, developed by Canada’s Panel on Research Ethics.

Sources

[1] JB Green et al., “Putting the ‘Informed’ into ‘Consent’: A Matter of Plain Language,” Journal of Paediatrics and Child Health 39, no. 9 (December 2003): 700–703, doi:10.1046/j.1440-1754.2003.00273.x.

[2] Julie A Gazmararian et al., “Public Health Literacy in America: An Ethical Imperative,” American Journal of Preventive Medicine 28, no. 3 (April 2005): 317–22, doi:10.1016/j.amepre.2004.11.004.

[3] Giuseppina Terranova et al., “Low Quality and Lack of Clarity of Current Informed Consent Forms in Cardiology: How to Improve Them,” JACC. Cardiovascular Imaging 5, no. 6 (June 1, 2012): 649–55, doi:10.1016/j.jcmg.2012.03.007.

[4] Michael Jefford and Rosemary Moore, “Improvement of Informed Consent and the Quality of Consent Documents,” The Lancet. Oncology 9, no. 5 (May 2008): 485–93, doi:10.1016/S1470-2045(08)70128-1.

[5] Sue Stableford and Wendy Mettger, “Plain Language: A Strategic Response to the Health Literacy Challenge,” Journal of Public Health Policy 28, no. 1 (January 1, 2007): 71–93, doi:10.1057/palgrave.jphp.3200102.

[6] Alan R Tait et al., “Improving the Readability and Processability of a Pediatric Informed Consent Document: Effects on Parents’ Understanding,” Archives of Pediatrics & Adolescent Medicine 159, no. 4 (April 1, 2005): 347–52, doi:10.1001/archpedi.159.4.347.

[7] JB Green et al., 2003.

[8] Michael Jefford and Rosemary Moore, 2008.

***

This post is an excerpt (heavily edited to provide context) of a paper I wrote for one of my courses about the role of plain language in health literacy. Plain language experts might find some of the references useful in their advocacy work.