Blog

Top 10 controversies surrounding cattle

I’m an (extremely) occasional contributor to Listverse. Its quality has been hit or miss as of late, but I usually see one or two lists a week that I find interesting and learn something from. This morning my most recent list, inspired by Florian Werner’s Cow: A Bovine Biography, and a David Rotsztain cheese workshop I attended a couple of months ago, was published.

A hindering hierarchy?

All editors aspiring to work in book publishing know what it takes to climb the ladder: start off checking inputting and possibly proofreading, and once you’ve proven yourself, you can progress to copy editing. Only after mastering that will more substantive work come your way, and then, if so desired, experience with acquisitions.

The advantages of this system are many. First, you get a well-rounded understanding of all steps in the editorial process. Second, by checking corrections and inputting, you get into the heads of more senior editors and learn the tricks of the trade. Third, you develop an appreciation for the roles of all editorial, design, and production team members—an empathy that will serve you well as a mentor or project manager overseeing the copy editing or proofreading work of a more junior editor.

But how valid is this tacit hierarchy? It implies that acquiring and substantive editors are somehow better than copy editors, who themselves have a leg up on proofreaders. This stratification has real consequences: freelance proofreaders typically charge lower rates than copy editors, and substantive editors command the most. Editorial recognition like the Editors’ Association of Canada’s annual Tom Fairley Award for Editorial Excellence generally (by which I mean the overwhelming majority of the time) goes to a substantive editor rather than a copy editor or proofreader.

Although I would agree that no amount of proofreading will ever salvage a poorly structured and awkwardly written piece, I am concerned about the limitations of this rather firmly entrenched paradigm. The fact is that proofreading, copy editing, and substantive editing (the EAC goes so far as to split the latter into stylistic editing and structural editing) each require a unique skill set. Some editors work well with the big-picture stuff, whereas others are adept at the details, and it’s time to stop seeing those editors who devote themselves to copy editing as failed substantive editors. And publishers that adopt this classic “substantive reigns supreme” model may miss out on hiring someone who hasn’t yet “proven herself” at copy editing but may be an astute developmental and structural editor.

One could argue that those who wish to focus on a specific skill would be better off as freelancers and that in-house positions are better suited to generalists who are willing to learn all facets of the editorial—and publishing—process. Many freelancers eschew the hierarchy by charging a flat rate regardless of the type of work they do. And those who hope to do substantive work without having to first perfect proofreading may have better luck finding opportunities at smaller presses, where, owing to a lack of human resources, structural and stylistic editing can often be assigned to whoever is available.

I, for one, am grateful that I did get the opportunity to learn the ins and outs of editing from the ground up. But to me, the ground doesn’t correspond to checking inputting or proofreading—it corresponds to a solid foundation of amazing mentors, high standards, and a drive to keep learning and improving, no matter what kind of editing I’m doing.

Stopping amnesia

Volunteer-run organizations like the Editors’ Association of Canada, the Society for Technical Communication, and the Indexing Society of Canada provide tremendous opportunities to connect with fellow professionals, find work, and develop professionally. But one particular affliction seems to plague these kinds of groups: a lack of memory.

Given that these organizations exist largely because of donated time and energy, it really is amazing that they, for the most part, function so well. But with an executive that changes every year and demanding committee work that sees volunteers drift in and out according to their fluctuating time constraints, it’s no wonder that there can sometimes be a lack of continuity in their programs. Add in complexities like nation- or continent-wide chapters and, without a robust, well-thought-out system to transmit information to a central archive, legacies can be easily lost.

I recently volunteered for a task force to research and develop a specific document; at one of the early conference calls, it became clear that a similar task force had been struck only three years earlier, with exactly the same objective. Who were these people? What did they discuss? Why did they disband? Nobody knew. In another case, one group working on a procedural document knew that a related policy document had been created at some point in the past, but nobody had access to it. This kind of inefficiency does little to serve the organization’s members, not to mention the volunteers offering their time. What’s more damaging in the long term than having a new group reinvent the wheel is that members could feel less inclined to volunteer in the future, no matter how well-intentioned the organization’s mandate. What’s the point, when hard work just gets funnelled into some sinkhole?

The saviour is none other than the superhero from last week’s post: the information scientist. I think that all volunteer-run nonprofits with high volunteer and staff turnover—not only those in editing and communication professions—would benefit from soliciting the services of a trained information specialist to

  • digitize all archives in a way that makes them searchable
  • develop a method of indexing the archived material for efficient retrieval (because often it’s not that the information doesn’t exist—it just can’t be found)—see the sketch after this list
  • identify circumstances under which new documents should be created and/or regularly revised (e.g., procedural documents for regularly occurring activities)
  • implement a system for archiving and indexing newly created material
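
As a rough illustration of the first two points, here is a minimal sketch of a searchable archive built on SQLite’s FTS5 full-text index (a module bundled with most Python installations). The database name, fields, and sample document are invented for illustration; a real archive would also track dates, committees, and revision history.

    # Minimal sketch: a full-text-searchable archive using SQLite's FTS5 module.
    # Field names and the sample document are hypothetical.
    import sqlite3

    conn = sqlite3.connect("archive.db")
    conn.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS documents USING fts5(title, body, year)"
    )

    # Index a newly digitized document.
    conn.execute(
        "INSERT INTO documents (title, body, year) VALUES (?, ?, ?)",
        ("Mentorship task force report",
         "Recommendations for pairing new members with experienced volunteers...",
         "2008"),
    )
    conn.commit()

    # Retrieve anything mentioning 'mentorship', ranked by relevance,
    # with a short snippet of surrounding context for each hit.
    for title, year, snippet in conn.execute(
        "SELECT title, year, snippet(documents, 1, '[', ']', '…', 8) "
        "FROM documents WHERE documents MATCH ? ORDER BY rank",
        ("mentorship",),
    ):
        print(year, title, snippet)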

With the ready availability of open-source content management systems today, the excuses not to make these changes just don’t hold water.

If an accredited consultant is more than the nonprofit can afford, it could consider contacting one of the many schools offering a Master of Library and Information Science program. My understanding is that all students in that program have to complete several units of experiential learning, and partnering with a student who is familiar with the theory of information organization and retrieval could be beneficial to both parties.

A few years ago, I had the privilege of working on the fiftieth-anniversary edition of the Varsity Outdoor Club’s annual journal, which documents, in text and photos, members’ activities of the past year. Because it was a landmark issue, we devoted half of the almost six hundred pages to submissions from alumni, as well as notable images and articles from the club’s archives. Discovering the truly fascinating, often hilarious, stories from past members was one of the highlights of the project and gave me a new perspective on the club as a whole—it had depth, history, purpose. (Part of why we were able to find so many high-quality pieces was that the organization recognized early on the importance of keeping records—“archivist” is one of the club’s executive positions—and recent archivists were forward-thinking in their initiatives to index all past issues of the journal.) So for organizations like the EAC, STC, and ISC, it’s not just that a solid archival system and comprehensive records will help volunteers accomplish more and better serve the membership—it’s also that the organization’s very history can be brought forth for current and future members to understand and appreciate.

Indexing—for your information

Last year, when I was taking technical communications courses, one of my required readings was this article by Seth A. Maislin, published in the Society for Technical Communication’s Intercom magazine. Two sentences near the end of the article caught my attention:

So far, indexing is a phenomenon dominant in only the English language, although I don’t know why. (Many French textbooks, for example, have simple tables of contents in the front and deep tables of contents in the back.)

I haven’t had that much experience working with nonfiction publications in languages other than English, so I suppose I just took it for granted that of course those publications must have indexes, too, and I was genuinely surprised to discover otherwise. There’s nothing inherently special about English that makes it more conducive to the process of indexing, and indexes are just so useful that they would add value to a reference book or technical manual in any language. (Since reading Maislin’s article, I like to imagine that indexes catalyzed English’s becoming the world’s lingua franca. Oh sure, a few centuries of British imperialism followed by American hegemony probably had something to do with that, too, but the ease of information retrieval through indexes just may have played a [tiny] role in the efficient knowledge transfer that spurred so much innovation.)

Within the last decade, though, it seems as though other languages have caught on to the power of indexing. The Deutsches Netzwerk der Indexer/German Network of Indexers formed in 2004, and in 2006, Robert Fugmann wrote Das Buchregister. Methodische Grundlagen und praktische Anwendungen (The book index: methodological foundations and practical applications), what appears to be one of the first rigorous guides to creating back-of-the-book indexes—akin to Nancy Mulvany’s Indexing Books.

The Netherlands Indexing Network began meeting in 2005, and the indexing program TExtract, developed in the Netherlands and useful for creating indexes in both Dutch and English, is gaining ground on the established CINDEX, SKY Index, and Macrex software.

In my poking around for French resources, I found this title—Concevoir l’index d’un livre: histoire, actualité, perspectives (Designing a book’s index: history, current practice, perspectives) by Jacques Maniez and Dominique Maniez—which looks fascinating, not only because it is one of the first major resources to address the practice and process of indexing in French—much as the Fugmann title was for German—but also because half of the book is dedicated to indexing history.

The Maniez title was published by L’association des professionnels de l’information et de la documentation—the Association of Information and Documentation Professionals—which really drives home the point that indexing is information science. Most of the indexers I know also have an editing background; in fact, the Indexing Society of Canada frequently coordinates with the Editors’ Association of Canada to hold its annual conference at around the same time, and the indexing chapter is one of the major components of the Chicago Manual of Style. This close association makes sense logistically—often publishers will ask the proofreader to compile an index concurrently—but it doesn’t really make sense logically. Editing and indexing require incredibly different skill sets, involving different parts of the brain. Indexing is all about organizing information for efficient retrieval, and it would really make more sense for an information science specialist to be doing it. After all, an indexer does with a book’s terms and ideas on a micro level what librarians do with archives and publications on a macro level. Yet, despite the fact that indexing appears to be a core course in most Master of Library and Information Science curricula, I rarely hear of people going into an MLIS degree wanting to be a librarian but emerging as a back-of-the-book indexer.

So what can we learn from other branches of information science, in English and in other languages, that could help us shape better indexes? If other languages aren’t accustomed to using indexes, what book-level information retrieval systems do they use, and how can this knowledge inform our indexing practices? Is there a more effective system out there—perhaps one that looks completely different—that those of us working in English simply haven’t discovered yet?

Selling your services to the federal government

Last evening the Editors’ Association of Canada’s B.C. Branch meeting featured speaker Walker Pautz from Public Works and Government Services Canada’s Office of Small and Medium Enterprises (OSME), who gave us some resources to sell our services to the Government of Canada. OSME also gives these presentations monthly at Small Business B.C.

I was at the EAC pre-conference workshop about bidding on government contracts, presented by three EAC members, and I wondered whether the branch meeting’s presentation would essentially be a rehash of that information, but I came away from last evening with some information I didn’t know.

Background

PWGSC buys goods and services for all other government departments; individual departments can buy up to $25,000 themselves without going through PWGSC. (I didn’t know about that last part; for individual freelancers who are looking for small contracts, going directly to the departments may be a better strategy than bidding through MERX.)

Finding opportunities

To do business with the federal government, register on the Supplier Registration Information system. This process gives you a Procurement Business Number (PBN), which allows you to register in other databases, bid on contracts, and get paid; a PBN is mandatory for doing business with PWGSC.

Seek out bid opportunities—Requests for Proposals or Requests for Standing Offers, usually—on MERX or Professional Services Online (for contracts up to $76,600). Each good or service is assigned a commodity code, otherwise known as a Goods and Services Identification Number (GSIN). You can search the databases by keywords or GSINs.

On MERX, you can sign up for email alerts of relevant opportunities. You can also view who else has downloaded a particular bid opportunity; this allows you to scope out your competition but may also create some opportunities for subcontracting or partnering.

Some government organizations, like the Translation Bureau, will allow you to sign up as a supplier directly.

B.C. doesn’t post on MERX; it uses B.C. Bid, so check there as well.

Bidding

When putting together a proposal, follow the instructions on the RFP or RFSO, keep your pitch clear and simple, and have your proposal edited and/or proofread. Make sure you meet the minimum mandatory requirements, and check the closing dates to make sure you have time to get your bid in. (You are allowed to submit revisions to your bid before the closing date—something I didn’t know.) Don’t assume that evaluators know who you are even if you’ve done business with them in the past.

Each bid will have a single contact to whom you can send questions. That person will compile all questions into an amendment to the initial RFP/RFSO.

Some RFPs and RFSOs will leave out some of their legal language and instead refer you to the Standard Acquisition Clauses and Conditions (SACC) Manual.

Most RFPs/RFSOs will ask you to keep your technical and financial proposals separate. Some will require security clearance; you don’t need to get this ahead of time, but you will have to get it if your bid is successful. Once you have it, though, you can use it for other opportunities over a set number of years.

After closing

If your bid isn’t successful, you can request a debriefing from a contracting authority within three weeks of the closing date; the contracting authority will tell you the strengths and weaknesses of your bid.

If you have issues and concerns, you can contact the Office of the Procurement Ombudsman.

Smaller contracts

To get contracts under $25,000, the best thing to do is to market directly to individual departments, the same way you would market to a private client. To find contacts, look at each department’s site, where you can see past contracts that have been awarded.

Even if you become a prequalified supplier by successfully bidding for an RFSO, you still have to market yourself, because the contracting authority is probably not the end user of your services. Mentioning that you’re a prequalified supplier can help things along.

Writing for translation

“I once translated an instruction manual where the French actually ended up being shorter than the English, exactly because there was a lot of redundancy and unnecessary material. I didn’t, for example, translate the first step, which said, ‘Take the product out of the box.’ My client asked, ‘Where’s number one?’ and I said, ‘French people know to take it out of the box!’ ” —Anthony Michael, when asked whether he stylistically edits poorly written English before translating into French.

Last night I attended the Society for Technical Communication Canada West Coast Chapter’s November meeting, where Anthony Michael of Le Mot Juste Translations gave a talk about writing for translation. Here are some highlights:

  • The translation process is often considered an afterthought, but if you know a document will have to be translated, it’s best to take it into consideration from the outset, both so that enough time can be allowed in the schedule and so that the text in the source language can be written to facilitate translation, especially in the case of technical documentation.
  • Be aware that plays on words such as puns are virtually impossible to translate, and metaphors can be culturally specific (he gave an example of having to eliminate or rethink the baseball metaphors—step up to the plate, cover all bases, out in left field—in a business report destined for France). Keeping the sentences in the source language short and unambiguous (not to mention grammatically correct) will facilitate translation and may even make machine translation possible.
  • Despite the prevalence of poor machine translators, good ones do exist. For example, Xerox in the 1980s had a machine translator that did a decent job on its technical documentation. The final product must still be edited by translators, of course.
  • Source and target texts will often differ in length (e.g., French is usually 10 to 15 per cent longer than English); this is a consideration when planning document design. How will the text be presented? How will it flow around visual elements? Other considerations include the effects of target languages that use a different character set or a different direction of text. Michael gave an example of an ad for a brand of laundry detergent that showed, from left to right, dirty clothes, the detergent, a washing machine, and clean clothes. Because the ad consisted only of images and no text, the company thought it had escaped translation issues but didn’t take into account that in Semitic languages, text is read right to left, and in the Middle East, the ad had exactly the opposite meaning to what was intended.
  • In addition to unilingual and bilingual dictionaries, many of which are now online, translators also use specialized dictionaries for particular subjects and grammar references. Other tools of the trade include terminological databases, such as the one on Termium Plus, as well as translation memories, which are essentially concordance databases. An example is Linguee. Translation memories allow you to search existing translations to see how a particular term or phrase was translated in the past; the search results include snippets of text around the term to give the proper context (a minimal sketch of such a lookup follows this list). Software programs often used to create translation memories are MultiTrans and Trados.
  • Don’t forget about confidentiality issues or other legal matters, including copyright ownership and potential for libel, when sending text out for translation. It’s best to have these spelled out in your contract with your translator.
  • Context is everything. Provide as much context as possible to your translator, either in your source text or in an accompanying document. Spell out or explain all acronyms, and provide reference material if possible (e.g., a set of previously translated documents on similar subject matter). Indicate the gender of people where necessary, because that person’s professional title, for example, will have to take the masculine or feminine form in languages like French.
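
To make the translation-memory idea concrete, here is a minimal sketch of a concordance lookup over aligned source/target segments. The segment pairs are invented, and real tools such as the ones Michael mentioned add fuzzy matching, automatic alignment, and far larger databases.

    # Minimal sketch of a translation-memory (concordance) lookup:
    # given aligned source/target segments, find every past translation
    # of a term, with its surrounding context. Sample data is invented.
    from typing import List, Tuple

    def concordance(memory: List[Tuple[str, str]], term: str) -> List[Tuple[str, str]]:
        """Return (source, target) pairs whose source segment contains the term."""
        needle = term.lower()
        return [(src, tgt) for src, tgt in memory if needle in src.lower()]

    memory = [
        ("Remove the product from the box.", "Sortez le produit de la boîte."),
        ("Press the power button.", "Appuyez sur le bouton d'alimentation."),
        ("The power supply must be grounded.", "Le bloc d'alimentation doit être mis à la terre."),
    ]

    for src, tgt in concordance(memory, "power"):
        print(f"{src}  ->  {tgt}")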

Awards news

Thanks to Grace Yaginuma for reminding me that this past week Flavours of Prince Edward Island by Jeff McCourt, Allan Williams, and Austin Clement (Whitecap Books) won Gold at the Canadian Culinary Book Awards in the Canadian Culinary Culture Category, English-Language, and that Vij’s at Home: Relax, Honey by Meeru Dhalwala and Vikram Vij (Douglas & McIntyre) won Silver in the Cookbook Category, English-Language. Congrats to all authors! The awards were announced November 7; see a list of all of the winners here.

Fact and nonfiction

At a recent editorial retreat, a very experienced editor was telling us about how clients sometimes question why the research for a single piece of information can take what seems like an unreasonable amount of time. “The author had provided a photo of a bridge he wanted to use and a caption for it. I searched the name in the caption, found a photo, and it was the wrong bridge. So I looked at maps of where this bridge was supposed to be and tried to find pictures of landmarks close to it…” She ran into one dead end after another, until finally, after hours of searching, she found another photo of the bridge from a different angle, and a name to go with it. “That’s the bridge. So I changed the caption, but finding the right name took the whole day.”

“What would you have done before the Internet?” another editor asked.

“Nothing. There would have been an error in the printed book.”

That conversation made me think quite a bit about the accuracy of sources we consider reliable and this whole business of fact checking in the editorial process. Editors—copy editors in particular—are expected to check facts within the realm of general knowledge; with Google, though, more and more can be considered to be part of that realm. Does this mean that more of the onus of fact checking falls on the editor rather than the author? Much has been said about the unreliability of online information, but are print sources really any better? Didn’t the past lack of Internet search engines just mean that copy editors of yore simply couldn’t spend the time to track down primary sources of information? I can think of two projects I worked on over the past year that were new editions of print-only books, where authors used the old edition as a basis for the new book and my Internet searches revealed errors in their earlier text. I can only imagine that this now happens all the time, meaning that books, if they are properly fact checked, are probably more reliable than they have ever been.

The flip side, of course, is that there is such a deluge of new titles being produced now, especially since anyone can self-publish, that the majority of books can’t possibly be thoroughly vetted. And, of course, the Internet is not without its pitfalls. When I come across a term that’s not in my dictionary or a name that doesn’t appear in the Library of Congress Authorities, I do lean on Google to tell me that one spelling gives me 200,000 hits, whereas an alternative spelling gives me 1,200. And those 1,200 may very well be right, but often in those cases, “truthiness” prevails.

I sometimes feel that fact checking is more for the editors’ benefit than the authors’. Oh sure, we’re saving authors from potential embarrassment, discredit, and maybe, in the case of a misquote, a libel suit. But when we go to great lengths to hunt down the exact punctuation and capitalization of a sixteenth-century title that some ship’s second officer put together from his journal, and we end up finding a scanned copy of the original text in an online archive, it’s all about the satisfaction of sleuthing and getting it right. Maybe the reason fact checking can be particularly satisfying is that it’s so much less subjective than other facets of editing; in most cases, the goal is finding the one right answer, not, say, imposing a style decision. The hunt does take time, though, so I suppose we’ll have to subtly tease out of our authors what standard they expect us to uphold for each project. Does this author want me to spend the afternoon tracking down and watching a YouTube video of a lengthy speech to see if he’s accurately quoted a public figure? Or should I trust his research and simply alert him to the risk of misquoting?

Ultimately, even if we editors flag factual errors, authors are free to reject our suggested changes, and in the end our efforts may not matter. Most people still believe, for instance, that Marie Antoinette said, “Let them eat cake” (she didn’t) and that Philip Sheridan said, “The only good Indian is a dead Indian” (a misquote, if he uttered anything like it at all), showing that even for the most persuasive of editors, the reader’s interpretation is beyond her control.

A celebration of Fred Herzog

“Today’s cameras are not designed by photographers. Today’s cameras are designed by geeks. And geeks do not take good pictures.” —Fred Herzog

Tonight’s event was, hands down, the best book launch I’ve ever attended—probably because it was more than just a book launch. It was also the advance screening of a documentary on Fred Herzog (part of the Snapshot series), which will air on the Knowledge Network on Monday, November 14, at 10 pm.

The screening was hosted by Knowledge CEO Rudy Buttignol and featured speakers Douglas Coupland, Gary Stephen Ross, Sam Sullivan, Andy Sylvester, and Shelagh Rogers, who each chose one of Herzog’s photos and interpreted the image from his or her own perspective. Rogers had a family emergency and couldn’t attend personally, but, being the pro that she is, recorded her essay in studio for all of us to hear as an MP3. These special presentations were capped off with Herzog himself, an incisively witty and charming man, who gave his take on the photos that the others had commented on.

At the beginning of the evening Scott McIntyre got an opportunity to briefly recount the growth of D&M’s relationship with Herzog, and he even gave Peter Cocking and me shout-outs for working on the new book. That was the evening’s first surprise. The second was that I’m shown in a scene of the documentary shaking Herzog’s hand at the opening of his Reading Pictures exhibit at the Equinox Gallery this past February.

Herzog said very explicitly in the documentary that he doesn’t sign books, and so although I’d brought along my copy in the hopes that I could get his autograph, I was a bit too intimidated to ask him at the reception. Zoe Grams and John Burns gently egged me on (the latter even providing the pen), and Herzog was gracious enough to make an exception, even as he was just on his way out.

All in all, it was a spectacular evening and a complete privilege. I’ve thought about contributing to the Knowledge Network for several months now, and tonight has strengthened my resolve.

Free-range indexers

A book’s index is an afterthought for most publishers—allocated the pages that are left over from the last signature after the main body has been set. What ends up happening (entirely too frequently) is that indexers are handcuffed by a severe lack of space. I once had to compile a six-page index to a 336-page book—that’s less than 1.8 per cent of the page count—and I was forced to trim so many entries that the index was, for all intents and purposes, useless.

For a reference book or technical manual, the index can be one of the most important components of the publication, and most of the indexers that I know charge by the indexable page rather than the entry, so they’d be charging the same total fee regardless of index length. To severely limit the index space would hurt the book more than it would the indexer (although I’d like to think that most indexers would be disappointed to put forth an inferior product).

What we need, then, is—dare I say it?—a paradigm shift. Publishers and editors and whoever else has input into the total extent of a book need to consider the index integral from the outset. Do a rough cast-off based on the manuscript, and if what’s left over of the last signature is less than, say, 2.5 per cent of the total page count, consider adding another half or full signature, depending on the total length of the book, and use this new page count in your project budget and P&L.
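
As a back-of-the-envelope illustration of that cast-off arithmetic, here is a small sketch; the 16-page signature and the 2.5 per cent threshold are assumptions for the example, not figures any publisher prescribes.

    # Sketch: how many pages are left for the index once the body is cast off,
    # and whether adding half-signatures is needed to hit a target percentage.
    # Signature size and threshold are assumptions, not industry constants.
    def index_allowance(body_pages: int, signature: int = 16, target_pct: float = 2.5) -> dict:
        total = -(-body_pages // signature) * signature   # round extent up to a full signature
        leftover = total - body_pages                     # pages free for the index
        target = round(total * target_pct / 100)          # desired minimum index length
        extra = 0
        while leftover + extra < target:                  # add half-signatures until the target fits
            extra += signature // 2
        return {"extent": total + extra, "index_pages": leftover + extra, "target": target}

    # e.g., a ~330-page body with 16-page signatures leaves 6 pages, short of a
    # 2.5 per cent target, so one half-signature is added.
    print(index_allowance(body_pages=330))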

The American Society for Indexing has some guidelines for index lengths. For trade books, indexes should be about 3 per cent of the book, whereas for technical reference books, they could be up to 15 or 20 per cent. These figures can be squeezed a little bit, especially if you’re reducing type size in the index, but they shouldn’t be significantly less than what the ASI has recommended, if you want the index to be functional and readable.

On the other end of the spectrum are self-publishing authors or publishers who don’t give any index specs at all and say, “I’ll just do what I need to do to make your index fit.” This scenario is cropping up more frequently as more people are turning to print-on-demand options where they can add pages two at a time rather than worry about a full sig. It sounds like an indexer’s dream, but, in reality, we appreciate constraints. Without a size limit on the index, the temptation to hand over a bloated, unedited draft is entirely too high. Having index specs helps indexers trim the fat—to put careful thought into clear and concise subentries and eliminate redundancies that can lead to clutter.

Basically what I’m saying is that indexers are a lot like chickens (an analogy I’m sure you’ll hear no other indexer repeat). We’re happiest—and we produce the best product—when we’ve got space to roam around and breathe fresh air. But we also understand the need to be penned in, for our own protection. And, of course, getting a generous amount of feed for our troubles doesn’t hurt, either.