Attention points in a written text focus readers on the key findings of the research. While the inclusion of such points is generally recognised as vital, Patrick Dunleavy finds that in practice many scholarly writers struggle with basic design strategies. He identifies four top-level design principles to consider when constructing tables, charts, graphs, or diagrams. Particularly in a digital era, it makes less and less sense to let such cultural hangovers of poor presentation dominate.
Academics, PhDers and researchers are highly intelligent and conscientious people. Yet they none the less repeatedly commit the same simple and easily avoidable mistakes in presenting ‘attention points’ — that is, any exhibit that stands out or attracts readers’ focus in their text, especially tables, graphs, charts, flow charts, graphics, diagrams, example boxes, case study boxes, and so on.
Poor academic communication habits result in strange, en masse ‘herding’ behaviours that time and again generate swathes of poorly designed attention points. These defective tables and cack-handed charts often resemble nothing so much as the mass strandings of whales or dolphins on a beach – when these mammals had the whole ocean to swim safely away in. Clearly something often goes wrong with academic authors’ critical guidance systems, because texts that are otherwise well written and clearly thought-through none the less include exhibits where no design thinking has gone on at all. Publishers and book and journal editors then seem to do little to improve matters — in fact they mostly add some bad practices of their own devising, actively helping authors to go on presenting their work in poor ways.
In several related posts (not yet written) I will compile a full checklist of points to watch out for, covering each of the different kinds of attention point. But here I want to make a start by picking out four top-level design principles to consider when you are constructing any attention point or exhibit at all – whether it is a table, a chart, a graph, or a diagram.
1. Make attention points stand on their own
Every exhibit should be designed to be as intelligible, accessible and self-contained as possible. Attention points focus readers’ attention — that is what they are there for: to attract readers’ notice, to make them spend time understanding what is shown, to help them crystallize and envision especially key points of the research findings. Rational and experienced readers of academic and research work will typically read the title and abstract of an article, or the start of a chapter or book, and then flip through the body of the text to get a better idea of what it is really all about. While doing this they will normally pay attention to sub-headings or other ‘stand out’ text (such as ‘pull quotes’), and they will especially tend to stop and look critically at any exhibits they find.
In both cases readers are trying to ‘hack’ the text, to short-circuit wading through it from start to finish in favour of gleaning an early understanding of what is being argued and how important, useful or convincing the work is. Exhibits are especially salient for experienced and skilled professional readers, the potential users and citers of your work, because they provide direct windows into your thinking or results. These readers are always looking to curtail their information overload and to spend their precious search time on new work that is the most worthwhile for them. If the title and the start of the chapter, paper or book look interesting, and the exhibits then suggest that something novel or worthwhile is going on, skimming readers will also look at the conclusions to make a final judgement. At this point they either move on, or save the text as a PDF to their library, or immediately commit a big slab of time to a careful, linear ‘serious read’.
So tables, charts, diagrams and other exhibits are often (invariably?) looked at early on, before their surrounding text. This means that every attention point should be as accessible as possible for a skimming (expert) reader. Exhibits should never depend on readers already having ploughed through great wodges of the accompanying main text before they can understand what is going on. Every chart, table and diagram should be labelled as fully and as informatively as possible, using only ordinary language, well-known expert language, or abbreviations commonly understood in this specific research field. Never use a symbol or formula alone where space allows for a fuller descriptive label. And do not include strange, unexplained or author-specific abbreviations or acronyms unless they are unavoidable for space reasons. Even then, accompanying notes (located just below the exhibit) should decode what any strange or obscure acronyms (or unfamiliar formulae) mean. In many contexts (like research reports and books) a brief explanatory note included in the exhibit should also describe to readers what it shows — what the key finding or ‘take away’ message is.
2. Show readers what they need to know — no more, no less
Exhibits must be designed for their audience, not for the text author, and especially not for an author who is trying to show off or demonstrate their esoteric learning. That means thinking through in advance who your audience are, and what their needs or interests are likely to be, at this particular point in your text. Suppose, for instance, that your exhibit makes a point about the growth of a given variable across a period and you could provide this information in table form, as specific numbers, or as a chart. The wrong way to proceed, the no-design way, is to compile a large table with seven effective digits in every cell (e.g. 12,293,417 or 0.0215483 × 10⁻¹⁰) just because that is what it says on your computer printout of the raw data, and as a researcher you are hyper-committed to ‘accuracy’ and transparent reporting.
The questions to ask yourself here are: Does any reader really need to know all these numbers to seven effective digits? Would changing the units to millions and putting 12.3 or 12.29 (instead of 12,293,417) be just as good from (most) readers’ point of view? Would changing the unit allow you to re-present a hard-to-grasp set of numbers (like 0.0215483 × 10⁻¹⁰) by expressing it more accessibly? Bear in mind here that the most accessible number ranges for human beings to understand are from 0 to 10, or from 0 to 100.
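As a minimal sketch of the rescaling idea above (the helper name and one-decimal rounding are my own choices, not prescribed in the text), raw seven-digit counts can be rescaled to millions before they go into a table:

```python
def to_millions(value, decimals=1):
    """Rescale a raw count to millions, rounded for display in a table cell."""
    return round(value / 1_000_000, decimals)

# The raw figure 12,293,417 becomes the far more readable 12.3 (millions)
print(to_millions(12_293_417))
```

The point is not the arithmetic but the design decision: choose a unit (millions, thousands, percentages) that lands the displayed numbers in the 0–10 or 0–100 range readers grasp most easily.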
Alternatively, by presenting the data in a chart or a graph instead of using a table, you will normally radically simplify the numbers involved. You automatically cut out the clutter for readers, and help them find the main message in a series. By choosing the right units to express things in, can you make the numbers in the chart cluster reasonably closely together, in the same number ranges (especially 0 to 100)? If your chart includes some very low and other very high numbers, so that some parts of the data are ‘suppressed’ by linear scaling, can you switch to log scales and get all parts of the message into clearer focus?
Academics and researchers ‘sort of’ know all this already. Somewhere in their education they will have heard these points explained, or even voiced them themselves about colleagues’ work. Yet there are also strong compulsions to over-present data and to over-burden tables and figures with too much information in the name of ‘transparency’ or ‘necessary detail’. Academics often feel they must ‘read into the record’ detailed information, just in case it might matter to someone else following in the researchers’ footsteps in the future. Journals’ and publishers’ restrictions on exhibits in the name of page parsimony and minimizing ‘redesign costs’ also still linger on, causing over-stuffed tables or charts.
But in a digital era it makes less and less sense to let such cultural hangovers dominate good presentation, which normally means separating out in-text exhibits from publishing data tables ‘for the record’. Most well-designed exhibits sent to journals or publishers can now be used just as they are. And publishers’ ‘value-added’ processes (such as sending your carefully shaped Figure off to be reset on another continent by someone who knows nothing about the subject) now typically only detract value. Online annexes to journal articles or books can also now hold the detailed data needed for replication purposes, and in enduring ways. All this leaves you free to focus the in-text exhibits single-mindedly on delivering to readers exactly what they need to know at this point in your argument — with nothing more detailed, and no fuzz of irrelevant data.
3. Make the messages in attention points as simple as possible
Lots of academics live with jumbled data or complex understandings of processes throughout their research process. They get data delivered in printouts or regression tables or analysis results in a hyper-detailed way. They map the processes they are analyzing onto complex equations that span blackboards or take a whole wall display to trace out as flow charts or algorithms. Researchers train themselves to operate continuously with these complex numbers, visualizations and messages in their heads, and on their desks, and sprawled across their walls. If they explain things to anyone else, perhaps they do so to people just like them, working in the same lab or team, trained in the same way and working on at least closely analogous problems — and used to coping with the same complexity.
So naturally enough, it often becomes difficult for researchers to then ‘see’ tangles and difficulties in data or process descriptions that have become familiar to them. When they come to re-explain to a wider audience, a great deal of ‘automatic’ reproduction of avoidable complexity will typically take place, without being recognized. When you point examples of this out to academic authors, they will often look at you oddly — dismissing the point as obvious. ‘Well, of course, if you want to go that far’, their expression implies, ‘but I’m not writing a nursery book’.
A good example is the pervasive tendency of academics in the social sciences, and of course governments and public authorities, to reproduce tables or graphs or bar charts showing policy areas, industries, regions or local areas in a country, listed in alphabetical order (or some other ‘customary’ sequence, e.g. moving north to south within a country). Why is a familiar but analytically useless sequence used? Perhaps the researcher or the body producing the data has just always done things this way. Sometimes the producers of alphabetically organized data displays imagine that readers are comparing across exhibits, and that somehow a common sequencing makes sense to them. (This conviction is especially strong amongst statisticians working for national governments, and amongst process-obsessed accountants.) Needless to say, there is no evidence that readers behave in this unlikely way.
Using alphabetical or customary orderings within data displays will always create completely jumbled tables and charts, with no ordering principle evident and no numerical trends. Readers’ task of grasping what is shown is pointlessly made more difficult — they must just try to hold the whole complex random pattern in their heads at once. (European Union documents often take this to an extreme by being alphabetically organized by member state names, but in another of the Union’s 18 official languages than your own. Here even the alphabetic ordering of names behind row or column sequences disappears, replaced by a completely opaque name sequence.)
The maximum simplicity principle here dictates that alphabetic or customary-sequence listings in tables or charts should always be reorganized so as to create a descending numerical progression in the data displayed. Here the region or locality with the largest number on a key variable comes at the top of the table or chart, the second largest comes next, and so on, with the others arranged in descending order below. If you have several columns in the table or multiple lines in a chart, then maximum simplicity means either showing each series in a separate exhibit, or choosing the most important numerical progression in theory or explanatory terms to organize all the data shown. In tables, always try to achieve a numerical progression down the key column, and if possible a numerical progression across columns as well.
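The descending reordering described above is a one-line operation in most tools. A minimal sketch (the region names and values are hypothetical, purely for illustration):

```python
# Hypothetical region/value pairs, arriving in a 'customary' order
rows = [
    ("North East", 41),
    ("London", 88),
    ("Wales", 17),
    ("Scotland", 63),
]

# Reorder so the largest value on the key variable leads,
# giving readers a clear descending numerical progression
by_size = sorted(rows, key=lambda r: r[1], reverse=True)

for name, value in by_size:
    print(f"{name:12} {value}")
```

In spreadsheet or statistics software the equivalent is simply sorting the table on the key column in descending order before the exhibit is built.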
4. Use a single sequence to order attention points
Academics and publishers typically divide exhibits into two or three different categories. Almost universally present is a distinction between Tables, which supposedly show numbers and text only, and Figures, which include some lines or bars or other ‘graphic’ elements. This separation is always thoroughly undesirable for three reasons:
- It is artificial and out of date. In the digital era, ‘Figures’ often include lots of numbers, and ‘Tables’ have lots of graphic elements in them (like boxes, arrows, back-shading etc).
- Creating two sequences of exhibits makes life unnecessarily complex for readers, to no purpose. As they move through the argument they have to go from looking at Figure 16 to next considering Table 9 — why? In searching for information, or following a back reference, they must again cope with two different sequences.
- It is completely unnecessary. Using a single list, where tables, charts and diagrams are all labelled in a single sequence as ‘Figures’, is the practice followed by reputable research organizations like the UK’s National Audit Office. And it works very well in academic books and journal articles too. If you are a journal editor, or on an advisory board, why not implement this change for all authors right now?
If your work also includes short example boxes, case study boxes and photographs, they can be equally well indexed by listing them all as ‘Figures’. Or you might wish to use another label (such as Exhibits) to cover all the different things that you are displaying for readers in a single, easy-to-follow sequence.
This piece originally appeared on the Writing for Research blog and is reposted with the author’s permission.
Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.
Patrick Dunleavy is Co-Director of Democratic Audit, Chair of the LSE Public Policy Group, and a Professor of Political Science at the LSE.