Regardless of intelligence, expertise or field, the types of human errors holding back research advancement are much the same

June 14th, 2013

Estimated reading time: 5 minutes

How do you measure the social impact of human error? Research is not immune from failed reasoning directly or indirectly resulting in negative impact. But experts’ knowledge isn’t always explicit or easily accessible. Gordon Rugg outlines the Verifier approach as a method for analysing expert reasoning. Human beings make much the same types of error regardless of their intelligence, expertise and field, and this is demonstrated by applying the Verifier approach to previously undecipherable manuscripts.

Human error is responsible for a great deal of suffering, and even the most experienced experts make errors. My colleagues and I study errors in expert reasoning. Previous research has focused on active errors, such as a pilot making a wrong judgment that results in a plane crash. We’re interested in errors of omission, where something that should happen doesn’t. For example, the discovery that Helicobacter pylori causes gastric ulcers could have been made decades earlier if previous researchers had spotted the significance of some early reports.

So why do experts still make errors of commission and omission when human error has been studied for centuries, and when life-and-death incentives are involved? It’s a simple question, with a complex answer. My latest book, Blind Spot, is about our attempt to find the answer. Our project was ambitious, but not impossible. The literature shows that human beings make much the same types of error regardless of their intelligence, expertise and field. When you know what those error types are, you can go looking for them in any field, and that’s what we set out to do.

Attempts to tackle this topic with formal logic hadn’t got far, because formal logic doesn’t deal with some key issues in real-world problems. One issue is finding out the “ground truth” – for instance, whether or not the expert’s judgment was based on a realistic appraisal of the associated risks in a frequently occurring scenario. Formal logic doesn’t deal with that. The research community dealing with human error, in contrast, routinely tackles this problem.

So, one key part of our approach has been to assemble a virtual toolbox of methods for assessing human reasoning, using approaches such as formal logic and typologies of human error.

Finding out the ground truth takes you into another problem area: gathering valid information. Our second virtual toolbox is derived from a framework used in software requirements engineering. Experts’ knowledge isn’t always explicit or easily accessible, so we map different forms of knowledge – explicit, semi-tacit and tacit – onto appropriate techniques for gathering information, ranging from observation to laddering. For instance, short-term memory has a duration of only a few seconds, so it has to be studied in real time, via self-reports or observation; retrospective interviews have no chance of accurately capturing what experts are thinking from moment to moment.
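
A minimal sketch of the kind of mapping described here, purely for illustration: the knowledge types are those named above, but the particular technique assignments are my own simplified example, not Rugg’s published framework.

```python
# Illustrative only: a simplified mapping from forms of knowledge to
# candidate elicitation techniques. The technique assignments are an
# assumption for the sake of example, not Rugg's own table.
ELICITATION_TECHNIQUES = {
    "explicit": ["structured interviews", "questionnaires"],
    "semi-tacit": ["laddering", "card sorts"],
    "tacit": ["direct observation", "real-time self-reports"],
}

def techniques_for(knowledge_type: str) -> list[str]:
    """Return candidate information-gathering techniques for a knowledge type."""
    return ELICITATION_TECHNIQUES.get(knowledge_type, [])

print(techniques_for("tacit"))  # ['direct observation', 'real-time self-reports']
```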

The third toolbox involves different types of representation. It’s based on the way human beings process information, and it uses ideas from measurement theory and set theory to represent the way experts structure their knowledge. For instance, not all categories used by experts have clear boundaries, but the precise statistical level of membership of a fuzzy set can be accurately represented using a greyscale gradient. This draws on work by researchers such as Gerd Gigerenzer, who found, for example, that clinical likelihoods of adverse reactions to medical interventions were more comprehensible to lay people when displayed as natural frequencies. The fourth toolbox maps the various types of knowledge onto appropriate methods for transmitting knowledge, e.g. for training, learning and presentation of material.
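
To make those two representations concrete, here is a minimal sketch (my own illustration, not code from the Verifier toolkit): one function maps a fuzzy membership value onto a greyscale intensity, the other re-expresses a probability as a Gigerenzer-style natural frequency. The function names and the reference population of 1,000 are assumptions chosen for the example.

```python
# Minimal illustrative sketch (not part of the Verifier toolkit):
# representing fuzzy set membership as a grey level, and a probability
# as a natural frequency.

def membership_to_greyscale(membership: float) -> int:
    """Map a fuzzy membership value in [0, 1] to an 8-bit grey level.

    0.0 -> 255 (white, clearly outside the set); 1.0 -> 0 (black, full member).
    """
    membership = max(0.0, min(1.0, membership))
    return round((1.0 - membership) * 255)

def as_natural_frequency(probability: float, population: int = 1000) -> str:
    """Re-state a probability as 'about N in <population>'."""
    return f"about {round(probability * population)} in {population}"

print(membership_to_greyscale(0.75))  # 64 -> dark grey, mostly a member
print(as_natural_frequency(0.003))    # 'about 3 in 1000'
```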

We named our approach to analysing expert reasoning Verifier. A brief description of our first case study shows how the Verifier process works.

Verifier approach case study: the Voynich Manuscript

The first case study was a problem that at first sight has no significant social impact: the Voynich Manuscript, a mediaeval book, apparently written in code, discovered in 1912. It has never been deciphered, despite decades of research by some of the world’s greatest codebreakers. Its illustrations suggest that it is an alchemical herbal, whose contents would be of only minor interest to anyone.

Voynich Manuscript (credit: D.C.Atty CC-BY)

When I applied the Verifier approach to the Voynich Manuscript, one of my first questions was whether the problem had been tackled by experts in all the relevant fields. One gap was immediately apparent. The manuscript’s text had previously been considered too linguistically complex to be a meaningless hoax; however, nobody making that judgment was an expert either in complexity theory or in hoaxing using the methods available in the mediaeval period. When I investigated methods for producing quasi-random complex text, I soon found a simple mediaeval method that produced meaningless gibberish very similar to the text in the manuscript. There was no need to postulate a lost supercode.

As a case study, this highlights some key issues about the nature of research, and about any attempt to assess the social impact of a piece of research. At one level, it’s a very media-friendly story, like a real-life Dan Brown mystery. At another level, the media coverage has focused almost exclusively on the mystery, rather than on the broader implications of the Verifier approach producing an answer in a few months to a problem that had defeated previous researchers for ninety years.

Another issue is that I demonstrated that the manuscript probably contains only meaningless gibberish. In practical terms, that doesn’t advance research into new types of code. However, the same approach, applied to another field, could save researchers from spending decades looking in the wrong direction. Our second case study, on conceptual models of autism, suggests that looking in the wrong direction has been a significant obstacle in autism research. That benefit is a significant absence, but how do you measure the social impact of a significant absence?

Blind Spot tells this story in more detail. There’s more information, plus tutorial material and free software, on our blog and web sites. We hope you’ll find them useful.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics.  

About the Author

Dr Gordon Rugg has a multidisciplinary background, including work in social psychology, artificial intelligence, archaeology and computer science. He is particularly interested in software usability and in human error. For more information on his work, visit his website, his blog, the Search Visualizer software website, and the Search Visualizer research blog.
