This conference was organised by The Farr Institute of Health Informatics Research in conjunction with The Association of the British Pharmaceutical Industry and the UK Health Informatics Research Network, and was held on Tuesday 16th December 2014 at the Royal College of General Practitioners, 30 Euston Square in London. It was fully booked, with just over 130 attendees from academic institutions, pharmaceutical and informatics companies, and the NHS.

The programme focused on real-world evidence and capacity building for the future; at the heart of the seminar were a number of research programmes involving the Farr Institute and its partners. We heard from a pilot study on acute pancreatitis, a study on coronary disease, and case studies of pragmatic trials and stratified medicine. All were characterised by the innovative use of health informatics to open up new sets of research questions.

In particular, Liam Smeeth caught my attention with his analysis of randomization at three levels. The first is the patient level, where potential research participants can be flagged up on electronic systems in GP surgeries. There are many challenges here, not least the time constraints on GPs and nurses. The second level is to randomize at the GP surgery level. For example, when evaluating the effectiveness of reminder text messages for flu jabs, GP surgeries can be divided into two groups: in one, all patients are texted reminders; in the other, none are. This is a low-risk intervention, as there is no need to chase individual patients — results can be gleaned from the records of flu jab uptake. The third level is to randomize ‘within’ a patient. Smeeth’s example was investigating whether statins cause muscle pain and, if so, how much. Each patient can undergo sequential randomized treatment, and even though a trial may include as few as 200 patients, the information gained about each patient is argued to be very specific. He would like to see randomization as part of a systematic shift, or ‘cultural change’ as he put it.

Paul Stang talked about the methodology of predicting compliance. It is possible to predict which patients will take their medication by asking them a series of questions about wearing seat belts and going to the hairdresser. People who visit the same hairdresser for years and wear their seatbelts in the back of the car tend to comply with other instructions, such as taking medication.


Effectiveness and Efficiency

Many of the research questions were framed as an “unmet need”, such as the coronary disease research presented by Adam Timmis and Cathy Emmas. Stable coronary patients who cease treatment after one year still fall into a high-risk category, with 20% going on to have another MI. Is there a need to treat this group? The pilot study on acute pancreatitis was motivated by the average hospital stay for the condition, which is 17 days. The researchers asked: why do some pancreatitis patients go on to develop multiple organ failure, and can this be predicted?

This issue of cost effectiveness combined with efficiency underpinned many presentations. Paul Stang (Vice President of Global Epidemiology at Janssen Research and Development) also described how, in comparison with the UK, the US has a fragmented health care landscape characterised by separation: the prison medical system, which is completely isolated from any other system; hundreds of informatics software companies; and numerous insurance companies with hundreds of different plans. This makes data collection, collaboration and analysis extremely challenging, particularly because data owners derive income from the data, so it is not designed, or modified, to be shared.

Tim Williams discussed how linking databases could achieve more efficiency in researching new treatments. This could be a way of making trials more cost-effective in the face of a skyrocketing ‘cost per molecule’, which has not resulted in significant new drugs or treatments coming online. The UK, he suggested, has three things going for it: a unified health system, a large population, and the use of NHS numbers. These are all seen as benefits in developing an economic model. I note, ironically, that the capabilities of a highly centralised system, so often vilified by market fundamentalists, are the very reasons using data is believed to be so effective in the UK health system and has become its unique selling point.

Tim Williams talked about developing systems where external groups come to, and partner with, the health data providers. The implication seemed to be that data would not be ‘open’ and, as George Freeman MP, the Parliamentary Under Secretary for Life Sciences, discussed, should be a product, marketed globally for UK PLC. As this was an event to engage industry partners, this makes a certain kind of sense, although the ramifications were not discussed. In terms of partnerships, industry was invited to partner in research production and research funding, and to develop a much closer relationship.


Size really does matter

As Tjeerd Van Staa suggested, the kind of technology we are seeing now is “disruptive” and the opportunities are great. In order to deliver this economic promise, through the efficiency and value of health informatics and research, Rob Thwaites suggested the development of this sector will require 58,000 data scientists by 2018. Georgina Moulton agreed with this assessment but suggested that extra allied professionals would also be needed. Moulton is excited about the possibilities and talked about what she calls ‘hybrid scientists’: new professionals who are trained in data, clinical health care and methodologies. Manchester University has started a six-month course in health informatics to train people from different disciplines, and the Farr Institute currently has approximately 50 PhD students. Andrew Roddam (Vice President & Global Head of Epidemiology, GSK) drew attention to the Nesta report on data skills, published last summer, and suggested a synthesis of analysis, computer skills, domain knowledge, storytelling and networking, business savvy, and creativity and curiosity.

Ben Brown, a PhD student at the Farr Institute, is an example of these new interdisciplinary professionals. As a recently qualified GP, he draws together a number of skills and applies them to health informatics. He is developing a system which searches GP records and identifies patients not receiving quality care, so that advice on treatment can be given. He asks: which patients can we do something about, and what should be done? I am interested to follow the progress of this work, and how well it will be received by GPs themselves.

And just as the examples of the future informatics expert were distilled into the person of Ben Brown, many of the speakers reminded us that it all comes down to the individual. Population-scale data on how medicines perform do not accurately answer many relevant questions. In the end, all patients want, as Paul Stang suggested, is to know: how will this medicine affect me?

Mining national-scale data for individuals with specific conditions, demographics or characteristics makes big datasets small in terms of size, but several presenters suggested it may predict how patients will react more precisely. This also moves away from the notion of repeatability which underpins most current research, and it is not yet known what the results of this might be or how it could be operationalised. This kind of stratified (personal or precise) medicine, targeted to the individual for the optimisation of medication and enabled by big data, describes a new dynamic and “fluid patient/public involvement”, as Tim Williams described it.

To counteract the effect of small datasets, attention turns to other countries. Many of the speakers talked about large data sets currently being underused and the potential this represents for research. However, the problems associated with comparing data from international sources, where data is entered differently and different data are recorded (not to mention the social construction of illness), are huge.


The issue of trust underpinned the whole conference. Both George Freeman and Peter Knight stressed the trust which the public need to have in the informatics project as a whole. Freeman suggested that a few “easy wins” would be helpful, because informatics is not perceived to be as glamorous as other new fields, like genomics.

Tjeerd Van Staa discussed how electronic systems are often considered inaccurate, messy, ill-designed and unsystematic, while the older, paper-based systems (because they are known and their foibles naturalised) are seen as trustworthy by comparison. Others, however, questioned the competence of those who enter data.


Many of the delegates, not surprisingly, see the Farr Institute as a way to strengthen partnerships between organisations. This collaboration process can be problematic when different organisations hold multiple perspectives and goals. Adam Timmis indicated that it was important to put in place effective programme management which can broker between organisations.

My main concern, coming away from the conference, is the extent to which the medical system might become too data-driven, rather than being ideal- or principle-driven. However, the day as a whole was extremely enjoyable and illuminating.