
Elizabeth Morrow

Teodor Zidaru-Bărbulescu

Rich Stockley

June 18th, 2021

4 priorities to reaffirm patient voice in the coming era of AI healthcare


Healthcare is becoming both increasingly data-driven and increasingly automated. Drawing on a large-scale review of artificial intelligence developments in the field of mental health and wellbeing, Elizabeth Morrow, Teodor Zidaru-Bărbulescu and Rich Stockley find that opportunities for patients to influence and inform these future technologies are often lacking, which in turn may deepen disillusionment and erode trust in them. They therefore propose four priorities for ensuring that new data-driven technologies are ethical, effective and equitable for diverse patient groups.


As the pandemic has sadly made clear, health policymakers and practitioners often make rapid, complex, life-and-death decisions. It has also demonstrated the pivotal role of technology in aiding such decisions, from infection control to vaccine development. As the world recovers, data-driven artificial intelligence technologies (AI technologies) are poised to transform health services and systems by using large amounts of patient data and machine learning to automate health screening, enable disease identification, provide real-time remote monitoring, deliver precision medicine, and personalise treatment.

In April, the European Commission unveiled the world’s first legal framework for AI, which included a comprehensive proposal to regulate “high-risk” AI use cases. This leaves the UK with various options as to whether and how it introduces its own AI regulation as part of a broader AI strategy in 2021.

A key point for practitioners and patients is that although AI technologies may appear to be prodigiously accurate, they make generalisations based on likelihood, not ‘truth’. An AI ‘guess’, no matter how well informed it is by immense datasets, algorithms, analytics, or models, cannot always be correct for everyone. This highlights the value of patients being front and centre at every step of the design and application of new technologies, so that they can alert professionals to times and circumstances when they feel that decisions about the technology, or its use, are wrong or off the mark. Take, for example, the recent GP data-sharing debacle in the UK, where weak public engagement has led to a significant public backlash.

While specific applications of AI stand to be of great benefit to patients, to date very few AI technologies for health have emerged from an accountable, accessible, or collaborative process that involves patients or the public in a meaningful way. In a year-long review of the technology landscape, we focused on mental health, as it is a public health priority and an area where AI technology is moving fast. New mental health diagnosis apps, mental health chatbots, wearable technologies that track health in real time, virtual reality therapy for dementia patients, and self-monitoring systems to prevent episodes of severe mental health crisis have all been developed internationally, largely without regulation.

We found that behind the closed doors of high-tech design companies there are few opportunities for patients and the public to say ‘I don’t think that is quite right’ or ‘could we try it another way?’, and little that goes beyond end-user testing and customer feedback once tech is on the market. This deficit is likely to be felt acutely as new technologies become available to the NHS and clinicians need to integrate intelligence from AI with their professional judgement and with the experiences and preferences of the presenting patient. Building trust with the public will ultimately be harder if care shifts to being more remote from already isolated patient groups.

More insidiously, as the pandemic has revealed, basing new technologies on data that is skewed towards majority patient groups, and on assumptions about minority groups, could make inequalities worse, as happened pre-pandemic in the well-documented case of Samaritans Radar. Opening up AI design processes to ensure ethical, inclusive, morally just health care requires collaboration between technologists, practitioners, and patients, so that each understands the perspective of the others and appreciates the combined power of perspective sharing in delivering the best possible care for each individual patient.

Of the 144 mental health articles we included in the review, we found only a small number of design projects for health technologies that advocated co-design methods, user involvement, or patient perspectives. Yet the growth of digital technologies in the field of mental health is vast. This void in the evidence base for practice is concerning, and it indicates that the growth of AI has considerably outpaced the development of inclusive approaches to enable its safe development and use.

However, the proliferation of AI technologies, and their high-profile application as part of social distancing measures, has brought new opportunities for the public to contribute to what has essentially been innovation led by high-tech interests. This is particularly the case as these debates shift to focus on technologies developed outside the regulatory boundaries of publicly funded health systems and their requirements for Patient and Public Involvement (PPI). Based on our review, we suggest the following four priorities for building in and promoting design justice:

Public voice

The design of AI technologies for mental health should be reoriented from a focus on addressing a crisis of service demand towards equipping diverse patients and healthy people with the intelligence and healthcare services that empower them to improve their health and wellbeing. Through inclusion frameworks based on values of equality, diversity and inclusion (EDI), innovative AI technologies for mental health can be enhanced by democratic discourse and the voices of those directly affected by them. Strengthening the public voice requires agency, education, financial support, awareness, trust and assurance of people’s fundamental rights.

Individuals’ diversity

Members of the public are likely to experience AI technologies through commercial applications designed at a distance, such as mental health apps. Concerns about data protection and ownership often overshadow the opportunity that the design process offers companies to demonstrate community outreach and a commitment to social justice. Patient and public involvement has also been hindered by assumptions about the public’s willingness or ability to engage in technical debates. Reaffirming public engagement, no matter where tech is being developed, can only be achieved through the different channels of regulatory, governance, and public accountability, but it is vital to reframing the relationship between the diverse individual users and the designers of AI technologies.

Participatory co-design

The design of AI technologies is an ethical and political issue as much as a technical one. Equity issues cannot be resolved simply by extending user-experience testing back to the pre-design stage. To gain acceptance, the design process of AI technologies, and the decisions made about how they serve different groups in society, should be open to scrutiny. This can be supported by guidelines for design and best practice in AI-assisted care, together with evidence from participatory co-design in healthcare and research. There are implications for professional training and education in AI, including interdisciplinary learning, talent pipelines and human capital development strategies that value public engagement.

Open knowledge development and exchange

Open public access to information and research evidence is important for building shared knowledge about new technologies and for developing public awareness of the potential benefits of engagement, alert systems, and risks. Discussions about collective data ownership arrangements, and about the role of patient and public work in the production and interpretation of data, need to be more accessible and inclusive if they are to reflect the diversity of public interests and respond to changes in public opinion over time.

These priorities affirm direct and ongoing public involvement as a central strategy for creating ethical AI-assisted health care. This was the headline message of the 2019 State of the Nation Survey on accelerating AI in health and care: “Ground AI in problems as expressed by the users of the health system”. The NHS AI Lab is also encouraging design teams (through the £140M AI in Health and Care Awards) to consider inequalities in health outcomes, and we hope the new NHS AI Ethics Initiative will give special and focused attention to mental health as a priority area.

Given that a lack of transparency about patient data collection and use remains a major concern for UK mental health activists, and now for the UK public, patient engagement in the production of AI technologies could help to build trust and understanding about what all this data is for. For patient advocates, now is the time to find ways to influence the ethical commissioning, design, regulation, selection and purchasing, implementation, and evaluation of these powerful future technologies.

 


This post draws on the authors’ open access paper, Ensuring patient and public involvement in the transition to AI-assisted mental health care: A systematic scoping review and agenda for design justice, published in Health Expectations.

The authors thank the London School of Economics for funding open access publication, and NHS England and NHS Improvement for funding that made this research possible.

Note: This review gives the views of the authors, and not the position of the LSE Impact Blog, or of the London School of Economics.

Image Credit: National Cancer Institute via Unsplash. 



About the authors

Elizabeth Morrow

Dr Elizabeth Morrow is an independent researcher and inclusion specialist at Research Support NI and has served as a public member for the National Institute for Health Research since 2018.

Teodor Zidaru-Bărbulescu

Dr Teodor Zidaru-Bărbulescu is an anthropologist at the London School of Economics and Political Science (LSE).

Rich Stockley

Rich Stockley is a social scientist at Surrey Heartlands Health and Care Partnership and Head of Research at Surrey County Council.

