Crispin Simon discusses global access to artificial intelligence technology and the initiatives that could help address healthcare inequalities.
Artificial Intelligence (AI) has helped us through the COVID-19 pandemic in many ways, and not only via Netflix and Amazon’s alarmingly well-informed viewing and shopping suggestions. From the microbiology of vaccine development to the macro-epidemiology supporting lockdown policies, AI has been crunching terabytes of healthcare data. Even before the pandemic, capital had been pouring into a wide variety of AI healthcare start-ups and projects, and there is hope, based on AI’s low operating costs, that many of these initiatives will address global healthcare inequalities.
Lower-cost diagnostic tests can drive lower pricing, removing one obstacle to early diagnosis – a key predictor of health outcomes in many conditions. Improved accuracy could also eliminate costly and/or redundant treatments, including unnecessary antibiotic therapy. In India, where 65% of healthcare spending is paid out of pocket by individuals, the illness of a family member can have devastating financial consequences – affecting more than 30 million people per year (1,2).
Business as usual won’t be enough to reach low-resource communities
AI presents new challenges in all five stages of healthcare technology development: the Clinical Training of the algorithms; the Clinical Trials for proof of safety and effectiveness; the Regulatory Systems that deliver safety and quality; the Value-for-Money analysis that drives Pricing (coyly described as “reimbursement”); and the Adoption Strategy that addresses the obstacles and switching costs inherent in adopting any new technology.
Many AI projects are making good progress through these development stages, and 64 projects have now been approved for use in the US by the Food and Drug Administration (FDA). But not much of this progress is relevant in low-resource communities. Firstly, many of these approved and impressive technologies are designed to save money on processes that simply don’t exist in poor communities – like the systems for AI-powered breast lesion categorisation, which are useful only where regular screening requires the processing of large volumes of tests. Indeed, the many healthcare AI (AI-H) systems that interpret MRI or CT scans are likely to be irrelevant to low-resource communities where the image acquisition alone is unaffordable, irrespective of the cost of scan review.
Secondly, if these technologies satisfy the regulators, that does not mean that they satisfy the opinion-leading clinical scientists who influence adoption. Researchers at Imperial College, reviewing 91 clinical trials that compared the performance of AI-H algorithms for medical imaging with that of expert clinicians, found only two gold standard randomised clinical trials (RCTs) with published results, and a further eight unpublished RCTs. Most of the 81 non-randomised trials were found to fall short of prescribed standards with a high risk of bias. The conclusions of the paper are unusually blunt: “Future studies should diminish risk of bias, enhance real world clinical relevance, improve reporting and transparency, and appropriately temper conclusions”.
The authors also noted that 90% of the projects did not reveal their source code – implicitly questioning whether the algorithm training could have been adequately challenged in the peer review process. This is one of a number of areas in which the technology is running ahead of fundamental scientific, ethical and legal issues. These have been grouped into the categories of informed consent, safety and transparency, data privacy and contextual bias. The last is especially relevant to the issue of equal access. Not only is much of the AI algorithm training undertaken in American academic medical centres, where many variables differ from those in low-income communities, but any resulting difference in outcomes is likely to go unrecognised in the absence of investment in validation in different resource settings.
Thirdly, the payers introduce their own set of complicating incentives. They use the reimbursement mechanism that is within their gift to influence practice. In October 2020, the Centers for Medicare & Medicaid Services (CMS) in the United States of America (USA) approved the first reimbursement for an AI-H technology. The Viz.ai system identifies signs of stroke on brain CT and automatically contacts the neuro-interventional radiologist, bypassing the first reading normally performed by a general radiologist. Viz.ai evidently satisfied CMS that its system offered a significant improvement in time to treatment and clinical outcomes, and so was given a hefty reimbursement price of $1,040 (USD) – the assumption being that this would be sufficient incentive for hospitals to change the treatment pathway. Viewed through an American lens, the policy may save lives and money. However, if AI-H global prices are set by reference to affordability in the well-funded American healthcare system, and not to AI-H’s low costs, the opportunity to help low-resource communities will be lost.
If the promise of AI in low-resource communities is to be realised, AI-H start-ups should start their planning with a comprehensive understanding of healthcare in these target communities. In 2005, The Lancet challenged the Bill and Melinda Gates Foundation’s focus on technology, and Bill Gates agreed that “The world has to devote more thinking and funding to delivering interventions – not just discovering them” (3,4).
Prioritising, with a better model
The World Health Organization (WHO)-sponsored Global Alliance for Tuberculosis (TB) Drug Development is an example of a better model in practice, building on earlier work that emphasised access rather than technology (5). Its AAA approach – Availability, Affordability, and Adoption – identifies 12 dimensions of technology development where plans are tailored from the outset to the facts of life in poor communities. The AAA approach calls for observation of the key interactions throughout the care pathway. This enables programme sponsors to recognise important social constraints and cultural attitudes to disease and treatment, and patients’ perceptions of risk, affordability and value for money. The AAA approach has delivered an alternative drug regimen for children who had previously been subjected to six months of bitter-tasting, improperly formulated medicines. The new regimen has now been procured by more than 93 countries, which together account for around 75 percent of the estimated global childhood TB burden.
London-based Feebris Ltd is an example of an AI-H company following the principles of AAA. In the company’s India programme, local community volunteers deployed a mobile health toolkit, consisting of a mobile phone app and two point-of-care devices, to improve the management of respiratory conditions in children. Before embarking on clinical trial activity, the company undertook a baseline study across 12 communities in Mumbai, in partnership with the community health organisation Apnalaya. Findings included data on child vaccination, antibiotic prescription, the differential costs of the service providers and the (high) incidence of respiratory complaints – all significantly different to circumstances in high-income countries (6).
AI-H programmes that use the AAA model or an equivalent will have a much better chance of reaching their full humanitarian potential. The benefit for low-income communities should be established with evidence of safety, effectiveness and value for money in locally recruited RCTs, and with a comprehensive programme of post-marketing surveillance.
- Lagarde, Mylene and Palmer, Natasha. “The Impact of User Fees on Access to Health Services in Low- and Middle-Income Countries,” Cochrane Database of Systematic Reviews, no. 4 (April 13, 2011)
- Mohanty, S.K. et al. “Out-of-Pocket Expenditure on Health Care Among Elderly and Non-Elderly Households in India,” Social Indicators Research 115, no. 3 (February 2014): 1137–57
- Birn, Anne-Emanuelle. “Gates’s Grandest Challenge: Transcending Technology as Public Health Ideology,” The Lancet 366 (2005): 514–519
- Gates, Bill. “Remarks of Mr Bill Gates, Co-founder of the Bill & Melinda Gates Foundation at the World Health Assembly,” Geneva, Switzerland, 16 May 2005. Referenced as a web link, now deleted, in Laura J. Frost and Michael R. Reich, “Access – How Do Good Healthcare Technologies Get to Poor People in Poor Countries?” Harvard Center for Population and Development Studies (2008)
- Frost, Laura J. and Reich, Michael R. “Access – How Do Good Healthcare Technologies Get to Poor People in Poor Countries?” Harvard Center for Population and Development Studies (2008)
- Feebris Ltd, Unpublished data e-mailed to author, November 2020
The views expressed in this post are those of the author and in no way reflect those of the Global Health Initiative blog or the London School of Economics and Political Science.