
13 March 2019

AI for physicians: what should you be doing?

It’s become a truism to say that the field of AI is moving fast, but the pace of change in thinking around AI in health has been breathtaking. Search for ‘AI in health and social care’ on Google and you get 233 million hits, up from 109 million just 6 months ago. In the last year alone, we’ve had a stream of policy reports, including Future Advocacy’s collaboration with the Wellcome Trust on the Ethical, Social, and Political Challenges of AI in Health, a briefing note by the Nuffield Council on Bioethics, and an analysis of the opportunities and risks for the NHS by the think tank Reform. The report by the House of Lords’ Select Committee on Artificial Intelligence – a thoughtful and comprehensive review of the policy challenges around AI – has a whole chapter dedicated to the specific implications for healthcare.

And it’s not just think tanks and parliamentary bodies treating AI in healthcare as an abstract concept. Last summer, both the Royal College of Radiologists and the Royal College of Physicians (RCP) published their own position statements on AI, the latter in consultation with experts in health informatics, patients, healthcare professionals and industry representatives. A significant amount of time at NHS England’s Health and Care Innovation Expo, held in September, was spent discussing the potential for AI to revolutionise care in the NHS – the Academic Health Science Networks launched their State of the Nation report on AI in health in the UK, the Department of Health and Social Care unveiled its ‘initial code of conduct for data-driven health and care technology’, and the secretary of state, Matt Hancock MP, delivered a keynote speech outlining his ambition to ‘make the UK the world leader ... in HealthTech’.

And in a flurry of activity to round off the year, December saw the Royal College of Surgeons’ Commission on the Future of Surgery recommending that surgical trainees should have ‘sufficient … training in the use of new technologies’ including AI, NICE releasing an Evidence Standards Framework for Digital Health Technologies, and the chief medical officer for England firmly putting AI at the centre of her vision for healthcare in 2040 as part of her annual report. The inescapable fact is that AI in health is here to stay.

So what should you be doing about it? When RCP president Professor Andrew Goddard asks the medical profession to embrace the technology, what does this mean in practice? Here are a few suggestions.

The inescapable fact is that AI in health is here to stay

Dr Matthew Fenech, AI policy consultant

Talk the talk

It’s important to familiarise yourself with AI in all its different incarnations, and begin to understand which of this vast suite of tools may be best applied to your area of work. Thankfully, there are a number of resources out there that should help ease you into this unfamiliar territory. Future Advocacy’s report has an introductory section with an accessible definition of AI and real-world examples of the five use cases for AI in health and medical research: process optimisation, preclinical research, clinical pathways, patient-facing applications, and population-level applications. But, as discussed earlier, AI moves fast and a report published today is likely to be out of date tomorrow. For this reason, it’s worth subscribing to newsletters that provide regular updates, and following other resources such as podcasts.

My personal favourite is Azeem Azhar’s Exponential View, which frequently features stories about novel applications of AI in health and social care, along with Azeem’s thoughtful commentary. Also recommended is the Health and Social Care.AI initiative – their monthly AI webinars and Facebook group are highly inclusive and welcome anyone interested in learning about and influencing AI in health and social care, regardless of their background or prior experience of AI. Keep an eye out for their relaunch later this year.

For a big-picture view that is by turns technical and philosophical, but always interesting, it’s worth tuning in to Lex Fridman’s MIT AI podcast. And if you’re responsible for organising your local or regional training days, why not invite someone to talk about AI in your particular specialty?

It’s worth starting to think now about how to improve at those aspects of your job which will remain human for longer.

Dr Matthew Fenech, AI policy consultant

Critically analyse your own work

Health and care work has traditionally been considered resistant to AI-enabled automation. Recent estimates of the automatability of different sectors suggest that 12% of jobs in health and social care could be displaced by automation over the next 10 years – a loss expected to be more than offset by the creation of new jobs in the sector as a result of economic growth. But economists and policymakers now realise that it’s not whole jobs that will be automated, but their constituent tasks. Jobs are concatenations of different tasks, each requiring different skills and each varying in its automatability.

In 2017, a report by the management consultancy McKinsey suggested that about 60% of jobs have at least 30% of tasks that can technically be automated with current technology – that is, without even accounting for advances in the capabilities of AI and other automating technologies. The authors go on to describe automatable tasks as those comprising ‘specific actions in familiar settings where changes are relatively easy to anticipate’.

When you think about your own day-to-day work, does that description sound familiar? You’re not alone. NHS nurses, for example, spend an estimated 2.5 million hours a week on clerical tasks, according to a poll carried out for the Royal College of Nursing – equating to more than an hour per day for every nurse. Healthcare professionals (HCPs) need to start thinking, in conjunction with AI experts, about how best to safely and ethically automate these routine aspects of their work. More importantly, though, the professions need to consider how best to use the time that this technology could free up.
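As a rough sanity check on that headline figure (taking roughly 300,000 NHS nurses as an illustrative round number – my assumption, not a figure from the poll itself):

$$
\frac{2{,}500{,}000 \ \text{hours/week}}{300{,}000 \ \text{nurses}} \approx 8.3 \ \text{hours per nurse per week} \approx 1.2 \ \text{hours per day}
$$

which is indeed consistent with ‘more than an hour per day for every nurse’.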

The utopian vision is one where HCPs spend much more time with patients and their relatives, listening to their concerns, and better explaining their diagnosis, prognosis, and treatment options. But a more empathetic cohort of physicians will not be the inevitable consequence of the increased use of AI in healthcare – undergraduate and postgraduate training programmes will need to be adapted to account for these changes in practice. It’s worth starting to think now about how to improve at those aspects of your job which will remain human for longer.

Be sceptical …

It won’t have escaped your attention that there’s a lot of hype around AI in healthcare. Barely a day goes by without a headline screaming that AI now ‘beats’ doctors – whatever that means – at some diagnosis or other. In fact, we’re now entering the phase where some of the early promises made about AI’s revolutionary potential are coming up short, as IBM are discovering with their much-vaunted Watson for Health programme.

Thankfully, there are some physicians and other clinicians who keep a close eye on the claims made by AI manufacturers, and it’s worth reading what they have to say. Blogs by Dr Margaret McCartney, Dr Luke Oakden-Rayner, and Professor Enrico Coiera stand out for their accessible and fair critiques of the latest announcements in the AI in healthcare space, while Cathy O’Neil deals with wider issues around the fair application of algorithms to sensitive sectors.

Ultimately, we need to remember that AI tools are just that: tools. We need to evaluate them using the same rigorous standards we apply to any new medical innovation. You wouldn’t dream of prescribing a new drug without first having read the literature and convinced yourself that the benefits do outweigh the risks for the patient in question – algorithms should be no different.

You wouldn’t dream of prescribing a new drug without first having read the literature and being convinced that the benefits outweigh the risks for the patient in question. Algorithms should be no different.

Dr Matthew Fenech, AI policy consultant

… but not cynical

There’s some really encouraging work out there. The collaboration between Moorfields and DeepMind Health, published in Nature Medicine, is a great example of an AI tool being developed to address a specific clinical need that was identified by a frontline clinician, with input from patients and other stakeholders. Kheiron Medical, a deep-learning startup developing a tool for breast screening, has recently announced funding under NHS England’s Test Beds programme to perform clinical trials of its software, in collaboration with the East Midlands Radiology Consortium. Future Advocacy’s work on the impact of AI in low- and middle-income countries identified many exciting innovations that aim to use AI to deliver services that are severely underdeveloped in certain areas, such as in the diagnosis of malaria or the prediction of outbreaks of dengue fever.

Moreover, there’s another angle to this. If we want all medical practice to be evidence-based, then we need to better understand the insights that healthcare data can provide, and it’s undeniable that we’re not yet using this potential goldmine to maximum effect. AI can help here, by providing data analysis tools of far greater scale and speed than we’ve seen before.

Ultimately, extreme views (either ‘AI is going to save the world’ or ‘there’s no evidence; this is codswallop’) help no-one. The truth about AI in healthcare is much more nuanced, and it’s up to all physicians to educate themselves sufficiently to be able to understand the opportunities, tackle the risks, and communicate both to their patients.

Dr Matthew Fenech is an affiliate consultant on AI in Health at Future Advocacy. You can follow him on Twitter at @MattFenech83.

This article appears in February’s Commentary magazine.