Can AI really help solve a healthcare system in crisis?

What if AI isn’t all that great? What if we’ve overhyped its potential to the point of being downright dangerous? That’s the concern of leading NHS cancer experts, who warn that the health service is obsessed with new technology to the point of putting patient safety at risk. From our article yesterday:

In a stark warning, cancer experts say “novel solutions” such as new diagnostic tests have been wrongly touted as “magic bullets” for the cancer crisis, but “none address the fundamental issues of cancer as a systems problem.”

A “common fallacy” of NHS leaders is the assumption that new technologies can reverse inequalities, the authors add. The reality is that tools such as AI can create “additional barriers for those with poor digital or health literacy”.

“We warn against technocentric approaches without a solid evaluation from an equity perspective,” the document concludes.

The paper, published in The Lancet Oncology, argues for a back-to-basics approach to cancer treatment. Its proposals focus on solutions such as hiring more staff, redirecting research into less fashionable areas such as surgery and radiotherapy, and creating a dedicated technology transfer unit to ensure that treatments that have already proven effective become part of routine care.

In the face of such much-needed improvements, AI can be an attractive distraction. The promise of the technology is that, within a few years, a radical increase in capacity will allow AI to perform tasks in the health service that cannot be done today, or at least that currently require hours of a highly trained specialist’s time. And the fear among experts is that this promise about the future is distracting from the changes needed now.

It effectively presents AI as the latest example of “bionic duckweed”, a term coined by Stian Westlake in 2020 to cover the use, deliberate or not, of technology that may or may not arrive in the future to argue against investment in the present. Elon Musk’s Hyperloop is perhaps the most famous example of bionic duckweed, first proposed more than a decade ago explicitly to try to dissuade California from moving ahead with plans to build a high-speed rail line.

(The term comes from a real-life incident: in 2007, the UK government was advised not to electrify the railways because “we could have… trains using hydrogen developed from bionic duckweed in 15 years… we might have to take the wires out and it would all be wasted.” Seventeen years later, the UK is still running diesel trains on its non-electrified lines.)

But the fears the article expresses about AI – and the general technophilia of the healthcare system – go beyond the mere possibility that it might not materialise. Even if AI does start to make inroads in the fight against cancer, without the proper groundwork, it may prove less useful than it could be.

Back to the article, a quote from the lead author, oncologist Ajay Aggarwal:

AI is a workflow tool, but is it actually going to improve survival? Well, so far we have limited evidence of that. Yes, it is something that could potentially help the workforce, but you still need people to take a patient’s medical history, draw blood, perform surgery, and deliver bad news.

Even if AI is as good as we hope, in the short term that might mean little for healthcare overall. Suppose AI can significantly speed up a radiologist’s work, diagnosing cancer earlier or faster after a scan: that means little if there are bottlenecks in the rest of the healthcare service. In the worst case, you might even see a kind of AI-enabled denial of service attack, with the technology-driven sections of the workflow overwhelming the rest of the system.

In the long term, AI proponents hope that systems will adapt to incorporate the technology well (or, if you’re a true believer, maybe it’s simply a matter of waiting until AI can staff a hospital from start to finish). But in the short term, it’s important not to assume that just because AI can perform some medical tasks, it can help fix a failing health system.

Digital Government

New DSIT Secretary Peter Kyle in Downing Street. Photograph: Tejas Sandhu/PA

Last week we looked at some ideas for what the new government could do on technology, and it seems at least one of those suggestions has landed. The new Secretary of State for Science, Innovation and Technology, Peter Kyle, has only been in office for a few days, but is already on message. According to him, DSIT is:

Becoming the centre of expertise and delivery of digital services in government, improving the way government and public services interact with citizens.

We will act as a leader and partner across government, with industry and research communities, to boost Britain’s economic performance and empower our public services to improve people’s lives and life chances through the application of science and technology.

Specifically, DSIT will “help upskill civil servants to better use AI and digital technology in their frontline work.” Last week, we called on Labour to “get serious about AI-powered government” – it appears they are already doing just that.


Digital colleagues

Will your next new colleague be digital? Photo: Andriy Popov/Alamy

On the one hand, look, this is clearly a publicity stunt:

Lattice will guide an AI employee through the same processes a human employee goes through when starting a new role.

We will add them to the employee record and integrate them into our HRIS; we will add them to the org chart so you can see where they sit within a team and department; we will onboard the AI employee and ensure they receive the necessary training for their role.

We will assign goals to this digital worker to ensure that they meet certain standards, just as we would with any other employee. This will be a great learning moment for us and for the industry.

That’s Sarah Franklin, CEO of HR platform Lattice, on the company’s plans to have an AI employee follow the same steps as a human hire. But on the other hand, if you want to see what success for AI would look like, this isn’t far off.

Companies are bad at embracing new technologies. If something works well enough, they tend to stick with it for years (even decades), and persuading them to change to a different way of doing things is a major hurdle, even if the benefits seem great.

But they are much better at onboarding new staff. They have to be: staff quit, retire, have children, or die. If you can tailor the process of onboarding an AI worker to be more like the latter and less like the former, you may well end up greatly expanding the pool of companies that feel they can implement AI in their own world.

The broader TechScape

Hackers have stolen digital tickets to Taylor Swift’s Eras tour. Photograph: David Parry/PA Media Assignments
