How are artificial intelligence algorithms disrupting medical practice? Professor Jean-Emmanuel Bibault, oncologist and researcher in AI applied to medicine, takes stock.
An oncologist at the Georges Pompidou European Hospital in Paris, Jean-Emmanuel Bibault is also a university professor and a researcher in artificial intelligence applied to oncology at an Inserm laboratory. He is notably the author of 2041, Medical Odyssey: How Artificial Intelligence Is Shaking Up Medicine (Éditions des Équateurs, 2023).
Why Doctor: How is artificial intelligence (AI) revolutionizing medical practice?
Jean-Emmanuel Bibault: First, AI makes it possible to do what doctors already know how to do, such as analyzing images from radiology examinations (mammography, chest X-ray, MRI, etc.) or biopsies (for example, examining tumor fragments under a microscope). AI will be a game-changer by making diagnoses faster and, eventually, probably more reliable. Today, the interpretation of results is always verified by a human, but in the coming years AI will be increasingly able to validate them on its own.
Next, AI is used in medicine to do what doctors do not know how to do. For example, carrying out predictive modeling tasks, that is, predicting in advance a patient's chances of recovery, the potential risks of side effects, and so on. Doctors cannot do this and never will be able to, because humans simply do not have the cognitive faculties. But AI algorithms, by learning from the past data of many patients, can predict these kinds of risks for new patients.
You distinguish machine learning AI from so-called symbolic AI. Could you be more specific?
Symbolic AI, which refers to an algorithm written by a human to simplify a task, is the oldest form. In the medical context, it dates back to the early 1950s, notably with antibiotic recommendation software, which was never widely used in everyday practice. But when we talk about AI today, we mean machine learning (we provide data to an algorithm, which learns its own rules from it), or even deep learning (we create artificial neural networks which, based on data, carry out mathematical operations to make a prediction or a diagnosis).
“AI is capable of predicting in advance a patient’s chances of recovery, the potential risks of side effects…”
Which areas of medicine are already using AI on a daily basis?
The field that uses AI the most today is radiotherapy, mainly for the treatment-preparation stage called contouring, which consists of determining, from the patient's 3D scans, where the beams will be directed. The operation is crucial: an error of 1 mm can represent a 10% loss in a patient's chances of recovery. While this contouring was previously done by hand and took several hours, deep learning algorithms can now do it automatically in two to three minutes. Radiology also uses AI extensively, with algorithms able to detect breast cancer on mammograms or identify traumatic fractures on X-rays. This is a huge step forward, even if the radiologist still checks afterward that the work was done properly.
What is the margin of error between AI conclusions and human verifications?
A recent Swedish study of 80,000 patients showed that breast cancer screening, which until now has relied on two doctors reading each mammogram, was just as effective when done by one doctor and an AI: the cancer detection rate was the same with or without AI. On the other hand, using AI saved 44% of the time... For the moment, the gain is therefore more one of time and efficiency than of pure skill, but this should change in the years to come.
What place will doctors have in the face of AI?
The goal of medicine is not to provide work for doctors but to cure patients. So if more advanced techniques for healing exist, there is no reason to oppose them. Doctors must take part today in the development of AI, to ensure that it is created first and foremost for the benefit of patients – which is far from a given, because many private financial interests are at stake. But if doctors do not take ownership of the new AI tools, the risk is that they will soon become mere "implementers" of what the AI has decided in their place. That could have disastrous consequences for the practice of medicine.
In the same way that the calculator has reduced our mental arithmetic abilities, could AI make doctors less competent?
That seems inevitable if we are not careful. Take the example of contouring: for the moment, it is still often performed, or at least validated, by humans, and radiotherapy residents still learn radioanatomy. But with AI, there will soon be no need for doctors trained in this practice, so residents will no longer have to learn it. And if they can no longer do it themselves, they will not be able to pass it on to the next generation either... Yet if no one can teach it or do it without the machine, then no one will be able to check what the AI is doing, and we risk being forced, by circumstance, to trust it blindly. Faculties and professors absolutely must continue to teach radioanatomy (among other things) to prevent future practitioners from being overtaken by the machine.
"If doctors do not take ownership of AI tools, they risk becoming mere 'implementers' of what the AI has decided for them"
What are the limits of AI in the medical field?
The most pessimistic believe that because AIs are built from retrospective data from the past, they are inherently incapable of making new discoveries: they can only repeat what they have seen, like a kind of parrot. The most optimistic, of whom I am one, believe on the contrary that AI algorithms trained on enough data could make connections in pathophysiology, biochemistry, genomics and so on that humans are incapable of making. Endowed with a kind of creativity, AI could thus generate new medical discoveries from which patients could benefit.
Furthermore, as the Haute Autorité de Santé recently pointed out, AI research is necessary, but it should not overshadow purely clinical research. It is crucial to pursue both in parallel, because even with the best predictive AI in the world, we will still have to make diagnoses and evaluate treatments on real patients, for example before putting a new drug on the market. Perhaps in 15 or 20 years we will be able to run virtual clinical trials in a few minutes that would have taken years in real life, but we are not there yet!
What can AI bring to the “psychological” field? Can we imagine a futuristic version of ChatGPT that would serve as our therapist?
In the context of therapy, there is an interpersonal dimension, an exchange between two humans. We speak in particular of "transference", the patient's projection of feelings onto their therapist... AI, for the moment, cannot replace this relationship. In my opinion, a virtual psychologist could work with some patients, for certain problems, but not with everyone. To be truly effective, the model would have to be enriched by what each patient tells it, like a real therapist. ChatGPT is slowly starting to do this, proof that it is not completely impossible.
How does the general public, patients, react to new AI tools?
Some may be reluctant or have doubts, particularly on ethical grounds, but all patients trust AI when their life is at stake. In oncology, if we offered a treatment based on AI and it gave better chances of cure than a treatment "without AI", I think every patient would choose the AI.
AI raises many ethical questions. How can we “trust” a machine that predicts a risk of illness or death ten years in advance?
This is a dizzying problem! If an AI predicts an 80% risk of developing lung cancer by age 50, is that information of any use to the patient? Isn't it, on the contrary, anxiety-inducing and therefore harmful to the patient's quality of life? In reality, it is essential to design models capable of "explaining" their predictions and, above all, of proposing ways to reduce that risk of cancer, for example by changing one's lifestyle. Only then does the prediction become beneficial for the patient. In the United States, researchers have shown that AI models could determine the suicide risk of Facebook users from their posts. The social network, which implemented this functionality, now offers suicide prevention hotline numbers to the users concerned. Why not, but we can also easily imagine the potential abuses of this type of tool: an employer or an insurer could profile everything you publish on social networks and thus determine your health risks, particularly psychiatric ones, which could harm you. These are ethical questions we will have to answer.
“Even if we have the best prediction AI in the world, we will always have to evaluate treatments on real patients”
Should we regulate the use of AI in medicine, in the same way that we control the marketing of a drug, for example?
AI is already regulated, and we obviously need rules that protect patients from harmful uses or from AI not validated by science. But regulation becomes a problem when it hinders innovation – we can see it in Europe, which is lagging behind digitally. By over-regulating, over-administering, putting up too many safeguards, we risk shooting ourselves in the foot. For example, in Europe, for every euro spent on medical research itself, four euros are spent verifying that that euro is well spent... In other words, 80% of financial resources go to oversight rather than research. The United States and China, which are subject to fewer rules, are therefore moving much faster.
Should we be afraid of AI developed by private actors? We are thinking in particular of OpenAI or the GAFAM companies, which offer AI models to the general public, including for medical diagnosis...
The problem is that we risk not being able to do much to regulate them, just as we failed to slow Facebook's growth in its day... AI is not a magic wand, but it will potentially impact almost every area of medicine. In what way? That is the question.