During the first reading of the bioethics bill in the National Assembly, deputies examined the question of artificial intelligence (AI) in health. The text provides for the principle of a “human guarantee” in the interpretation of medical results when machines are used.
PMA (medically assisted procreation) for all was not the only subject of the debate. On Tuesday, October 15, the National Assembly adopted at first reading, by 359 votes for and 114 against, the bioethics bill put forward by the government. After 15 sessions devoted to examining the four most controversial articles, in particular the measure opening up medically assisted procreation to all women, the parliamentarians turned to the question of artificial intelligence (AI), whose legislative framework remains very vague.
In September 2018, the National Consultative Ethics Committee (CCNE), an advisory body responsible for issuing opinions on the use of technical progress in biology and medicine, had warned of the “risk of depriving the patient, faced with the decision proposals produced by algorithms, of a large part of his capacity to participate in the construction of his care process” and of the threat of a “reduced consideration of individual situations” by the machine. The government has sought to be reassuring: Article 11 therefore enshrines in law the principle of a “human guarantee” in the interpretation of medical results whenever AI is used.
“Some studies have shown that the efficiency of artificial intelligence devices can equal or even exceed that of the best human operators for certain specific tasks, in particular the recognition of images or signals (interpretation of imaging or electrocardiograms, for example), or the development and implementation of multi-criteria decision rules,” explains the government. It recognizes, however, that the validity of these devices’ conclusions can be “limited by defects or biases (intentional or not) related to their development and/or to the bases on which the learning took place, but also by the limits of the information actually taken into account with regard to the specific particularities of each clinical situation.”
Always putting the machine’s results into perspective
This is why the Ministry of Health is now taking advantage of the bioethics bill to strengthen patient information and guarantee a human interpretation of results. Whenever “algorithmic processing of massive data” is used for “acts for preventive, diagnostic or therapeutic purposes,” the doctor must inform the patient “of this use and of the way this processing works.”
In addition, a health professional will have to configure the algorithms, which can be adjusted if necessary. “Traceability” of the actions carried out is also required. Finally, the resulting information must be made “accessible to the health professionals concerned.”
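To give a concrete sense of what these obligations could mean for the makers of medical software, here is a minimal, purely illustrative sketch in Python of a traceability record for an algorithm-assisted act. Every class, field and value below is a hypothetical assumption for the sake of the example: the bill sets out principles (patient information, configuration by a health professional, traceability, access for the professionals concerned), not a technical specification.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional


@dataclass
class AlgorithmicActRecord:
    """Hypothetical traceability record for one AI-assisted medical act.

    Illustrative only: the bill does not prescribe this data model.
    """
    patient_id: str
    act_purpose: str                  # e.g. "preventive", "diagnostic" or "therapeutic"
    algorithm_name: str
    algorithm_version: str
    configured_by: str                # health professional who set the parameters
    patient_informed: bool = False    # patient told that an algorithm is used and how it works
    events: List[str] = field(default_factory=list)        # traceability of actions carried out
    accessible_to: List[str] = field(default_factory=list)  # professionals who may consult the record
    final_decision_by: Optional[str] = None                  # the clinician remains the decision maker

    def log(self, action: str) -> None:
        """Append a timestamped entry so every action carried out stays traceable."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.events.append(f"{stamp} {action}")


# Usage sketch: the algorithm's output is logged as one input among others,
# and the final decision is attributed to the doctor, not to the machine.
record = AlgorithmicActRecord(
    patient_id="anonymised-123",
    act_purpose="diagnostic",
    algorithm_name="image-triage-model",
    algorithm_version="1.4.2",
    configured_by="Dr. A (radiologist)",
)
record.patient_informed = True
record.log("patient informed of algorithm use and of how the processing works")
record.log("algorithm suggestion recorded: 'suspicious lesion, confidence 0.82'")
record.log("doctor reviewed suggestion against clinical history and other imaging")
record.final_decision_by = "Dr. A (radiologist)"
record.accessible_to = ["Dr. A", "referring physician"]
```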
Doctors will thus always have to put the results proposed by the AI into perspective against the other information available to them. “This makes it possible to guarantee that the final decision, which draws on these (algorithmic) devices, is taken by the doctor or healthcare professional and the patient himself, in particular to comply with the legislative principle of informed consent to care,” notes the bill.
Maintaining the link between doctor and patient
This measure “does not call into question the possibilities of using efficient diagnostic, preventive or therapeutic guidance devices” or medical devices integrating AI elements, the Ministry of Health assures; it simply keeps the use of the machine under “medical decision control.” “The singular dialogue between the patient and his doctor must be preserved, and it seems essential that artificial intelligence devices remain a support for a human decision without replacing it,” insists the government. Finally, with regard to governance, the bioethics bill seeks to extend the scope of competence of the National Consultative Ethics Committee for Life Sciences and Health.
These provisions are “particularly welcome” according to the Council of State, which, in a report devoted to bioethics in 2018, stated that it “would be insufficient to require, for the purpose of transparency, the publication of a source code that contributes only marginally to the understanding, by doctors and by patients, of the logic at work in artificial intelligence devices.”
Having passed the Hemicycle, the bill will now be debated in the Senate from January, so the text may still change. It will then return to the Assembly for a second reading. If all goes well, the law should be enacted around spring 2020.