Until very recently, the IG-NB, the interest group of notified bodies for medical devices in Germany, considered continuous-learning systems strictly non-certifiable. This has changed with the latest version 5 of its AI questionnaire:
In the case of "learning" software systems, the process of learning generally changes the performance of the device. Such changes, if they exceed a certain level, have to be considered as significant changes and require a new conformity assessment, which again has to be performed before placing the device on the market. Practice has shown that it is difficult for manufacturers to sufficiently prove the conformity for AI devices, which update the underlying models using in-field self-learning mechanisms. Notified bodies do not consider medical devices based on these models to be "certifiable" unless the manufacturer takes measures to ensure the safe operation of the device within the scope of the validation described in the technical documentation.
The new thinking covers two aspects:
Performance changes below "certain levels" are not even considered significant changes and therefore don't require a new conformity assessment.
Even self-learning in the field is certifiable if "the manufacturer takes measures to ensure the safe operation of the device within the scope of the validation" (a minimal sketch of one such measure follows this list).
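To make the second aspect concrete: one such "measure" could be a deployment gate that only activates an updated model while it stays inside the performance envelope claimed in the technical documentation, and otherwise rolls back to the validated model. The Python sketch below is our illustration only, not an IG-NB requirement; all names, metrics, and thresholds are hypothetical.

```python
# Hypothetical sketch: gate in-field model updates against the validated
# performance envelope from the technical documentation. Illustrative only.
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple


@dataclass(frozen=True)
class ValidationEnvelope:
    """Performance bounds documented in the conformity assessment (assumed)."""
    min_sensitivity: float
    min_specificity: float


def evaluate(model: Callable[[object], bool],
             cases: Sequence[Tuple[object, bool]]) -> Tuple[float, float]:
    """Sensitivity and specificity of `model` on a locked validation set."""
    tp = fn = tn = fp = 0
    for sample, is_positive in cases:
        pred = model(sample)
        if is_positive:
            tp, fn = tp + int(pred), fn + int(not pred)
        else:
            tn, fp = tn + int(not pred), fp + int(pred)
    return tp / max(tp + fn, 1), tn / max(tn + fp, 1)


def gate_update(candidate: Callable[[object], bool],
                current: Callable[[object], bool],
                cases: Sequence[Tuple[object, bool]],
                envelope: ValidationEnvelope) -> Callable[[object], bool]:
    """Deploy the candidate only if it stays inside the validated envelope;
    otherwise keep the current model (out-of-envelope changes would count
    as significant and trigger a new conformity assessment)."""
    sensitivity, specificity = evaluate(candidate, cases)
    if (sensitivity >= envelope.min_sensitivity
            and specificity >= envelope.min_specificity):
        return candidate
    return current
```

The design choice matters more than the code: the validation set and the envelope are frozen at certification time, so the self-learning loop can only move the device within the scope that was already validated.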
This change of mind is
a good step to cut back on over-regulation,
more in line with the upcoming EU AI Act and the FDA's approach to continuous learning,
and closer to our own experiences with living certification.