Good idea, but very hard to implement. It should be paired with a sandbox, IMHO with opt-in for patients.
On related topics, you can read AI come le medicine and Prediction poisoning, which address mitigating the risk of "automation bias" (that is, the incentive-driven tendency to confirm the machine's prediction).
Source: The Wall Street Journal
“One of the key benefits of an AI-powered product is that it can be improved over time,” says Tim Sweeney, CEO of Inflammatix, a developer of blood tests designed to predict the presence, type, and severity of an infection. If the company learned that a particular pattern of the body’s immune response strongly indicates the onset of sepsis, for instance, it would want to retrain its algorithms to account for that. “If you have a lot of extra data, you should be improving your results,” Sweeney says. Under the FDA’s traditional method of oversight, however, companies like Inflammatix would likely have to get additional permission before changing their algorithms.
The agency is beginning to point developers down an alternative path, in April offering formal guidance on how they can submit more flexible plans for devices that use AI. A manufacturer can file a “Predetermined Change Control Plan” that outlines expected alterations. FDA staff—including lawyers, doctors, and tech experts—review the plans and the scope of the expected changes. Once the device is approved, the company can alter the product’s programming without the FDA’s blessing, as long as the changes were part of the plan.
Continue reading here: Your Medical Devices Are Getting Smarter. Can the FDA Keep Them Safe? – WSJ