It was written on the same day I was writing… (and published 2 weeks later)
If a user decides to opt out and remove himself from the service, is it enough for the operator to delete the data referring to that person, or will it also have to delete the traces of that person scattered throughout the model distilled from that data? Will the company have to retrain the model every time a user decides to leave the service?
It seems reasonable to me that the answer is yes: if the user asks the operator to forget about him, it is not enough to delete the data; the “artificial intelligence” SALAMI must also forget about him.
I was not aware of these FTC decisions, but this strikes me as an absolutely reasonable approach.
The Federal Trade Commission has recently begun to require algorithmic disgorgement in its enforcement of data protection laws – the deletion of not just the improperly obtained data itself, but artificial intelligence models and algorithms built using such data.
This Article provides a brief overview of machine learning models and algorithms and the basic function and use of artificial intelligence and describes the FTC’s role in the regulation and enforcement of data collection rules.
It then discusses recent enforcement actions brought by the FTC that utilized algorithmic disgorgement, analyzes the legality of the FTC’s authority to order destruction of computer data models and algorithms, and discusses the likelihood and possibility of future use of the new remedy, as well as the shape that any future use is likely to take.
Finally, it deliberates on the legal, policy, and practical implications of algorithmic disgorgement and proposes some possible alternatives to and restraints on the FTC’s use of algorithmic destruction orders.