on ChatGPT and the decision by the Italian Data Protection Authority. They’ve just opened Pandora’s box…

This affair involving ChatGPT and the Italian Data Protection Authority (DPA) makes me think back to the Google-Vividown case, in which Google executives were sanctioned because (if I remember correctly) the recommender system had classified a video featuring a bullied boy with Down syndrome as “funny.”

It was then said that the measure was without merit, that an intermediary was always exempt from liability, and that a “platform” was merely an intermediary. Then, too, there was much hand-wringing about the alleged incompetence of the Italian DPA and the country’s technological backwardness (there is some, of course, but not much more than in many other countries).

I wrote back then that the case did not seem without merit to me. I wondered whether the fact that a decision was made by a machine was enough to absolve the operator of any responsibility.

Many years later, and after several updates to the framework, the issue is at the center of debate even in the U.S., with pressure to reinterpret or rewrite Section 230 of the Communications Decency Act (part of the Telecommunications Act of 1996) in order to redefine the perimeter of operators’ liability.

Just as back then, today we are tearing our hair out over the DPA’s action against OpenAI, but once again the case does not seem so clear-cut, and not only for the reasons that have been given.

In summary, as I understand it, the DPA raises three main objections: the lack of a privacy notice, the processing of children’s data, and the lack of a legal basis.
I leave it to legal experts to delve into the merits.

It seems to me that all the monopolists open data centers in the EU to host user data, promise to resist requests for information from U.S. authorities so as to appear GDPR compliant, build sophisticated legal mechanisms to minimize the risk of processing children’s data, and so on. OpenAI is not doing such things yet, having become insanely relevant in no time at all; some extra effort on the company’s part does not seem out of place to me.

The issues of notice and legal basis are addressable, as many multinational companies are showing. Mitigating the risk of processing children’s data is also addressable: it is an issue faced by everyone who collects user data to build profiles for advertising, and OpenAI will be able to address it as Facebook, Instagram, and others do.

As I understand it, the DPA did not block the service but the processing of the data. It seems to me that the service could have continued to be provided if they had stopped the collection and transfer to the U.S. of Italian users’ data. I do not doubt that the company will find a solution that puts it on the same footing as other U.S. operators, if it wants to. (It is not certain that such a solution would withstand a future “Schrems III” ruling, and we can bet one will come, but in the meantime things can continue.)

The DPA raised the issue with an urgent measure during a still-unfinished inquiry, without consulting the other EU DPAs (once gathered in the Article 29 Working Party, now the European Data Protection Board). This seems to me less than temperate, and it has rightly caused an uproar that is reverberating around the world and will prompt reflection within the Board.

As I said, these aspects seem to me of little concern.
Pandora’s box has been opened, and this is what has come out of it so far.

But when the EU DPAs look into it, in my opinion they will find another aspect worthy of reflection and probably censure, one not yet raised and specific to machine learning: what is the legal relationship between personal data and the model, from the point of view of the GDPR?

If a user decides to opt out and remove himself from the service, is it enough for the operator to delete the data referring to that person, or will it also have to delete the traces of that person scattered through the model distilled from those data? Will the company have to retrain the model every time a user decides to leave the service?

It seems reasonable to me that the answer is yes: if the user asks the operator to forget about him, it is not enough to delete the data; it is also necessary for the “artificial intelligence” (the SALAMI) to forget about him.
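To make the question concrete, here is a minimal sketch of the “naive” way for a model to forget one person: drop that person’s records and retrain from scratch. Everything in it is hypothetical (toy data, a generic scikit-learn classifier, an invented user_id tag; nothing to do with OpenAI’s actual systems). The point is only that per-user erasure presupposes that each training record is linked to a user, and that honoring a request this way means paying the full training cost again.

    # Naive "machine unlearning": honor an erasure request by retraining
    # without the departing user's records. Toy data; the user_id tag is
    # hypothetical and stands for whatever links a record to a person.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))              # toy feature matrix
    y = (X[:, 0] > 0).astype(int)               # toy labels
    user_ids = rng.integers(0, 100, size=1000)  # tag linking each record to a user

    def forget_user(user_to_erase: int) -> LogisticRegression:
        """Retrain from scratch on everything except the erased user's records."""
        keep = user_ids != user_to_erase
        return LogisticRegression().fit(X[keep], y[keep])

    model_without_42 = forget_user(42)  # user 42 asks to be forgotten

Research on “machine unlearning” (for example, sharded training schemes that retrain only the shard containing the departing user’s data) exists precisely because this per-request retraining does not scale.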

The consequences might be minor for OpenAI, which does not have a business model centered on user profiling, but they might be far more significant for the “surveillance capitalists,” some of whom already promise to erase the data after a short time, since they retain the models anyway.

This is what I think will be the most relevant surprise to come out of the Pandora’s box uncovered by the Italian DPA.

Let’s get the popcorn ready.

If you like this post, please consider sharing it.

2 thoughts on “on ChatGPT and the decision by the Italian Data Protection Authority. They’ve just opened Pandora’s box…”

  1. “Will the company have to retrain the model every time a user decides to leave the service?
    It seems reasonable to me that the answer is yes”

    Uhmmm… looks to me like this is a request for a TARDIS… 🙂
    If I do a consulting project for a customer, (s)he owns the IP on it… but I own the fact that “I have learnt something” in the process… how can I forget it?
    If I have developed a BP model for a customer and developed a “formula” to simulate something… and in the next project I refine it… [repeat for years] but after 10 years I am told I was not entitled to “use” the know-how I developed in project 3… how is that manageable? Even if I wanted to… how can I predict what I would have “developed” without project 3?
    Getting closer… if in the BP model I define a function where “a numerical parameter” comes from the statistical analysis of a set of data… then to make the dataset dynamic (i.e. so that a subset of the data can be removed and the parameter re-calculated) I would need to tag the dataset, i.e. not make it anonymous… (a minimal sketch of this follows below)

    Would gladly discuss… 🙂
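    A minimal sketch of that last point, with hypothetical names throughout: a parameter distilled from a dataset can be re-calculated after removing one contributor’s records only if every record carries a tag identifying its contributor, i.e. only if the dataset is not anonymous.

        # Hypothetical example: re-calculating a derived parameter after erasure.
        import statistics

        # Each record: (contributor_tag, value). The tag is what breaks anonymity.
        records = [("alice", 4.1), ("bob", 3.9), ("alice", 4.3), ("carol", 5.0)]

        def parameter(recs):
            """The 'numerical parameter' distilled from the data: here, a plain mean."""
            return statistics.mean(v for _, v in recs)

        print(parameter(records))                                # with everyone
        without_alice = [r for r in records if r[0] != "alice"]  # erasure request
        print(parameter(without_alice))                          # re-calculated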
