Maybe we ought to reintroduce (some form of) prior checking by privacy authorities

In an early morning chat at the meeting of the HLEG-AI (the High-Level Expert Group on AI of the European Commission), some colleagues reported that their work was hindered by excessively strict interpretations of the GDPR by their organizations' legal counsels.

The fact is that many uses of AI involve personal information, and that the incentives of legal counsels are misaligned with those of research and development groups. That misalignment is generally fine, but it tends to make counsels excessively prudent and, quite often, obstructive (and they usually hold near-veto power).

Legal counsels would benefit from a (quasi-)discharge of responsibility.

Under the regulation that preceded the GDPR (Directive 95/46/EC), one could turn to the privacy regulator and request a prior checking before starting data processing. Regulators provided guidance, often helping the organization shape its processes and technical details.

Prior checking no longer exists (it was replaced by the organization's own data protection impact assessment), and while the whole process is conceptually simpler, it can effectively raise barriers due to (reasonable) caution on the part of the organization's legal counsels.

So, to ease a blocking friction in the development of AI systems in Europe, perhaps we ought to consider some form of prior checking by, or with, privacy regulators.

If you like this post, please consider sharing it.
