This is a point I raised today at the HLEG on AI of the EC (the High-Level Expert Group on AI of the European Commission).

Just as we have the principle of “privacy by design” for systems managing personal data, we should have a principle of “redress by design” for AI-based systems that take decisions affecting people’s lives.

The basic consideration is that even a perfectly functioning AI system will make wrong decisions. For example, a person could be denied a service because of a decision taken by an AI system.

Such a system is not deterministic in the way that, for example, a speed camera can be: if you exceed the speed limit with your car, the camera detects it and you get a ticket. You can appeal the ticket, but you are guilty until proven innocent, because a perfectly working (properly configured, certified and audited) deterministic system “decides” that you are guilty.

But a perfectly working AI system is a statistical engine that necessarily produces probabilistic results: its decisions might be right 98% of the time and wrong 2% of the time (it would be inappropriate to classify these wrong decisions as mistakes). In that 2% of cases, the person is determined to be guilty even though she is not (or is denied a service to which she is fully entitled).
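
To make the scale of this concrete, here is a minimal sketch in Python; both the 98% accuracy and the decision volume are illustrative assumptions, not figures about any real system:

```python
# Illustrative arithmetic only: even a highly accurate system produces
# many wrong decisions at scale. Both figures below are assumptions.

accuracy = 0.98                  # assumed share of correct decisions
decisions_per_year = 1_000_000   # assumed decision volume

wrong_decisions = decisions_per_year * (1 - accuracy)
print(f"Expected wrong decisions per year: {wrong_decisions:,.0f}")
# -> Expected wrong decisions per year: 20,000
```

Each of those wrong decisions lands on a real person, with full force.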

For the person, the wrong decision can generate spillovers that exceed the scope of the decision itself, for example social reproach, negative feedback online and other consequences that may spread and become impossible to remove.

In these wrong cases (they can be false positives or false negatives), an appeal procedure may not exist or, if it exists, it may be ineffective, its cost may be excessive, it may not be accessible to all, it may take excessive time, or it may not rectify the above-mentioned spillovers.

Redress by design is the idea of establishing, from the design phase, mechanisms that ensure redundancy, alternative systems, alternative procedures, etc., so as to effectively detect, audit and rectify the wrong decisions taken by a perfectly functioning system and, if possible, improve the system.
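
As a rough illustration of what such mechanisms could look like in practice, here is a hypothetical sketch; every name and field in it is an illustrative assumption, not a reference to any existing system:

```python
# Hypothetical sketch of "redress by design": every automated decision
# is logged together with the evidence behind it, carries a built-in,
# free appeal hook routed to an alternative (human) procedure, and
# appeal outcomes give a measurable view of the system's real accuracy.
# All names and fields are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    subject_id: str
    outcome: str            # e.g. "service_denied"
    confidence: float       # the model's own probability estimate
    evidence: dict          # inputs retained for later audit
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    appealed: bool = False
    overturned: bool = False

def escalate_to_human_review(decision: Decision) -> None:
    ...  # placeholder for the redundant, non-AI review procedure

class RedressLog:
    def __init__(self) -> None:
        self.decisions: list[Decision] = []

    def record(self, decision: Decision) -> None:
        self.decisions.append(decision)

    def appeal(self, decision: Decision) -> None:
        # The appeal does not re-run the same model: it is routed to
        # an alternative procedure established at design time.
        decision.appealed = True
        escalate_to_human_review(decision)

    def observed_error_rate(self) -> float:
        # Overturned decisions expose the system's actual error rate,
        # which can be fed back to improve the system.
        if not self.decisions:
            return 0.0
        return sum(d.overturned for d in self.decisions) / len(self.decisions)
```

The essential point is not the code itself but that appeal, audit and improvement are part of the design, not bolted on afterwards.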

As an example of where a redress-by-design implementation would matter, consider the recent EU Directive on copyright: an AI system will decide whether an uploaded piece of content is legitimate or infringes someone’s copyright.

A perfectly working filtering system will take wrong decisions and block the publication of some legitimate content. Although the overall number of false positives may be small, for the persons affected the damage to their right to freedom of speech will be total.

The Directive gives very few details about the appeal procedure, and it is likely that member states will implement it in ways that (willingly or unwillingly) inhibit actual appeals. We are therefore not going to have an effective way to correct wrong decisions promptly and free of charge and, worse, we will never gain a precise view of the actual accuracy of these systems.

A redress-by-design alternative, for example, could have been to grant uploading users the possibility to immediately oppose a decision to block publication of their content, forcing publication once the user provides an indirect identity verification usable by a court (pretty much like phone-number verification by a messaging app or a Wi-Fi splash page).
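
A minimal sketch of such a contest flow, assuming a hypothetical filter verdict and an SMS-like verification step; all names are illustrative, not the Directive’s text:

```python
# Hypothetical sketch of the alternative described above. A blocked
# upload is published immediately once the uploader completes an
# indirect, out-of-band verification step, leaving a court-usable
# evidence trail instead of silently suppressing publication.
# All names here are illustrative assumptions.

from dataclasses import dataclass
import secrets

@dataclass
class Upload:
    uploader: str
    data: bytes

# Evidence preserved so that a court can later identify the uploader
# and rule on the actual copyright question.
court_evidence: list[tuple[Upload, str]] = []

def verify_identity_indirectly(uploader: str) -> str:
    # Stand-in for an out-of-band check (SMS code, splash-page login, ...):
    # here we simply bind a random token to the uploader.
    return f"{uploader}:{secrets.token_hex(8)}"

def handle_blocked_upload(upload: Upload, user_contests: bool) -> str:
    """The filter said 'block'; this applies the redress-by-design path."""
    if not user_contests:
        return "rejected"
    token = verify_identity_indirectly(upload.uploader)
    court_evidence.append((upload, token))  # kept for judicial review
    return "published"                      # publication is forced meanwhile
```

Under such a scheme, speech is not suppressed by a probabilistic filter; instead, responsibility is made traceable so that a human judge, not the statistical engine, takes the final decision.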