We need “redress by design” for AI systems

This is a point I raised today at the HLEG on AI of the EC (the High-Level Expert Group on AI of the European Commission).

Just as we have the principle of “privacy by design” for systems that manage personal data, we should have a principle of “redress by design” for AI-based systems that take decisions affecting people’s lives.

The basic consideration is that a perfectly functioning AI system will make wrong decisions. For example, a person could be denied a service because of a decision by an AI system.

Such a system is not deterministic in the way that, for example, a speed camera can be: if you exceed the speed limit with your car, the camera detects it and you get a ticket. You can appeal the ticket, but you are guilty until proven innocent, because a perfectly working (properly configured, certified and audited) deterministic system “decides” that you are guilty.

A perfectly working AI system, by contrast, is a statistical engine that necessarily produces probabilistic results: its decision might be right 98% of the time and wrong 2% of the time (it would be inappropriate to classify these wrong decisions as mistakes). In that 2% of cases, the person is determined to be guilty even when she is not (or cannot obtain a service even though she has every right to obtain it).

For the person, a wrong decision can generate spillovers that exceed the scope of the decision itself, for example social reproach, negative feedback online and other consequences that may spread across the internet and become impossible to remove.

In these wrong cases (which can be false positives or false negatives), an appeal procedure may not exist or, if it exists, it may be ineffective, excessively costly, inaccessible to some, too slow, or unable to rectify the spillovers mentioned above.

Redress by design is the idea of establishing, from the design phase, mechanisms such as redundancy, alternative systems and alternative procedures, so as to be able to effectively detect, audit and rectify the wrong decisions taken by a perfectly functioning system and, if possible, improve the system itself.

As an example of where a redress-by-design implementation is needed, consider the recent EU Copyright Directive: an AI system will decide whether a piece of content is presumably legitimate or in violation of someone’s copyright.

A perfectly working filtering system will still wrongly block the publication of some content. Although the overall number of false positives may be small, for the persons affected the damage to their right to freedom of speech will be total.

The Directive gives very few details about the appeal procedure, and it is likely that member states will implement it in ways that (willingly or unwillingly) inhibit actual appeals. We are therefore not going to have an effective way to correct wrong decisions quickly and free of charge and, what is more, we will not gain an accurate view of how precise the systems really are.

A redress-by-design alternative, for example, could have been to grant uploading users the possibility to immediately oppose a decision to block the publication of their content, forcing its publication once the user provides an indirect verification mechanism usable by a court (much like phone-number verification by a messaging app or a wifi splash page).
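To make the idea more concrete, here is a minimal sketch of such an opposition flow. Everything in it is hypothetical: the names (UploadDecision, oppose_block) and the use of a verified phone number as the court-usable verification token are illustrative assumptions, not a description of any real platform.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Status(Enum):
    BLOCKED_BY_FILTER = auto()           # the AI filter flagged the upload as infringing
    PUBLISHED_UNDER_OPPOSITION = auto()  # the uploader opposed the block and was verified
    REFERRED_TO_COURT = auto()           # a rights holder escalates; the token identifies the uploader


@dataclass
class UploadDecision:
    content_id: str
    status: Status
    verification_token: Optional[str] = None  # e.g. a reference to a verified phone number


def oppose_block(decision: UploadDecision, verification_token: str) -> UploadDecision:
    """The uploader opposes the filter's decision: once a court-usable
    verification token is recorded, publication is forced."""
    if decision.status is Status.BLOCKED_BY_FILTER:
        decision.verification_token = verification_token
        decision.status = Status.PUBLISHED_UNDER_OPPOSITION
    return decision
```

The point of the sketch is simply that the burden shifts: the content goes back online immediately, while the verification token keeps the uploader accountable before a court if the block turns out to have been justified.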

Update

I believe this may apply not only to remedying incorrect predictions, but also to increasing resilience to adversarial attacks.

Think of a system composed of two or more neural networks trained with different algorithms and training sets, each delivering similar precision and recall. It would be hard to produce adversarial examples that cause all of the networks to produce incorrect predictions for the same input.

That would increase resilience.
We could have a procedure in place whereby, when there is no unanimity among the n networks, or there is a significant difference in the confidence of their predictions, a decision by a human is required.

Obviously this would not always lead to correct predictions, but the number of incorrect predictions would be reduced.
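As a minimal sketch of the “no unanimity or large confidence gap → human decision” rule, assuming n classifiers that each return a label and a confidence (the threshold value and function names are illustrative assumptions):

```python
from typing import Callable, List, Tuple

Prediction = Tuple[str, float]  # (label, confidence in [0, 1])


def decide(x: object,
           models: List[Callable[[object], Prediction]],
           max_confidence_gap: float = 0.2) -> str:
    """Accept the ensemble's prediction only when the networks are unanimous
    and their confidences are close; otherwise escalate to a human."""
    predictions = [model(x) for model in models]
    labels = {label for label, _ in predictions}
    confidences = [confidence for _, confidence in predictions]

    unanimous = len(labels) == 1
    similar_confidence = max(confidences) - min(confidences) <= max_confidence_gap

    if unanimous and similar_confidence:
        return predictions[0][0]                   # all networks agree: accept the prediction
    return request_human_decision(x, predictions)  # disagreement: a person decides


def request_human_decision(x: object, predictions: List[Prediction]) -> str:
    # Placeholder: in a real system this would queue the case for human review.
    raise NotImplementedError("route the case to a human reviewer")
```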

Furthermore, to fill the gap, a person who thinks they are the victim of an incorrect prediction could also be provided with a radically different procedure (addressing a correlated but different issue) in order to obtain a remedy expeditiously (as in the example of the Copyright Directive above).

If you like this post, please consider sharing it.
