Balancing enforcement and civil rights (Some ideas on anonymity protection and redress by design)

[draft]

We recently had a debate in Italy about anonymity on social networks and how possible, difficult, or impossible it is to prosecute crimes committed online.

The whole debate started when a politician proposed requiring users to register on social networks with their personal IDs. This is a recurrent proposal: in every legislature since at least the early 2000s, some member of Parliament has made a similar proposal, a public debate has followed (nowadays on Twitter), and everything has vanished after a couple of days.

The proposal is both technically impractical and politically unsettling, since it exposes people who comment to possible retaliation for their opinions, producing a chilling effect on free speech. On the other hand, there is already a chilling effect caused by the huge number of bot accounts that harass online conversations and by reputation-damaging lies (let alone other crimes, such as sexual abuse or hate speech).

IMHO, the discussion fails to grasp an important point: we are _already_ reacting by putting political pressure and legal obligations on intermediaries to police online speech. (I wrote about this in Parole e Potere (Words and Power), a book co-written with Oreste Pollicino, full professor of Constitutional Law at Bocconi University, and Giovanni Pitruzzella, Advocate General at the European Court of Justice.)

Facebook has recently reported data about its policing activity. The figures are staggering, amounting to many billions of interventions in 2019.

And they do it mostly proactively (i.e. not upon notification from users: they filter in near real time, upon content upload). For example:

- "Child & nudity": 99.5% of removals were proactive, and just 0.03% of contents were reinstated after a remediation request by users;
- "Violence and graphics": 98.9% proactive removals, 0.007% redress;
- "Hate speech": 67.8% proactive removals, 0.05% redress;
- Fake accounts: 5.4Bn removals, 99.8% proactive, no data on redress.

Perhaps Twitter is not doing as much as Facebook, maybe because of the level of investment required, but the “effectiveness” of Facebook’s actions is driving up pressure on everyone, and Twitter will eventually end up with a similar approach.

De facto, there are already private companies that, using Artificial Intelligence, decide who gets to see what and who can say what, with very limited (or almost non-existent) possibilities of redress, transparency and accountability.

This _already_ is the case.

And censorship will extend over time as Artificial Intelligence proliferates.

There are two measures we should look at when working with probabilistic systems like machine learning: precision (of all those identified as wrongdoers, how many really were wrongdoers?) and recall (of all the actual wrongdoers, how many were identified?).
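To make the two measures concrete, here is a minimal sketch in Python (the function names and the numbers are mine, purely for illustration):

```python
# "Positive" here means "flagged as a wrongdoer" by the moderation system.

def precision(true_positives: int, false_positives: int) -> float:
    """Of all those flagged, how many really were wrongdoers?"""
    return true_positives / (true_positives + false_positives)

def recall(true_positives: int, false_negatives: int) -> float:
    """Of all the actual wrongdoers, how many were flagged?"""
    return true_positives / (true_positives + false_negatives)

# Made-up numbers: a filter that flags aggressively can reach high recall
# while precision drops, i.e. more innocent users get censored.
print(precision(true_positives=980, false_positives=120))  # ~0.89
print(recall(true_positives=980, false_negatives=20))      # 0.98
```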

As far as a policy maker is concerned, AI “works”, because recall can be very high. Like it or not (I don’t), this will lead to growing demands for content policing by online intermediaries.

But we should care about precision, avoiding false positives to the maximum extent. For the specific person who has been denied one of her rights because of an incorrect prediction by a non-defective system, the damage is unitary: a 100% violation of her rights. This is why, IMHO, we should be really concerned about redress, combined with a protection of anonymity, to preserve users’ rights and prevent chilling effects.

I think the censorship power should not reside, unmonitored, in private hands.

Borrowing some words from the United States v. Columbia Steel antitrust decision of 1948 (we are not talking about steel, but about the control of communication):

“We have here the problem of bigness.

Its lesson should by now have been burned into our memory by Brandeis. The Curse of Bigness shows how size can become a menace — both industrial and social.

It can be an industrial menace because it creates gross inequalities against existing or putative competitors.

In final analysis, size in steel is the measure of the power of a handful of men over our economy.

That power can be utilized with lightning speed. It can be benign, or it can be dangerous.

The philosophy of the Sherman Act is that it should not exist.

For all power tends to develop into a government in itself. Power that controls the economy should be in the hands of elected representatives of the people, not in the hands of an industrial oligarchy.

Industrial power should be decentralized.

It should be scattered into many hands, so that the fortunes of the people will not be dependent on the whim or caprice, the political prejudices, the emotional stability of a few self-appointed men.

The fact that they are not vicious men, but respectable and social-minded, is irrelevant.

That is the philosophy and the command of the Sherman Act. It is founded on a theory of hostility to the concentration in private hands of power so great that only a government of the people should have it.”

In this case, though, the power should not reside in elected officials, but in the justice system.

This is why, in the past legislature in Italy, I proposed a bill (in Italian) on protected anonymity (as defined in a resolution that I proposed and that was approved unanimously, based on the final Declaration of Internet Rights [Art. 10] by a Chamber of Deputies study commission), and why I am pushing the concept of redress by design, which has also been included in the policy and investment recommendations to the European Commission by the High-Level Expert Group on Artificial Intelligence (AI HLEG), of which I am a member.

I have already written about the idea of redress by design on this blog. The way it was worded in the AI HLEG Policy Recommendations 2019 is: “establishing – from the design phase – mechanisms to ensure alternative systems and procedures with an adequate level of human oversight (human in the loop, on the loop or in command approach) to be able to effectively detect, audit, and rectify incorrect decisions taken by a “perfectly” functioning system, for those situations where the AI system’s decisions significantly affects individuals.”

The AI HLEG Guidelines explicitly state: “Oversight may be achieved through governance mechanisms such as a human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-in-command (HIC) approach. This can include the decision not to use an AI system in a particular situation, to establish levels of human discretion during the use of the system, or to ensure the ability to override a decision made by a system. Moreover, it must be ensured that public enforcers have the ability to exercise oversight in line with their mandate.”
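To illustrate the three modes (this is my own minimal sketch, not something taken from the Guidelines; all the names are hypothetical), they could map onto a moderation pipeline roughly like this:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Oversight(Enum):
    HITL = auto()  # human-in-the-loop: a human confirms every action
    HOTL = auto()  # human-on-the-loop: the AI acts, humans audit and can revert
    HIC = auto()   # human-in-command: humans can switch the AI system off entirely

@dataclass
class Decision:
    content_id: str
    remove: bool  # the AI's proposed action

def route(decision: Decision, mode: Oversight, ai_enabled: bool = True) -> str:
    """Route an AI moderation decision according to the oversight mode."""
    if mode is Oversight.HIC and not ai_enabled:
        return "queued-for-human"        # system switched off: humans decide everything
    if mode is Oversight.HITL:
        return "pending-human-approval"  # nothing is enforced without human sign-off
    # HOTL (or HIC with the system left on): the AI acts immediately,
    # but every action remains auditable and reversible by a human.
    return "removed-pending-audit" if decision.remove else "kept-pending-audit"

print(route(Decision("post-42", remove=True), Oversight.HOTL))  # removed-pending-audit
```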

My proposal of protected anonymity, in conjunction with a redress request against an invalid censorship decision, is aimed at having courts take the final decision, with all the guarantees of due process. This can be achieved, through a procedure triggered by the user, by associating the specific censored content (not all of the user’s online activity) with a token that a court can reconcile with the user’s identity, thanks to the cooperation of a trusted third party (anywhere in Europe, relying on the existing eIDAS trust framework). That third party would have the obligation to store the association offline, in order to prevent privacy violations, and to permanently delete it after a specific amount of time (in line with the expiration terms for the alleged violation’s prosecutability).
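A minimal sketch of how such a scheme could work, under stated assumptions: TrustedThirdParty, escrow and reconcile are illustrative names, not an existing API, and a real deployment would build on eIDAS-qualified trust service providers with genuinely offline storage:

```python
import secrets
import time

RETENTION_SECONDS = 5 * 365 * 24 * 3600  # placeholder for the prosecutability term

class TrustedThirdParty:
    """Holds the token -> identity association offline, deleting it on expiry."""

    def __init__(self):
        self._offline_store = {}  # stand-in for an air-gapped archive

    def escrow(self, user_identity: str) -> str:
        """Issue an opaque token; without this store, the token reveals nothing."""
        token = secrets.token_urlsafe(32)
        self._offline_store[token] = (user_identity, time.time())
        return token

    def reconcile(self, token: str, court_order: bool) -> str | None:
        """Reveal the identity behind a token, but only under a court order."""
        if not court_order:
            raise PermissionError("identity disclosure requires a court order")
        record = self._offline_store.get(token)
        if record is None:
            return None
        identity, stored_at = record
        if time.time() - stored_at > RETENTION_SECONDS:
            del self._offline_store[token]  # expired: permanently deleted
            return None
        return identity

# The platform keeps only the token next to the censored content, so the user
# stays anonymous to the platform, while a court can still reconcile that one
# piece of content (not the user's whole activity) with an identity.
ttp = TrustedThirdParty()
token = ttp.escrow("citizen-id-from-eIDAS-login")
print(ttp.reconcile(token, court_order=True))  # -> "citizen-id-from-eIDAS-login"
```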

It may sound complicated, but it can be made as simple as logging into a wifi network. IMHO it relieves the intermediary of potential liability, it ensures that all of a citizen’s safeguards are in place, it increases the reliability of the system by relying on an existing Europe-wide network of fiduciaries, it dramatically mitigates the risk of unjust decisions and, last but not least, it can create a service industry for European companies that is coherent with European values.

If you like this post, please consider sharing it.
