AI: from users to beneficiaries (and some implications for liability).

What’s the difference between a robot vacuum cleaner and a traditional vacuum cleaner?

When you operate a traditional vacuum cleaner in your house, you are the user.

When your robot vacuum cleaner operates in your house, you are the beneficiary; you are no longer the “user” who “uses” (or operates) the device. You benefit from its activity.

It is no longer a product you use to clean your house; cleaning the house becomes a service you benefit from, provided by a “living product”.

You own the tool, but you are no longer its “user”. You are just the “beneficiary”.

Now think of self-driving cars.

You will own the car, but you are not going to be its “user”, driving it; you will just be a passenger, the beneficiary of the service.

We know that with traditional cars, the persons who use the tool (car users, a.k.a. drivers) sometimes use it poorly and cause accidents with casualties. There is a general consensus that, by removing humans from the use of the tool (i.e., from driving), the number of casualties will decrease dramatically.

Self-driving cars are based on statistical AI systems. We know by design that, even if they have no defects, they are going to have accidents and eventually cause some casualties.

But those casualties will not be attributable to poor use of the tool, as there are no users involved.

If a “perfect” tool is used by a human who makes a mistake, causing the service outcome to fail, there is no liability for the manufacturer. For non-AI systems, product liability is associated with a malfunction of the device occurring while the user is operating it correctly.

But, with AI systems, there are no users, just beneficiaries of a service.

We know that, on some occasions, the service provided by a perfect tool is, by design, going to fail (given the statistical nature of AI).
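
To make the statistical point concrete, here is a minimal back-of-envelope sketch in Python; every number in it is an assumption invented for illustration, not real-world data:

```python
# Back-of-envelope sketch: why a defect-free statistical system
# still fails at scale. All figures below are invented assumptions.

decisions_per_mile = 100       # assumed: driving decisions taken per mile
critical_error_rate = 1e-12    # assumed: probability a single decision fails critically
fleet_miles_per_year = 3e12    # assumed: total miles driven by the fleet per year

expected_failures = decisions_per_mile * critical_error_rate * fleet_miles_per_year
print(f"Expected critical failures per year: {expected_failures:.0f}")
# -> 300: small compared with human-caused casualties, but never zero.
```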

Civil liability can easily be covered by insurance.

But liability is not only civil but also criminal.

Under the existing legal frameworks of various states, in the case of mens rea offenses, negligence-based offenses, and strict liability offenses, criminal laws do not limit the liability of the humans (CEOs, COOs, programmers, users, etc.) who may have been involved in a crime perpetrated by an AI system, given its known non-deterministic behaviour.

When a casualty arises from an action of an AI system, many humans are involved, and the resulting criminal liability could deter the development of an otherwise beneficial technology.

Consider, for example, a self-driving car that reduces driving casualties by 95%: this should be considered a great human success, yet the remaining 5% of the original casualties would still entail criminal responsibility for a number of humans connected to the product.
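
A minimal sketch of that arithmetic, using a purely hypothetical baseline figure:

```python
# Hypothetical baseline: 1,000 driving casualties per year with human drivers.
baseline_casualties = 1000
reduction = 0.95  # the assumed 95% reduction from self-driving cars

avoided = baseline_casualties * reduction
remaining = baseline_casualties - avoided

print(f"Casualties avoided per year: {avoided:.0f}")      # 950
print(f"Casualties remaining per year: {remaining:.0f}")  # 50
# Each of the remaining 50 cases could still trigger criminal
# proceedings against engineers, executives, and others.
```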

The situation is somewhat similar to drug development, where the overall effects of a drug are extremely beneficial to society although it is known beforehand that some adverse events might happen.

The persons involved in developing the drug face no criminal responsibility if all prescribed procedures are followed correctly, as monitored by the appropriate regulatory bodies.

A similar approach to criminal liability could be developed for AI, so as not to hinder applications that can significantly benefit society even though it is known that they will have unwanted consequences in a (comparatively) limited number of cases, fewer than in the pre-existing situation.

This is the basis of the bill I introduced during the last legislature, when I was a member of the Italian Parliament.

If you like this post, please consider sharing it.
