Negligence and AI’s Human Users by Andrew D. Selbst :: SSRN

Link to the original article

Archive of all clips:
clips.quintarelli.it
(Evernote notebook).

Negligence and AI’s Human Users

Boston University Law Review, Forthcoming

58 Pages

Posted: 2 Apr 2019

Andrew D. Selbst, Data & Society Research Institute; Yale Information Society Project

Date Written: March 11, 2019

Abstract
Negligence law is often asked to adapt to new technologies. So it is with artificial intelligence (AI). But AI is different. Drawing on examples in medicine, financial advice, data security, and driving in semi-autonomous vehicles, this Article argues that AI poses serious challenges for negligence law. By inserting a layer of inscrutable, unintuitive, and statistically derived code between a human decisionmaker and the consequences of that decision, AI disrupts our typical understanding of responsibility for choices gone wrong. The Article argues that AI’s unique nature introduces four complications into negligence: 1) unforeseeability of the specific errors that AI will make; 2) capacity limitations when humans interact with AI; 3) the introduction of AI-specific software vulnerabilities into decisions not previously mediated by software; and 4) distributional concerns based on AI’s statistical nature and potential for bias.

Tort scholars have mostly overlooked these challenges. This is understandable because they have been focused on autonomous robots, especially autonomous vehicles, which can easily kill, maim, or injure people. But this focus has neglected the full range of what AI is. Outside of robots, AI technologies are not autonomous. Rather, they are primarily decision-assistance tools that aim to improve on the inefficiency, arbitrariness, and bias of human decisions. By focusing on a technology that eliminates users, tort scholars have concerned themselves with product liability and innovation, and as a result have missed the implications for negligence law, the governing regime when harm comes from users of AI.

The Article also situates these observations in broader themes of negligence law: the relationship between bounded rationality and foreseeability, the need to update conceptions of reasonableness in light of new technology, and the difficulty of merging statistical facts with individual determinations, such as fault. This analysis suggests that while there might be a way to create systems of regulatory support that allow negligence law to operate as intended, an approach to oversight that is not based on individual fault is likely to be more fruitful.

Keywords: tort, negligence, artificial intelligence, machine learning

Suggested Citation:

Selbst, Andrew D., Negligence and AI’s Human Users (March 11, 2019). Boston University Law Review, Forthcoming. Available at SSRN: ssrn.com/abstract=

If you like this post, please consider sharing it.

1 thought on “Negligence and AI’s Human Users by Andrew D. Selbst :: SSRN”

  1. The introduction to the 58-page article says:
    “As with any new technology, widespread adoption of artificial intelligence (AI) will lead to injuries”

    We have already hurt ourselves with the global-scale adoption of a network of interconnected systems, without first assessing the risks and the harm that failing to prevent those risks would produce.

    The abstract says: “AI disrupts our typical understanding of responsibility for choices gone wrong.”

    Do we need to wait for AI to undermine our ability to behave responsibly in the face of choices that turned out to be wrong?
    Do we wait for academics to tell us how to proceed?
    Or do we start taking responsibility for the mistakes already made, such as believing that achieving interoperability was merely a technical problem?
