I was looking at this poster again:
It was 1990.
In 1987, IJCAI (the International Joint Conference on Artificial Intelligence) had been held in Milan.
I thought this poster was from IJCAI, but it was actually from Computational Intelligence 1990.
Tutorial number 6 is worth noting.
Then I noticed something that seems incredible: it was September 1990, these were all high-level international technologists in a highly innovative field (AI), and the poster listed only one email address (mine). And, as a note within the note, there were no top-level domains.
A geological era ago.
Remember "Tesla's Autopilot reduces crashes by 40%"? Well, that is an unsubstantiated claim.
In a nutshell, it says that the data behind the claim were inconsistent and insufficient and that, at most, there would be a 13% reduction in insurance claims for accident damage, but…
had no statistically significant effect on other types of insurance claims, including property damage liability, bodily injury liability, claims under medical payment coverage, or personal injury protection claims
and that, therefore, nothing can be stated with certainty.
“hardware breaks, software is born broken” is a maxim coined over dinner by my friend Mauro, the one who, when you ask him “what is your favorite programming language?”, answers “the soldering iron”.
the issue was fixed 8 months after it was discovered.
I dare not think what would have happened if similar news had concerned a traditional financial intermediary.
the idea that open source software is inspectable and therefore secure is a fallacy.
it can be _more_ secure, but software is insecure. period. even if many competent people look at it.
security does not exist.
as my friend Gigi always says (people who work in security know exactly who I am talking about), security is an emotion. you are secure when you feel secure. because problems can always happen, and risk can be mitigated but not eliminated.
this is why the highest level of security is achieved with technologies, processes and people (with values).
both for the moments when things go well and for those when they do not. shit happens. period.
the absence of any one of these factors lowers the overall level of security.
you just need to be aware of that when you give one of them up.
faith-based approaches are not appropriate.
imagine if telecom did that…
I have always been skeptical about gFiber.
I have always thought it was mostly regulatory marketing.
now I think it is a sign of pressure on advertising revenues.
A paper just published by the University of California in Transport Policy says:
Autonomous vehicles (AVs) have no need to park close to their destination, or even to park at all. Instead, AVs can seek out free on-street parking, return home, or cruise (circle around). Because cruising is less costly at lower speeds, a game theoretic framework shows that AVs also have the incentive to implicitly coordinate with each other in order to generate congestion. Using a traffic microsimulation model and data from downtown San Francisco, this paper suggests that AVs could more than double vehicle travel to, from and within dense, urban cores. New vehicle trips are generated by a 90% reduction in effective parking costs, while existing trips become longer because of driving to more distant parking spaces and cruising….
It confirms what I wrote six-plus months ago:
…The more congested the city, the lower the commercial speed and the greater the number of hours the car can drive for the same cost. If the commercial speed dropped to 8 mi/hr (on heavy-traffic days the average speed is 9 mi/hr), one hour of parking costs as much as the gasoline for 37 hours of driving.
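As a back-of-the-envelope check of that "37 hours" figure, here is a minimal sketch; the parking rate, fuel economy and gas price below are illustrative assumptions of mine, not numbers from the original post:

```python
# Back-of-the-envelope comparison: one hour of parking vs. the hours of cruising
# you could buy with the same money. All figures are illustrative assumptions.

parking_per_hour = 30.0   # USD per hour of downtown parking (assumed)
gas_price = 3.0           # USD per gallon (assumed)
fuel_economy = 30.0       # miles per gallon (assumed)
cruising_speed = 8.0      # miles per hour, the congested-city speed from the post

# Fuel cost of one hour of low-speed cruising
fuel_cost_per_hour = cruising_speed / fuel_economy * gas_price   # = 0.80 USD/hour

# Hours of cruising that cost as much as one hour of parking
hours_of_cruising = parking_per_hour / fuel_cost_per_hour        # ~ 37.5 hours

print(f"One hour of parking buys about {hours_of_cruising:.0f} hours of cruising")
```

And since cruising gets cheaper as speed drops, congestion itself makes circling around even more attractive than parking, which is exactly the feedback loop the paper describes.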
You can imagine the scenario of a paralyzed city, flooded with cars.
If you were the mayor, what would you do?
You cannot raise the price of gasoline.
You cannot cut the cost of (private) parking by more than 90%.
You could issue an ordinance banning dedicated cars and allowing only shared ones. Something that could already be done today. But there would be a revolt; it would hardly pass.
You could think of a congestion charge, but that hits those who enter the restricted zone, not those who circle around inside it.
You could ban cars with no passenger on board, which is how things are today. It would not have much impact on traffic, but there would still be one benefit: you would not need to get a driver's license (maybe).
In an early morning chat at the meeting of the HLEG-AI (the High-Level Expert Group on AI of the European Commission), some colleagues were reporting that their work was hindered by excessively strict interpretations of the GDPR by their organizations' legal counsels.
The fact is that many uses of AI involve personal information and that legal counsels' incentives are misaligned with those of research & development groups (and that is generally OK), but this tends to make them excessively prudent and, quite often, blocking (and they usually have near-veto power).
Legal counsels would benefit from a (quasi) discharge of responsibility.
Under the regulation that preceded the GDPR, one could turn to the privacy regulator and request a prior check before starting data processing. Regulators gave guidance, often helping the organization shape its processes and technical details.
Prior checks no longer exist (they were replaced by the organization's own impact assessment) and, while the whole process is conceptually easier, it may effectively raise some barriers due to (reasonable) caution on the part of the organization's legal counsels.
So, in order to ease a blocking friction in the development of AI systems in Europe, perhaps we ought to think of some form of prior checking by/with privacy regulators.
a modest proposal in two (plus one) provisions:
- limit the number of friends a non-paying user can have: let's say 100. do you want more friends? you have to pay something: paying not only introduces friction but also implies the verifiability of the identity of influencers in case of a suspected crime. multiple (quasi) identical posts by different accounts (e.g. bots) would be easily spotted.
- limit the depth of post re-sharing: let's say friends of friends. This would limit the reach of a non-paying user to a maximum of about 10,000 people (see the sketch after this list).
- (possibly) introduce latency in the visibility of re-shared posts. Let's say 8 hours. This would allow up to 16 hours to react to a fake news post (content moderation).
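For what it's worth, here is a minimal sketch of where the ~10,000 figure in the second provision comes from, assuming every account has the maximum 100 friends and that friend lists do not overlap:

```python
# Worst-case reach of a non-paying user under the proposed caps.
# Illustrative assumptions: every account has the maximum number of friends
# and friend lists do not overlap.

max_friends = 100     # cap on friends for a non-paying user
reshare_depth = 2     # re-sharing limited to friends of friends

# Worst-case audience: direct friends + friends of friends
reach = sum(max_friends ** level for level in range(1, reshare_depth + 1))

print(reach)  # 10100, i.e. on the order of 10,000 people
```

In practice, overlapping friend lists would make the real reach smaller, so 10,000 is an upper bound.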
the above proposal would not negatively impact social network revenues (nor costs) and would not imply any editorial responsibility for the social network.
just came to my mind.
I’d like to read some comments!
What’s the difference between a Robot vacuum cleaner and a traditional vacuum cleaner ?
When you operate a traditional vacuum cleaner in your house, you are the user.
When your Robot vacuum cleaner operates in your house, you are the beneficiary; you are no longer the “user” who “uses” (or operates) the device. You benefit from its activity.
It is no longer a product you use to clean your house; cleaning the house becomes a service you benefit from, provided by a “living product”.
You own the tool, but you are no longer its “user”. You are just the “beneficiary”.
Now think of self driving cars.
You will own the car, but you’re not going to be its “user” driving it, just the passenger, the beneficiary of the service.
We know that with traditional cars, the people who use the tool (car users, aka drivers) sometimes use it poorly and cause accidents with casualties. There is a general consensus that, by no longer having humans use the tool (drive), the number of casualties will dramatically decrease.
Self-driving cars are based on statistical AI systems. We know by design that, even if they have no defects, they are going to have accidents and will eventually cause some casualties.
But those casualties will not be attributable to a poor usage of the tool, as there are no users involved.
If a “perfect” tool is used by a human who makes a mistake, causing the service outcome to fail, there is no liability for the manufacturer. For non-AI systems, product liability is associated with a malfunction of the device while the user is using it correctly.
But, with AI systems, there are no users, just beneficiaries of a service.
We know that, on some occasions, the service provided by a perfect tool is, by design, going to fail (given the statistical nature of AI).
Civil liability can easily be covered by insurance.
But liability is not only civil but also criminal.
Based on the existing legal frameworks in various states, in the case of mens rea offenses, negligence-based offenses, and strict liability offenses, criminal laws do not limit the liability of the humans (CEOs, COOs, programmers, users, etc.) who may have been involved in a crime perpetrated by an AI system, given its known non-deterministic behaviour.
When a casualty arises from the action of an AI system, many humans are involved, and the resulting criminal liability could potentially deter the development of an otherwise beneficial technology.
Consider, for example, a self-driving car that reduces driving casualties by 95%; this should be considered a major human success, but the remaining 5% of casualties would entail criminal responsibility for a number of humans connected with the product.
It is a situation somewhat similar to drug development, where the overall effects of a drug are extremely beneficial to society although it is known beforehand that some adverse events might happen.
Persons involved in the drug development face no criminal responsibility if all prescribed procedures are followed in a correct manner, as monitored by appropriate regulatory bodies.
A similar approach to criminal liability could be developed for AI, in order not to hinder applications that can significantly benefit society even though it is known that they might have unwanted consequences in a (comparatively) limited number of cases, fewer than in the pre-existing situation.
This is the basis of the bill proposal I introduced when I was in the Italian parliament in the last legislature.
the AI Now 2018 Report contends with this central problem and addresses the following key issues:
- The growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected
- The use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression
- Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
- Unregulated and unmonitored forms of AI experimentation on human populations
- The limits of technological solutions to problems of fairness, bias, and discrimination
Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks.
- Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain. The implementation of AI systems is expanding rapidly, without adequate governance, oversight, or accountability regimes. Domains like health, education, criminal justice, and welfare all have their own histories, regulatory frameworks, and hazards. However, a national AI safety body or general AI standards and certification model will struggle to meet the sectoral expertise requirements needed for nuanced regulation. We need a sector-specific approach that does not prioritize the technology, but focuses on its application within a given domain. Useful examples of sector-specific approaches include the United States Federal Aviation Administration and the National Highway Traffic Safety Administration.
- Facial recognition and affect recognition need stringent regulation to protect the public interest. Such regulation should include national laws that require strong oversight, clear limitations, and public transparency. Communities should have the right to reject the application of these technologies in both public and private contexts. Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance. Affect recognition deserves particular attention. Affect recognition is a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and “worker engagement” based on images or video of faces. These claims are not backed by robust scientific evidence, and are being applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology and physiognomy. Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level.
- The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems. Government regulation is an important component, but leading companies in the AI industry also need internal accountability structures that go beyond ethics guidelines. This should include rank-and-file employee representation on the board of directors, external ethics advisory boards, and the implementation of independent monitoring and transparency efforts. Third party experts should be able to audit and publish about key systems, and companies need to ensure that their AI infrastructures can be understood from “nose to tail,” including their ultimate application and use.
- AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector. Vendors and developers who create AI and automated decision systems for use in government should agree to waive any trade secrecy or other legal claim that inhibits full auditing and understanding of their software. Corporate secrecy laws are a barrier to due process: they contribute to the “black box effect” rendering systems opaque and unaccountable, making it hard to assess bias, contest decisions, or remedy errors. Anyone procuring these technologies for use in the public sector should demand that vendors waive these claims before entering into any agreements.
- Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers. Organizing and resistance by technology workers has emerged as a force for accountability and ethical decision making. Technology companies need to protect workers’ ability to organize, whistleblow, and make ethical choices about what projects they work on. This should include clear policies accommodating and protecting conscientious objectors, ensuring workers the right to know what they are working on, and the ability to abstain from such work without retaliation or retribution. Workers raising ethical concerns must also be protected, as should whistleblowing in the public interest.
- Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services. The hype around AI is only growing, leading to widening gaps between marketing promises and actual product performance. With these gaps come increasing risks to both individuals and commercial customers, often with grave consequences. Much like other products and services that have the potential to seriously impact or exploit populations, AI vendors should be held to high standards for what they can promise, especially when the scientific evidence to back these promises is inadequate and the longer-term consequences are unknown.
- Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces. Technology companies and the AI field as a whole have focused on the “pipeline model,” looking to train and hire more diverse employees. While this is important, it overlooks what happens once people are hired into workplaces that exclude, harass, or systemically undervalue people on the basis of gender, race, sexuality, or disability. Companies need to examine the deeper issues in their workplaces, and the relationship between exclusionary cultures and the products they build, which can produce tools that perpetuate bias and discrimination. This change in focus needs to be accompanied by practical action, including a commitment to end pay and opportunity inequity, along with transparency measures about hiring and retention.
- Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.” For meaningful accountability, we need to better understand and track the component parts of an AI system and the full supply chain on which it relies: that means accounting for the origins and use of training data, test data, models, application program interfaces (APIs), and other infrastructural components over a product life cycle. We call this accounting for the “full stack supply chain” of AI systems, and it is a necessary condition for a more responsible form of auditing. The full stack supply chain also includes understanding the true environmental and labor costs of AI systems. This incorporates energy use, the use of labor in the developing world for content moderation and training data creation, and the reliance on clickworkers to develop and maintain AI systems.
- More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues. The people most at risk of harm from AI systems are often those least able to contest the outcomes. We need increased support for robust mechanisms of legal redress and civic participation. This includes supporting public advocates who represent those cut off from social services due to algorithmic decision making, civil society organizations and labor organizers that support groups that are at risk of job loss and exploitation, and community-based infrastructures that enable public participation.
- University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.