US to endorse new OECD principles on artificial intelligence

Link to original article

Archive of all clips:
clips.quintarelli.it
(Evernote notebook).

The group, representing the world’s richest countries, hopes non-binding guidelines will become global standard.

By Janosch Delcker | 5/19/19, 7:00 AM CET
| Updated 5/20/19, 4:43 AM CET

The OECD is set to adopt a list of guidelines for the development and use of artificial intelligence | Ben Stansall/AFP via Getty Images

PARIS — Donald Trump’s administration has finally found an international agreement it can support.
At an annual meeting on Wednesday, the 36 countries in the Organization for Economic Cooperation and Development (OECD) plus a handful of other nations are set to adopt a list of guidelines for the development and use of artificial intelligence.
The agreement, seen by POLITICO, marks the first time that the United States — home to some of the world’s largest and most powerful tech companies — has endorsed international guidelines for the emerging technology.
China, the second global front-runner in the field, is not a member of the OECD.
Over four pages, the agreement lays out a series of broad principles designed to ensure that as AI develops, the technology will benefit humanity rather than harming it, and urges governments to draft policies for such “responsible stewardship of trustworthy AI.”
However, the document sidesteps the question of whether binding rules are needed to regulate the technology — a question that divides policymakers and researchers around the world.
“At this stage … it’s completely premature to know whether and what to regulate when it comes to AI,” Anne Carblanc, the head of the OECD’s digital economy policy division, told POLITICO during an interview at the group’s headquarters in the French capital.
Carblanc, a former judge, said that AI affects too many sectors to be covered by one-size-fits-all rules, and that much of the technology — including questions of accountability and liability — is already covered by existing national regulation as well as by international human rights law.
Rather than being a blueprint for hard global rules, the idea behind the OECD’s principles is to “provide a clear orientation to what are the fundamental values that need to be respected,” she stressed.
U.S. President Donald Trump called for regulating AI earlier this year | Chip Somodevilla/Getty Images
By embracing such principles, countries express their “political commitment” to implementing them, she added — a process that will be monitored and reviewed by her group.
The OECD also hopes that the principles will have an impact beyond their own members.
At this year’s G20 summit in Osaka, Japan, the OECD wants to encourage member countries — which include non-OECD nations such as China — to express support for its principles, in one form or another, according to officials.
Are you a machine?
The guidelines, due to be released on Wednesday, were drafted by a group of 50 experts from industry, government, trade unions and civil society, as well as tech companies.
The final document starts by pledging that AI should be designed to respect the rule of law, human rights and democratic values.
It adds that AI systems should be safe and transparent, that people should know whether or not they’re dealing with a machine, and that those developing or deploying AI should be held accountable for their actions.
The OECD also urges governments to boost public and private investment in AI, set up open datasets for developers and support efforts to share data.
Governments should also review legal frameworks to make it easier to turn research into market-ready applications, for example by creating deregulated environments to test technology, the OECD says.
Research into AI goes back to the 1950s. But only in recent years have a boost in computing power, the emergence of cloud computing and unprecedented masses of data turned it from blue-sky research into technology that powers day-to-day applications.
The technologies offer opportunities, from better treatment of cancer patients to saving energy to tackling climate change, but they also come with significant risks. Most of today’s cutting-edge AI systems, for example, are prone to mirroring biases from the analog world and to discriminating against minorities.
AI also poses unprecedented challenges to privacy, as shown by media reports suggesting that China is using state-of-the-art AI to build an omnipresent surveillance system targeting vulnerable groups.
Against this backdrop, the European Union released detailed guidelines for what it calls “trustworthy” artificial intelligence in March — technology that respects European values and is engineered in a way that prevents it from causing intentional or unintentional harm.
The EU’s push to write guiding principles was watched closely by the administration of U.S. President Donald Trump, who himself called for regulating AI in an executive order in February.
Alarmed that the EU’s sweeping new privacy rules, implemented last year, could soon become a global standard for data protection, U.S. officials reportedly intensified cooperation with the OECD on the international AI guidelines.
“The U.S. was interested in pursuing this,” said the OECD’s Carblanc, who oversaw the development of the principles on the working level. “At the OECD, they’re very present on everything digital, so I believe they thought it was the right place to do something.”
In line with the group’s traditional “soft power” approach of exerting influence through peer pressure, the principles are meant to shape practice by serving as a framework both for national governments drafting legislation and for corporations writing their own guidelines for the development of AI.
There are several past examples that could serve as a precedent, officials say.
In April, for example, the London Metal Exchange announced that by the end of 2022, it would only allow companies to trade goods on its marketplace that comply with the OECD’s guidelines on responsible supply chains for minerals.
Tags:
Artificial Intelligence, Big data, Regulation, Research and Development
