The White House Puts New Guardrails on Government Use of AI

“…Europe regulates” (quote)

Source: Wired

In October, President Biden signed a sweeping executive order on AI that fosters expansion of AI technology by the government while requiring makers of large AI models to give the government information about their activities, in the interest of national security.

The new policy for US government use of AI, announced Thursday, asks agencies to take several steps to prevent unintended consequences of AI deployments. To start, agencies must verify that the AI tools they use do not put Americans at risk. For example, for the Department of Veterans Affairs to use AI in its hospitals, it must verify that the technology does not give racially biased diagnoses. Research has found that AI systems and other algorithms used to inform diagnosis or decide which patients receive care can reinforce historic patterns of discrimination.

If an agency cannot guarantee such safeguards, it must stop using the AI system or justify its continued use. US agencies face a December 1 deadline to comply with these new requirements.

The policy also asks for more transparency about government AI systems, requiring agencies to release government-owned AI models, data, and code, as long as the release of such information does not pose a threat to the public or government. Agencies must publicly report each year how they are using AI, the potential risks the systems pose, and how those risks are being mitigated.

The new rules also require federal agencies to beef up their AI expertise, mandating each to appoint a chief AI officer to oversee all AI used within that agency. It’s a role that focuses on promoting AI innovation while also watching for its dangers.

Continue reading: The White House Puts New Guardrails on Government Use of AI | WIRED
