US and Great Britain Forge AI Safety Pact

The U.S. and U.K. have pledged to work together on safe AI development.

The agreement, inked on Monday (April 1) by U.S. Commerce Secretary Gina Raimondo and U.K. Technology Secretary Michelle Donelan, will see the AI Safety Institutes of both countries collaborate on tests for the most advanced artificial intelligence (AI) models.

“The partnership will take effect immediately and is intended to allow both organizations to work seamlessly with one another,” the Department of Commerce said in a news release.

“AI continues to develop rapidly, and both governments recognize the need to act now to ensure a shared approach to AI safety which can keep pace with the technology’s emerging risks.”

In addition, the two countries agreed to forge similar partnerships with other countries to foster AI safety around the world. The institutes also plan to conduct at least one joint test on a publicly accessible model and to “tap into a collective pool of expertise by exploring personnel exchanges” between both organizations.

The agreement comes days after the White House unveiled a policy requiring federal agencies to identify and mitigate the potential risks of AI and to designate a chief AI officer.

Agencies must also create detailed and publicly accessible inventories of their AI systems. These inventories will highlight use cases that could potentially impact safety or civil rights, such as AI-powered healthcare or law enforcement decision-making.

Speaking to PYMNTS following this announcement, Jennifer Gill, vice president of product marketing at Skyhawk Security, stressed the need for the policy to require uniform standards across all agencies.

“If each chief AI officer manages and monitors the use of AI at their discretion for each agency, there will be inconsistencies, which leads to gaps, which leads to vulnerabilities,” said Gill, whose company specializes in AI integrations for cloud security.

“These vulnerabilities in AI can be exploited for a number of nefarious uses. Any inconsistency in the management and monitoring of AI use puts the federal government as a whole at risk.”

This year also saw the National Institute of Standards and Technology (NIST) launch the Artificial Intelligence Safety Institute Consortium (AISIC), which is designed to promote collaboration between industry and government to foster safe AI use.

“To unlock AI’s full potential, we need to ensure there is trust in the technology,” Mastercard CEO Michael Miebach said at the time of the launch. “That starts with a common set of meaningful standards that protects users and sparks inclusive innovation.”

Mastercard is among the more than 200 members of the group, composed of tech giants such as Amazon, Meta, Google and Microsoft, schools like Princeton and Georgia Tech, and a variety of research groups.