U.S., Others Release AI Safety Guidelines
The U.S. and 17 other countries have agreed to “a set of guidelines to ensure AI systems are built to ‘function as intended’ without leaking sensitive data to unauthorized users,” The Hill reports.
What’s going on: The 20-page document—unveiled last Sunday and published jointly by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency and the UK National Cyber Security Centre—enumerates recommendations for everything “from AI system design and development to its deployment and maintenance.”
- The agreement discusses threats to AI systems, how to protect AI models and data, and how to release and monitor AI systems responsibly.
- Other signatories include Canada, Australia, Germany, Israel, Nigeria and Poland.
Why it’s important: “This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” said U.S. Cybersecurity and Infrastructure Security Agency Director Jen Easterly.