Global Agreement on AI: 'Secure by Design'
In a historic move, the United States, the United Kingdom, and a coalition of other nations have established the first detailed international agreement aimed at ensuring artificial intelligence (AI) systems are protected against exploitation by malicious actors. The alliance, which includes over a dozen countries, emphasizes the importance of 'secure by design' AI technologies.
The multinational agreement, captured in a 20-page report, sets out broad recommendations for the deployment of AI. These include continuous monitoring of AI systems for potential abuse, safeguarding data against manipulation, and thorough vetting of software vendors. While the pact is not legally binding, it marks a significant commitment to prioritizing security in AI's rapidly evolving landscape.
Jen Easterly, the Director of the U.S. Cybersecurity and Infrastructure Security Agency, highlighted the importance of this agreement, emphasizing that the focus should not solely be on speed to market or cost reduction but primarily on security during the design phase of AI systems.
The document is the latest in a series of efforts by governments worldwide to steer AI development responsibly. Signatories beyond the U.S. and the UK include Germany, Italy, Australia, and Singapore, among others. Still, the framework steers clear of prescriptive measures regarding AI's suitable applications or the origins of the data fueling these systems, leaving room for further deliberation.
AI's ascent has raised various concerns, ranging from disruption of democratic processes and acceleration of fraud to large-scale job losses. Meanwhile, the European Union is making strides with regulations aimed at AI, with France, Germany, and Italy endorsing mandatory self-regulation for foundational AI models capable of generating diverse outputs.
Regulatory efforts in the U.S. have faced obstacles, with a divided Congress struggling to pass substantial legislation. Despite this, the White House issued an executive order to mitigate AI risks, focusing on consumer protection, workforce impacts, and national security.