Security
Published: 2023-11-30
U.S. Cybersecurity Official Urges Safeguards Against Artificial Intelligence Threats: 'Moving Too Fast'
AI Developers Work with Government to Manage Risks Associated with Evolving Technology
The rapid development of artificial intelligence (AI) poses a potential threat that requires preemptive safeguards, according to a top U.S. official.
Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), emphasized the need to avoid a scenario in which vulnerabilities are shipped in technology products and consumers are left to patch them.
Speaking after discussions with Canada's Centre for Cyber Security, Easterly stated, 'We can't live in that world with AI. It is too powerful, it is moving too fast.'
To address the security concerns surrounding AI, 18 countries, including the United States, have endorsed new guidelines focused on secure design, development, deployment, and maintenance.
The guidelines aim to ensure security considerations are incorporated throughout the entire lifecycle of AI systems.
Sami Khoury, head of Canada's Centre for Cyber Security, echoed this point: 'We have to look at security throughout the lifecycle of that AI capability.'
In a collaborative effort, leading AI developers have agreed to partner with governments to test new frontier models before release, aiming to manage the risks posed by the rapidly evolving technology.
Easterly expressed confidence in the progress made, stating, 'I think we have done as much as we possibly could do at this point in time, to help come together with nations around the world, with technology companies, to set out from a technical perspective how to build these capabilities as securely and safely as possible.'