US, UK, and Global Partners Release Secure AI System Development Guidelines

Nov 27, 2023NewsroomArtificial Intelligence / Privacy

Safe AI System

The UK and US, together with international partners from 16 other countries, have issued new guidelines for the development of safe artificial intelligence (AI) systems.

“The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority,” US Cybersecurity and Infrastructure Security Agency (CISA) SAYS.

The goal is to raise the level of cyber security in AI and help ensure that the technology is designed, developed, and deployed in a safe manner, the National Cyber ​​Security Center (NCSC) added.

Cybersecurity

The guidelines are also established by the US government continues efforts to manage the risks posed by AI by ensuring that new tools are adequately tested before being released to the public, there are guardrails in place to address societal harms, such as prejudice and discrimination, and privacy concerns, and create robust methods for consumers to identify AI-generated material.

The commitments also require companies to commit to speeding up third-party discovery and reporting of vulnerabilities in their AI systems through a bug bounty system so they can be found and corrected quickly.

the latest guidelines “Help developers ensure that cyber security is an important condition of AI system safety and integral to the development process from the beginning and throughout, known as the ‘secure by design’ approach,” said the NCSC.

It includes safe design, safe development, safe deployment, and safe operation and maintenance, which cover all the important areas within the AI ​​system development life cycle, which must organizations model threats to their systems as well as protect their supply chains and infrastructure.

Cybersecurity

The goal, the agencies said, is also to counter attacks by adversaries targeting AI and machine learning (ML) systems that aim to cause unintended behavior in a variety of ways, including affecting the classification of a model, allowing users to perform unauthorized actions, and obtaining sensitive information.

“There are many ways to achieve these effects, such as quick injection attack in the large-scale language modeling (LLM) domain, or intentionally corrupting training data or user feedback (known as ‘data poisoning’),” the NCSC said.

Did you find this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.

Leave a comment