The European Telecommunications Standards Institute (ETSI) has introduced a new technical specification designed to bolster the cybersecurity of Artificial Intelligence (AI) systems in response to the rising tide of digital threats. The document, titled 'ETSI TS 104 223 - Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems,' outlines a comprehensive set of requirements to safeguard end users and offer practical guidance for AI security.
The specification adopts a lifecycle approach to AI security, structured around 13 core principles that expand into 72 trackable requirements spanning five distinct lifecycle phases. This framework aims to elevate security practices across all participants involved in AI system development and deployment, including developers, vendors, integrators, and operators.
Among the advantages of this approach is the establishment of transparent, high-level security principles coupled with tangible measures to protect AI systems. The requirements are intended to serve as a foundational defence mechanism in the face of rapidly evolving cyber threats targeting AI.
ETSI highlights that AI technology presents security challenges distinct from those associated with traditional software. Specific risks addressed include data poisoning, model obfuscation, indirect prompt injection, and complex data management issues. In response, the specification blends established cybersecurity methodologies with contemporary AI security research and newly developed guidance tailored to these unique threats.
The development of the specification was undertaken by ETSI's Technical Committee on Securing Artificial Intelligence (SAI), comprising representatives from international organisations, government bodies, and cybersecurity experts. ETSI emphasises that this interdisciplinary and collaborative process has produced globally pertinent requirements suitable for deployment in a variety of contexts.
Alongside the primary specification document, ETSI has committed to publishing an implementation guide to assist Small and Medium-sized Enterprises (SMEs) and other stakeholders. This supporting guide will include case studies from diverse deployment environments to facilitate adherence to the baseline security standards specified in TS 104 223.
Scott Cadzow, Chair of ETSI's Technical Committee for Securing Artificial Intelligence, said in an interview with SecurityBrief UK: "In an era where cyber threats are growing in both volume and sophistication and negatively impacting organisations of every kind, it is vital that the design, development, deployment, and operation and maintenance of AI models is protected from malicious and unwanted inference. Security must be a core requirement, not just in the development phase, but throughout the lifecycle of the system. This new specification will help do just that – not only in Europe, but around the world."
He further noted, "This publication is a global first in setting a clear baseline for securing AI and sets TC SAI on the path to giving trust in the security of AI for all its stakeholders."
ETSI's initiative seeks to raise AI security standards internationally while providing accessible and actionable guidance to organisations of varying sizes. The new specification and forthcoming implementation guide aim to act as reference points for the AI industry amid ongoing concerns about digital safety and trust.
Source: Noah Wire Services