by Les Bell - Tuesday, 28 November 2023, 10:04 AM

The UK's National Cyber Security Centre, in collaboration with 23 international partners and a number of industry organisations, has released the first version of its Guidelines for Secure AI System Development. The document is aimed primarily at providers of AI systems which use models hosted by an organisation, or which make use of external APIs, and provides advice for each phase of development, from design through development and deployment to operation.

Secure design starts with raising all participants' awareness of the threats and risks facing AI systems, and in particular with training developers in secure coding techniques and in secure and responsible AI practices. It then proceeds through threat modelling, designing for security as well as functionality and performance, and considering the security benefits and trade-offs in the selection of the AI model.

Secure development begins with assessing and monitoring the security of the AI supply chain across the system life cycle, then identifying, tracking and protecting the AI-related assets: the models, data (including user feedback), prompts, software, documentation, logs and assessments. The model itself, along with the data and prompts, should be documented, including security-relevant information such as the sources of training data and any guardrails. It is also important to identify, track and manage technical debt, which can be more challenging in this context than for most software because of rapid development cycles and a lack of well-established protocols and interfaces.
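
To make the asset-tracking point a little more concrete, here is a minimal Python sketch of one common approach: recording cryptographic digests of model weights, datasets and prompt templates in a manifest, so that later tampering or drift can be detected. The file paths and manifest format are my own illustrative assumptions; the Guidelines themselves don't prescribe any particular mechanism.

    # Illustrative only: hash AI-related assets (model weights, data, prompts) into a
    # simple manifest so their integrity can be checked later. Paths are hypothetical.
    import hashlib
    import json
    from pathlib import Path

    ASSETS = [
        Path("models/classifier-v3.safetensors"),   # model weights (hypothetical)
        Path("data/training-set-2023-11.parquet"),  # training data snapshot
        Path("prompts/system-prompt.txt"),          # prompt template
    ]

    def sha256_of(path: Path) -> str:
        """Return the hex SHA-256 digest of a file, read in 1 MiB chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def build_manifest(assets: list[Path]) -> dict[str, str]:
        """Map each asset path to its digest, flagging missing files explicitly."""
        return {str(p): sha256_of(p) if p.exists() else "MISSING" for p in assets}

    if __name__ == "__main__":
        print(json.dumps(build_manifest(ASSETS), indent=2))

A real deployment would, of course, also sign the manifest and store it separately from the assets it describes.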

Secure deployment depends upon the security of the underlying infrastructure, such as access controls on APIs, as well as the models and data, not to mention the training and processing pipelines. The model needs to be protected continuously against both direct attacks, such as acquisition of the model weights, and indirect attacks, such as querying the model via an application or API. Incident response procedures will also need to be developed, and systems will have to be subjected to security evaluation and assessment, such as red-teaming. Ideally, the most secure settings will be integrated into the system as the only option, and users must be advised about where and how their data might be used, accessed or stored (e.g. reviewed by employees or contractors, or used for model retraining).
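
As a trivial illustration of the "access controls on APIs" point, and not anything mandated by the Guidelines, the sketch below gates an inference endpoint behind a bearer token; the token handling, port and stubbed model call are all assumptions of mine.

    # Illustrative only: require a bearer token before the model can be queried over HTTP.
    # The token, port and stubbed model call are hypothetical.
    import hmac
    import json
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    API_TOKEN = os.environ.get("MODEL_API_TOKEN", "change-me")  # set via the environment in practice

    def run_model(prompt: str) -> str:
        """Stand-in for the real model call."""
        return f"(model output for: {prompt[:50]})"

    class InferenceHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Reject requests that do not present the expected bearer token.
            supplied = self.headers.get("Authorization", "").removeprefix("Bearer ").strip()
            if not hmac.compare_digest(supplied.encode(), API_TOKEN.encode()):
                self.send_error(401, "Missing or invalid API token")
                return
            length = int(self.headers.get("Content-Length", 0))
            prompt = json.loads(self.rfile.read(length) or b"{}").get("prompt", "")
            body = json.dumps({"output": run_model(prompt)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), InferenceHandler).serve_forever()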

Finally, secure operation and maintenance will require monitoring of the system's behaviour to identify not just sudden but also gradual changes in behaviour which affect security. The system's inputs will have to be monitored and logged to allow investigation and remediation, as well as to meet compliance obligations. Updates should follow a process similar to the original development, including testing and evaluation, reflecting the fact that changes to data, models or prompts can lead to changes in system behaviour. Developers should also participate in appropriate sharing of best practice, as well as in vulnerability disclosure procedures.
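
For the logging point above, a minimal sketch might look something like the following; the log location, record fields and JSON-lines format are my assumptions rather than anything specified in the Guidelines, and in practice the privacy and retention implications of storing prompts would also need to be considered.

    # Illustrative only: append each model interaction to a JSON-lines audit log so that
    # later investigation and compliance review are possible. Fields are hypothetical.
    import json
    import logging
    import time
    import uuid

    audit_log = logging.getLogger("model_audit")
    audit_log.setLevel(logging.INFO)
    audit_log.addHandler(logging.FileHandler("model_audit.jsonl"))

    def log_interaction(user_id: str, prompt: str, output: str) -> None:
        """Write one structured record per model call."""
        record = {
            "id": str(uuid.uuid4()),      # unique record identifier
            "ts": time.time(),            # Unix timestamp
            "user": user_id,
            "prompt": prompt,
            "output": output,
        }
        audit_log.info(json.dumps(record))

    # Example usage:
    # log_interaction("alice", "Summarise this contract...", "(model output)")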

The Guidelines themselves are quite short and pitched at a level suitable for all stakeholders, from managers and risk owners down to developers and data scientists. However, a list of additional resources (standards, guidance documents, and open-source tools and test frameworks) allows architects and developers to delve more deeply into the key concerns and activities.

UK National Cyber Security Centre, Guidelines for secure AI system development, web document, 27 November 2023. Available online at https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development.


Upcoming Courses

  • SE221 CISSP Fast Track Review, Sydney, 11 - 15 March 2024
  • SE221 CISSP Fast Track Review, Virtual/Online, 13 - 17 May 2024
  • SE221 CISSP Fast Track Review, Virtual/Online, 17 - 21 June 2024
  • SE221 CISSP Fast Track Review, Sydney, 22 - 26 July 2024

About this Blog

I produce this blog while updating the course notes for various courses. Links within a story mostly lead to further details in those course notes, and will only be accessible if you are enrolled in the corresponding course. This is a shallow ploy to encourage ongoing study by our students. However, each item ends with a link to the original source.

These blog posts are collected at https://www.lesbell.com.au/blog/index.php?user=3. If you would prefer an RSS feed for your reader, the feed can be found at https://www.lesbell.com.au/rss/file.php/1/dd977d83ae51998b0b79799c822ac0a1/blog/user/3/rss.xml.

Copyright to linked articles is held by their individual authors or publishers. Our commentary is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License and is labeled TLP:CLEAR.