Blog entry by Les Bell

Les Bell
by Les Bell - Friday, 1 September 2023, 9:18 AM
Anyone in the world

Welcome to today's daily briefing on security news relevant to our CISSP (and other) courses. Links within stories may lead to further details in the course notes of some of our courses, and will only be accessible if you are enrolled in the corresponding course - this is a shallow ploy to encourage ongoing study. However, each item ends with a link to the original source.

News Stories


NCSC Warns of AI Chatbot Prompt Injection Attacks

AI chatbots, which use generative transformers to extract information from pre-trained large language models (LLM's), are very sensitive to the format of the prompts which are used to qury them. Increasingly, however, such chatbots are being integrated into a variety of products and services - some for internal use within organizations, and some for use by customers. And because some of the behaviours exhibited by these chatbots are unpredictable - think of 'AI hallucinations' which have caused LLM's to reference non-existent research papers or cite non-existent law cases - they are ripe for exploitation by creative hackers willing to experiment with prompts.

One problem is that LLM's are unable to distinguish between an instruction and data provided to help complete the instruction. This could, hypothetically, be exploited by an attacker who constructs an invoice or transaction request, with the transaction reference hiding a prompt injection on the LLM underlying the recipient's bank's AI chatbot. Later, when the recipient asks the chatbot, "Am I spending more this month?", the LLM analyses this month's transactions, encounters the malicious transaction and transforms this into a request to transfer funds to the attacker's account. Although this example is hypothetical, similar attempted attacks have been seen in the wild.

Over the years, we have developed a good understanding of SQL Injection, command injection and other injection attacks. But since LLM-based chatbots are intended to interact using natural human language, simple syntax-based input sanitzation techniques are unlikely to work without rendering the chatbot near-useless. We need to dig deeper into the semantic processing performed by transformers in order to make chatbots resistant to prompt injection; the problem is not dissimilar to making human users resistant to social engineering.

C, Dave, Exercise caution when building off LLMs, blog post, 30 August 2023. Available online at https://www.ncsc.gov.uk/blog-post/exercise-caution-building-off-llms.

NCSCand Partners Analyze Infamous Chisel Malware

In other news from the NCSC, it - along with a number of five eyes partners - has released a malware analysis report on the Infamous Chisel mobile device malware. Infamous Chisel, which targets Android devices is associated with the Sandworm threat actor group, which is linked to the Main Centre for Special Technologies (GTsST) within the GRU, Russia's military intelligence service.

In essence, Infamous Chisel is a collection of components which enable persistent access to an infected device via a backdoor over the Tor onion routing network or via SSH, while periodically collecting and exfiltrating victim information, such as device configuration, and files, either of commercial interest or from applications which are specific to the Ukrainian military. It can also scan the local network, gathering information about active hosts, open ports and banner messages.

The 35-page report provides a detailed analysis of the various components and IOC's, which are also available in STIX JSON and XML formats via CISA:

NCSC, Infamous Chisel: Malware Analysis Report, report, 31 August 2023. Available online at https://www.ncsc.gov.uk/static-assets/documents/malware-analysis-reports/infamous-chisel/NCSC-MAR-Infamous-Chisel.pdf.


These news brief blog articles are collected at https://www.lesbell.com.au/blog/index.php?courseid=1. If you would prefer an RSS feed for your reader, the feed can be found at https://www.lesbell.com.au/rss/file.php/1/dd977d83ae51998b0b79799c822ac0a1/blog/user/3/rss.xml.

Creative Commons License TLP:CLEAR Copyright to linked articles is held by their individual authors or publishers. Our commentary is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License and is labeled TLP:CLEAR.

Tags: