
by Les Bell - Wednesday, 1 November 2023, 7:50 AM

Welcome to today's daily briefing on security news relevant to our CISSP (and other) courses. Links within stories may lead to further details in the course notes of some of our courses, and will only be accessible if you are enrolled in the corresponding course - this is a shallow ploy to encourage ongoing study. However, each item ends with a link to the original source.

News Stories


SEC Charges SolarWinds and its CISO

The US Securities and Exchange Commission has announced charges against Texas network management software firm SolarWinds and its Chief Information Security Officer, Timothy G. Brown, following the legendary Sunburst attack on the firm and the customers using its Orion software. The SEC alleges that SolarWinds misled investors by disclosing only generic, hypothetical risks when in fact Brown, and the company management, knew of specific shortcomings in the company's controls, as well as the elevated level of risk the company faced at the time.

The complaint alleges that, for example, 'a 2018 presentation prepared by a company engineer and shared internally, including with Brown, that SolarWinds’ remote access set-up was “not very secure” and that someone exploiting the vulnerability “can basically do whatever without us detecting it until it’s too late,” which could lead to “major reputation and financial loss” for SolarWinds.'

Similarly, the SEC alleges that in 2018 and 2019 presentations, Brown stated that 'the “current state of security leaves us in a very vulnerable state for our critical assets” and that “[a]ccess and privilege to critical systems/data is inappropriate.”'

The SEC claims that Brown was aware of the vulnerabilities and risks but failed to adequately address them or, in some cases, raise them further within the company.

There's a lot more in the SEC's press release and doubtless in the court filings.

There's a lesson in this for CISOs everywhere. We have long recommended the involvement of both security personnel - who can assess the strength of controls and the likelihood of vulnerability exploitation - and the relevant information asset owners - who can assess loss magnitude or impact - in both the evaluation of risk and, very importantly, the selection of controls which will mitigate risk to a level acceptable to the information asset owner. What constitutes an acceptable level of risk is a business decision, not a security one, and it needs to be balanced against opportunities which lie firmly on the business side of the risk taxonomy - so it is a decision the information asset owner has to make.

What this suit makes clear is that fines and judgements are an increasingly significant component of breach impact. The result should be a new clarity of thought about cyber risk management and an increased willingness of management to engage in the process. With this increased impact should come an increased willingness to fund controls.

All this gives a bit more leverage for security professionals to get the job done properly. But as an added incentive for honesty and clarity all round, I'd suggest capturing all risk acceptance decisions formally - if not on paper, then at least with an email trail. You never know when this could prove useful.
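As a sketch of what such a formal record might capture - the field names and example values below are purely illustrative, not any standard schema:

```python
# A minimal, hypothetical risk-acceptance record; the fields here are
# illustrative only, not drawn from any particular risk framework.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskAcceptance:
    """One formally captured risk-acceptance decision."""
    risk_id: str                  # reference into the risk register
    description: str              # the risk being accepted
    accepted_by: str              # the information asset owner, not the CISO
    decision_date: date
    review_date: date             # acceptance should expire, not linger
    rationale: str = ""           # why the residual risk is acceptable
    evidence: list = field(default_factory=list)  # e.g. email trail references

# A made-up example entry
record = RiskAcceptance(
    risk_id="R-2023-042",
    description="Legacy remote-access accounts lack MFA",
    accepted_by="VP Engineering (asset owner)",
    decision_date=date(2023, 11, 1),
    review_date=date(2024, 5, 1),
    rationale="SSO migration scheduled; compensating monitoring in place",
    evidence=["email thread of 2023-10-28"],
)
```

The point is less the format than the content: who accepted the risk, when, why, and when the acceptance will be revisited.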

SEC, SEC Charges SolarWinds and Chief Information Security Officer with Fraud, Internal Control Failures, press release, 30 October 2023. Available online at https://www.sec.gov/news/press-release/2023-227.


Upcoming Courses


These news brief blog articles are collected at https://www.lesbell.com.au/blog/index.php?courseid=1. If you would prefer an RSS feed for your reader, the feed can be found at https://www.lesbell.com.au/rss/file.php/1/dd977d83ae51998b0b79799c822ac0a1/blog/user/3/rss.xml.

Creative Commons License TLP:CLEAR Copyright to linked articles is held by their individual authors or publishers. Our commentary is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License and is labeled TLP:CLEAR.

 
by Les Bell - Tuesday, 31 October 2023, 11:01 AM


News Stories


New Wiper Targets Israeli Servers

As expected, the conflict in the Middle East continues to spill over into cyberspace, with a likely pro-Hamas hacktivist group now distributing malware which targets Linux systems in Israel.

Security Joes Incident Response Team, who volunteered to perform incident response forensics for Israeli companies, have discovered a new wiper targeting Linux systems in Israel. Dubbed BiBi-Linux because the string "Bibi" (a nickname referring to Israeli PM Netanyahu) is hardcoded in both the binary and the renamed files it overwrites, the program superficially looks like ransomware but makes no attempt to exfiltrate data to a C2 server, does not leave a ransom note, and does not use a reversible encryption algorithm. Instead, it simply overwrites every file with random data, renaming it with a random name and an extension that starts with "Bibi".

The software is designed for maximum efficiency: written in C/C++ and compiled to a 64-bit ELF executable, it uses multithreading to overwrite as many files as quickly as possible. It is also very chatty, continuously printing details of its progress to the console, so the attackers simply invoke it at the command line using the nohup command to ignore SIGHUP signals and redirect its output to /dev/null, allowing them to detach from the console and leave it running in the background.

Command-line arguments allow it to target specific folders, but it defaults to starting in the root directory and, if executed with root privileges, would destroy the entire system, with the exception of a few file types it skips, such as .out and .so, which it relies upon for its own execution (the binary is itself named bibi-linux.out).

Interestingly, this particular binary is recognized by only a few detectors on VirusTotal, and does not seem to have previously been analyzed.

The use of wipers is not uncommon in nation-state conflicts - NotPetya, for example, was not reversible even though it pretended to be - and Russia has continued to deploy many wipers against Ukrainian targets.

Given the use of "Bibi" in naming, and the targeting of Israeli companies, this malware was likely produced by a Hamas-affiliated hacktivist group. They would not be the only one; Sekoia last week detailed the operations of AridViper (also known as APT C-23, MoleRATs, Gaza Cyber Gang and Desert Falcon), another threat actor believed to be associated with Hamas.

Arid Viper seems to have been active since at least 2012, with first reporting on their activities in 2015 by Trend Micro, and they have been observed delivering data-exfiltration malware for Windows, iOS and Android via malmails to targets in Israel and the Middle East. Since 2020, Arid Viper has been using the PyMICROPSIA trojan and Arid Gopher backdoor, although earlier this month ESET reported the discovery of a new Rust-based backdoor called Rusty Viper, which suggests they are continuing to sharpen their tools.

Sekoia has done a deep dive on Arid Viper's C2 infrastructure as well as the victimology of their targets, who extend across both the Israeli and Arab worlds.

Security Joes, BiBi-Linux: A New Wiper Dropped By Pro-Hamas Hacktivist Group, blog post, 30 October 2023. Available online at https://www.securityjoes.com/post/bibi-linux-a-new-wiper-dropped-by-pro-hamas-hacktivist-group.

Sekoia Threat & Detection Research Team, AridViper, an intrusion set allegedly associated with Hamas, blog post, 26 October 2023. Available online at https://blog.sekoia.io/aridviper-an-intrusion-set-allegedly-associated-with-hamas/.





 
by Les Bell - Monday, 30 October 2023, 9:33 AM


News Stories


The Risks of Artificial Intelligence


With AI all over the news in recent weeks, I thought it was time to do a bit of a deep dive on some of the risks posed by artificial intelligence. I'll cover just a few of the stories published over the last few days, before concluding with some deeper thoughts based on recent research.

OpenAI Prepares to Study "Catastrophic Risks"

OpenAI, the company behind the GPT-3 and -4 large language models, ChatGPT, and the Dall-E AI image generator, has started to assemble a team, called "Preparedness", which will 

"tightly connect capability assessment, evaluations, and internal red teaming for frontier models, from the models we develop in the near future to those with AGI-level capabilities. The team will help track, evaluate, forecast and protect against catastrophic risks spanning multiple categories including:

  • Individualized persuasion
  • Cybersecurity
  • Chemical, biological, radiological, and nuclear (CBRN) threats
  • Autonomous replication and adaptation (ARA)

"The Preparedness team mission also includes developing and maintaining a Risk-Informed Development Policy (RDP). Our RDP will detail our approach to developing rigorous frontier model capability evaluations and monitoring, creating a spectrum of protective actions, and establishing a governance structure for accountability and oversight across that development process. The RDP is meant to complement and extend our existing risk mitigation work, which contributes to the safety and alignment of new, highly capable systems, both before and after deployment."

OpenAI, Frontier risk and preparedness, blog post, 26 October 2023. Available online at https://openai.com/blog/frontier-risk-and-preparedness.

Google Adds AI to Bug Bounty Programs

Google, which has a long history of AI research, has announced that it is adding its AI products to its existing Bug Hunter Program. Based on the company's earlier research and Red Team exercises, it has tightly defined several categories of attacks - such as prompt attacks, training data extraction, model manipulation, adversarial perturbation, and model theft or extraction - and has further developed a number of scenarios, some of which will be in scope for the Bug Hunter Program, and some of which will not.

Interestingly, jailbreaks and discovery of hallucinations, etc. will not be within scope, as Google's generative AI products already have a dedicated reporting channel for these content issues.

The firm has already given security research a bit of a boost with the publication of a short report which describes Google's "Secure AI Framework" and provides the categorisation described above, along with links to relevant research.

Vela, Eduardo, Jan Keller and Ryan Rinaldi, Google’s reward criteria for reporting bugs in AI products, blog post, 26 October 2023. Available online at https://security.googleblog.com/2023/10/googles-reward-criteria-for-reporting.html.

Fabian, Daniel, Google's AI Red Team: the ethical hackers making AI safer, blog post, 19 July 2023. Available online at https://blog.google/technology/safety-security/googles-ai-red-team-the-ethical-hackers-making-ai-safer/.

Fabian, Daniel and Jacob Crisp, Why Red Teams Play a Central Role in Helping Organizations Secure AI Systems, technical report, July 2023. Available online at https://services.google.com/fh/files/blogs/google_ai_red_team_digital_final.pdf.

Handful of Tech Firms Engaged in "Race to the Bottom"

So we are gently encouraged to assume, from these and other stories, that the leading AI and related tech firms already have programs in place to mitigate the risks. Not so fast. One school of thought argues that these companies are taking a proactive approach to self-regulation for two reasons:

  1. Minimize external regulation by governments, which would be more restrictive than they would like
  2. Stifle competition by increasing costs of entry

In April, a number of researchers published an open letter calling for a six-month hiatus on experiments with huge models. One of the organizers, MIT physics professor and AI researcher Max Tegmark, is highly critical:

"We’re witnessing a race to the bottom that must be stopped", Tegmark told the Guardian. "We urgently need AI safety standards, so that this transforms into a race to the top. AI promises many incredible benefits, but the reckless and unchecked development of increasingly powerful systems, with no oversight, puts our economy, our society, and our lives at risk. Regulation is critical to safe innovation, so that a handful of AI corporations don’t jeopardise our shared future."

Along with other researchers, Tegmark has called for governments to licence AI models and - if necessary - halt their development:

"For exceptionally capable future models, eg models that could circumvent human control, governments must be prepared to license their development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready."

Milmo, Dan and Edward Helmore, Humanity at risk from AI ‘race to the bottom’, says tech expert, The Guardian, 26 October 2023. Available online at https://www.theguardian.com/technology/2023/oct/26/ai-artificial-intelligence-investment-boom.

The Problem is Not AI - It's Energy

For many, the threat of artificial intelligence takes a back seat to the other existential threat of our times: anthropogenic climate change. But what if the two are linked?

The costs of running large models, both in the training phase and for inference once in production, are substantial; in fact, without substantial injections from Microsoft and others, OpenAI's electricity bills would have rendered it insolvent months ago. According to a study by Alex de Vries, a PhD candidate at VU Amsterdam, the current trends of energy consumption by AI are alarming:

"Alphabet’s chairman indicated in February 2023 that interacting with an LLM could “likely cost 10 times more than a standard keyword search”. As a standard Google search reportedly uses 0.3 Wh of electricity, this suggests an electricity consumption of approximately 3 Wh per LLM interaction. This figure aligns with SemiAnalysis’ assessment of ChatGPT’s operating costs in early 2023, which estimated that ChatGPT responds to 195 million requests per day, requiring an estimated average electricity consumption of 564 MWh per day, or, at most, 2.9 Wh per request. ...

"These scenarios highlight the potential impact on Google’s total electricity consumption if every standard Google search became an LLM interaction, based on current models and technology. In 2021, Google’s total electricity consumption was 18.3 TWh, with AI accounting for 10%–15% of this total. The worst-case scenario suggests Google’s AI alone could consume as much electricity as a country such as Ireland (29.3 TWh per year) [my emphasis], which is a significant increase compared to its historical AI-related energy consumption. However, this scenario assumes full-scale AI adoption utilizing current hardware and software, which is unlikely to happen rapidly."
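The quoted figures are easy to sanity-check; a quick sketch (all the input numbers come from the passage above, none are my own measurements):

```python
# Sanity-checking the energy figures quoted from de Vries' paper.

WH_PER_SEARCH = 0.3        # reported energy cost of a standard Google search
LLM_COST_MULTIPLIER = 10   # "likely cost 10 times more"
REQUESTS_PER_DAY = 195e6   # SemiAnalysis estimate of ChatGPT's daily requests
MWH_PER_DAY = 564          # estimated ChatGPT daily electricity consumption

# Implied per-interaction figure from the search comparison
wh_per_llm_interaction = WH_PER_SEARCH * LLM_COST_MULTIPLIER

# Implied per-request figure from the operating-cost estimate (MWh -> Wh)
wh_per_request = MWH_PER_DAY * 1e6 / REQUESTS_PER_DAY

print(f"{wh_per_llm_interaction:.1f} Wh per interaction")  # 3.0 Wh per interaction
print(f"{wh_per_request:.1f} Wh per request")              # 2.9 Wh per request
```

The two independent estimates land within a few percent of each other, which is why the paper treats ~3 Wh per LLM interaction as a reasonable working figure.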

Others, such as Roberto Verdecchia at the University of Florence, think de Vries' predictions may even be conservative, saying, "I would not be surprised if also these predictions will prove to be correct, potentially even sooner than expected".

A cynic might wonder: why does artificial intelligence consume so much power when the genuine article - the human brain - operates on a power consumption of only 12W? It really is quite remarkable, when you stop to think about it.

de Vries, Alex, The growing energy footprint of artificial intelligence, Joule, 10 October 2023. DOI:https://doi.org/10.1016/j.joule.2023.09.004. Available online at https://www.cell.com/joule/fulltext/S2542-4351(23)00365-3.

The Immediate Risk

Having canvassed just some of the recent news coverage of AI threats and risks, let me turn now to what I consider the biggest immediate risk of the current AI hype cycle.

Current large language models (LLMs) are generative pretrained transformers, but most people do not really understand what a transformer is and does.

First, a transformer encodes information about word position before feeding it into a deep learning neural network, allowing the network to learn from the entire input, not just from words within a limited distance of each other. Second, transformers employ a technique called attention - particularly a derivative called self-attention - which allows the output stages of the transformer to refer back to the relevant words in the input sentence as it produces output.

The result - which leads to the key risk - is the impressive performance in conversational tasks, which can seduce non-technical business users into thinking they are dealing with a general artificial intelligence - but this is far from the case. In fact, most LLMs work on the statistical properties of the text they are trained on and do not understand it in any way. In this respect, they are actually very similar to the compression algorithms used to encode text, speech, music and graphics for online transmission (Delétang, et al., 2023). In the same way as the decompression algorithm predicts the colour of the next pixel in a graphics image, so an LLM predicts the most likely next word, based upon the statistical properties of the text it has been trained upon. However, this is not always le mot juste.
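To make the "statistics, not understanding" point concrete, here is a toy bigram next-word predictor - a drastically simplified stand-in for what an LLM does at vastly greater scale (the corpus and code are purely illustrative):

```python
# A toy illustration of "predict the most likely next word": a bigram
# model built from a tiny corpus. Real LLMs are incomparably more
# sophisticated, but the underlying principle -- statistics of the
# training text, not understanding -- is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # cat -- it follows "the" most often in the corpus
```

The model "knows" that "cat" often follows "the" in its training data; it has no idea what a cat is.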

For example, when Princeton researchers asked OpenAI's GPT-4 LLM to multiply 128 by 9/5 and add 32, it was able to give the correct answer. But when asked to multiply 128 by 7/5 and add 31, it gave the wrong answer. The reason is that the former example is the well-known conversion from Celsius to Fahrenheit, and so its training corpus had included lots of examples, while the second example is probably unique. GPT-4 simply picked a likely number - it did not perform the actual computation (McCoy et al., 2023).
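The arithmetic itself, of course, is trivial for a computer that actually computes:

```python
# The two computations GPT-4 was given (figures from McCoy et al.):
# structurally identical, but only the first is common in training data.
familiar   = 128 * 9 / 5 + 32    # the Celsius-to-Fahrenheit formula
unfamiliar = 128 * 7 / 5 + 31    # the same shape, but probably unique

print(f"{familiar:.1f}")     # 262.4 -- GPT-4 answered this correctly
print(f"{unfamiliar:.1f}")   # 210.2 -- GPT-4 picked a plausible but wrong number
```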

Another example found by the researchers was the simple task of deciphering text encrypted using the Caesar cipher; GPT-4 easily performed the task when the key was 13, because that value is used for ROT-13 encoding on Usenet newsgroups - but a key value of 12, while returning recognizable English-language text, produced the wrong plaintext.
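For the curious, a minimal Caesar-cipher decoder (my own illustration, not the researchers' code) - the computation is identical for every key, which is exactly why GPT-4's key-13-only competence is so telling:

```python
# A minimal Caesar-cipher decoder. With key 13 this is ROT-13; any other
# key is the same trivial computation, yet GPT-4 reliably handled only
# the statistically familiar ROT-13 case.
def caesar_decrypt(text, key):
    """Shift each letter back by `key` positions, preserving case."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base - key) % 26 + base))
        else:
            result.append(ch)
    return "".join(result)

print(caesar_decrypt("Uryyb, jbeyq!", 13))  # Hello, world!
```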

In short, large language models do not understand the subject matter of the text they process.

Many managers either have never realized this, or forget it in their enthusiasm. And right now, that is the real risk of AI.

Delétang, Grégoire, et. al., Language Modeling Is Compression, arXiv preprint, 19 September 2023. Available online at https://arxiv.org/abs/2309.10668.

McCoy, R. Thomas, Shunyu Yao, Dan Friedman, Matthew Hardy and Thomas L. Griffiths, Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve, arXiv preprint, 24 September 2023. Available online at https://arxiv.org/abs/2309.13638.





[ Modified: Monday, 30 October 2023, 10:42 AM ]
 
by Les Bell - Friday, 27 October 2023, 10:57 AM


News Stories


FBI Warns of Chinese and Russian Cyber-Espionage

The FBI, in conjunction with other Five Eyes agencies, has warned of increasing intellectual property theft, including cyber-espionage, by both China and Russia, particularly targeting high tech companies and universities engaged in areas such as space research, AI, quantum computing and synthetic biology. China, in particular, has "long targeted business with a web of techniques all at once: cyber intrusions, human intelligence operations, seemingly innocuous corporate investments and transactions", said FBI Director Christopher Wray.

The FBI, in conjunction with the US Air Force Office of Special Investigations and the National Counterintelligence and Security Center (NCSC), has published a Counterintelligence Warning Memorandum detailing the threats faced by the US space industry, but we can safely assume that other tech sectors face similar difficulties. The report, entitled "Safeguarding the US Space Industry: Keeping Your Intellectual Property in Orbit", lists the variety of impacts caused by espionage in areas such as global competition, national security and economic security, then goes on to detail indicators that an organization is targeted, along with suggested mitigation actions. The details of reporting contact points are US-specific, but it is not difficult to find the corresponding agencies in other countries.

FBI, AF OSI and NCSC, Safeguarding the US Space Industry: Keeping Your Intellectual Property in Orbit, counterintelligence warning memorandum, October 2023. Available online at https://www.dni.gov/files/NCSC/documents/SafeguardingOurFuture/FINAL%20FINAL%20Safeguarding%20the%20US%20Space%20Industry%20-%20Digital.pdf.

Middle East Conflict Spills Over Into DDoS Attacks

Earlier this month we wrote about the HTTP/2 Rapid Reset attack, which was used to deliver massive layer 7 distributed denial of service attacks to a number of targets. Cloudflare reported a peak of 201 million requests per second. In its latest report, the network firm reports that it saw an overall increase of 65% in HTTP DDoS attack traffic in Q3 of 2023, by comparison to the previous quarter - due in part to the layer 7 Rapid Reset attacks. Layer 3 and 4 DDoS attacks increased by 14%, with numerous attacks in the terabit/second range, the largest peaking at 2.6 Tbps.

The largest volume of HTTP DDoS traffic was directed at gaming and online gambling sites, which have long been a favourite of DDoS extortion operators. Although the US remains the largest source of DDoS traffic, at 15.8% of the total, China is not far behind with 12.6%, followed by Brazil (up from fourth place) at 8.7% and Germany (which has slipped from third place) at 7.5%.

In other news that will likely surprise no-one, only 12 minutes after Hamas launched rocket attacks into Israel on 7 October, Cloudflare's systems detected and mitigated DDoS attacks on Israeli websites that provide alerts and critical information to civilians on rocket attacks. The initial attack peaked at 100k RPS and lasted ten minutes, but was followed 45 minutes later by a much larger six-minute attack which peaked at 1M RPS.

In addition, Palestinian hacktivist groups engaged in other attacks, such as exploiting a vulnerability in the "Red Alert: Israel" warning app.

In the days since, DDoS attacks on Israeli web sites have continued, mainly targeting newspaper and media sites, as well as the software industry and financial sector.

However, there have been attacks in the other direction; since the beginning of October, Cloudflare has detected and mitigated over 454 million HTTP DDoS attack requests targeting Palestinian web sites. Although this is only one-tenth of the volume of attack requests directed at Israel, it is a larger proportion of the traffic sent to Palestinian web sites; since 9 October, nearly 6 out of every 10 HTTP requests to Palestinian sites were DDoS attack traffic.

Yoachmik, Omer and Jorge Pacheco, Cyber attacks in the Israel-Hamas war, blog post, 24 October 2023. Available online at https://blog.cloudflare.com/cyber-attacks-in-the-israel-hamas-war/.

Yoachmik, Omer and Jorge Pacheco, DDoS threat report for 2023 Q3, blog post, 27 October 2023. Available online at https://blog.cloudflare.com/ddos-threat-report-2023-q3/.





 
by Les Bell - Thursday, 26 October 2023, 10:40 AM


News Stories


New Attack Extracts Credentials from Safari Browser (But Don't Panic!)

Most security pros will doubtless remember the original speculative execution attacks, Spectre and Meltdown, which were first disclosed in early 2018. These attacks exploit the side effects of speculative, or out-of-order, execution, a feature of modern processors.

One design challenge for these CPUs is memory latency - reading the computer's main memory across an external bus is slow by comparison with the internal operation of the processor, and designs deal with this by making use of one or more layers of on-chip cache memory. Much of the time, programs execute loops, for example, so the first time through a loop, the relevant code is fetched and retained in cache, and subsequent executions of the loop body fetch the code from the cache rather than main memory.

In fact, modern processors also feature a pipelined architecture, pre-fetching and decoding instructions (e.g. pre-fetching required data from memory) ahead of time, and they are also increasingly parallelized, featuring multiple execution units or cores. The Intel i9 CPU of the machine I am typing this on, for example, has 16 cores, and eight of those cores each have two independent sets of registers to make context switching even faster (a feature Intel calls hyperthreading), so that it appears as 24 logical processors. All those processors need to be kept busy, and so they may fetch and execute instructions from the pipeline in a different order from the way they were originally fetched and placed in the pipeline, especially if an instruction is waiting for the required data to be fetched.

One aspect of this performance maximization is branch prediction and speculative execution. A branch predictor circuit attempts to guess the likely destination of a branch instruction ahead of time - for example, when the branch logic depends on the value of a memory location that is still in the process of being read. Having predicted this, the CPU will set out executing the code starting at the guessed branch destination. One downside is that sometimes the branch predictor gets it wrong, resulting in a branch misprediction, so this computation is done using yet another set of registers which will be discarded in the event of a misprediction (or committed, i.e. switched for the main register set, if the prediction was correct). This technique is referred to as speculative execution, and it is possible because the microarchitecture that underlies x86_64 complex instruction set (CISC) processors is really a RISC (reduced instruction set computer) processor with a massive number of registers.
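The behaviour of a branch predictor can be sketched in a few lines. This toy two-bit saturating counter is the classic textbook scheme, not any particular vendor's design:

```python
# A toy two-bit saturating-counter branch predictor: states 0-1 predict
# "not taken", states 2-3 predict "taken"; each outcome nudges the
# counter toward the observed behaviour.
def simulate(outcomes):
    """Count mispredictions over a sequence of taken(True)/not-taken(False)
    branch outcomes."""
    state = 0
    mispredictions = 0
    for taken in outcomes:
        predicted_taken = state >= 2
        if predicted_taken != taken:
            mispredictions += 1
        # saturating update toward the actual outcome
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return mispredictions

# A loop branch taken 9 times, then falling through: two mispredictions
# while the counter warms up, plus one at loop exit.
print(simulate([True] * 9 + [False]))  # 3
```

Real predictors are far more elaborate, but the same trade-off applies: when the guess is wrong, the speculatively executed work must be thrown away - and that is precisely the window these attacks exploit.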

The result is an impressive performance boost, but it has a number of side-effects. For example, a branch misprediction will cause a delay of between 10 and 20 CPU clock cycles as registers are discarded and the pipeline refilled, so there will be a timing effect. It will also leave traces in the cache - instructions that were fetched but not executed, for example. And if a branch prediction led to, for example, a system call into the OS kernel, then a low-privilege user process may have fetched data from a high-privilege, supervisor mode, address into a CPU register.

These side-effects were exploited by the Spectre and Meltdown exploits of 2018, which caused some panic at the time; AMD's stock fell dramatically (leading some observers to suspect that the objective of related publicity was stock manipulation) while the disabling of speculative execution led to a dramatic rise in the cost of cloud workloads as more cores were required to carry the load. In due course, the processor manufacturers introduced various hardware mitigations, and we have gradually forgotten they were a problem.

You could think of these attacks as being related to the classic trojan horse problem in multi-level security systems, in which a high security level process passes sensitive data to a low security level process via a covert channel; they are probably most similar to the timing covert channel (in fact, most variants of these attacks make use of various timers in the system, just like the low security level part of the trojan horse). The essential distinction is that in the case of the trojan horse, the high security level process is a willing participant; in this case, it is not. Even secure-by-design, well-written programs will leak information via speculative execution and related attacks.

While the original investigation that led to Spectre and Meltdown was done on the Intel/AMD x86_64 architecture, Apple's silicon designers have been doing some amazing work with the M series processors used in recent MacBooks, Mac minis, iMacs and iPads, not to mention the earlier A series which powered the iPhone, earlier iPads and Apple TV. Could the Apple Silicon processors be similarly vulnerable? The M1 and M2 processors, in particular, were produced after these attacks were known, and incorporate some mitigation features, such as 35-bit addressing and value poisoning (e.g. setting bit 49 of 64-bit addresses so they are offset \(2^{49}\) bytes too high).
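The value-poisoning arithmetic is easy to verify (the address below is a made-up user-space pointer with bit 49 clear, chosen purely for illustration):

```python
# Setting bit 49 of a 64-bit address offsets it by exactly 2**49 bytes,
# provided the bit was clear -- as it is for ordinary user-space pointers.
addr = 0x0000_7FFF_DEAD_BEEF        # hypothetical 47-bit user-space address
poisoned = addr | (1 << 49)

print(hex(poisoned))                # 0x27fffdeadbeef
print(poisoned - addr == 2 ** 49)   # True
```

A speculatively leaked value that gets interpreted as a pointer thus lands far outside any mapped region, which is what makes the poisoning an effective (if, as the researchers showed, not insurmountable) countermeasure.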

A group of researchers from Georgia Tech, University of Michigan and Ruhr University Bochum - several of whom had been involved in the earlier Spectre and Meltdown research - turned their attention to this problem a couple of years ago and found that, yes - these CPUs were exploitable. In particular, they were able to get round the hardware mitigations as well as the defenses in the Safari browser, such as running different browser tabs in different processes (hence different address spaces).

On the web site they created for their attack, the researchers write:

We present iLeakage, a transient execution side channel targeting the Safari web browser present on Macs, iPads and iPhones. iLeakage shows that the Spectre attack is still relevant and exploitable, even after nearly 6 years of effort to mitigate it since its discovery. We show how an attacker can induce Safari to render an arbitrary webpage, subsequently recovering sensitive information present within it using speculative execution. In particular, we demonstrate how Safari allows a malicious webpage to recover secrets from popular high-value targets, such as Gmail inbox content. Finally, we demonstrate the recovery of passwords, in case these are autofilled by credential managers.

Their technical paper lists the work involved in combining a variety of techniques to create the iLeakage attack:

Summary of Contributions. We contribute the following:

  • We study the cache topology, inclusiveness, and speculation window size on Apple CPUs (Section 4.1, 4.2, and 4.5).
  • We present a new speculative-execution technique to timerlessly distinguish cache hits from misses (Section 4.3).
  • We tackle the problem of constructing eviction sets in the case of low resolution or even non-existent timers, adapting prior approaches to work in this setting (Section 4.4).
  • We demonstrate timerless Spectre attack PoCs with near perfect accuracy, across Safari, Firefox and Tor (Section 4.6).
  • We mount transient-execution attacks in Safari, showing how we can read from arbitrary 64-bit addresses despite Apple’s address space separation, low-resolution timer, caged objects with 35-bit addressing, and value poisoning countermeasures (Section 5).
  • We demonstrate an end-to-end evaluation of our attack, showing how attackers can recover sensitive website content as well as the target’s login credentials (Section 6).

In essence, the iLeakage exploit can be implemented in either JavaScript or WebAssembly, and allows an attacker's malicious web page, running in a browser tab, to recover the desired data from another page. In order for this to work, the victim must visit the malicious attack site, and the exploit code needs to monitor the hardware for cache hits vs cache misses, which takes around five minutes. Once it has done this, it can use the window.open() function to open pages which will share the rendering process with the attacker page, making their memory accessible. Even if the user closes the page, the attack will continue, since the memory is not reclaimed immediately.

The researchers disclosed their technique to Apple on 12 September 2022, upon which Apple requested an embargo on the publication of their work and set about refactoring the Safari multi-process architecture to include mitigation features. As of today, the mitigation is present in Safari Technology Preview versions 173 and newer, but is not enabled by default and is hidden in an internal debug menu. Users of macOS Sonoma can enable it fairly easily, but users of earlier versions will first need to download and install the appropriate Safari Technology Preview.

In addition to their deeply technical paper, which will be presented at CCS '23 in Copenhagen late next month, the researchers have set up a web site which is much easier to follow, where you can find their FAQ, which explains the attack and also presents step-by-step instructions for enabling the mitigation in Safari. There are also some videos demonstrating the attack in practice.

There is no need to panic, however; this is an extremely complex and technically challenging attack technique which depends upon a deep understanding of both the Apple Silicon processor architecture and the internals of the Safari browser (it will not work against other browsers, for example, although it could be adapted). A real attack in the wild is vanishingly improbable, and in fact, by the time that a threat actor could come up with one, it is likely that we will all have moved on to new processors with effective mitigations in hardware.

Kim, Jason, Stephan van Schaik, Daniel Genkin and Yuval Yarom, iLeakage: Browser-based Timerless Speculative Execution Attacks on Apple Devices, CCS '23, Copenhagen, Denmark, 26 - 30 November 2023. Available online at https://ileakage.com/files/ileakage.pdf.

Kim, Jason, Stephan van Schaik, Daniel Genkin and Yuval Yarom, iLeakage: Browser-based Timerless Speculative Execution Attacks on Apple Devices, web site, October 2023. Available at https://ileakage.com/.


Upcoming Courses


These news brief blog articles are collected at https://www.lesbell.com.au/blog/index.php?courseid=1. If you would prefer an RSS feed for your reader, the feed can be found at https://www.lesbell.com.au/rss/file.php/1/dd977d83ae51998b0b79799c822ac0a1/blog/user/3/rss.xml.

Creative Commons License TLP:CLEAR Copyright to linked articles is held by their individual authors or publishers. Our commentary is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License and is labeled TLP:CLEAR.

Tags:
 
by Les Bell - Wednesday, 25 October 2023, 10:18 AM

News Stories


Yet Another Bit Flipping Attack

Many readers doubtless remember the consternation caused by the revelation of the RowHammer attack on dynamic RAM (DRAM) modules back in 2015. Each dynamic RAM cell consists of a single transistor and a capacitor, which is charged up to the supply voltage to represent a one and discharged to 0 V to represent a zero. In order to minimize the number of address pins on each chip, the memory cells are organized into rows and columns, and a complete cell address is typically multiplexed, with one half of the address identifying the row while the other half selects the column, thereby addressing the cell.
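That multiplexed addressing can be illustrated with a toy model - the 8-bit row and column widths below are invented for the example, not those of any real chip:

```python
# Toy model of multiplexed DRAM addressing: half the address bits select
# the row, the other half the column. Sizes here are illustrative only.

ROW_BITS = 8   # assumption: a tiny 256 x 256 cell array
COL_BITS = 8

def split(addr: int):
    """Return (row, column) for a cell address, as a multiplexer would."""
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    col = addr & ((1 << COL_BITS) - 1)
    return row, col

# Cell 0x1234 lives at row 0x12, column 0x34 - the chip needs only
# 8 address pins, strobing the row half and the column half in turn.
assert split(0x1234) == (0x12, 0x34)
```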

The charge on the capacitor will, however, gradually drain away, so a refresh controller circuit will periodically - at least once every 64 ms - read a row of memory and then rewrite it, recharging the capacitors which need it. But the increasing density of DRAM chips has led to a related problem: the electrostatic field of a cell's capacitor can affect neighbouring cells. This was identified in 2014 by researchers from Carnegie Mellon University and Intel, who presented a paper - Flipping bits in memory without accessing them: An experimental study of DRAM disturbance errors - at that year's Annual International Symposium on Computer Architecture, showing that repeated reads of a row can affect the adjacent rows, causing corruption (Kim et al., 2014). However, they saw this as a reliability problem, not specifically a security problem.

The stakes were raised the following year, when a couple of Google Project Zero researchers figured out a way to use this technique in a privilege escalation attack (Seaborn and Dullien, 2015) which they dubbed RowHammer. The attack defeats the page-based memory protection features of the processor; for example, it can be used to flip bits in a 4 KB page which belongs to a privileged process and would not normally be accessible to the attacker.

RowHammer works by performing thousands or even hundreds of thousands of reads of two different rows in the same bank of RAM adjacent to the row which the attacker wants to flip - the victim row. Since a bank of RAM has only a single row buffer for output, each read activates the relevant row to reload the row buffer (repeatedly hammering a single row would only activate it once). Another complication is that the processor's own cache would normally keep a copy of the read values, but a couple of clflush instructions will flush the cached copies, forcing a read of the DRAM.
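A toy model of that single row buffer - purely illustrative, simulating only the activation counting, not real DRAM electrics - shows why the attacker must alternate between two rows:

```python
# Toy model of a DRAM bank's single row buffer, showing why RowHammer
# alternates reads between *two* aggressor rows: re-reading one row hits
# the row buffer and causes no further activations of the cell array.

class Bank:
    def __init__(self):
        self.open_row = None
        self.activations = 0

    def read(self, row):
        if row != self.open_row:       # row-buffer miss: activate the row
            self.open_row = row
            self.activations += 1

same, alternating = Bank(), Bank()
for _ in range(100_000):
    same.read(7)                        # hammering one row: opens it once
    alternating.read(7)                 # alternating rows 7 and 9: every
    alternating.read(9)                 # read re-activates, disturbing row 8

assert same.activations == 1
assert alternating.activations == 200_000
```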

The result is flipped bits in the target row. Of course, there's a bit more to it, but that's the basic idea, and the Project Zero researchers were able to demonstrate code that broke out of the Chrome Native Client sandbox, as well as a Linux privilege escalation attack which worked by flipping a bit in an x86 page table entry to gain access to the attacking process's own page table, thereby allowing privileged access to all physical memory.

Naturally, the semiconductor industry has not taken this lying down, introducing mitigations into their DRAM circuit designs - and, as so often happens in security, this has turned into an escalating arms race as researchers have developed workarounds. For example, some DDR3 memory added extra bits to each row, using Hamming codes to provide error checking and correction (ECC); ECC used to be a common feature of mainframe and high-end server memory, but the reliability of modern chips has led many to drop it. Yet it didn't take long for researchers to come up with another RowHammer variant which defeats ECC.

DDR4 therefore includes an additional feature called Target Row Refresh (TRR). This monitors the number of times a row is accessed and when it exceeds a target threshold, it refreshes adjacent rows to guard against bit flipping. Problem solved, right?

Wrong. A new attack defeats TRR by combining the repeated reads of RowHammer with its own new approach (Luo et al., 2023; Goodin, 2023).

The RowPress attack works by keeping one DRAM row - an aggressor row - open for a long period of time, which disturbs the adjacent rows. This can induce bitflips in the victim row without requiring tens of thousands of activations of the aggressor row, and therefore does not trigger TRR. The researchers concluded:

... with a user-level program on a real DDR4-based Intel system with TRR protection, 1) RowPress induces bitflips when RowHammer cannot, 2) RowPress induces many more bitflips than RowHammer, and 3) increasing tAggON up to a certain value increases RowPress-induced bitflips and number of rows with such bitflips. Thus, read-disturb-based attacks on real systems can leverage RowPress to be more effective despite the existence of periodic auto-refresh and in-DRAM target row refresh mechanisms employed by the manufacturer (Luo et al., 2023).

In theory, the RowPress technique can achieve bitflipping by holding a row open just once, for an extended period of time. However, this is not really practical, so an actual attack would combine the RowPress technique with RowHammer, using repeated row activations held open for longer periods while keeping the number of reads below the TRR threshold. Some experimentation is required to find the optimal combination of the number and duration of activations needed to achieve the desired bitflips.
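One way to picture that trade-off is a crude model. Everything numeric below - the TRR threshold, the "flip budget" and the linear disturbance formula - is invented for illustration; real DRAM behaviour is far messier:

```python
# Toy model of the trade-off described above: TRR refreshes neighbours once
# a row's activation count crosses a threshold, so a RowPress-style attack
# keeps the count *below* that threshold but holds each activation open
# longer (tAggON). Numbers are illustrative, not real DRAM parameters.

TRR_THRESHOLD = 50_000      # assumed activation count that trips TRR
FLIP_BUDGET = 1_000_000     # assumed total "disturbance" needed to flip a bit

def disturbance(activations: int, t_aggon_ns: int) -> int:
    # crude model: disturbance grows with both count and row-open time
    return activations * t_aggon_ns

def attack_succeeds(activations: int, t_aggon_ns: int) -> bool:
    if activations >= TRR_THRESHOLD:
        return False                      # TRR refreshes the victim row
    return disturbance(activations, t_aggon_ns) >= FLIP_BUDGET

assert not attack_succeeds(100_000, 10)   # classic RowHammer: TRR catches it
assert not attack_succeeds(40_000, 10)    # under threshold, but too brief
assert attack_succeeds(40_000, 36)        # fewer, longer activations win
```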

I dare say a lot of researchers are already working on proof-of-concept exploits, but getting this technique to flip the specific bits required in, say, a page table entry is going to be challenging.

Goodin, Dan, There’s a new way to flip bits in DRAM, and it works against the latest defenses, Ars Technica, 19 October 2023. Available online at https://arstechnica.com/security/2023/10/theres-a-new-way-to-flip-bits-in-dram-and-it-works-against-the-latest-defenses/.

Kim, Y., Daly, R., Kim, J., Fallin, C., Lee, J. H., Lee, D., Wilkerson, C., Lai, K., & Mutlu, O., Flipping bits in memory without accessing them: An experimental study of DRAM disturbance errors, 2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA), 361–372, 2014. https://doi.org/10.1109/ISCA.2014.6853210. Available online at http://users.ece.cmu.edu/~yoonguk/papers/kim-isca14.pdf.

Luo, H., Olgun, A., Yağlıkçı, A. G., Tuğrul, Y. C., Rhyner, S., Cavlak, M. B., Lindegger, J., Sadrosadati, M., & Mutlu, O., RowPress: Amplifying Read Disturbance in Modern DRAM Chips. Proceedings of the 50th Annual International Symposium on Computer Architecture, pp. 1–18, 2023. https://doi.org/10.1145/3579371.3589063. Available online at https://people.inf.ethz.ch/omutlu/pub/RowPress_isca23.pdf.

Seaborn, M., & Dullien, T., Project Zero: Exploiting the DRAM rowhammer bug to gain kernel privileges, Google Project Zero blog, 9 March 2015. Available online at https://googleprojectzero.blogspot.com/2015/03/exploiting-dram-rowhammer-bug-to-gain.html.


by Les Bell - Tuesday, 24 October 2023, 9:47 AM

News Stories


Citrix Warns of NetScaler Exploits in the Wild

Earlier this month, Citrix released fixes for CVE-2023-4966, an unauthorized data disclosure vulnerability in the NetScaler ADC (application delivery controller) and NetScaler Gateway products. The vulnerability affects NetScaler ADC when it is configured as a gateway (VPN virtual server, ICA proxy, CVPN or RDP proxy) or as an AAA (authentication, authorization and accounting) virtual server.

The vulnerability was discovered by Citrix's internal team, and at the time they disclosed it, they were not aware of any exploits in the wild.

But we all know how that goes: no sooner are patches or updated builds released than the bad guys get hold of them, do a diff against the unpatched version, find the modified code, reverse-engineer the fix and develop a matching exploit.

And sure enough, Citrix now has reports, via Mandiant, of incidents consistent with session hijacking, as well as credible reports of targeted attacks exploiting CVE-2023-4966. CISA has also added this vulnerability to its Known Exploited Vulnerabilities Catalog. Customers using any of the affected builds should update immediately, and also kill all active and persistent sessions with the following commands:

kill icaconnection -all
kill rdp connection -all
kill pcoipConnection -all
kill aaa session -all
clear lb persistentSessions

Shetty, Anil, CVE-2023-4966: Critical security update now available for NetScaler ADC and NetScaler Gateway, blog post, 23 October 2023. Available online at https://www.netscaler.com/blog/news/cve-2023-4966-critical-security-update-now-available-for-netscaler-adc-and-netscaler-gateway/.

Mandiant, Remediation for Citrix NetScaler ADC and Gateway Vulnerability (CVE-2023-4966), blog post, 17 October 2023. Available online at https://www.mandiant.com/resources/blog/remediation-netscaler-adc-gateway-cve-2023-4966.

Microsoft To Invest $A5 Billion On AI and Cybersecurity In Australia

Timed to coincide with Prime Minister Anthony Albanese's visit to the US comes news of Microsoft's investment of an additional $A5 billion over the next two years in Australia. The investment was announced by the PM, along with Microsoft President Brad Smith and Microsoft ANZ Managing Director Steve Worrall, at the Australian Embassy in Washington DC.

A large part of the investment will go to the construction of nine new data centres in Sydney, Melbourne and Canberra, primarily intended to support hyperscale cloud technology and, in particular, Microsoft's bold strategy to dominate the artificial intelligence market. These will add to the 20 data centres the company already operates in Australia and, in order to staff them, in early 2024 the firm will open a new "Data Centre Academy", in conjunction with TAFE NSW, to train 200 people over two years. The company also proposes to support other programs which will deliver "digital skills training" to 300,000 Australians.

However, the other major part of the announcement related to cybersecurity, with increased collaboration between Microsoft and the Australian Signals Directorate in order to build a "cyber shield" which will boost Australia's protection from online threats. In a statement, the company said that the exchange of cyber threat information leads to better protection for Australian residents, businesses and government. The focus of its activity will be the detection, analysis and defence against the operations of nation-state advanced persistent threats.

ASD Director-General Rachel Noble said the investments would strengthen the agency's "strong partnership with Microsoft and ... turbocharge our collective capacity to protect Australians in cyberspace".

Murphy, Katharine and Daniel Hurst, Microsoft to help Australia’s cyber spies amid $5bn investment in cloud computing, The Guardian, 24 October 2023. Available online at https://www.theguardian.com/australia-news/2023/oct/24/microsoft-to-invest-5bn-in-australian-cybersecurity-over-next-two-years.

Ryan, Brad, Microsoft to help Australia build 'cyber shield', Anthony Albanese announces on Washington trip, ABC News, 24 October 2023. Available online at https://www.abc.net.au/news/2023-10-24/anthony-albanese-in-washington-dc-microsoft-deal/103012802.


by Les Bell - Monday, 23 October 2023, 9:04 AM

News Stories


Cybersecurity 'Skills Shortage' a Mirage?

For some years now we have been hearing about a cybersecurity skills shortage, and massive shortfalls in the number of security professionals available to fill the growing number of jobs. YouTube is full of channels offering advice to those entering the field via bootcamp courses, and ISC2 (which has rebranded itself, concluding that (ISC)² is incomprehensible) claims to be well on the way to putting one million candidates through its free online training and certificate, 'Certified in Cybersecurity'.

This has never jelled with my experience as a university lecturer teaching third-year students ('seniors', to those in the US) and Masters students. While more than a few of my students were already in the workforce (it's a joy teaching those who already have some experience) and others had jobs lined up, sometimes via graduate recruitment programs in tech and finance companies, others were struggling, even after graduation. Many of those who graduated with a good Bachelors degree in Computer Science, IT or Cybersecurity quickly moved on to Masters programs in search of even deeper knowledge.

Now long-term security pro Ben Rothke has blogged on the issue, pointing out that figures such as Cybersecurity Ventures' claim that there will be 3.5 million unfilled cybersecurity jobs in 2025 - a backlog continuing from 2022 - are highly exaggerated. This reflects a number of problems, predominantly in the recruitment process - starting with companies who post job listings with significant security requirements while only offering entry-level salaries.

At this point, there does not seem to be a shortage in the higher-level positions occupied by generalists, middle managers and CISOs. Rather, the shortage is of people with deeper technical knowledge. Quoting top recruitment professional Lee Kushner, Rothke writes:

"What there is a shortage of are computer scientists, developers, engineers, and information security professionals who can code, understand technical security architecture, product security and application security specialists, analysts with threat hunting and incident response skills. And this is nothing that can be fixed by a newbie taking a six-month information security boot camp."

I would have to agree. Gaining deep experience in one of these fields can take years; gaining experience across several, decades. And while many recruiters simply look for a high-level certification such as the CISSP, that certification really only reflects a shallow understanding across multiple domains of security, not a deep understanding of any one of them, and requires only five years' experience in total across all of them - not much for those moving into the senior and management positions the certification is really intended for.

I have long worried that our 5-day CISSP prep course contains just too much technical information, perhaps diving deeper into some areas than the exam really requires. But increasingly I am glad that it is backed by an 800-page wiki of course notes and other references that do allow our students to gain a more thorough understanding of these areas than just recognising a few buzzwords.

Furthermore, there are very few entry-level jobs in security - at least, few that are suitable for entry-level skills. An application security specialist, for example, needs a few years of experience in application development in order to have seen - and made - the kinds of mistakes that a security specialist should be hunting for, not to mention an understanding of the development environment and tools. The idea that a six-month boot camp - or a free online course - can lead to a six-figure salaried job defending a megacorp against thousands of wily hackers is, well, naive.

For most employers, the best way to meet their own demand for security professionals is to recruit from within, cross-training and offering administrators and developers a path into a security stream, and taking advantage of their existing experience. In a sense, this mirrors the experience of the multi-decade security professionals I know, who all ended up in security after many years in other IT fields, which they capitalized upon as the basis of a thorough knowledge of how security really works.

External recruitment will still be necessary, however, and it is time for a shakeout of both recruitment practices and recruitment professionals - the latter, especially, need to be able to differentiate the various subfields of infosec and the depth of the technical roles in each. Hmmm. Perhaps we should offer a short course for recruitment firms?

Rothke, Ben, Is there really an information security jobs crisis?, blog post, 12 September 2023. Available online at https://brothke.medium.com/is-there-really-an-information-security-jobs-crisis-a492665f6823.


by Les Bell - Friday, 20 October 2023, 10:05 AM

News Stories


CISA Updates Its #StopRansomware Guide

The US Cybersecurity and Infrastructure Security Agency (CISA), in conjunction with the NSA, the FBI and the Multi-State Information Sharing and Analysis Center (MS-ISAC), has released an updated version of the joint #StopRansomware Guide.

The Guide, which is developed through the US Joint Ransomware Task Force, is intended to be a one-stop resource to help organizations mitigate the risks of ransomware through good practices and step-by-step approaches to detecting, preventing, responding to and recovering from attacks. The update includes new tips for prevention, such as hardening the SMB protocol, a revision of the response approaches and additional threat hunting insights.

CISA, #StopRansomware Guide, resource, 19 October 2023. Available online at https://www.cisa.gov/resources-tools/resources/stopransomware-guide. Direct PDF download at https://www.cisa.gov/sites/default/files/2023-10/StopRansomware-Guide-508C-v3_0.pdf.

How Not to Get Hooked by Phishing

Phishing, leading to credential compromise, continues to be a huge problem and, in fact, is getting worse as threat actors take advantage of generative AI to eliminate almost all of the clues, such as grammatical errors and off-pitch phraseology, that would previously have alerted users to a fake email.

An additional difficulty is the appearance of many new variants:

  • Spear phishing: targeted email phishing
  • Whaling: executive email phishing
  • Harpoon whaling: highly targeted executive phishing
  • BEC: business email compromise (CEO fraud)
  • Smishing: text message (SMS) phishing
  • Vishing: voice (phone call) phishing
  • Quishing: QR code phishing
  • Angler phishing: social media phishing

A rather nice piece from Trend Micro examines current trends in phishing attacks, such as the use of new top-level domains like .zip (what was Google thinking there?) and the use of multiple phishing variants in tandem to lend credibility and create a sense of urgency, not to mention the use of tools like ChatGPT to research a victim in a so-called AI-enabled harpooning attack.

Although Trend Micro still recommends security education, training and awareness - and, in particular, phishing simulations to test employees - it also recommends more sophisticated technical approaches, such as authorship analysis on the email gateway, along with the use of cloud access security brokers and secure web gateways - all of which increasingly incorporate AI techniques to escalate the arms race with the attackers.

Clay, Jon, Email Security Best Practices for Phishing Prevention, blog post, 17 October 2023. Available online at https://www.trendmicro.com/en_us/ciso/22/k/email-security-best-practices.html.

AI Comes to Access Control

While the fundamentals of access control still - for good and sound reasons - depend upon decades-old research into security models such as Bell-LaPadula, Clark-Wilson and Role-Based Access Control, in practice many of these (although not BLP) devolve into an access control matrix represented by access control lists - a model that dates back to the early 1970s. Each object in a system - be it a file, API, database table, transformational procedure or something else - carries a list of subjects (users - often aggregated into groups for simplicity) and the types of access each is granted.

However, attempting to map a high-level access control policy for a complex business application which may have hundreds of objects and thousands of subjects down to a set of ACL entries and their related rules (e.g. if a user is a member of two groups, one allowed access and one denied, how is this resolved?) can be mind-numbingly complex. Generally, this has been done using a policy language like XACML, requiring the policy developer to have a good understanding of application requirements, the security model and the syntax of the specific policy language - not to mention underlying principles like the principle of least privilege and segregation of duties.
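The group-conflict question in the parenthesis above can be illustrated with a few lines of Python implementing one common answer - the "deny overrides" combining rule. The groups and ACL entries here are, of course, invented:

```python
# Minimal sketch of the group-conflict question raised above: a user in
# two groups, one granted access and one denied. A common resolution is
# "deny overrides" - any applicable deny rule wins. Names are invented.

ACL = [  # (group, permission, effect) entries for one object
    ("engineering", "read", "allow"),
    ("contractors", "read", "deny"),
]

def check(user_groups, permission):
    effects = {e for g, p, e in ACL if g in user_groups and p == permission}
    if "deny" in effects:        # deny-overrides combining rule
        return False
    return "allow" in effects    # default-deny if no rule applies

assert check({"engineering"}, "read") is True
assert check({"engineering", "contractors"}, "read") is False  # deny wins
assert check({"marketing"}, "read") is False                   # no rule
```

XACML formalizes exactly this kind of choice as "policy combining algorithms" (deny-overrides, permit-overrides, first-applicable), which is one reason writing such policies by hand demands real expertise.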

People who can do all of that are in short supply.

Now a small team from the Enterprise Security and Access Security organizations at Google has developed a tool, based on the company's PaLM 2 large language model, which allows developers to create and modify security policies using plain English instructions. The tool significantly reduces the difficulty of defining access control policies that comply with Google's BeyondCorp zero trust architecture and its identity-aware proxy.

The SpeakACL tool can not only generate ACLs, but can also verify the resulting access policies, and incorporates additional safeguards against sensitive information disclosure, data leakage, prompt injection and supply chain vulnerabilities. Although this is only a prototype, it shows another aspect of the trend towards utilizing AI in security services.

Khandelwal, Ayush, Michael Torres, Hemil Patel and Sameer Ladiwala, Scaling BeyondCorp with AI-Assisted Access Control Policies, blog post, 10 October 2023. Available online at https://security.googleblog.com/2023/10/scaling-beyondcorp-with-ai-assisted.html.


by Les Bell - Thursday, 19 October 2023, 10:13 AM

News Stories


Really? That's Your Password?

A small study by Outpost24 makes for scary reading, suggesting that web site administrators may be just as bad as ordinary users when it comes to advice about choosing passwords - especially changing default passwords after initial installation and configuration of software and systems. One of the first rules of system administration is to immediately change any vendor-preset default password, as these are widely known and make even brute-force attacks incredibly easy.

In fact, legislation such as the UK's Product Security and Telecommunications Infrastructure Act and California's Senate Bill 327, the default password law, will ban the use of default passwords, requiring developers to include a password-change step as part of any installation or setup process. But for the time being, default passwords live on - and administrators either do not change them, or change them to one of a few commonly-used variants.

According to the Outpost24 research, performed by mining the data in their Threat Compass threat intelligence backend database, the top 20 popular passwords associated with compromised accounts are:

  1. admin
  2. 123456
  3. 12345678
  4. 1234
  5. Password
  6. 123
  7. 12345
  8. admin123
  9. 123456789
  10. adminisp
  11. demo
  12. root
  13. 123123
  14. admin@123
  15. 123456aA@
  16. 01031974
  17. Admin@123
  18. 111111
  19. admin1234
  20. admin1

Oh, come on, people - it's like you're not even trying! Isn't anyone even using a password safe?
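For what it's worth, generating a strong random password takes only a few lines using Python's standard-library secrets module - a sketch, not a substitute for a proper password safe, but it shows there's no excuse:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a cryptographically random password from a mixed alphabet."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The secrets module draws from the operating system's CSPRNG, unlike the random module, which is predictable and unsuitable for credentials.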

But while it's all very well to blame users, developers have to shoulder some of the blame here, too. For example, while I've recently railed against password complexity rules, it's obvious that many systems are not even enforcing an adequate minimum passphrase length, let alone requirements for multiple character types (and the even worse prohibition on repeated characters). And even when systems do enforce such requirements, administrators comply in only a few predictable ways that barely increase the search space for attackers.
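A minimal server-side check along these lines might look as follows - illustrative only, with the denylist seeded from the top-20 list above; a real system should check candidates against a large breached-password corpus:

```python
# Denylist seeded from the top-20 list above (lower-cased); real systems
# should instead check against a large breached-password corpus.
COMMON_PASSWORDS = {
    "admin", "123456", "12345678", "1234", "password", "123", "12345",
    "admin123", "123456789", "adminisp", "demo", "root", "123123",
    "admin@123", "123456aa@", "01031974", "111111", "admin1234", "admin1",
}

def acceptable(passphrase: str, min_length: int = 15) -> bool:
    """Enforce a minimum passphrase length and reject well-known passwords."""
    return (len(passphrase) >= min_length
            and passphrase.lower() not in COMMON_PASSWORDS)
```

Note that the check favours length over character-type complexity, in line with the argument above.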

Developers should be incorporating stronger authentication mechanisms, ideally based on cryptographic techniques, with a view to abandoning passwords completely in due course. We've been doing this for command-line administration for decades now; in fact, the default for most IaaS cloud-based systems is to log in using an SSH private key, and an SSH authentication agent (e.g. PuTTY's Pageant) makes this extremely convenient by eliminating password prompts completely for the working day. For web access, FIDO2 authentication via passkeys is just as easy, if not easier.

Remember, these passwords are from stolen credentials, which also suggests that complementary controls, such as multi-factor authentication, were also not implemented - or, perhaps, were easily circumvented by a man-in-the-middle or proxy attack. And of course, this list says nothing about credentials which were not stolen, so we know that not all admins are this bad. But all the same, we can see how easy it is for even script kiddies to compromise some systems.

Outpost24, IT admins are just as culpable for weak password use, blog post, 17 October 2023. Available online at https://outpost24.com/blog/it-admins-weak-password-use/.

Multiple Agencies Update "Secure By Design" Principles

A large coalition of national cybersecurity agencies - rather than listing them all, it's easiest just to say that Russia, China, North Korea and Iran are not on the list - has updated the guidance issued earlier this year on principles and approaches for designing software which is secure by design. Citing the need to shift the balance of security risk - specifically, the impact of threats - from customers to developers and manufacturers, the guidance revolves around three fundamental principles for tech firms:

  • Take ownership of customer security outcomes
  • Embrace radical transparency and accountability
  • Build organizational structure and leadership to achieve these goals - lead from the top

In order to achieve each of these objectives, the publication outlines a number of practices. For example, in support of that first principle, the practices include:

  • Eliminate default passwords (surprise!)
  • Conduct security-centric user field tests
  • Reduce hardening guide size
  • Actively discourage use of unsafe legacy features
  • Implement attention grabbing alerts
  • Create secure configuration templates
  • Document conformance to a secure SDLC framework
  • Document Cybersecurity Performance Goals (CPG) or equivalent conformance
  • Vulnerability management
  • Responsibly use open source software
  • Provide secure defaults for developers
  • Foster a software developer workforce that understands security
  • Test security information and event management (SIEM) and security orchestration, automation, and response (SOAR) integration
  • Align with Zero Trust Architecture (ZTA)
  • Provide logging at no additional charge
  • Eliminate hidden taxes (do not charge for security and privacy features or integrations)
  • Embrace open standards
  • Provide upgrade tooling

There's a lot more, for the other principles.
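To make "secure defaults" concrete: the idea is that a product should be safe out of the box, with risky options requiring explicit opt-in rather than explicit opt-out. A hypothetical configuration sketch - all names and values here are illustrative assumptions, not taken from the guidance:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerConfig:
    """Illustrative secure-by-default configuration: every field ships in
    its safe state, and weakening any of them is a deliberate act."""
    tls_required: bool = True
    min_tls_version: str = "1.2"
    audit_logging: bool = True        # logging at no additional charge
    allow_legacy_auth: bool = False   # unsafe legacy feature, off by default
    session_timeout_minutes: int = 15
```

An administrator who wants the risky behaviour must name it explicitly (ServerConfig(allow_legacy_auth=True)), which also makes the decision visible in code review and audit.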

At only 36 pages, this guide is primarily aimed at senior managers - it is certainly much smaller than any of the many textbooks on correctness-by-construction and secure programming intended for architects and programmers. This is not to say that developers don't need to at least skim it - there are some useful ideas in there.

CISA et al., Secure By Design - Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Secure by Design Software, technical report, 16 October 2023. Available online at https://www.cisa.gov/resources-tools/resources/secure-by-design.

