Blog entry by Les Bell

by Les Bell - Friday, November 3, 2023, 9:24 AM

Welcome to today's daily briefing on security news relevant to our CISSP (and other) courses. Links within stories may lead to further details in the course notes of some of our courses, and will only be accessible if you are enrolled in the corresponding course - this is a shallow ploy to encourage ongoing study. However, each item ends with a link to the original source.


Site Maintenance

This site will be offline intermittently over the weekend of Saturday 4 and Sunday 5 November as we migrate to a new server. Normal, stable service will resume at 8 am AEDT (Sydney time) on Monday 6 November.


News Stories


FIRST Publishes CVSS v4.0

FIRST - the Forum of Incident Response and Security Teams - has published version 4.0 of the Common Vulnerability Scoring System (CVSS). A draft of the standard was previewed back in June at the FIRST Conference in Montreal, and this was followed by two months of public comments and two months of work to address the feedback, culminating in the final publication.

CVSS is a key standard used for calculation and exchange of severity information about vulnerabilities, and is an essential component of vulnerability management systems, particularly the prioritization of remediation efforts. Says FIRST:

"The revised standard offers finer granularity in base metrics for consumers, removes downstream scoring ambiguity, simplifies threat metrics, and enhances the effectiveness of assessing environment-specific security requirements as well as compensating controls. In addition, several supplemental metrics for vulnerability assessment have been added including Automatable (wormable), Recovery (resilience), Value Density, Vulnerability Response Effort and Provider Urgency. A key enhancement to CVSS v4.0 is also the additional applicability to OT/ICS/IoT, with Safety metrics and values added to both the Supplemental and Environmental metric groups."

A key part of the update is new nomenclature that will enhance the usability of CVSS in automated threat intelligence, particularly by emphasizing that CVSS is not just the base score, which is the most widely quoted. The new nomenclature distinguishes:

  • CVSS-B: CVSS Base Score
  • CVSS-BT: CVSS Base + Threat Score
  • CVSS-BE: CVSS Base + Environmental Score
  • CVSS-BTE: CVSS Base + Threat + Environmental Score
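The distinction between these four score types comes down to which metric groups are present in a vector string. As a minimal sketch (assuming the v4.0 metric abbreviations: E for Exploit Maturity in the Threat group; CR/IR/AR and the M-prefixed Modified Base metrics in the Environmental group), a tool could classify a vector like this:

```python
def classify_cvss4(vector: str) -> str:
    """Classify a CVSS v4.0 vector string by the metric groups present.

    Assumes v4.0 abbreviations: E (Exploit Maturity) signals the Threat
    group; CR/IR/AR and M-prefixed Modified Base metrics signal the
    Environmental group.
    """
    if not vector.startswith("CVSS:4.0/"):
        raise ValueError("not a CVSS v4.0 vector")
    # Collect just the metric names, e.g. {"AV", "AC", ...}
    metrics = {part.split(":")[0] for part in vector.split("/")[1:]}
    has_threat = "E" in metrics
    has_env = bool(metrics & {"CR", "IR", "AR"}) or any(
        m.startswith("M") for m in metrics
    )
    if has_threat and has_env:
        return "CVSS-BTE"
    if has_threat:
        return "CVSS-BT"
    if has_env:
        return "CVSS-BE"
    return "CVSS-B"


base = "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N"
print(classify_cvss4(base))           # CVSS-B
print(classify_cvss4(base + "/E:A"))  # CVSS-BT
```

This is illustrative only; for real work, use FIRST's own calculator and reference implementations rather than hand-rolled parsing.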

The base score provides information about the severity of the vulnerability itself. But in order to assess the risk posed by potential exploitation, it is necessary to take into account information specific to the defender's environment, such as the existence of compensating controls, not to mention the value of exposed assets and likely impact of exploitation. Some of these are expressed in the Environmental Metric group, which encompasses the CIA security requirements of the vulnerable system along with modifications to the Base Metrics as appropriate for the environment. Similarly, it is up to the user to use information about the maturity and availability of exploits and the skill level of likely threat actors in deriving a Threat Score. I expect some risk management professionals are already busy mapping these to elements of the FAIR ontology, such as Threat Capability and Resistance Strength.

Supplemental Metrics can now be used to convey additional extrinsic attributes of a vulnerability, such as safety impacts, automatability (wormability) and the effectiveness of recovery controls, which do not affect the final CVSS-BTE score.

CVSS 4.0 is extensively documented, and FIRST has even developed an online training course, which can be found at https://learn.first.org/.

Uncredited, FIRST has officially published the latest version of the Common Vulnerability Scoring System (CVSS v4.0), press release, 1 November 2023. Available online at https://www.first.org/newsroom/releases/20231101.

Dugal, Dave and Rich Dale (chairs), Common Vulnerability Scoring System Version 4.0, web documentation page, 1 November 2023. Available online at https://www.first.org/cvss/v4-0/index.html.

CVSS v4.0 Calculator: https://www.first.org/cvss/calculator/4.0.

Still More AI Hallucination Embarrassment

Regular readers might remember the scandal that erupted earlier this year when it was revealed that Big 4 consultancy PwC had been providing its clients with inside information about tax reforms, acquired in its role as advisor to the Australian Taxation Office. As we commented at the time, this would have been an egregious failure of the Brewer-Nash security model, better known as the Chinese Wall model, were it not for the fact that there was obviously no implementation of it in the first place.

The outrage surrounding this prompted a parliamentary enquiry into the ethics and professional accountability of the big audit and consulting firms more generally, with all the Big Four coming under sustained criticism for their practices. Now that enquiry has given us yet another example of how the uncritical use of large language models can give rise to major embarrassment.

One of the public submissions to the parliamentary enquiry was prepared by a group of accounting academics who ripped into several of the Big Four, accusing KPMG of being complicit in a "KPMG 7-Eleven wage theft scandal" culminating in the resignation of several partners, and claiming the firm audited the Commonwealth Bank during a financial planning scandal. Their submission also claimed that Deloitte was being sued by the liquidators of collapsed construction firm Probuild for failing to audit the firm's accounts properly, and accused Deloitte of falsifying the accounts of a company called Patisserie Valerie.

Here's the problem: KPMG was not involved in 7-Eleven's scandal and never audited the Commonwealth Bank, while Deloitte never audited either Probuild or Patisserie Valerie.

So why did these academics accuse the firms? You guessed it: part of their submission was generated by an AI large language model, specifically Google's Bard, which the authors had used to create several case studies about misconduct.

The result was certainly plausible - the inquiry was, after all, instituted in response to similar, real cases - but the case studies it created were the result of classic LLM "hallucination", and the academics have now been forced to apologise and withdraw their work, submitting a new version.

It's going to be hard for their university to enforce their policy on academic integrity when even emeritus professors don't do their homework properly. We grade this paper as an "F" - failure to properly cite sources; possible plagiarism?

Belot, Henry, Australian academics apologise for false AI-generated allegations against big four consultancy firms, The Guardian, 2 November 2023. Available online at https://www.theguardian.com/business/2023/nov/02/australian-academics-apologise-for-false-ai-generated-allegations-against-big-four-consultancy-firms.


Upcoming Courses


These news brief blog articles are collected at https://www.lesbell.com.au/blog/index.php?courseid=1. If you would prefer an RSS feed for your reader, the feed can be found at https://www.lesbell.com.au/rss/file.php/1/dd977d83ae51998b0b79799c822ac0a1/blog/user/3/rss.xml.

Creative Commons License TLP:CLEAR Copyright to linked articles is held by their individual authors or publishers. Our commentary is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License and is labeled TLP:CLEAR.

[ Modified: Sunday, November 5, 2023, 10:41 AM ]