Blog entry by Les Bell


We have long known of the potential for machine learning algorithms to graft celebrity faces onto pornographic images, or to modify the pitch and timbre of a speaker's voice so as to emulate that of another person - a process known as deepfaking. For several years now, we have seen examples of deepfaked video in which political figures, such as former US President Barack Obama, appear to say things they never said, and such videos are reportedly being used increasingly in state-sponsored information warfare.

Tower buildings in a finance district. (Image credit: Sean Pollock, via Unsplash)

For some time now, it has been possible to perform voice conversion in real time - pre-training the model takes a while, but once trained, the model can modify a voice live from within an app. Now, however, comes news of a whaling attack which is notable for at least three reasons:

  • The apparent use of real-time voice and video deepfaking to trick a finance worker
  • The deepfaking of multiple participants in a video conference
  • The scale of the fraud - HK$200 million (approximately US$25.6 million)

An employee of a multinational company was apparently lured into attending a video call with several other staff. The email lure, apparently from the UK-based CFO of the firm, spoke of the need for a secret transaction, which raised the employee's suspicions. However, when he attended the linked video conference, his doubts were assuaged by the presence of several other attendees whom he recognised - all of whom were, in fact, faked.

Consequently, he followed the instructions and transferred the funds as requested. The scam was only discovered when the employee checked with the corporation's head office.

The case was one of several revealed by Hong Kong police in a press conference on Friday. In other cases, stolen HK identity cards were used to make fraudulent loan applications and AI deepfakes were used to trick facial recognition programs.

These cases show the need for increased security controls around video conferencing software and the email accounts linked to it. Organizations have been slow to deploy phishing-resistant multi-factor authentication technologies such as security keys and passkeys, preferring to rely on low-cost alternatives such as mTANs (the six-digit verification codes sent as text messages - deprecated by NIST in SP 800-63B back in 2017!). The result is a continuing high number of credential compromises.
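To see why these codes are so phishable, here is a minimal sketch of RFC 6238 TOTP generation (the app-based cousin of the SMS mTAN) in Python; the base32 secret is an illustrative placeholder, not a real credential:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, interval=30, digits=6):
        # RFC 6238: HMAC-SHA1 over the count of 30-second intervals since epoch
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // interval)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Placeholder secret for illustration only.
    print(totp("JBSWY3DPEHPK3PXP"))

Note that the output is a bare six-digit string: nothing binds it to the site that requested it, so a phishing page can simply relay it to the real site in real time. A WebAuthn/passkey assertion, by contrast, is signed over the requesting origin, which is precisely what makes it phishing-resistant.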

This leads to secondary compromises, especially in systems which use an email address as a single sign-on identifier across multiple services. While the video conferencing system used in the attack above was not identified, all the big players - Zoom, Microsoft Teams/Skype and Google Meet - support federated identity management over protocols such as SAML and OpenID Connect (which is built on OAuth 2.0). Consequently, once an email account falls, so do all the related services, including video conferencing.
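As a rough sketch of how that federation works, the following Python fragment builds the first leg of an OAuth 2.0/OpenID Connect authorization-code flow; the endpoint, client ID and redirect URI are hypothetical placeholders, not any real provider's values:

    import secrets
    from urllib.parse import urlencode

    # All three values are illustrative; real deployments take them from
    # the identity provider's registration console.
    IDP_AUTHORIZE = "https://idp.example.com/authorize"
    CLIENT_ID = "videoconf-client"
    REDIRECT_URI = "https://meet.example.com/sso/callback"

    def build_auth_request():
        # OAuth 2.0 authorization-code request (RFC 6749, s. 4.1); the
        # "openid" scope makes it an OpenID Connect sign-in.
        params = {
            "response_type": "code",
            "client_id": CLIENT_ID,
            "redirect_uri": REDIRECT_URI,
            "scope": "openid email profile",
            "state": secrets.token_urlsafe(16),  # anti-CSRF binding
        }
        return IDP_AUTHORIZE + "?" + urlencode(params)

    print(build_auth_request())

The relying service never sees a password - it simply trusts whatever identity the provider asserts, so a compromised email/IdP account unlocks every federated service downstream.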

Furthermore, most video conferencing platforms allow users to choose their own displayed name, so an attacker need not even use a compromised account - they can simply rename themselves to impersonate a colleague.

Suggested mitigations for these attacks include:

  • Requiring phishing-resistant multi-factor authentication on privileged and influential email accounts (never allow the C-suite to claim they're too busy and important to waste their time - they need to be demonstrating security leadership)
  • Configuring video-conferencing accounts to require meeting participants to log in using email accounts within the corporate domain and protected by MFA (one participant's account may be compromised, but it is unlikely an attacker could compromise all of them)
  • Never allowing the person whose identity you want to verify to control or set up the verification mechanism. The verifier should create the meeting, set its controls to be as restrictive as possible, and initiate it at a time of their own choosing - although deepfaking can be performed in real time, it may take the attacker some time to set up their software.

In the long run, organizations need to redesign their payment authorization procedures to incorporate stronger authentication using cryptographic techniques. Until they do, these types of business email compromise and whaling attacks will only increase in both frequency and severity.
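As a sketch of what such cryptographic authorization might look like, the following Python fragment signs a payment instruction with an Ed25519 key using the pyca/cryptography library; the instruction format and field names are purely illustrative:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Hypothetical payment instruction; the format is illustrative only.
    instruction = b"PAY;beneficiary=EXAMPLE-LTD;amount=HKD200000000;ref=INV-0001"

    # For the sketch we generate a key in memory; in practice the signing
    # key would live in a hardware token or HSM under the approver's control.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    signature = private_key.sign(instruction)

    # The treasury system verifies the signature before releasing funds.
    try:
        public_key.verify(signature, instruction)
        print("instruction authentic - release payment")
    except InvalidSignature:
        print("reject: not signed by an authorised key")

A deepfaked face or voice on a video call cannot forge such a signature; only the holder of the private key can authorize the payment, no matter how convincing the caller appears.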

(Thanks to Gadi Evron for directing us to this story.)

Chen, Heather and Kathleen Magramo, Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’, CNN Asia, 4 February 2024. Available online at https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html.


About this Blog

I produce this blog while updating the course notes for various courses. Links within a story mostly lead to further details in those course notes, and will only be accessible if you are enrolled in the corresponding course. This is a shallow ploy to encourage ongoing study by our students. However, each item ends with a link to the original source.

These blog posts are collected at https://www.lesbell.com.au/blog/index.php?user=3. If you would prefer an RSS feed for your reader, the feed can be found at https://www.lesbell.com.au/rss/file.php/1/dd977d83ae51998b0b79799c822ac0a1/blog/user/3/rss.xml.

Copyright to linked articles is held by their individual authors or publishers. Our commentary is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License and is labeled TLP:CLEAR.