Risk Update

Future Risk: A Look at Emerging Issues (Part 1: Phishing & Fake Phone Fears)

We’ve covered a few stories about phishing and related financial risk. The classic example is the email that appears to come from the Managing Partner, demanding a wire transfer. It sounds fishy, but scams of this sort have in fact succeeded against law firms.

With that in mind, this story in the tech press caught my eye and sparked some anxiety: “Fake voices ‘help cyber-crooks steal cash’” —

  • “A security firm says deepfaked audio is being used to steal millions of pounds. Symantec said it had seen three cases of seemingly deepfaked audio of different chief executives used to trick senior financial controllers into transferring cash.”
  • “The AI system could be trained using the ‘huge amount’ of audio the average chief executive would have innocently made available, Symantec said.”
  • “Dr Alexander Adam, a data scientist at AI specialist Faculty, said it would take a substantial investment of time and money to produce good audio fakes.”

Given the implication that this is already happening, how long until we read about a law firm example? And here the risks go beyond standard financial theft: a faked voice could also be used to pursue sensitive information, such as details suitable for insider trading…

I may have read a bit too much sci-fi in my day, but Jeff-Goldblum-in-Jurassic-Park reactions aside, a quick search reveals that William Gibson is right once again: “The future is already here—it’s just not very evenly distributed.” See: “This AI lets you deepfake your voice to speak like Barack Obama” —

  • “Advances in machine learning will soon make it possible to sound like yourself with a different age or gender—or impersonate someone else.”
  • “Modulate has a demonstration voice skin of Barack Obama on its site, and cofounder and CEO Mike Pappas said it would be possible to generate one for anyone, given enough training data. But he adds that the company won’t make a celebrity voice skin available without the owner’s permission. He also insists that deception isn’t the main point.”

Will we see a future in which every interaction not conducted in the flesh begins with a two-factor authentication handshake among the participants? Or am I getting a bit too creative, as I’ve done before?