
Threat from deepfakes

Most cyber-security decision-makers in financial services (77pc) are concerned about the threat deepfakes pose to their industry. However, just 28pc say they have already implemented measures to combat that threat.

The research, by facial biometric authentication company iProov, polled more than 100 people responsible for overseeing cyber-security operations at financial services firms. According to iProov, the results highlight how seriously the technology's ever-growing threat is perceived, while revealing a distinct lack of action taken to mitigate it.

Andrew Bud, Founder and CEO, iProov, said: “Whilst it’s encouraging to see the industry acknowledge the scale of the dangers posed by deepfakes, the tangible measures being taken to defend against them are what really matter. It’s likely that so few organisations have taken such action because they’re unaware of how quickly this technology is evolving. The latest deepfakes are so good they will convince most people and systems, and they’re only going to become more realistic.”  

The use of deepfake technology in fake news, pornographic videos, hoaxes and fraud has created controversy. For example, earlier this month Facebook announced plans to ban deepfakes from its platform, with concerns mounting about their influence on the impending US election.

As both artificial intelligence and machine learning have become more advanced and more widely available, deepfakes have also been deployed by fraudsters in a commercial context. In fact, more than two in five of those polled (43pc) cited deepfakes as the tactic most likely to compromise facial authentication defences. The authentication firm points to a personal finance context: respondents expected those making online payments (50pc) and those using personal banking services (46pc) to be most at risk from deepfakes.

Bud added: “The era in which we can believe the evidence of our own eyes is ending. Without technology to help us identify fakery, every moving and still image will, in future, become suspect. That’s hard for all of us as consumers to learn, so we’re going to have to rely on really good technology to protect us.”
