
Deepfakes and synthetic IDs are already a problem; just wait for the next upgrade

Deepfake fraud has proven its potential to cost businesses millions of dollars. But the deepfake fear reverberating through tech circles goes beyond the pocketbook. With their ability to co-opt someone’s identity without consent and to deceive and disrupt economic and political processes, deepfakes are existentially troubling. Simply put, they’re creepy.

And according to security leaders monitoring the deepfake landscape, it’s only going to get creepier. The technology is developing exponentially, and new identity fraud tactics are emerging rapidly. Generative artificial intelligence, which many executives had hoped might be a passing fad, has become an increasingly common threat. And while AI-based deepfake detection tools are available, they are not guaranteed to come with governance measures around customer data that align with corporate policies.

IDology report shows widespread concern about generative AI

New data from IDology shines a light on the “industrial scale” of fraud committed with synthetic identities created using generative AI. A release promoting the research says 45 percent of fintechs reported growth in synthetic identity fraud over the past 12 months, and that they are concerned GenAI will create more convincing synthetic identities and better deepfakes.

According to the release, “GenAI has given criminals a way to work faster, scale attacks, and create more credible phishing scams and synthetic identities.” And it’s just the beginning: companies see generative AI-based attacks as the dominant fraud trend of the next 3-5 years.

IDology’s response is a familiar rallying cry: use AI to fight AI.

“These figures point to a need for action,” says James Bruni, Managing Director at GBG IDology. “While GenAI is being used to intensify fraud tactics, its ability to quickly analyze large volumes of data can also be an advantage for fintechs, allowing them to quickly identify trusted identities and escalate those at high risk. The powerful combination of artificial intelligence, human fraud expertise and cross-sector collaboration will help fintechs verify customers in real time, authenticate their identities and monitor transactions across the enterprise and beyond to protect against hard-to-detect types of fraud, such as synthetic identity fraud.”
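The workflow Bruni describes, scoring identities in real time and escalating risky ones to human fraud experts, can be sketched in a few lines. The sketch below is a hypothetical illustration only; the signal names, weights and threshold are assumptions for the sake of the example, not IDology’s actual product logic.

```python
from dataclasses import dataclass

@dataclass
class IdentitySignals:
    """Hypothetical signals an AI model might score during onboarding."""
    document_match: float      # 0-1: how well the ID document checks out
    biometric_liveness: float  # 0-1: confidence the selfie is a live person
    data_consistency: float    # 0-1: name/address/phone agreement across sources

def risk_score(s: IdentitySignals) -> float:
    """Combine signals into a single risk score (higher = riskier).
    Weights are illustrative placeholders."""
    trust = 0.4 * s.document_match + 0.4 * s.biometric_liveness + 0.2 * s.data_consistency
    return 1.0 - trust

def route(s: IdentitySignals, review_threshold: float = 0.35) -> str:
    """Auto-approve low-risk identities; escalate the rest to human
    fraud experts, mirroring the AI-plus-human combination above."""
    return "manual_review" if risk_score(s) >= review_threshold else "approve"

applicant = IdentitySignals(document_match=0.9, biometric_liveness=0.3, data_consistency=0.7)
print(route(applicant))  # "manual_review": weak liveness drags trust down
```

The point of the design is the routing step: the model never makes the final call on a risky identity, it only decides who gets a closer human look.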

FS-ISAC proposes taxonomy of deepfake threats

The deepfake drum continues to beat with the release of a new report from the Financial Services Information Sharing and Analysis Center (FS-ISAC), an industry consortium dedicated to reducing cyber risk in the global financial sector. Prepared by FS-ISAC’s Artificial Intelligence Risk Working Group, “Deepfakes in the Financial Sector: Understanding Threats, Managing Risk” lays out broad categories and “a common language of deepfake threats and the controls to counter them.”

Like any industry, financial services brings its own specific context to deepfake fraud. One of the most feared new techniques is deepfake CEO video fraud or, more generally, “C-suite impersonation.” Customer biometrics are a target, and banks are a gold mine for fraudsters who commit consumer fraud, often through voice authentication systems. Infrastructure can be attacked, and the deepfake detection models themselves are often at stake.

The risks are diverse: destabilized markets, expensive data breaches, and humiliation leading to reputational damage.

The meat of FS-ISAC’s work is the Deepfake Threat Taxonomy, which breaks down threats to organizations by category. “The FS-ISAC deepfake taxonomy covers two topics,” the paper says: “the six threats facing financial services firms from deepfakes” and “three primary attack vectors targeting technologies that detect and prevent deepfakes.” Each defined category has a number of subcategories, which together provide a broad view of the overall deepfake fraud ecosystem.

“Understanding the different types of threats posed by deepfakes and how they can be taxonomized clarifies the types of controls most appropriate for defense,” the paper says. “Financial services institutions should conduct full threat modeling for each of the threat categories.” A corresponding table of control mechanisms completes the mosaic.
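The report’s full category list isn’t reproduced here, but its recommended workflow, representing the taxonomy as structured data and running threat modeling per category against a table of controls, might look something like the sketch below. All category, subcategory and control names are hypothetical placeholders loosely drawn from the threats described above, not FS-ISAC’s actual taxonomy entries.

```python
from typing import NamedTuple

class ThreatCategory(NamedTuple):
    name: str
    subcategories: list[str]
    controls: list[str]

# Hypothetical entries; FS-ISAC's real taxonomy defines six threat
# categories plus three attack vectors against detection technology.
TAXONOMY = [
    ThreatCategory(
        name="executive impersonation",      # e.g. deepfake CEO video fraud
        subcategories=["video call", "voice call"],
        controls=["out-of-band callback verification", "dual approval for payments"],
    ),
    ThreatCategory(
        name="customer biometric spoofing",  # e.g. voice authentication attacks
        subcategories=["voice clone", "face swap"],
        controls=["liveness detection", "multi-factor step-up authentication"],
    ),
]

def threat_model(taxonomy: list[ThreatCategory]) -> None:
    """Walk every category, as the paper recommends, and pair it with its controls."""
    for category in taxonomy:
        print(f"{category.name}: {', '.join(category.subcategories)}")
        for control in category.controls:
            print(f"  control: {control}")

threat_model(TAXONOMY)
```

The value of a shared structure like this is that every institution models the same categories, so controls and gaps can be compared across the sector.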

The fight against deepfakes, says FS-ISAC, will need to be collaborative, vigilant and agile. “While the threat posed by deepfakes to financial institutions is significant and evolving, a proactive, multifaceted approach to security can substantially mitigate these risks. The way forward lies in the continuous improvement of detection technologies, along with sound security practices and comprehensive awareness programs.”

Advanced spoofing coming soon to fool everyone’s moms

An article on Fortune.com solicits views on the deepfake threat from cyber chiefs at SoftBank, Mastercard and Anthropic – and the diagnosis is bleak, suggesting we have entered an “AI cold war.”

“You’ve got criminal entities moving very quickly, using AI to come up with new types of threats and methodologies to make money,” says Gary Hayslip, chief security officer at investment holding company SoftBank. “That in turn pushes us back with the breaches and incidents that we have, which pushes us to develop new technologies.”

“It’s like a big wave in a way,” Hayslip says of the pace at which AI technology spills over into the market.

Fraud detection is also improving, but companies have concerns about what third-party AI providers are allowed to do with the data they collect. Hayslip says that “you have to be a little paranoid” in evaluating what tools and services are integrated into a company’s security ecosystem. Some products will carry an unacceptable risk, especially in highly regulated industries such as healthcare.

Meanwhile, Alissa Abdullah, deputy CSO at Mastercard, says deepfake scams are getting better and more varied. She describes an emerging attack technique in which AI-generated video and audio deepfakes pose as outreach from a trusted brand, such as a help desk representative.

“They’ll call you and say, ‘we need to log you into our system,’ and they’ll ask for $20 to remove the ‘fraud alert’ that was on my account,” Abdullah says. “They don’t want $20 billion in Bitcoin anymore, they want $20 from 1,000 people – small amounts that even people like my mother would be happy to say ‘let me give it to you.’”

Article topics

biometrics | deepfake detection | deepfakes | financial services | fraud prevention | IDology | synthetic data | synthetic identity fraud
