AI fuels surge in sophisticated cybercrime


Staff Reporter

Cybercrime poses an unprecedented threat to businesses.


Artificial intelligence is ushering in a new era of cybercrime, with AI-powered scams increasingly targeting individuals and financial systems.

In recent months, experts have reported a surge in sophisticated fraud schemes that use AI to mimic real people with startling accuracy, raising concerns about security, privacy, and the erosion of public trust in digital communications.

Sameer Kumandan, Managing Director of SearchWorks, says one of the key strategies criminals use is creating highly convincing fake images, videos, and audio, commonly referred to as “deepfakes”.

He explains that these are often used to impersonate real individuals and spread misleading or false information. More concerning, while early deepfakes were often unconvincing, recent advances have made them increasingly difficult to detect, making it easier for bad actors to mislead, manipulate, and defraud.

Kumandan recounts a recent incident where criminals impersonated Risto Ketola, Momentum Group’s Financial Director, on WhatsApp. They used Ketola’s LinkedIn profile photo to create a closed WhatsApp group, pretending to be him. Although this particular case did not involve AI-generated imagery or video, it highlighted the risks associated with the misuse of a person's likeness for malicious purposes.

“Deepfake-driven cybercrime has escalated to the point where the South African Banking Risk Information Centre (SABRIC) recently issued a strong warning about the growing threat of AI-enabled fraud,” said Kumandan. “SABRIC specifically highlighted the use of deepfakes and voice cloning to impersonate bank officials, promote fake investment schemes, and fabricate endorsements from well-known public figures. This emerging threat not only compromises the integrity of the financial sector but also erodes customer trust and confidence in digital interactions.”

He added that fraudsters are increasingly using AI to bypass security measures such as automated onboarding systems and Know Your Customer (KYC) checks, allowing them to create accounts and access services under false identities.
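Neither SearchWorks nor SABRIC publishes the internals of such checks, but the general shape of an automated onboarding safeguard can be sketched. The Python below cross-checks an applicant's submitted details against an authoritative record and escalates mismatches for manual review; Applicant, VERIFIED_RECORDS, and kyc_flags are hypothetical names invented for this illustration, and a real system would query a verified data source and a proper liveness-detection service rather than an in-memory dictionary.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    full_name: str
    id_number: str
    date_of_birth: str   # "YYYY-MM-DD"
    liveness_passed: bool  # outcome of a liveness check during onboarding

# Hypothetical authoritative registry; a real KYC flow would query a
# verified external data source instead.
VERIFIED_RECORDS = {
    "9001015009087": {"full_name": "Jane Doe", "date_of_birth": "1990-01-01"},
}

def kyc_flags(app: Applicant) -> list[str]:
    """Return reasons to escalate this application for manual review."""
    flags = []
    record = VERIFIED_RECORDS.get(app.id_number)
    if record is None:
        flags.append("ID number not found in verified records")
    else:
        if record["full_name"].lower() != app.full_name.lower():
            flags.append("Name does not match the record for this ID number")
        if record["date_of_birth"] != app.date_of_birth:
            flags.append("Date of birth does not match the record")
    if not app.liveness_passed:
        flags.append("Liveness check failed: possible replayed or synthetic media")
    return flags
```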

“From a business email compromise (BEC) standpoint, attackers are now incorporating deepfake audio and video of senior executives into phishing attempts, convincing employees to release funds or disclose sensitive information. Social engineering attacks have also become more sophisticated, with AI being used to analyse and replicate communication styles based on publicly available information, making scams appear more authentic.

“In some cases, AI is used to generate entirely synthetic identities, combining real and fabricated data to create fake personas capable of applying for credit, laundering money, or committing large-scale financial fraud.”
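What “combining real and fabricated data” looks like in practice varies, but fabricated personas often fail internal consistency checks that genuine identities pass. As one illustration: a 13-digit South African ID number encodes the holder's date of birth in its first six digits (YYMMDD) and ends in a Luhn check digit, so a persona whose stated birth date disagrees with its ID number, or whose ID fails the checksum, is a red flag. The function names below are illustrative only and are not drawn from any named product.

```python
from datetime import datetime

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum, which South African ID numbers must satisfy."""
    total = 0
    # Double every second digit from the right, subtracting 9 when it exceeds 9.
    for i, d in enumerate(int(c) for c in reversed(number)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def consistent_identity(id_number: str, stated_dob: str) -> bool:
    """Cross-check a 13-digit SA ID number against a stated date of birth
    ("YYYY-MM-DD"). Synthetic identities often fail such internal checks
    because their fields were fabricated independently of one another."""
    if len(id_number) != 13 or not id_number.isdigit():
        return False
    if not luhn_valid(id_number):
        return False
    # The first six digits of the ID encode the date of birth as YYMMDD.
    dob = datetime.strptime(stated_dob, "%Y-%m-%d")
    return id_number[:6] == dob.strftime("%y%m%d")
```

Structural checks like these are only a first filter: they catch crude fabrications, but must be combined with verification against authoritative sources to catch personas built from stolen, internally consistent data.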

Kumandan warns that many legacy fraud detection tools aren’t equipped to identify fake audio or video, making deepfake scams even harder to detect.

“In response, financial institutions must urgently evolve their fraud prevention strategies to stay ahead of these sophisticated threats. Regulators expect institutions to keep up with the latest cybercrime trends, and failing to detect deepfake-based fraud can result in compliance failures, fines, and legal action.

“Furthermore, financial institutions must consider the broader impact of these risks on customer trust. As awareness of deepfake threats grows, it is understandable that clients may begin to question the authenticity of video calls, digital signatures, and other remote interactions. This erosion of confidence has the potential to hinder digital transformation initiatives and may even prompt some customers to disengage from digital platforms altogether.”

Kumandan says that through VOCA, an application designed to streamline compliance processes for accountable institutions, SearchWorks provides financial institutions with verified data and intelligent processes to reduce fraud exposure and ensure regulatory compliance.

“By leveraging real-time data and automated checks, VOCA helps organisations verify the identity and legitimacy of the individuals and entities they engage with. It flags discrepancies, detects suspicious behaviour, and highlights incomplete or false information, supporting informed decision-making at every stage.”

He added that through continuous monitoring of client behaviour and borrower risk profiles, VOCA enables early identification of potential threats, helping institutions close compliance gaps, avoid financial penalties, and stay ahead of emerging fraud risks.
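VOCA's internal scoring is not described here, so the following is only a toy sketch of the kind of behavioural monitoring referred to: flagging transactions that deviate sharply from a client's established baseline. The function name and the z-score threshold are assumptions made for illustration; production systems combine many more signals, such as device fingerprints, transaction velocity, and geography.

```python
import statistics

def unusual_transactions(history: list[float], recent: list[float],
                         z_threshold: float = 3.0) -> list[float]:
    """Flag recent amounts that deviate sharply from the client's baseline.
    A toy z-score rule standing in for a real behavioural-monitoring model."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid dividing by zero on a flat history
    return [amt for amt in recent if abs(amt - mean) / stdev > z_threshold]

# Example: a client who normally transacts around R1,000 suddenly moves R50,000.
baseline = [950.0, 1020.0, 980.0, 1100.0, 990.0]
print(unusual_transactions(baseline, [1010.0, 50000.0]))  # -> [50000.0]
```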