AI voice fraud poses serious threat to South African businesses

AI voice fraud is hitting South African businesses hard, and the attacks are getting more sophisticated by the day. Criminals are now using artificial intelligence to clone voices convincingly enough to trick employees into authorising fraudulent transactions or releasing sensitive data.

The threat became real in 2024 when a local cryptocurrency exchange thwarted a fraud attempt involving a WhatsApp voice note from what appeared to be a cloned executive voice. The scammer attempted to authorise a wallet transfer, but internal security protocols flagged the suspicious request before any damage occurred.

“Until recently, voice-based scams were fairly unsophisticated,” says Nic Laschinger, CTO of Euphoria Telecom. “Today, fraudsters are using AI to clone voices with remarkable accuracy. These tools can replicate a person’s tone, accent, and speech patterns.”

The technology works by analysing audio samples from voice notes, podcasts, social media clips, or video calls. Once criminals have enough audio data, they can create convincing voice clones that fool even close colleagues.

Regulators sound the alarm

The Financial Sector Conduct Authority (FSCA) has issued numerous warnings about deepfakes, particularly those using fake video imagery of public figures like President Cyril Ramaphosa and celebrities like Leanne Manas to promote fraudulent investment schemes.

But voice fraud represents a different category of threat. Unlike deepfake videos, which often contain visual artefacts that trained eyes can spot, voice clones can be nearly indistinguishable from the real thing.

“The implications for South African businesses and individuals are serious,” Laschinger explains. “Imagine an employee receiving a panicked call from a familiar executive, authorising an urgent payment or data release. Would they question it?”

The scams extend beyond corporate fraud. Criminals are increasingly using AI-generated voices to impersonate distressed children or relatives, targeting parents and grandparents for emergency money transfers.

Banks vulnerable despite voice biometrics

South Africa’s reliance on voice communication for business makes the country particularly vulnerable. Some local banks have invested heavily in voice biometric authentication systems, but these could be compromised if voice cloning technology continues to advance.

“Voice, once considered a secure form of authentication, is no longer enough on its own,” warns Laschinger.

The financial sector faces particular risks, as voice fraud could potentially bypass existing security measures and lead to account breaches and identity theft on a large scale.

Fighting back with multi-layered security

Security experts recommend several strategies for businesses to protect against AI voice fraud:

Employee training: Staff at all levels need to understand how voice fraud works and implement verification protocols for urgent or unusual requests.

Multi-factor authentication: Companies should combine voice verification with additional security layers like PINs, tokens, or authentication apps.

Callback procedures: Establishing protocols to verify identity by calling back on known numbers can prevent many fraud attempts.

Advanced detection tools: Financial institutions should invest in fraud detection systems that analyse speech patterns and metadata, not just voice content.

Regulation struggles to keep pace

“South Africa is beginning to explore AI regulation, and voice fraud must be part of that conversation,” says Laschinger. “We need ethical standards for voice synthesis tools, clearer legal consequences for misuse, and public-private collaboration to track and prevent abuse.”

The technology community also bears responsibility. AI voice synthesis tools should include built-in safeguards like traceable watermarks or usage restrictions to prevent criminal misuse.
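As a deliberately simplified illustration of the watermarking idea, the sketch below hides a short identifier in the least significant bit of successive 16-bit audio samples. This is a toy scheme assumed for illustration only; production voice-synthesis watermarks use far more robust techniques designed to survive compression and re-recording:

```python
def embed_watermark(samples: list[int], tag: bytes) -> list[int]:
    """Write each bit of `tag` into the LSB of successive PCM samples."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    if len(bits) > len(samples):
        raise ValueError("audio too short to carry the tag")
    out = list(samples)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # clear LSB, then set the tag bit
    return out

def extract_watermark(samples: list[int], length: int) -> bytes:
    """Read `length` bytes back out of the sample LSBs."""
    bits = [s & 1 for s in samples[:length * 8]]
    return bytes(sum(b << (7 - i) for i, b in enumerate(bits[k * 8:(k + 1) * 8]))
                 for k in range(length))
```

Even this crude version shows the principle: synthetic audio carries a recoverable, traceable mark that detection tools can check, without audibly changing the signal.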

The rise of AI voice fraud represents a fundamental shift in cybersecurity threats. As technology continues to evolve at breakneck speed, South African businesses must adapt their security practices accordingly.

“We’re entering an era where hearing is no longer believing,” Laschinger concludes. “Protecting voice identity must now be treated with the same seriousness as safeguarding passwords, PINs or company data.”

For businesses that haven’t yet encountered AI voice fraud, it’s not a question of if, but when. The companies that survive will be those that prepare now, before the call comes in.
