Value Chain Asia Magazine
Technology

APAC’s deepfake problem: HR’s role in countering AI fraud across supply chains

30 Apr 2026 · 10 min read

Summary

  • A finance worker at Arup's Hong Kong office was tricked into transferring US$25.6 million
  • Deepfake fraud across Asia-Pacific surged 1,530% between 2022 and 2023. Singapore saw a 240% spike in 2024
  • Regulators in Singapore, South Korea, and Hong Kong are responding, but most of their frameworks target financial institutions, even as fraud rises notably within supply chains and procurement
In January 2024, a finance worker at Arup’s Hong Kong office joined a video call with his CFO and several senior colleagues. He recognised their faces. He heard their voices. They instructed him to execute a series of wire transfers across 15 transactions, and he complied, moving HK$200 million out of the firm.

But, as it turns out, every person on that call was a deepfake.
AI fraud is hitting APAC supply chains harder than anywhere else on the planet. Deepfake video calls. Synthetic voices. Increasingly, wire transfers are being authorised by people who believe every face they see.

But the technology is not where the failure occurs. The failure is human, and that makes it an HR problem.

In the Arup case, the attackers had reconstructed the faces and voices of Arup’s leadership from publicly available footage. By the time the employee raised the alarm with actual headquarters, US$25.6 million was gone. No arrests have been made.

The funds remain unrecovered. Arup’s Global CIO Rob Greig later said: “Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes.”
He added, “the number and sophistication of these attacks has [since] been rising sharply in recent months.”

The instinct is to treat deepfake fraud as a cybersecurity issue. It is not. Or rather, it is not only that.

The Arup attack did not breach a firewall. It did not exploit a software vulnerability. It targeted a person, someone who trusted what he saw and heard and who followed what he believed were legitimate instructions from senior leadership.

“We see a number of deepfake cases where professionals are left scratching their heads over how this fraud could have occurred despite all the controls and safeguards being in place,” wrote Daniel Fu, a forensics partner at PwC Singapore, alongside directors Dmitry Kosarev and Ankur Agrawal, in a white paper on the subject. “Hindsight can be a great thing, but the fact is, this is happening, and organisations are having to learn the hard way.”

In addition, there is a cultural dimension that is perhaps more of a problem in the Asia-Pacific than anywhere else. A 2025 report from the ACCA found that procurement fraud and bribery are the region’s dominant risk categories, and noted that “in many APAC markets, discussing fraud is seen as disloyalty, and hierarchical structures deter whistleblowing.”

The scale of the problem backs this up. Deepfake incidents across Asia-Pacific surged 1,530% between 2022 and 2023, according to Sumsub, and climbed another 194% year-on-year through 2024.

Asia-Pacific is now the fastest-growing market for deepfake fraud on the planet. Singapore saw a 240% increase in deepfake fraud in 2024, tied with Cambodia for the second-highest rate in the region, behind South Korea’s 735%.

The financial toll reflects this trajectory. The United Nations Office on Drugs and Crime (UNODC) estimated that cyber-fraud scams targeting victims in East and Southeast Asia generated between US$18 billion and US$37 billion in losses in 2023 alone. Deloitte’s Centre for Financial Services projects that generative AI-enabled fraud losses will reach US$40 billion in the United States by 2027, up from US$12.3 billion in 2023.

And the attacks keep coming closer to supply chain operations. A near-identical deepfake video call struck a Singapore-based multinational in March 2025. A finance director joined a Zoom meeting where AI-generated likenesses of his CFO and other executives instructed him to transfer US$499,000. He complied, only growing suspicious when a second request for US$1.4 million followed. Singapore and Hong Kong authorities worked together to recover the initial funds, a rare outcome. The Singapore Police Force, Monetary Authority of Singapore (MAS), and the Cyber Security Agency (CSA) had already issued a joint advisory earlier that month, alerting businesses to exactly this type of attack.

What this means for supply chains

The Arup case and its imitators look like financial fraud. And, well, they are. But strip away the deepfakes and the dollar figures, and the mechanism underneath is simple: someone pretends to be a person you trust, and asks you to send money. That trick works just as well on a procurement officer paying a supplier as it does on a finance worker answering to the CFO. Arguably better, because procurement officers deal with far more external contacts, most of whom they have never met face to face.

The numbers bear this out. The 2025 Association for Financial Professionals Payments Fraud and Control Survey found that fraudsters were moving away from their old playbook of impersonating a senior executive. Scams impersonating the CEO dropped to 49%, down from 57%. Vendor imposter fraud, meanwhile, had climbed to 45% from 34%.

Most employees know what their CEO sounds like. Fewer can say the same about a supplier they have never met in person.
Counterfeiting and fraud in supply chains are, of course, far older than deepfakes. What has changed is the speed, and who can do it. The tools are now cheap and fast, and the same AI that generates a convincing deepfake of a CFO can just as easily produce a forged shipping document.

Srinivas Allaparthi, writing in the International Journal of Supply Chain Management in 2024, laid out the problem and its mirror image. The problem: counterfeiting in supply chains is enormous, and AI makes it easier. The mirror image: AI is also the best tool available to fight it.

On the detection side, Allaparthi described systems that do something no human inspector realistically can, which is check every single component coming through a production line. As he put it, “AI-powered computer vision systems inspect spare parts by analysing serial numbers, dimensions, and packaging against an authenticated database.” If a part does not match what the verified supplier is supposed to be sending, it gets flagged before it reaches assembly. Siemens has already deployed this in practice, partnering with a firm called Cybord to run AI-driven visual inspections during electronic manufacturing. The system catches counterfeit and defective components in real time, “enhancing product reliability and reducing recalls.”
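The matching logic behind such an inspection pipeline can be sketched in a few lines. This is an illustrative simplification, not Cybord’s actual system: the database contents, serial numbers, and tolerance value are all assumptions, and the real systems work on camera imagery rather than pre-extracted attributes.

```python
# Illustrative sketch (NOT Cybord's actual system): attributes extracted
# from a scanned part are checked against an authenticated supplier
# database; any mismatch flags the part before it reaches assembly.

# Hypothetical authenticated records keyed by serial number.
authenticated_db = {
    "SN-1001": {"dimensions_mm": (12.0, 8.0), "supplier": "verified-co"},
}

def inspect(serial: str, dimensions_mm: tuple, tolerance: float = 0.1) -> bool:
    """Return True only if the part matches its authenticated record."""
    record = authenticated_db.get(serial)
    if record is None:
        return False  # unknown serial: flag as potential counterfeit
    # Every measured dimension must sit within tolerance of the record.
    return all(abs(measured - expected) <= tolerance
               for measured, expected in zip(dimensions_mm, record["dimensions_mm"]))

print(inspect("SN-1001", (12.05, 8.0)))  # matches within tolerance
print(inspect("SN-9999", (12.0, 8.0)))   # unknown serial, flagged
```

The essential design point survives the simplification: the check is against an authenticated source of truth, not against documentation that travels with the shipment and can itself be forged.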

The uncomfortable truth, though, is that this kind of detection only works when a company has invested in the systems to support it. And most have not.

The 3.37 million gap

The workforce readiness gap compounds the problem. Slightly more than half of employees globally, 52% according to Cisco’s 2025 Readiness Index, still cannot fully identify how attackers use AI. And the specialist headcount to close that gap simply does not exist, with 3.37 million cybersecurity roles sitting unfilled across the region, according to the ISC2 2024 Cybersecurity Workforce Study.

That shortfall puts extra pressure on HR departments to make existing staff more fraud resilient.

Donovan Cheah, Partner and Head of Employment & Dispute Resolution at Donovan & Ho, argued that “the most impactful step is for Human Resources, IT, and Legal to jointly redesign how approvals and verification work in practice, not just on paper.” In the same piece, Rodney Pereira, Senior HR Director at Medtronic, echoed the point, urging organisations “to establish a unified, organisation-wide framework for digital identity verification and incident response.”

But what does that mean in practice? It means callback procedures on any transfer request above a threshold. Multi-layer verification that does not rely on video or voice alone. And, perhaps most difficult of all, a workplace culture where a junior employee feels safe refusing a directive from someone who looks and sounds like the CEO (or a director, or a manager).
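The first two of those measures are simple enough to express as rules. The sketch below is a hypothetical illustration, assuming invented field names and a made-up threshold; no real payments system works exactly like this, but the logic, that no single channel can authorise a transfer on its own, is the point.

```python
from dataclasses import dataclass

# Hypothetical illustration: a transfer request must clear several
# independent checks before money moves. The threshold, channel names,
# and field names are assumptions for this sketch, not any vendor's API.

CALLBACK_THRESHOLD = 50_000  # amounts above this require a callback

@dataclass
class TransferRequest:
    amount: float
    requested_via: str       # e.g. "video_call", "email"
    callback_verified: bool  # confirmed via a known, pre-registered phone number
    second_approver: bool    # approved by an independent colleague

def checks_failed(req: TransferRequest) -> list[str]:
    """Return the verification rules this request fails (empty = clear)."""
    failures = []
    # Rule 1: video or voice alone is never sufficient authorisation.
    if req.requested_via in {"video_call", "voice_call"} and not req.callback_verified:
        failures.append("no out-of-band callback for a call-based request")
    # Rule 2: large transfers need a callback regardless of channel.
    if req.amount > CALLBACK_THRESHOLD and not req.callback_verified:
        failures.append("amount above callback threshold")
    # Rule 3: every transfer needs a second, independent approver.
    if not req.second_approver:
        failures.append("missing second approver")
    return failures

# An Arup-style request: large, video-initiated, unverified.
req = TransferRequest(amount=200_000, requested_via="video_call",
                      callback_verified=False, second_approver=False)
print(checks_failed(req))
```

Run against an Arup-style scenario, the request fails all three rules; a deepfake can fake the face on the call, but it cannot answer a callback to a phone number the fraudster does not control.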

Regulators are moving. But are supply chains?

Singapore’s Monetary Authority released a comprehensive information paper on deepfake cyber risks in September 2025. South Korea enacted the AI Basic Act, effective January 2026, the first comprehensive AI framework law in the region. Hong Kong’s HKMA launched a GenAI Sandbox with 27 AI use cases across 20 banks, several explicitly focused on deepfake defence.

But much of this regulatory activity is aimed at financial institutions. Supply chain operators and procurement teams sit one step removed from these frameworks. And they are no less exposed.

Dmitry Volkov, CEO of Singapore-headquartered Group-IB, noted in the firm’s 2026 High-Tech Crime Trends Report: “Today’s cyber threats aren’t isolated events. They’re links in a supply chain attack ecosystem, where one compromise can reach thousands of downstream victims.” He added that “AI did not create supply chain attacks. It has made them cheaper, faster, and harder to detect.”

Allaparthi’s paper helps explain why. He mapped the problem against Gartner’s supply chain maturity model, a five-level framework that describes how integrated a company’s operations actually are. The insight is simple, even if the jargon is not: the more connected your departments are, the harder it is for a fraudster to slip through.

At the lowest level, which Gartner calls “siloed management,” each department works on its own. Finance pays invoices. Procurement manages suppliers. IT watches for cyber threats. They do not, however, share information. So when a supplier suddenly asks for payment to a new bank account, nobody in finance knows that IT flagged an unusual login on that supplier’s email two days ago.

At the middle levels, companies have started connecting these functions. A payment request from a vendor triggers an automatic check against that vendor’s usual banking details, communication patterns, order history. The kind of basic cross-referencing that, as Allaparthi wrote, allows AI to “synchronize workflows across silos” and flag something suspicious before money leaves the building.

At the top, what Gartner calls “ecosystem orchestration,” a company’s internal systems are wired into its suppliers’ systems too. Data flows both ways. Allaparthi described these environments as ones where “AI enables predictive analytics and fosters collaborative platforms, accelerating maturity.” In plain terms: a deepfake call asking to reroute a payment would be checked not just against internal records, but against the supplier’s own systems. The request would fail multiple automated tests before a human even saw it.
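The middle-level cross-referencing can be sketched as a lookup against a vendor’s known profile. Everything here, the record shape, the field names, the invoice range, is an invented assumption for illustration; production systems would draw these from an ERP or vendor master file.

```python
# Hypothetical sketch of the mid-maturity cross-check described above:
# a payment request is compared against the vendor's known banking
# details and order history before any human approves it.

# Invented vendor master records for the sketch.
vendor_records = {
    "acme-logistics": {
        "iban": "HK00-EXAMPLE-123",
        "typical_invoice_range": (5_000, 60_000),
    },
}

def flags_for(vendor_id: str, iban: str, amount: float) -> list[str]:
    """Return anomaly flags for a payment request (empty = nothing unusual)."""
    record = vendor_records.get(vendor_id)
    if record is None:
        return ["unknown vendor"]
    flags = []
    if iban != record["iban"]:
        flags.append("bank account differs from records")  # the classic reroute scam
    low, high = record["typical_invoice_range"]
    if not low <= amount <= high:
        flags.append("amount outside historical range")
    return flags

# A deepfake-driven "please pay our new account" request trips both checks:
print(flags_for("acme-logistics", "HK99-NEW-ACCOUNT", 499_000))
```

This is exactly the kind of request in the Singapore case above: a plausible-sounding instruction to send an unusual amount to an unfamiliar account, which even a basic automated cross-check would flag.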

Signs of a response

HR departments across the region are starting to respond. Nanyang Technological University (NTU) Singapore recently signed a research collaboration with identity verification firm Sumsub to develop watermarking techniques that prevent deepfake generation, the first initiative of its kind in Asia-Pacific. Singapore’s GovTech has built INDEPTH, a deepfake detection hub for government agencies.

And in the private sector, firms like DeepFaic, whose detection models were developed with Singapore’s A*STAR, now offer deepfake screening for video job interviews and real-time virtual meeting verification, with obvious applications for remote hiring and supply chain partner calls alike.

On the counterfeit detection side, the Siemens-Cybord partnership that Allaparthi documented has shown what is possible when AI is deployed to inspect physical goods at the point of assembly, rather than relying solely on supplier documentation.

But not every sector is as quick to move.

Three years ago, a video call from a supplier or a WhatsApp message from a freight partner was not a security decision. Today, each of them is. And the department best placed to train people to question what they see and hear is not IT, but HR.