Last, now, next: The trends shaping fraud and financial crime

Financial crime moves fast, and the tools to fight it have to move faster. Over the last five years, the methods used by criminals have grown more sophisticated, exploiting gaps in controls, cross-border transactions, and emerging technologies.

According to the UK government’s Fraud Strategy 2026-2029, “fraud against individuals and business is the largest crime type in the UK and cost the economy £14.4 billion in 2023-2024.” For businesses, staying ahead now means acting before the fact, not reacting after it.

AI-powered tools, like ID-Pal, are now the frontline, giving compliance teams the speed and precision needed to detect, analyse, and respond to threats in real time and before they hit the bottom line. In this article, we explore how fraud and financial crime have evolved over the last five years, how they look today, and what firms need to watch for in 2026 and beyond.

The shifts of the past five years

Looking back, one of the most significant trends was the rise of digital-first fraud. Over the last five years, cybercriminals have moved aggressively into areas traditionally dominated by physical or paper-based attacks.

Account takeover, phishing schemes, and digital fraud became more sophisticated, exploiting gaps in online banking authentication and payment systems. For compliance teams, this meant not just monitoring transactions and changes in customer risk profiles but also keeping tabs on increasingly complex digital footprints.

Another notable trend was the globalisation of money laundering. Criminals became adept at layering transactions across borders, using shell companies, trade-based laundering, and cryptocurrencies to obscure illicit proceeds. As business became more global, growing firms often found themselves navigating multiple jurisdictions at once, needing to satisfy domestic regulatory expectations while contending with the opacity of foreign systems.

Consequently, regulators have stepped up their scrutiny. The Financial Conduct Authority (FCA) and the Joint Money Laundering Intelligence Taskforce (JMLIT) increasingly signalled that gaps in due diligence, controls, or monitoring, and delays in reporting, would carry reputational and legal consequences.

As a result, regulators have handed out some of the largest fines for non-compliance on record:

  • Binance ($4.3bn, 2023): Accused of allowing transactions for illicit actors and failing to file Suspicious Activity Reports (SARs).
  • Danske Bank ($2bn, 2022): Settled with US/European authorities over a massive money laundering scandal in its Estonian branch.
  • OKX ($500m, 2025): Fined for failing to implement adequate KYC/AML systems, resulting in unlicensed operations and lack of sanctions tracking.

Financial crime hasn’t been confined to high-value transactions, either. The past five years revealed a growth in low-level but high-volume scams, targeting both consumers and SMEs. From invoice fraud to social engineering attacks, criminals focused on exploiting the human element, knowing that even the most sophisticated automated controls could be bypassed by an unsuspecting employee.

And yet, while fraud and financial crime increasingly moved online, many firms’ controls were still rooted in older ways of working. Traditional onboarding processes, manual reviews, and siloed systems were common across banks, fintechs and payment providers.

The shifting face of fraud today

Fraud now sits at the centre of financial crime risk in the UK. For many firms, it represents the largest source of financial loss, operational disruption and reputational damage. A UK Finance report suggests that fraud losses in 2025 surpassed the £1bn recorded in 2024, and the trajectory isn’t slowing into 2026 and beyond.

AI has changed how attacks are created and delivered. Deepfake technology can replicate voices and video with unsettling accuracy. Phishing emails can be generated in seconds, tailored to match internal language, reporting lines and tone. This technology has shifted fraud from broad targeting to something far more precise.

Attack methods are also becoming more technical at the point of entry. Injection attacks are being used to manipulate inputs into systems, allowing attackers to bypass controls or alter how data is processed. Camera bypass techniques and presentation attacks are targeting biometric verification, using pre-recorded or AI-generated visuals to trick identity checks that once felt reliable.

Alongside AI-driven attacks, another shift has taken hold. Crime-as-a-service has turned financial crime into a supply chain. Tools, infrastructure and expertise are packaged and sold, making it easy to produce convincing fake identity documents at scale. For a relatively small cost, criminals can generate passports, driving licences and proof of address that are realistic enough to pass basic checks. The scale of this is already visible.

In July 2025, a 21-year-old student in the UK was arrested for selling ‘phishing kits’ linked to around £100 million in fraud, highlighting how accessible these tools have become.

The same model now applies to identity and document fraud, where ready-made templates and AI-generated images and data allow individuals with limited technical skill to create synthetic or fully fabricated identities quickly. That level of access to fake identities increases both the volume and quality of attacks, putting more pressure on onboarding and verification controls that aren’t up to scratch.

In summary

The signs that used to indicate fraud aren’t as clear anymore. Identities can be faked, messages can be generated, and documents can be forged with remarkable accuracy, making surface-level checks almost redundant.

AI-driven phishing attacks, synthetic identities, and insider-enabled fraud are evolving faster than traditional, manual processes can keep up with, pushing compliance teams away from those approaches altogether.

Today, fraud detection and prevention rely heavily on having the right technology in place. When identities, documents and even biometric signals can be manipulated, basic checks don’t go far enough. AI-powered ID&V solutions like ID-Pal can assess document authenticity, detect signs of tampering, and forensically analyse biometric data to pick up on deepfakes, presentation attacks, synthetic inputs, and much more.
