
2025 Predictions: The Evolving Fraudster Economy

Organizations must adopt multi-layered strategies using AI, behavioral analysis, and dynamic verification systems to stay ahead of increasingly agile fraud networks.

Industry Perspectives

January 6, 2025

4 Min Read
[Image: a computer keyboard with a key labelled "Fraud". Credit: Alamy]

By Ilya Brovin, Chief Growth Officer at Sumsub

While digital access affords many benefits, it also creates new risks. In 2024, roughly one in every 100 users was involved in a fraud network. Why is this happening? As we move toward total digitization, the playing field gets wider and the barrier to entry gets lower and cheaper.

In 2024, the fraud trends of previous years continued: identity theft, deepfakes, fake documents, and money mules. The key difference this year was the speed and accessibility of these fraud types. We view this proliferation as the rise of the “fraud-as-a-service” (FaaS) economy. Today, for an average outlay of about $1,000 per month, fraudsters can cause upwards of $2,500,000 in damage to businesses, according to Sumsub’s 2024 Fraud Report. Fraudsters are no longer lone dark-hooded figures but modern fraud rings and networks investing in tools and infrastructure at scale.

The rise of FaaS means even those with little expertise can execute sophisticated, hybrid attacks at a rate never seen before. Less-than-expert fraudsters outsource operations to specialized providers and purchase ready-made tools like malware, stolen credentials, and phishing kits to scale their operations. And the cost to enter? Shockingly low.


Depending on the depth of information, fraudsters can acquire the basic know-how for as little as $50. And since a single phishing operation can target thousands of users, the sheer volume of attacks yields significant returns even though not every attempt succeeds, making the initial investment seem like pocket change.

The Fraud Landscape Ahead

The proliferation of AI and readily available AI tools will, no doubt, remain one of the biggest drivers of fraud in 2025. But it may also be the solution. In 2025, companies will need to go beyond basic adoption, taking the fundamental building blocks of AI, algorithms and data, and using them to analyze data patterns and user behavior to strengthen security across digital landscapes. From onboarding and monitoring to account management, AI-driven analysis is a key component of protection against the growing fraudster economy.
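In practice, analyzing user behavior for fraud can start with something as simple as flagging accounts whose activity deviates sharply from the population baseline. A minimal sketch (hypothetical feature and threshold choices, not a description of any vendor's actual pipeline):

```python
from statistics import mean, stdev

def flag_anomalous_users(events_per_user, z_threshold=3.0):
    """Flag users whose event volume deviates sharply from the baseline.

    events_per_user: dict mapping user_id -> event count in a time window.
    Returns the set of user_ids whose z-score exceeds the threshold.
    """
    counts = list(events_per_user.values())
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return set()  # no variation in the population, nothing stands out
    return {
        user for user, n in events_per_user.items()
        if (n - mu) / sigma > z_threshold
    }

# Example: one account submitting far more signups than its peers,
# a typical signature of automated, fraud-ring behavior.
activity = {f"user{i}": 5 for i in range(50)}
activity["user50"] = 400
print(flag_anomalous_users(activity))  # → {'user50'}
```

Real deployments would replace the single event-count feature with richer behavioral signals (typing cadence, navigation patterns, transaction graphs) and a trained model, but the principle of scoring deviation from a learned baseline is the same.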

Beyond leveraging AI, fraudsters are also exploiting gaps in ID verification systems. Nearly 87% of end-users prefer online services with strict verification and anti-fraud measures. Traditional verification systems built on KYC and onboarding checks were once considered robust, but more than 70% of fraud now occurs after the onboarding stage. Given today’s rate of technological turnover, verification must be ongoing, dynamic, and continually adapted to the evolving fraud landscape. Enter: AI-powered defense mechanisms.
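Because most fraud happens after onboarding, verification cannot be a one-time gate. One way to sketch ongoing, dynamic checks is session risk scoring that triggers step-up re-verification when risk accumulates. The signal names and weights below are illustrative assumptions, not any specific vendor's API:

```python
# Illustrative risk signals and hand-picked weights; a production
# system would learn these from labeled fraud outcomes.
SIGNAL_WEIGHTS = {
    "new_device": 0.3,
    "impossible_travel": 0.5,      # login geography inconsistent with last session
    "unusual_transaction": 0.4,
    "credential_stuffing_ip": 0.6,
}

STEP_UP_THRESHOLD = 0.7  # above this, re-verify instead of trusting onboarding

def session_risk(signals):
    """Sum the weights of the signals observed in this session, capped at 1.0."""
    return min(sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals), 1.0)

def decide(signals):
    """Return the action for a session: allow, or step-up re-verification."""
    if session_risk(signals) >= STEP_UP_THRESHOLD:
        return "step_up_verification"
    return "allow"

print(decide({"new_device"}))                       # → allow
print(decide({"new_device", "impossible_travel"}))  # → step_up_verification
```

The point of the sketch is the architecture, not the numbers: checks keep running for the life of the account, and a passed onboarding check never grants permanent trust.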


Trifect(a) to Protect

While cybersecurity and fraud prevention have historically been separate functions within a corporate structure, the fraud landscape ahead requires a fusion of AI, cybersecurity, and identity fraud prevention. Smart organizations recognizing this shift will merge these functions into a comprehensive defense strategy, incorporating capabilities like API inspection and digital risk protection alongside AI defenses to protect both the organization and its users.

However, AI fraud-detection models are not themselves immune to fraudulent manipulation. The key is a combination of defenses and well-designed, personalized AI models trained to recognize specific fraud patterns and, most importantly, the underlying intent behind them.
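One reason a combination of defenses helps is that an attacker who manipulates a single model still has to evade the others. A toy sketch of score fusion across independent detectors (hypothetical detector names and thresholds; real deployments would calibrate the combination on historical outcomes):

```python
def fused_verdict(detector_scores, block_threshold=0.6):
    """Combine independent detector scores so no single model decides alone.

    detector_scores: dict of detector name -> fraud probability in [0, 1].
    Blocks on a high mean score, or when at least two detectors
    independently alarm (defense in depth against model evasion).
    """
    scores = list(detector_scores.values())
    mean_score = sum(scores) / len(scores)
    high_alarms = sum(1 for s in scores if s >= block_threshold)
    if mean_score >= block_threshold or high_alarms >= 2:
        return "block"
    return "review" if mean_score >= 0.3 else "allow"

# An attacker fools the document-forgery model with a clean fake,
# but behavioral and device signals still give them away.
scores = {"doc_forgery_model": 0.05, "behavior_model": 0.8, "device_model": 0.7}
print(fused_verdict(scores))  # → block
```

Averaging alone would have let this session through (mean ≈ 0.52); the second rule, requiring agreement between any two detectors, is what makes evading one model insufficient.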

As AI is increasingly used both to commit and to fight fraud, an issue of personal data protection arises, and organizations leveraging AI in this capacity inevitably need to take responsibility for it. This is where a baseline AI regulatory framework becomes paramount.


With the expected rise in institutional adoption of decentralized finance and other emerging digital-first industries still lacking formal regulation, the onus lies on companies to implement protections for both the organization and its users while government regulation tries to keep up.

The battle between fraudsters and organizations will continue in 2025, so staying agile and turning technological advances to one’s advantage is key. Businesses that invest in multi-layered prevention strategies combining AI, behavioral analysis, and robust verification methods will prevail against ever-evolving fraud schemes in the years to come.

About the Author

Ilya Brovin joined Sumsub in 2021 and was appointed Chief Growth Officer in 2023. Ilya has over 20 years of experience in finance and private equity and vast experience working with tech and financial services companies as an investor and board member/observer. At Sumsub, Ilya is responsible for growth and strategy, including key sales, strategic partnerships, fundraising, investor relations, and M&A. Ilya holds a degree in Economics & Finance and an MBA from Harvard Business School. He currently lives in London, UK, where one of Sumsub’s international offices is located. Ilya has a passion for Crypto and Web 3.0 topics, and he is a seasoned expert in crypto operations and regulations worldwide.
