
XAI-4-AML Chair: how can AI help combat money laundering?

May 12, 2025 - Big Data & AI - Cybersecurity

Since 2020, Télécom Paris, part of the Carnot TSN institute, has hosted the XAI-4-AML research chair. The chair aims to understand how artificial intelligence could help improve the fight against money laundering and the financing of terrorism, by studying the obstacles to its adoption, in particular questions surrounding the explainability of AI.

The fight against money laundering and the financing of terrorism (LCB-FT) requires financial institutions to implement systems to detect potentially illegal transactions. Failure to do so exposes these institutions to heavy penalties, which can run into the tens of millions of euros. As a result, they mobilize considerable resources - both financial and human - to comply with the law and contribute to LCB-FT. Yet the results are sobering: these resources enable the seizure of just 5% of funds linked to money laundering and terrorist financing.

The limits of deterministic systems in LCB-FT

How can such gaps be explained? "LCB-FT is a complex job, firstly because of the agility of criminal activities," notes Winston Maxwell, Professor of Law at Télécom Paris. "The individuals involved in these transactions adjust quickly to the regulations and systems put in place by financial institutions. What's more, they can spread their funds across several banks, each of which can then only see part of the problem."

Current monitoring systems are based on a deterministic approach. This involves a list of several hundred predefined scenarios, to which multiple rules are associated, feeding algorithms whose aim is to raise an alert for each case considered suspicious. These alerts are then processed by human operators, at several levels, who can then report a case to Tracfin, the intelligence service of the French Ministry of the Economy and Finance. Finally, if successive checks confirm the initial suspicion, the process can lead to legal proceedings. It's a long and complex decision-making chain, all the more so as operators have to deal with an enormous number of alerts every day, the overwhelming majority of which are actually false positives.
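The deterministic approach described above can be sketched in a few lines. This is a minimal illustration, not any bank's actual system: the scenario names, rule conditions, and thresholds are all hypothetical, chosen only to show how predefined scenarios map transactions to alerts.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount_eur: float
    is_cash: bool
    country: str

# Hypothetical scenarios illustrating the rule-based approach: each predefined
# scenario is a predicate, and any match raises an alert for human review.
RULES = {
    "large_cash_deposit": lambda t: t.is_cash and t.amount_eur >= 10_000,
    "high_risk_country": lambda t: t.country in {"XX", "YY"},  # placeholder codes
}

def screen(transaction: Transaction) -> list[str]:
    """Return the names of all scenarios the transaction triggers."""
    return [name for name, rule in RULES.items() if rule(transaction)]

# A large cash deposit triggers the corresponding scenario.
alerts = screen(Transaction("FR-001", 12_000, is_cash=True, country="FR"))
```

Real systems run several hundred such scenarios, which is precisely why operators face the flood of mostly false-positive alerts the article describes.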

So, how can we improve the efficiency of LCB-FT systems? Can artificial intelligence help? If so, how and under what conditions? These questions were raised by David Cortés, then head of data and AI at PwC. In 2020, he approached Télécom Paris, a move that led to the creation of the "Explainable Artificial Intelligence for Anti-Money Laundering" (XAI-4-AML) chair, headed by Winston Maxwell and David Bounie, head of the school's Economic and Social Sciences (SES) department. The project has also been joined by Dataiku, a company specializing in AI solutions, Crédit Agricole, and the Fintech-Innovation division of the Autorité de contrôle prudentiel et de résolution (ACPR), the French supervisory authority for financial institutions.

What role can AI play in LCB-FT?

"Originally, the Chair focused on questions of AI explainability, but we quickly realized that this issue could not be tackled in isolation," relates Winston Maxwell. The XAI-4-AML team therefore focused on the potential added value of AI in LCB-FT, the challenges facing the technology, and the obstacles limiting its adoption, including the imperative of explainability.

To begin with, the researchers examined the role that AI could play in surveillance systems. "Today, some banks are using machine learning models to handle some of the alerts generated by traditional algorithms," says Winston Maxwell. "In general, then, we are seeing collaboration between deterministic systems and AI tools. Will the latter eventually be able to completely replace traditional approaches? From both a technical and a regulatory point of view, this is not certain."
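The hybrid setup Maxwell describes - machine learning layered on top of rule-generated alerts - often takes the form of alert triage. The sketch below illustrates the idea under stated assumptions: the fixed weights stand in for a model a bank would actually train on labeled historical alerts, and the feature names are invented for the example.

```python
import math

# Hypothetical weights standing in for a trained model that scores alerts
# produced by the deterministic rules, so operators review the riskiest first.
WEIGHTS = {"amount_eur": 0.0002, "is_cash": 1.5, "new_customer": 0.8}
BIAS = -4.0

def risk_score(alert_features: dict) -> float:
    """Logistic score in (0, 1): higher means review sooner."""
    z = BIAS + sum(WEIGHTS[k] * float(v) for k, v in alert_features.items())
    return 1 / (1 + math.exp(-z))

# Rule-generated alerts, reordered by model score rather than arrival time.
queue = sorted(
    [{"amount_eur": 12_000, "is_cash": 1, "new_customer": 1},
     {"amount_eur": 3_000, "is_cash": 0, "new_customer": 0}],
    key=risk_score,
    reverse=True,
)
```

Note that the model here only reorders work; the deterministic rules still decide what becomes an alert, which matches the division of labor the article describes.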

Artificial intelligence is also currently being used to support human operators by speeding up the analysis of alerts received. Could this work eventually be taken over entirely by technology? "In an area as sensitive as LCB-FT, decisions will always have to be made by humans," says Winston Maxwell. "Reporting a case to Tracfin can have serious repercussions for the individual concerned. Leaving this responsibility to a machine would therefore be contrary to fundamental rights principles."

What is "explainable" AI?

Consequently, AI must remain confined to a decision-support role. But even for such a function, its adoption comes up against the constraints imposed by the regulator. "Financial institutions are required to map the risks associated with their customers," explains the chair's co-holder. "They then have to prove that they have provided a scenario corresponding to each identified risk, and that their system truly takes all these cases into account." AI models, however, do not work with predefined rules but rather seek to identify abnormal behavior. At present, therefore, they cannot meet these requirements.

But it doesn't have to be that way. Indeed, the XAI-4-AML Chair was precisely the opportunity to bring financial institutions and regulators together to find a new common approach. Through the workshops co-organized by the various partners, the explainability of AI emerged as paramount for the confidence of all players. But what exactly does the term mean? "The explainability of an automatic surveillance system involves attaching to each alert the reasons that led the algorithm to consider the movement suspicious," explains Winston Maxwell. "For example, indicating that an alert was raised because of a cash deposit of 10,000 euros by an individual who does not work in a profession handling large sums of cash." This contrasts with AI models, which generally operate as "black boxes". However, work is underway to improve the explainability of these algorithms, notably by Pavlo Mozharovskyi, a teacher-researcher at Télécom Paris.
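Maxwell's example - a large cash deposit by someone whose profession does not handle cash - can be made concrete. The sketch below shows what "attaching reasons to an alert" might look like in code; the threshold and the list of cash-intensive professions are illustrative assumptions, not regulatory values.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    account: str
    reasons: list[str]  # human-readable grounds, shown to the reviewing operator

CASH_THRESHOLD_EUR = 10_000  # illustrative threshold from the example above
CASH_INTENSIVE_PROFESSIONS = {"retailer", "restaurateur"}  # hypothetical list

def explain_cash_deposit(account: str, amount_eur: float, profession: str):
    """Flag a large cash deposit and record *why* it is suspicious, so the
    alert carries its own explanation rather than a bare score."""
    reasons = []
    if amount_eur >= CASH_THRESHOLD_EUR:
        reasons.append(
            f"cash deposit of {amount_eur:,.0f} EUR meets the "
            f"{CASH_THRESHOLD_EUR:,} EUR threshold"
        )
        if profession not in CASH_INTENSIVE_PROFESSIONS:
            reasons.append(
                f"profession '{profession}' does not typically handle large cash sums"
            )
    return Alert(account, reasons) if reasons else None
```

A rule-based system produces such reasons for free, since each triggered rule is itself a reason; the research challenge the article points to is producing comparably faithful reasons from a black-box model.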

Furthermore, the ACPR would like to see a higher level of explainability, i.e. for new systems to be able to demonstrate their effectiveness. Would this be enough to win the regulator's confidence? "The truth is that today, the ACPR is unable to precisely define its AI requirements, as the issue is new," notes Winston Maxwell. "Ideally, it would probably prefer the old systems to continue to coexist with the new models. But this would entail additional costs for the banks..."

Is the explainability of AI always beneficial?

The work of the XAI-4-AML Chair has also raised a more unexpected question: is the explainability of AI always desirable? The results of a thesis by Astrid Bertrand - then a doctoral student at Télécom Paris and now a scientific project manager at the European Commission - offer a nuanced perspective. Her approach included a study of 256 volunteers, to whom a chatbot recommended life insurance products according to their profile, following an exchange in natural language. The tool, however, sometimes deliberately proposed unsuitable offers while justifying its suggestion. The result: not only did the explanations provided fail to help users perceive that the proposal was inappropriate, but dialoguing with the machine increased participants' confidence in the tool, even to the detriment of their own interests.

"This experiment was not directly linked to LCB-FT, but its results can be applied to the explainability of AI in this field," points out Winston Maxwell, who supervised the thesis. "If a machine learning model provides too many explanations to a human auditor, the auditor is likely to place excessive trust in the tool and lower their vigilance when reviewing its output." Consequently, to preserve human analytical capacity, Astrid Bertrand's thesis suggests favoring AI interfaces that promote user autonomy, notably by stimulating curiosity and skepticism.

The XAI-4-AML Chair came to an end in early 2025, with the completion of a thesis addressing the explainability of AI from the perspective of the philosophy of science. The current partners are, however, planning to renew the structure, which new financial players could join. "There is still much to be done to stop the scourge of money laundering, so the question of AI's contribution to this fight is far from exhausted," says Winston Maxwell. "What's more, the lessons relating to LCB-FT can be applied to many other regulated banking areas that require control systems and analysis by human operators."
