Improving AML processes using Artificial Intelligence – the challenge of false positives

Over the past couple of years we have successfully implemented AML solutions for financial, gaming, and payment service organizations. Our flagship product, ComplyRadar, is one of the leading products in this area, focusing specifically on ongoing transaction monitoring.

In this blog we will focus on what, in our experience, is the primary problem in AML transaction monitoring: false positives. The intention is to describe one particular AI feature that we recently introduced in ComplyRadar to alleviate this problem. In future articles we will cover other related AI features.

First, we will try to contextualise the problem. Due to the growing number of fines issued, there is now an increased drive to hold compliance officers, senior executives, and board members personally liable for failing to have an adequate AML programme and transaction monitoring system (TMS). Any alert generated and not closed by the TMS must be reviewed by a human. Current technologies cannot assess a transaction in context, and without human intervention it is difficult, almost impossible, to adapt to the rapidly evolving patterns used by money launderers and terrorists.

Many TMSs deployed today are entirely rule-based. The rules use fixed risk scores for the customer, product, and geography involved, and different thresholds are applied within the same rule depending on the risk score. The result is that these systems generate huge numbers of false positive alerts, each of which must then be reviewed by a human investigator within strict timeframes. Most banks experience a false positive rate of over 80%.
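To illustrate why such rules are blunt instruments, here is a minimal sketch of a risk-tiered threshold rule of the kind described above. The tier names, threshold values, and field names are purely illustrative assumptions, not ComplyRadar's actual rule format:

```python
# Hypothetical risk-tiered thresholds: the same rule fires at a lower
# transaction amount for higher-risk customers.
RISK_THRESHOLDS = {"low": 10_000, "medium": 5_000, "high": 1_000}

def alert_on_amount(transaction: dict) -> bool:
    """Return True if the amount exceeds the threshold for the customer's risk tier."""
    threshold = RISK_THRESHOLDS[transaction["risk_tier"]]
    return transaction["amount"] > threshold

# Every transaction above its fixed threshold is alerted, with no regard
# for context -- the root cause of the high false positive rate.
alerts = [t for t in [
    {"amount": 12_000, "risk_tier": "low"},
    {"amount": 2_000, "risk_tier": "high"},
    {"amount": 2_000, "risk_tier": "low"},
] if alert_on_amount(t)]
```

A perfectly legitimate large payment from a "high" risk geography trips the rule just as readily as a genuinely suspicious one, since the rule sees only the amount and the static tier.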

A high number of false positives negatively impacts the efficiency of AML teams. Once a transaction is alerted as potentially suspicious, this is typically where the AML team's process begins. Siloed systems require investigators to access multiple systems to gather information on the customer and their transaction history to determine whether a transaction is suspicious. Additionally, the investigator may keep monitoring the earmarked customer for a period of time in order to better formulate an opinion of the underlying activities. All these activities, increasing regulatory demands, and the high number of false positives are putting a lot of stress on AML officers.

This is where we identified that AI can help improve existing AML processes. ComplyRadar is, in essence, a flexible rule engine that can monitor thousands of transactions a second. However, we wanted to add an intelligent layer that acts as a filter on top of the alerts generated by the rule engine, helping the AML officer hone in on those transactions that have the highest likelihood of resulting from illicit activity.

As a start, we framed the problem in a reinforcement learning context. Reinforcement learning is an AI approach in which an intelligent agent monitors and learns to take actions in an environment in order to maximise some form of cumulative reward. The reward is typically delayed, meaning that the actual success or failure of an action is not known immediately.

In our case, the agent monitors the results coming out of the rule engine and decides whether any alerted transactions are highly likely to be suspicious, assigning a risk probability to each transaction. The agent also takes contextual information into consideration, for example the customer profile, previous activity patterns, and geography. A human AML officer is then automatically directed to high-priority suspicious transactions and can kick off the investigation immediately.

Once the AML officer concludes the investigation, which might take days or weeks, and decides whether there is a solid case of illicit activity, the outcome of that decision is fed back to our AI agent as a delayed reward. At this point the agent adjusts its policy on how to act on similar transactions in the future, reinforcing its behaviour where it correctly flagged the transaction as suspicious, or tuning its actions in the case of failure. This makes learning an iterative process in which the agent continuously improves its policy and decisions by validating its actions against those taken by the human AML officer. The result is that with every cycle, the AI agent learns to reduce false positives.
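The feedback cycle above can be sketched in a few lines. This is a toy illustration only, using a simple online logistic-regression update as a stand-in for the agent's policy; the feature names, learning rule, and class structure are our own assumptions, not the actual ComplyRadar implementation:

```python
import math

class AlertPrioritiser:
    """Toy agent: scores alerted transactions with a risk probability,
    then adjusts its policy from the investigator's delayed verdict."""

    def __init__(self, n_features: int, lr: float = 0.1):
        self.weights = [0.0] * n_features  # start with no opinion
        self.lr = lr

    def risk_probability(self, features: list[float]) -> float:
        """Assign a risk probability to a transaction's contextual features."""
        z = sum(w * x for w, x in zip(self.weights, features))
        return 1.0 / (1.0 + math.exp(-z))

    def feedback(self, features: list[float], confirmed_illicit: bool) -> None:
        """Delayed reward: once the AML officer closes the case, nudge the
        policy towards (or away from) flagging similar transactions."""
        error = (1.0 if confirmed_illicit else 0.0) - self.risk_probability(features)
        self.weights = [w + self.lr * error * x
                        for w, x in zip(self.weights, features)]

# One cycle of the loop: score an alert, investigate, feed the verdict back.
agent = AlertPrioritiser(n_features=2)
features = [1.0, 0.5]  # e.g. normalised amount, geography risk (illustrative)
before = agent.risk_probability(features)
agent.feedback(features, confirmed_illicit=True)  # officer confirms the case
after = agent.risk_probability(features)
```

After the confirmed case, `after` exceeds `before`: the agent now ranks similar alerts higher, and a string of dismissed cases would push the score the other way, which is how the false positive rate falls cycle by cycle.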

Contact us to book your free demo of ComplyRadar.
