How is Behavioral Analytics Used to Predict New Account Opening Fraud?
New account opening fraud is a growing problem for financial services, and it makes the challenge of delivering a seamless customer experience without sacrificing risk and security measures that much harder. To help solve this, we invented Behavioral Intelligence.
Behavioral Intelligence sits at the intersection of predictive analytics and behavioral economics. We help our customers break the trade-off between experience and risk by bridging the gap between customer experience teams and risk and fraud teams with dynamic customer experiences.
We are less concerned with what a user answers to a particular question than with how they answered it. From there, we give our customers insight into how behavior is affected by the wording and ordering of questions, allowing them to optimize their applications and better understand the intent of their users.
What started years ago as a way to predict whether users would complete or abandon online surveys is now much more than that. From identifying medical and tobacco-usage nondisclosure to dynamically adding or removing friction during applications for some of the largest financial services companies in the world, it's safe to say Behavioral Intelligence has come a long way.
The most interesting part is how it is now being used in more of a "behavior-as-a-service" capacity. Customers are using our behavioral capture and proprietary real-time processing and signaling capabilities to predict whatever outcome they desire. We've found users' digital body language to be highly predictive of many different outcomes, and we've become relatively outcome-agnostic as a result.
With that said, one of the biggest problems facing insurers today is the rise in fraud, especially as they rush into their digital transformation efforts.
Predicting New Account Opening Fraud
After years of analyzing users' digital body language, identifying high-risk behavioral patterns, and honing our machine learning models, we're able to help carriers distinguish genuine from fraudulent customers far more precisely than traditional risk modeling does.
An easy way to think about it: much as poker players look for "tells," we look for the "tells" of high-risk and fraudulent users.
For instance, application fluency, or how familiar a user is with the application process, can indicate a higher probability that an application is risky or fraudulent.
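To make the idea concrete, here is a minimal sketch of how a fluency signal could be computed from per-field behavior. The field names, thresholds, and weights are all illustrative assumptions, not ForMotiv's actual model.

```python
# Hypothetical sketch: scoring "application fluency" from per-field dwell time
# and corrections. All thresholds/weights below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FieldEvent:
    field: str
    seconds_on_field: float   # dwell time before moving to the next field
    corrections: int          # backspaces / re-edits on this field

def fluency_score(events: list[FieldEvent]) -> float:
    """Return 0..1, where higher = unusually fluent (a possible fraud tell)."""
    if not events:
        return 0.0
    avg_dwell = sum(e.seconds_on_field for e in events) / len(events)
    avg_corrections = sum(e.corrections for e in events) / len(events)
    # Very fast, very clean completion suggests the user has done this before.
    speed = max(0.0, 1.0 - avg_dwell / 10.0)       # under ~10s/field trends toward 1
    cleanliness = max(0.0, 1.0 - avg_corrections)  # zero corrections trends toward 1
    return round(0.6 * speed + 0.4 * cleanliness, 3)

events = [FieldEvent("ssn", 2.1, 0), FieldEvent("dob", 1.8, 0), FieldEvent("income", 2.5, 0)]
print(fluency_score(events))  # a high score for a fast, correction-free session
```

In practice a score like this would be one feature among many fed to a trained model, not a standalone rule.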
To the untrained eye, application A would likely seem like a genuine customer, as almost all signs point to green. Instead, our digital polygraph shows that this user is displaying far greater familiarity with the application process than expected, which is common among fraudsters using stolen or synthetic identities. Application B, by contrast, is an example polygraph of a normal, genuine applicant.
But fluency is just one piece of the puzzle. Combined with other behaviors, such as how a user moves their mouse, whether they switch tabs and then paste into a form field, and how often they copy and paste, these signals make up the user's "digital body language," which can be used to predict intent.
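One way to picture "digital body language" is as raw UI events aggregated into features a downstream model can score. The event names and the specific features below are assumptions for illustration only.

```python
# Illustrative sketch: flattening raw behavioral events into a feature vector.
# Event type names ("paste", "tab_switch", etc.) are assumed, not a real schema.
from collections import Counter

def digital_body_language_features(events: list[dict]) -> dict:
    """Aggregate a session's raw UI events into simple count features."""
    kinds = Counter(e["type"] for e in events)
    return {
        "paste_count": kinds["paste"],
        "tab_switches": kinds["tab_switch"],
        "mouse_moves": kinds["mouse_move"],
        # A paste immediately after a tab switch is a classic
        # "looked the answer up somewhere else" tell.
        "paste_after_tab_switch": sum(
            1 for prev, cur in zip(events, events[1:])
            if prev["type"] == "tab_switch" and cur["type"] == "paste"
        ),
    }

session = [
    {"type": "mouse_move"}, {"type": "tab_switch"}, {"type": "paste"},
    {"type": "keypress"}, {"type": "paste"},
]
print(digital_body_language_features(session))
```

A real pipeline would add timing, ordering, and device context, but the shape is the same: events in, features out, model score back.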
It’s crucially important to keep an eye on these sophisticated cyber-attacks as they continue to rise.
Another example is their knowledge of personal data. When a genuine user is filling out an application, the personal information section is typically very straightforward: they don't need to think about how to spell their last name or what their zip code is. A fraudster misrepresenting themselves may display inconsistent typing patterns, spend longer on questions as they search a spreadsheet for the correct information, and copy, paste, or make corrections more frequently than a genuine user would.
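The hesitation signal described above can be sketched as a simple comparison: flag fields a genuine applicant answers automatically (name, zip code) when the dwell time far exceeds the form-wide norm. The field names and the 3x threshold here are illustrative assumptions.

```python
# Hedged sketch: flag hesitation on fields a genuine applicant answers without
# thinking. Field names and the 3x-median threshold are assumptions.
import statistics

def hesitation_flags(field_times: dict[str, float],
                     auto_fields=("last_name", "zip_code")) -> list[str]:
    """Return 'automatic' fields whose dwell time far exceeds the form-wide median."""
    baseline = statistics.median(field_times.values())
    return [f for f in auto_fields
            if f in field_times and field_times[f] > 3 * baseline]

times = {"last_name": 14.0, "zip_code": 2.0, "income": 4.0,
         "occupation": 5.0, "email": 3.0}
print(hesitation_flags(times))  # flags the suspiciously slow last-name field
```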
A final example is the use of bots to automate brute-force and phishing attacks. In that case, we have observed a number of consistent behaviors across carriers that indicate the likelihood of a user actually being a bot. Given the cat-and-mouse nature of bot detection, you'll need to set up a call with us to learn more about how we're solving that issue.
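While the specific detection logic is not disclosed, a widely known generic heuristic illustrates the category: bots often type with inhumanly uniform inter-keystroke timing, while humans show natural jitter. This is not ForMotiv's method, just a textbook example with an assumed threshold.

```python
# Generic illustration (not ForMotiv's detection logic): bots often exhibit
# near-zero variance in inter-keystroke intervals; humans do not.
import statistics

def looks_like_bot(keystroke_timestamps: list[float],
                   min_stdev_ms: float = 15.0) -> bool:
    """True if inter-key intervals are suspiciously uniform (assumed threshold)."""
    intervals = [b - a for a, b in zip(keystroke_timestamps, keystroke_timestamps[1:])]
    if len(intervals) < 2:
        return False
    return statistics.stdev(intervals) * 1000 < min_stdev_ms

bot = [0.00, 0.05, 0.10, 0.15, 0.20]    # perfectly uniform 50 ms cadence
human = [0.00, 0.12, 0.31, 0.38, 0.71]  # natural jitter
print(looks_like_bot(bot), looks_like_bot(human))  # → True False
```

Real bot detection layers many such signals (mouse entropy, device fingerprints, challenge responses) precisely because any single heuristic is easy to evade.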
Those are just a few of the many examples we have, but the data speaks for itself: it is estimated that roughly 64% of confirmed account opening fraud cases showed a lack of familiarity with the personal data being entered.
ForMotiv's secret sauce is not only our ability to identify these behaviors but the fact that we enable you to react to applicants dynamically based on their behavior. Retroactive analysis and spot-checking of policies were sufficient in the past, but the future is real-time analysis and an application process that adapts to the individual user, both good and bad.
If you want to learn more about ForMotiv-enabled dynamic experiences, let's set up some time to chat.