The battle for the bots: tackling the chatbot challenge

Chatbots have evolved significantly – and rapidly – in the last few years. They’re becoming increasingly ubiquitous, intelligent and useful. They’re also becoming something of a risk.

The evolution of chatbots might not have reached the point of passing a Turing test, but it has reached a level where the end user may not realize they aren't interacting with a human being. That level of sophistication means people are more comfortable than ever interacting with chatbots and AIs.

Unfortunately, this hasn’t escaped the attention of cyber criminals and fraudsters, and it’s a certainty that chatbots will become one of their new attack vectors in the coming year.

New steps on a familiar path

The cutting edge slices both ways. With competition on the rise in every marketplace, businesses are under constant pressure not only to bring innovative new products and services to market, but also to persuade their customers to adopt them. Unfortunately, there's another demographic with a track record for early adoption: the bad actors seeking to turn technological advances to their own dubious ends.

With a vast array of tools and skills at their disposal, bad actors are highly aware of the proliferation of advanced chatbots. Their skill sets encompass everything from scripted attacks to information gathering and social engineering, so it's no surprise that chatbots have already been identified as an attack vector for fraudsters looking to manipulate the technology to gain access to customers' information.

A far-reaching concern

This is every bit as serious as it sounds. It's an oft-quoted adage that data is an organization's most valuable asset, but the importance a given business places on fraud prevention can vary according to other priorities, such as compliance, cost, and customer service. The last of these is particularly relevant to businesses that are wary of additional security measures adding friction to the customer experience, and that consequently accept a certain level of fraud as the price of keeping customers happy and engaged.

However, the potential impact of attacks on chatbots touches on all of these areas. Granted, chatbots can enhance the customer experience and help bring down costs by reducing reliance on contact center staff. But even if a business is underwritten against the initial financial impact of fraud, it has to consider the reaction of its customers: the loss of trust and the reputational damage that will inevitably follow, not to mention the possibility of steep fines for compliance violations.

It’s here that the less-visible damage is done. The business or the customers may get their money back; whether those customers remain customers is another matter altogether. And with a fifth of consumers already affected by identity fraud in 2020, it’s a problem that businesses need to tackle urgently.

Lessons from the front line

The rapid acceleration of the shift to online-first in many industry sectors has been matched by an increase in the volume of contact center traffic, making intelligent chatbots an attractive option for managing the load.

Contact centers are, of course, no strangers to attacks from bad actors, and these attacks come in a variety of guises. Fraudsters are increasingly adept at obtaining personal information from sources such as data breaches and identity theft, and often set up their own contact centers expressly for the purpose of mining data. Research and advisory firm Aite Group found that 61% of all fraud cases can be traced back to a contact center.

This makes the many businesses that rely solely on static Knowledge-Based Authentication (KBA), or 'shared secrets', particularly vulnerable to fraud. Those risks are potentially amplified further if a business relies partly or wholly on chatbots.
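To see why shared secrets alone are so fragile, consider a minimal sketch of static KBA in Python. The field names and values here are invented for illustration, not drawn from any vendor's implementation; the point is that the check validates what is submitted, never who is submitting it.

```python
# Hypothetical sketch of static knowledge-based authentication (KBA).
# Field names and values are invented; this illustrates the weakness,
# not any vendor's implementation.

STORED_SECRETS = {
    "date_of_birth": "1984-03-12",
    "mothers_maiden_name": "smith",
    "last_four_of_account": "4821",
}

def kba_verify(answers: dict) -> bool:
    """Grant access if every shared secret matches exactly.

    The check has no notion of *who* is answering: anyone holding the
    same data (e.g. from a breach) authenticates as the real customer.
    """
    return all(
        answers.get(field, "").strip().lower() == secret
        for field, secret in STORED_SECRETS.items()
    )

# A fraudster replaying breached data passes identically to the customer:
breached_answers = {
    "date_of_birth": "1984-03-12",
    "mothers_maiden_name": "Smith",
    "last_four_of_account": "4821",
}
print(kba_verify(breached_answers))  # True -> access granted
```

Nothing in this flow distinguishes the genuine customer from an imposter replaying the same answers, which is precisely the gap a chatbot inherits when shared secrets are its only line of defense.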

For bad actors using socially engineered data to impersonate a person or a business directly (or to create a synthetic ID based on several genuine identities), interacting with a contact center is often the biggest hurdle. Human agents and voice recognition systems can be trained to spot and stop fraudulent access attempts; even so, some fraudsters still slip through the net.

Chatbots are another matter entirely. Most are AI-driven, without a human being anywhere in the loop. As a result, even the most advanced chatbot won't, in isolation, be able to 'see' through a concerted fraud attempt backed by seemingly authentic data.

That doesn’t mean that businesses looking to invest in chatbots should accept the status quo of a security risk; far from it. There’s still time to pre-emptively defend against the threat of attacks focused on chatbots; but it’s dwindling.

Fighting the bot war

Fortunately, the lines of defense that will protect today's organizations from tomorrow's chatbot attacks already exist.

Chatbots on their own are unable to differentiate between a genuine customer and an imposter; with Callsign's Positive Identity technology in place, it's a different matter. Our Intelligence-Driven Authentication (IDA) analyzes behavioral biometrics to ensure that only genuine users gain access.

As well as considering factors such as device location and ID, Callsign's technology analyzes inherence factors such as keystroke dynamics: not just the memorable data itself, but how it's typed or entered.
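As an illustration of the idea (and emphatically not Callsign's actual model), the sketch below scores keystroke timing in Python. It derives two classic keystroke-dynamics features, dwell time (key press to release) and flight time (release to next press), and compares them against a previously enrolled profile; all numbers and thresholds are invented for the example.

```python
from statistics import mean

# Hypothetical keystroke-dynamics check: score *how* something was
# typed, not just *what* was typed. Features, numbers, and the
# threshold are illustrative assumptions only.

def timing_features(events):
    """events: list of (key, press_ms, release_ms) tuples in typing order.
    Returns dwell times (press -> release) and flight times
    (release -> next press)."""
    dwells = [release - press for _key, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwells, flights

def matches_profile(events, profile, tolerance_ms=40):
    """Crude check: mean dwell/flight must sit close to the user's
    enrolled averages. Real systems use far richer statistical models."""
    dwells, flights = timing_features(events)
    return (abs(mean(dwells) - profile["mean_dwell_ms"]) < tolerance_ms
            and abs(mean(flights) - profile["mean_flight_ms"]) < tolerance_ms)

# Enrolled profile for the genuine user (illustrative numbers):
profile = {"mean_dwell_ms": 95, "mean_flight_ms": 120}

# The correct answer, but typed with machine-regular timing:
scripted = [("p", 0, 20), ("a", 25, 45), ("s", 50, 70), ("s", 75, 95)]
print(matches_profile(scripted, profile))  # False: the timing betrays a bot
```

Even when an attacker submits exactly the right data, timing signals like these give the system something the attacker cannot easily replay.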

All of this happens in the background and, from the perspective of a genuine user, passively. Callsign's behavioral biometrics and device fingerprinting allow customers to be authenticated entirely via a chatbot, with no human interaction.
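Conceptually, layered passive authentication of this kind can be thought of as combining independent signals into a single risk decision. The sketch below is a simplified illustration of that idea; the signal names, weights, and threshold are assumptions for the example, not a description of Callsign's scoring.

```python
# Illustrative sketch of layered, passive risk scoring: several
# independent signals combine into one decision with no user friction.
# Signal names, weights, and the threshold are assumptions for the
# example, not a real vendor's scoring model.

RISK_WEIGHTS = {
    "unrecognized_device": 0.40,   # device fingerprint doesn't match
    "unusual_location": 0.25,      # inconsistent with the user's history
    "keystroke_mismatch": 0.35,    # cadence unlike the enrolled profile
}

def risk_score(signals: dict) -> float:
    """Sum the weights of every signal that fired for this session."""
    return sum(RISK_WEIGHTS[name] for name, fired in signals.items() if fired)

def decide(signals: dict, allow_below: float = 0.3) -> str:
    """Low risk -> let the chatbot session continue invisibly;
    otherwise escalate (step-up challenge or human agent)."""
    return "allow" if risk_score(signals) < allow_below else "step_up"

session = {
    "unrecognized_device": False,
    "unusual_location": False,
    "keystroke_mismatch": True,
}
print(decide(session))  # "step_up": escalate rather than block outright
```

The design point is that a genuine user whose signals all check out never sees any of this, while a suspicious session is escalated rather than silently trusted.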

And in the same way that the customer may not know that they’re talking to a chatbot, the authentication process that’s keeping their transactions safe and secure is invisible. For bad actors, however, the outcome is very visible: despite their best efforts, they’re denied access every time.

Callsign’s technology is already helping businesses across the globe positively identify their users – and in doing so, build trust and deliver smooth and secure customer experiences. Just as chatbots are built on Artificial Intelligence and Machine Learning, our solutions use AI and ML to positively identify genuine users.

Intelligent solutions for an intelligent future

Chatbots may not be able to pass a Turing test just yet, but the human element is already there. Constant advances in AI and ML are allowing chatbots to remember, learn, and deliver an increasingly personalized experience.

This is a good thing – as long as the advances in chatbot humanization are being matched with advances in security. Chatbots are here to stay, and they’ll only get smarter; but so will the criminals. And if the old methods of rules-based authentication aren’t fit for purpose today – and they aren’t – then they’re ill-suited to a future where chatbots play a major role.

It’s imperative that organizations looking to thrive in that future start thinking now about the role Positive Identity technology is already playing in this arena. The battle for the bots may be in its early days, but with bad actors and fraudsters habitually giving no quarter, there’s no room for complacency.
