What Are the Limitations to Enterprise Use of Generative AI Solutions?

In our previous article, we explored how ChatGPT and similar generative AI services may transform the customer experience (CX) by improving self-service for customers and drastically improving the tools available to customer advisors within the contact center. In this article, we will explore some of the limitations of gen AI-powered natural language processing services and what they mean for generative AI customer support.

These generative AI tools look astonishing. They make people go WOW! It is as if a search engine has developed a superpower: every time you ask a question, the answer arrives fully formed, rather than as a list of websites where you might find the answer if you look hard enough.

However, there are some limitations that all CX executives need to be aware of before deploying a generative AI solution.

Some of the key limitations to using generative AI for customer service

The most serious limitation is that the AI is a ‘black box’ – you can’t see how it formulates an answer. In a customer service environment, this makes it very hard to guarantee the quality of the answers provided. A generative system processes each question individually and creates a bespoke answer, so even if you ask the same question again, it may answer it differently.

For this reason, it is difficult to manage the quality of the response, because we can never be certain of the exact answer. These systems are very smart and will inevitably improve over time – and so will our trust in them. But think about managing customer service in a regulated industry – let’s take banking as an example. The regulator asks: “How can you guarantee the advice your chatbot gives about mortgage interest rates is correct?”

With generative AI solutions, it is currently difficult to guarantee that a customer question about interest rates will be answered the same way, within regulatory parameters, every time. We know that it should be the right answer, but the black box means it can’t be guaranteed.

Brands want to control not only the quality of the content that is delivered but also its tone. The only way to do this is with fine-tuning and private data added on top of the large language model (LLM) used for the initial training. You can’t simply use these systems out of the box.
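
To make that concrete, here is a minimal sketch of what ‘adding private data on top’ of an LLM can look like: brand-approved content and tone guidance are retrieved and placed in front of the model before it answers. Everything here is illustrative – the knowledge base, the retrieve helper, and the call_llm placeholder are assumptions for the sketch, not any particular vendor’s API.

```python
# Minimal sketch: ground a chatbot in private, brand-approved content
# before the base LLM ever sees the question. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Snippet:
    source: str
    text: str

# Hypothetical curated knowledge base maintained by the brand.
KNOWLEDGE_BASE = [
    Snippet("mortgage-faq", "Our standard variable mortgage rate is reviewed quarterly."),
    Snippet("tone-guide", "Respond in a friendly, plain-English tone and avoid jargon."),
]

def retrieve(question: str, kb: list[Snippet]) -> list[Snippet]:
    """Naive keyword overlap; a production system would use embeddings."""
    words = set(question.lower().split())
    return [s for s in kb if words & set(s.text.lower().split())]

def call_llm(prompt: str) -> str:
    """Placeholder for whichever LLM API the brand uses (an assumption)."""
    return "<model response>"

def answer(question: str) -> str:
    # Put the brand's own content in front of the model and instruct it
    # to answer only from that content.
    context = retrieve(question, KNOWLEDGE_BASE)
    prompt = (
        "Answer ONLY from the approved content below. If it does not "
        "contain the answer, say you are not sure.\n\n"
        + "\n".join(f"[{s.source}] {s.text}" for s in context)
        + f"\n\nCustomer question: {question}"
    )
    return call_llm(prompt)

print(answer("How often is your mortgage rate reviewed?"))
```

In a real deployment the keyword lookup would be replaced by embedding-based retrieval, but the principle is the same: the model only ever answers from content the brand has approved, in the tone the brand has specified.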

AI hallucination is another issue that can cause problems for brands that want to rely on generative AI for customer interactions. Hallucination is when the AI confidently responds to a question with the wrong answer – there is no sense of ‘I’m not 100% sure about this, but this could be the answer…’

Hallucination can come about because the model has not been sufficiently trained, because of inherent biases that crept in during training, or simply from a lack of real-world understanding. Hallucinations can quickly lead to an erosion of trust in the answers, as well as ethical questions if stereotypes or misinformation appear in responses.
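
One mitigation pattern is a guardrail that only releases an answer when it can be traced back to an approved source, and otherwise falls back to exactly the kind of ‘I’m not 100% sure’ response the raw model lacks. The sketch below is a hypothetical illustration – the citation convention and the call_llm placeholder are assumptions, not a real API.

```python
# Illustrative guardrail against hallucination: require the draft answer
# to cite an approved source, and fall back to a safe response otherwise.

import re

APPROVED_SOURCES = {"mortgage-faq", "fees-policy"}

def call_llm(prompt: str) -> str:
    """Placeholder for the brand's LLM call (an assumption)."""
    return "Our standard rate is reviewed quarterly. [mortgage-faq]"

def guarded_answer(question: str) -> str:
    draft = call_llm(
        "Cite the source of every fact as [source-id]. "
        f"Question: {question}"
    )
    # Extract the cited source IDs from the draft answer.
    cited = set(re.findall(r"\[([\w-]+)\]", draft))
    if cited and cited <= APPROVED_SOURCES:
        return draft
    # No verifiable citation: say so rather than invent an answer.
    return "I'm not 100% sure about this – let me connect you with an advisor."

print(guarded_answer("How often is the mortgage rate reviewed?"))
```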

Then there is the question of data use and customer privacy. In a customer service environment, the chatbot will require access to customer data. For example, if a customer asks about an unusual payment on their account, the chatbot needs to connect the account to the customer so it can explore the problem.

Many companies, and regulators too, will be wary of this. As already discussed, the black box nature of these bots means that the processes they use are not entirely in the control of the brand using the bot, so brands may be very cautious about allowing the bot to access and use personal customer data.
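
One way to reduce this risk is data minimization: masking obvious personal identifiers before a customer message ever reaches an externally hosted model. The sketch below is purely illustrative – the regex patterns are assumptions, and a real deployment would rely on a dedicated PII-detection service and proper access controls.

```python
# Minimal sketch of data minimization: mask obvious personal identifiers
# before passing a customer message to an externally hosted model.

import re

# Illustrative patterns only; real PII detection is far more involved.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ACCOUNT": re.compile(r"\baccount\s+\d{6,}\b", re.IGNORECASE),
}

def redact(message: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"<{label}>", message)
    return message

print(redact("Why was account 12345678 charged? Email me at jo@example.com"))
# -> "Why was <ACCOUNT> charged? Email me at <EMAIL>"
```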

Data location is another potential privacy issue. Many companies are restricted in where and how their customer data can be processed. Creating a local instance of the bot may avoid this issue, but if a European company wants to use a tool like ChatGPT out of the box on US servers, that is unlikely to be allowed by European data regulators.

In summary, these tools are exciting. We are witnessing an acceleration in the possibilities offered by AI and in most cases, it is now possible to have a natural conversation about a range of subjects with a chatbot – including customer service questions.

However, there are some serious considerations that CX executives need to be aware of:

  • Black box reliability: if you can’t see the decision-making process, it is hard to trust the answers 100%.
  • Requirement to add private data: these tools can’t support your business out of the box – they will need additional training data.
  • Tone of voice: making responses sound like your brand requires additional training.
  • Hallucination: the AI tends to invent new facts when it can’t find the correct answer, which undermines trust in all answers.
  • Data privacy: we have spent years refining data privacy for human advisors; now we need to start again and think about how AI can access personal customer data.

The opportunities are endless, but it is worth understanding these current limitations before treating AI as a ‘silver bullet’ for your business. This is where a reliable and skilled technical partner can help develop enterprise-ready solutions and avoid the pitfalls mentioned above.
