This client is a popular U.S.-based fast-casual food chain with thousands of locations, net income in the hundreds of millions of dollars, tens of thousands of employees, and an obsession with convenience for its customers.
Customers can already place orders on the restaurant’s website for quick pickup in-store—a great option that’s social-distancing friendly and cuts down on long lines indoors. But the restaurant chain wanted to take convenience to a whole new level: allowing customers to order with their voice-enabled device, so they wouldn’t have to type their order into an app.
The chain partnered with Concentrix Catalyst for its first Amazon Alexa voice ordering capability. The idea was simple: customers would tell Alexa to place an order, then get a notification when the order was ready for pickup. We made it possible to reorder from past orders using fewer words (avoiding a long and convoluted web experience) with next-gen, conversational AI and a personal “favorites” menu.
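To picture what “reorder using fewer words” means in practice, here is a minimal, hypothetical sketch of resolving a short utterance against a customer’s saved favorites and order history. All names, data structures, and matching rules below are invented for illustration and are not the client’s actual implementation:

```python
# Hypothetical sketch: mapping a short voice request to a saved
# favorite or a past order, so the customer doesn't have to spell
# out every item.
from dataclasses import dataclass, field

@dataclass
class Profile:
    favorites: dict = field(default_factory=dict)    # favorite name -> items
    past_orders: list = field(default_factory=list)  # newest order first

def resolve_reorder(utterance: str, profile: Profile):
    """Map a phrase like 'order my usual' to a concrete item list."""
    text = utterance.lower()
    # 1. Check whether the customer named a saved favorite.
    for name, items in profile.favorites.items():
        if name in text:
            return items
    # 2. Fall back to the most recent order for 'usual' / 'last order'.
    if "last order" in text or "usual" in text:
        return profile.past_orders[0] if profile.past_orders else None
    # 3. Nothing matched: the dialog would ask a clarifying question.
    return None

profile = Profile(
    favorites={"taco tuesday": ["2 tacos", "chips", "salsa"]},
    past_orders=[["burrito bowl", "large soda"]],
)
print(resolve_reorder("Alexa, order my taco tuesday", profile))
print(resolve_reorder("reorder my last order", profile))
```

A real skill would, of course, resolve against the chain’s ordering backend rather than an in-memory profile, but the lookup-before-dialog shape is the same idea.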
But let’s start at the beginning. Our process started with a human-centric design approach and an outside-in perspective—putting the customer at the center of the journey and visualizing the intent behind requests.
With that understanding, we could dive into the strategic design work. Machines don’t always know what humans are trying to say, because there is an almost infinite number of ways to phrase the same intent. Understanding intent relies on natural language processing (NLP) and context to get to accurate results. (Think about it: something like “hold the beans” could mean lots of different things without context!)
To solve this, we leaned on the technology’s ability to improve through machine learning based on past interactions—like learning that there are multiple ways of asking the same question (“what’s the weather?” vs. “could you tell me the weather for today?”). The intelligence had to be built by mapping out all the intents and entities that would go into the voice-enabled skill—no small task!
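The many-phrasings-one-intent idea can be illustrated with a toy classifier. Real NLP services learn statistical models from labeled example utterances; this stdlib-only sketch just scores word overlap, and every intent name and example here is invented purely for illustration:

```python
# Toy intent matcher: each intent is defined by example utterances,
# and an incoming phrase is assigned the intent whose examples it
# overlaps most. Production NLP learns far richer models than this.
INTENTS = {
    "GetWeather": [
        "what's the weather",
        "could you tell me the weather for today",
    ],
    "PlaceOrder": [
        "place an order",
        "i'd like to order food",
    ],
}

def tokens(text: str) -> set:
    """Lowercase and split a phrase into a rough bag of words."""
    return set(text.lower().replace("'", " ").replace("?", "").split())

def classify(utterance: str) -> str:
    """Return the intent whose best example shares the most words."""
    words = tokens(utterance)
    scores = {
        intent: max(len(words & tokens(example)) for example in examples)
        for intent, examples in INTENTS.items()
    }
    return max(scores, key=scores.get)

print(classify("tell me today's weather"))
print(classify("I want to place an order"))
```

Even this crude overlap score maps two very different phrasings of the weather question to the same intent, which is the property the machine-learning model provides at scale.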
Here’s what we focused on to pave the way to voice maturity:
- Exploring CX goals: We weighed different avenues against specific business and consumer goals, always prioritizing the people who matter most: customers.
- Creating a vision: We visualized how customers would use the voice functionality (and benefit from it!) before we started the journey.
- Starting small and failing fast: Rather than experimenting across every audience interaction channel, we tested bots with a single audience segment or interaction type. We then analyzed, trained, and expanded interaction types from there.
- Continuous feedback: To make sure processes and practices were working, we gathered and analyzed metrics at regular intervals, pushing for continuous improvement.
The solution was built with Azure Bot Service, Azure Cognitive Services Language Understanding (LUIS), and the Bot Framework SDK, and delivered to customers through an Amazon Alexa skill.
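To show how these pieces fit together, here is a hedged sketch of a bot consuming an intent-recognition result. The payload below mirrors the general shape of a LUIS v3 prediction response as publicly documented; the specific intents, entities, confidence threshold, and handler are invented for illustration and are not the client’s code:

```python
import json

# Example payload shaped like a LUIS v3 prediction response.
# The intents and entities are invented for illustration.
raw = """
{
  "query": "reorder my usual, hold the beans",
  "prediction": {
    "topIntent": "ReorderFavorite",
    "intents": {
      "ReorderFavorite": {"score": 0.97},
      "PlaceNewOrder": {"score": 0.12}
    },
    "entities": {
      "modifier": ["hold the beans"]
    }
  }
}
"""

def handle(payload: str) -> str:
    """Route a recognition result to the right dialog step."""
    data = json.loads(payload)
    pred = data["prediction"]
    intent = pred["topIntent"]
    score = pred["intents"][intent]["score"]
    if score < 0.5:
        # Low confidence: ask the customer to rephrase instead of guessing.
        return "Sorry, could you rephrase that?"
    mods = pred["entities"].get("modifier", [])
    return f"Handling {intent} with modifiers: {mods}"

print(handle(raw))
```

The confidence check matters in voice experiences: a misheard order is worse than one clarifying question, so low-score predictions fall back to the dialog rather than the kitchen.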
Want fries with that (someday)? This solution lays the groundwork for future opportunities, too: refining the intelligence so customers can select items that aren’t from past orders, connecting to delivery services, and integrating directly with the restaurant’s website (so the voice-enabled experience can work for customers who don’t already have the restaurant’s app).
Listening and understanding customers first—before jumping into solution design—made for a user-friendly, human-centered solution.
The Alexa voice-enabled experience increased operational efficiency by 40 percent for the restaurant chain by reducing lines at individual locations. It also increased profits by 10 percent by helping the restaurant serve more customers per day without significantly burdening employees. (Alexa, we’ll have what they’re having!)
Learn more about Concentrix’s human-centered approach to CX design.