#144: Transforming Customer Service with AI
Episode Number: 144
Speakers: A conversation with Cristina Fonseca, VP of Product in charge of AI/ML at Zendesk, and Jake Burns, AWS Enterprise Strategist
In this episode, Jake Burns, AWS Enterprise Strategist, and Cristina Fonseca, VP of Product in charge of AI/ML at Zendesk, share insights on how to implement AI well in customer experience, underscoring the importance of striking a balance between automation and human interaction. Cristina explores the potential of AI to elevate customer journeys, from sharper search to personalized responses, while reserving human assistance for complex cases. The episode also looks at AI's transformative potential across industries and the collaboration between human ingenuity and technology that can steer AI toward positive impact.
Listen to all episodes of the AWS Conversations with Leaders Podcast | Read AWS Executive Insights
——————————————————————————————————————————————————————————————————————————————————————
Full transcript here:
Jake Burns:
So, hey Cristina, I'm Jake Burns. I'm an enterprise strategist at AWS. Thank you for being here. Can you introduce yourself and tell the audience a little bit about yourself?
Cristina Fonseca (00:20):
Of course. First of all, thank you so much, Jake, and AWS for the invitation. I am Cristina Fonseca. I am a VP of product in charge of AI and ML at Zendesk. Fun fact, I joined two years ago through an acquisition. So I used to be the CEO of an AI and automation company for CX, and Zendesk acquired us to accelerate the roadmap and to integrate some of our technical artifacts.
Jake Burns (00:49):
That's great. So maybe we could talk about, to start off with what everyone wants to know, what is the right way to implement AI or artificial intelligence?
Cristina Fonseca (00:58):
So I don't think in the past we had a good playbook for how to implement AI in CX, right?
Jake Burns (02:15):
Right.
Cristina Fonseca (02:15):
So to me, the right way to implement AI in CX is, first of all, let me understand the customer, let me understand what customers contact me about. And after that, I need to distinguish very well between what should be automated and what should not be automated. Because when we think about automation, that's great, but not everything can be automated. There are queries, there are things that really require a human agent, either because they need to go look information up in internal systems or because cases are too sensitive. And I think there's no one size fits all.
(03:03):
So to me, the first job of AI should be to help CX leaders understand what should be automated versus what should not be automated. And if we have AI do that split, we can optimize for the perfect experience, which is what makes sense. Otherwise, I think it's going to be very tough for everyone to embrace AI, because we are very good at pointing out when AI does the wrong thing, right? "Oh no, it doesn't work well. It's just giving me wrong answers. It's not understanding me." But I think if we can distinguish and have a strategy around using AI where AI can really help, and leveraging humans for the tough cases, the ones that require human assistance, the ones where empathy can really make a difference, then everyone can win.
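To make that split concrete, here is a minimal sketch in Python of how historical requests could be summarized into "automatable" versus "needs a human" for a CX leader. The intent names, confidence threshold, and sample tickets are invented for illustration and are not Zendesk's actual tooling or API.

```python
from collections import Counter
from dataclasses import dataclass

# Intents we consider safe to automate because a knowledge-base answer exists
# and no sensitive judgment is involved (an assumption for this example).
AUTOMATABLE_INTENTS = {"order_status", "password_reset", "store_hours"}

@dataclass
class Ticket:
    text: str
    intent: str        # produced upstream by a classifier; hardcoded here for brevity
    confidence: float  # the classifier's confidence in that intent

def automation_report(tickets: list[Ticket], min_confidence: float = 0.8) -> dict:
    """Summarize how much of the historical volume looks automatable."""
    counts = Counter()
    for t in tickets:
        if t.intent in AUTOMATABLE_INTENTS and t.confidence >= min_confidence:
            counts["automatable"] += 1
        else:
            counts["needs_human"] += 1
    total = sum(counts.values()) or 1
    return {kind: round(n / total, 2) for kind, n in counts.items()}

if __name__ == "__main__":
    sample = [
        Ticket("Where is my order #123?", "order_status", 0.95),
        Ticket("I was double charged and I'm furious", "billing_dispute", 0.88),
        Ticket("How do I reset my password?", "password_reset", 0.91),
    ]
    print(automation_report(sample))  # e.g. {'automatable': 0.67, 'needs_human': 0.33}
```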
Jake Burns (03:56):
Mm-hmm. So there's a lot of people out there trying to figure out right now where they could use AI within their organization. You said use it where it makes sense and use humans where it makes sense. Can you give us an idea of your opinion on where it does make sense to use AI right now?
Cristina Fonseca (04:47):
Look, to me, let's maybe go through the customer journey and try to understand the role of AI in each of the phases. So first of all, and usually the first thing that customers do when they require help is to search, right?
Jake Burns (05:04):
Right.
Cristina Fonseca (05:05):
So when you search, right now the search tools we have available could be improved so they don't just give me a bunch of links or a bunch of other pages I can go read to find information. We have technology that can give me the actual reply right now. So I would say there's a trend and there's an opportunity here: I search, I get the reply. Same thing in bots or in conversational agents. I contact the company, and if the answer to my question can be found online or can be just a predefined answer, technology should do that job, go fetch the reply, and send it back to the customer.
(05:56):
Then there's another level, which is I'm asking a question about my order or about my account. And in that case, I need to be able to go get information from an internal system and craft a reply depending on that. Some companies are a little bit further ahead and their systems already talk to one another. So if that's the case, AI can also automate those cases: understanding what I want, understanding where to fetch that particular information, crafting a reply that's personalized depending on my case, and sending it back to me. If I'm happy, I'm happy.
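As a rough illustration of that "look it up in an internal system and craft a personalized reply" step, here is a small sketch. The order store, IDs, and reply wording are invented for the example; a real integration would call the company's order system rather than a local dictionary.

```python
# Toy stand-in for an internal order system the AI would query.
ORDERS = {
    "A-1001": {"status": "shipped", "eta": "June 12"},
    "A-1002": {"status": "processing", "eta": "June 18"},
}

def answer_order_question(order_id: str) -> str:
    """Fetch order data and craft a personalized reply, or hand off if we can't."""
    order = ORDERS.get(order_id)
    if order is None:
        # No record found: one of the cases that should escalate to a human.
        return "I couldn't find that order, so I'm passing you to an agent who can help."
    return (f"Your order {order_id} is currently {order['status']} "
            f"and should arrive by {order['eta']}.")

print(answer_order_question("A-1001"))
print(answer_order_question("A-9999"))
```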
(06:34):
If I'm not happy, I should be able to say, "Well, look, I still have further questions," and the AI is already smart, or should be smart, enough to talk to me in a conversational way. But there are questions and there are topics that require escalation to a human. And we need to work at that end: we need to understand what's the right point in time to escalate, and what mechanisms we need to build to make sure that AI does a piece of the job, but then, when it's time to escalate and the AI can't help me, I'm not stuck in a loop of, "I don't understand. I don't understand. I don't understand. And I have no other option for you, so you're going to be stuck here talking to a bot that cannot be very helpful."
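A tiny sketch of that escalation guardrail might look like the following; the confidence threshold and the number of tolerated misses are illustrative assumptions, not settings from any real product.

```python
LOW_CONFIDENCE = 0.6   # below this, treat the turn as "the bot didn't really get it"
MAX_MISSES = 2         # how many misses we tolerate before escalating

class Conversation:
    def __init__(self) -> None:
        self.misses = 0

    def handle_turn(self, draft_reply: str, confidence: float) -> str:
        """Answer confidently, ask to rephrase once, then hand off to a person."""
        if confidence < LOW_CONFIDENCE:
            self.misses += 1
            if self.misses >= MAX_MISSES:
                return "Let me connect you with one of our agents who can help."
            return "Sorry, I didn't quite get that. Could you rephrase?"
        self.misses = 0  # a confident answer resets the counter
        return draft_reply

convo = Conversation()
print(convo.handle_turn("Here is our return policy...", 0.45))  # first miss
print(convo.handle_turn("Here is our return policy...", 0.40))  # second miss -> escalate
```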
(07:21):
And then I would say there's another level where maybe the technology is ready, but I'm not sure humans are, which is, for example, I ask, "Hey, I need a refund..." And Amazon is probably way ahead, so I know you do these kinds of things. But for the majority of companies, me saying, "Hey, there was a problem with my order. You got it wrong. I need a refund," and then automatically doing actions that used to be decided by a human agent, with the AI making the decision now, interacting with the system, refunding the money, treating the customer well, and informing the customer that everything has been done, that would probably be very quick. But I think here we would have machines talking to machines. And although the technology is ready in some cases, I mean these are complex integrations, I'm not sure humans are, because the question is how comfortable are we with having a machine, instead of a human, decide on an action, and trusting the machine will do the right thing.
Jake Burns (08:43):
That's interesting. I think I agree with that. I think that's true for most new technologies and most transformations. The technology tends to become ready more quickly than we're prepared to use it, right? So in working with companies, and leaders within those companies, that were adopting cloud even a few years ago, the technology is the easy part. It's getting people to be comfortable with it, like you said. So I'm curious to hear your thoughts on this: what do you think it will take for us as a species to become comfortable with the capabilities of AI that we even have today, so we can use them to their full advantage?
Cristina Fonseca (09:24):
It's a very good question, and it's one that, especially in our context, we think about a lot. I think it's a matter of applying AI where it should be applied.
(09:42):
I'm also an entrepreneur, and, for example, I would never build a company that would use AI to do accounting. That needs to be right 100% of the time. But in CX, it should be right as much as possible, and for the majority of cases, if we are transparent about a prediction we make or about a reply we give being automated, I think we are okay. One, if you're not very confident, have agents have the final say. So don't automate things that we are not super confident about. This is where having a human in the loop in certain cases makes a lot of sense, and we can leverage agents for that. But also, for the majority of cases, if I send something to a customer that's not 100% accurate, and I make that transparent and tell my customer, "Look, this was an automated reply. If we got it wrong, please reach out and we will assist you," that's not going to be super damaging.
(10:47):
So in the way we build product at Zendesk, first of all, we like to make the confidence level of our AI predictions transparent to the user, because we believe that's going to help them build trust in the technology. Let me give you a very good example. One of the functionalities we have available builds on the fact that it's very common for customer service agents to use prebuilt replies, like templates, for the most common cases, and we recommend which template, which reply, to use for every single case. Sometimes I'm 95% sure that's the reply you need to use. I've seen that thing like a million times before, it's just click a button and go, so my confidence level in that case is going to be high. But there are other instances where I don't have a good template. That's a super unique case, I've never seen it before. In that case, I don't have a good reply, so whatever I recommend will have a very low confidence level.
(11:56):
If I don't show that to the agent, it will be very easy for the agent to say, "Oh, this AI thing doesn't work because it's giving me the wrong predictions." But if I make the confidence level available, the agent understands, "Okay, AI is just trying. Doesn't have a good reply, but it's trying and I can act on that."
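As an illustration of that agent-assist pattern, surfacing a suggested template together with its confidence so a weak suggestion reads as "the AI is just trying," here is a toy sketch. The keyword scoring simply stands in for whatever model would actually rank templates, and the template names and scores are made up.

```python
TEMPLATES = {
    "refund": "We're sorry about that. Your refund has been initiated and should post in 3-5 days.",
    "shipping_delay": "Apologies for the delay. Here is the latest tracking update for your order.",
}

def suggest_template(ticket_text: str) -> tuple[str, float]:
    """Return (template_name, confidence) for the agent to review before sending."""
    text = ticket_text.lower()
    if "refund" in text:
        return "refund", 0.95           # seen a million times: high confidence
    if "late" in text or "delay" in text:
        return "shipping_delay", 0.80
    return "refund", 0.15               # nothing fits well: low confidence, agent decides

name, score = suggest_template("My package is three weeks late")
# The confidence is shown next to the draft instead of being hidden from the agent.
print(f"Suggested template: {name} (confidence {score:.0%})")
```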
(12:15):
Same thing with regard to automation. I think everyone is still building the best practices for implementing AI, and when we hear "automation," sometimes that scares people a little bit: "Oh, I'm not ready to automate," either because I am a bank, or my customers are enterprise, or I'm just very skeptical of the technology, or I'm very risk averse. So there are a million reasons for companies to say, "I'm not ready to automate yet." But if I give them the confidence level of what I think that particular request is about, they can start small.
(12:58):
It's not that I need to automate everything on day one. I can say, "Well, maybe when customers ask me 'Where's my order?' I can send them an automated reply and tell them, 'Here is how you check where your order is.'" Absolutely no risk. Start small so you can go from there. I think for us to build confidence in technology, we need to start small.
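That "start small" approach can be expressed as a simple allowlist. In this hypothetical sketch, only explicitly approved, high-confidence intents get an automated reply, and everything else goes to a person; the intent names and canned answer are examples, not a real configuration.

```python
AUTO_REPLY_ALLOWLIST = {
    "where_is_my_order": "You can check your order status anytime under Your Account > Orders.",
}

def handle_request(intent: str, confidence: float, min_confidence: float = 0.9) -> dict:
    """Automate only allowlisted, high-confidence intents; otherwise route to an agent."""
    if intent in AUTO_REPLY_ALLOWLIST and confidence >= min_confidence:
        return {"channel": "auto_reply", "message": AUTO_REPLY_ALLOWLIST[intent]}
    return {"channel": "human_agent", "message": None}

print(handle_request("where_is_my_order", 0.97))  # low-risk intent: automated
print(handle_request("account_closure", 0.97))    # not on the list: goes to a human
```

Growing the allowlist one intent at a time is one simple way to widen automation only as confidence in the technology builds.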
(13:26):
There's also now generative AI. There's a promise of, "Okay, I click a button, I deploy a bot, the bot talks to customers, everything is magic," and this is possible. And there are companies that are totally like, "Okay, we're doing that." But the majority of our customers want to start small, see how things are working, test them, maybe deploy them first to their internal employees, and then go from there.
Jake Burns (13:53):
Right.
Cristina Fonseca (13:54):
So yeah, exciting times.
Jake Burns (13:58):
Yeah, that makes a lot of sense. I really like what you were saying about, perhaps at least as a first step, instead of having the AI system talk directly to customers, having the AI system assist your employees who are talking to the customer. So it's like a cooperative effort between your human employees and your AI systems to provide better service for your customers, maybe just through a message popping up saying, "Hey, maybe suggest this or that to the customer" in a customer service situation. So it sounds like you are not saying that this is going to replace human employees, at least not immediately, not completely, right? Would you agree that, at least in this phase, it's really more about augmenting and supporting employees to allow them to be more effective?
Cristina Fonseca (14:47):
For sure. I think there will be efficiency gains, of course, and that maybe comes with leveraging technology to deal with seasonality and peaks of volume. But what we've seen within our customer base is that by automating these manual tasks, you free people to do more meaningful work. I'm giving you the basic example in CX, which is that pretty much every company has agents manually label what each request is about, and that's usually needed because you need to understand what customers are contacting you about, so you can do root cause analysis, understand where the opportunities for improvement are, plan, and run your operation. But this is a highly manual task that agents hate doing. They don't do a good job at it. It takes a while to learn how to appropriately classify an email or a chat. And machines can do this way better than humans.
(16:02):
Now, if we just train a machine to do this, everyone will be happier because it's one less task that agents need to do. There's no value added in labeling emails for reporting purposes. We've seen lots of customers that have people fully dedicated to triaging requests so they can be routed, handled by the right team, and prioritized when they're critical. We've seen our customers just free people to work on more meaningful tasks because suddenly they don't need to do this.
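A toy version of that auto-labeling and triage flow could look like this; the keyword rules stand in for a trained classifier, and the label, queue, and priority names are invented for the example.

```python
# Mapping from a predicted label to the queue that should handle it.
ROUTING = {
    "billing": "billing_team",
    "bug": "technical_support",
    "cancel": "retention_team",
}

def label_and_route(message: str) -> dict:
    """Attach a label for reporting and pick the queue that should handle the request."""
    text = message.lower()
    for keyword, queue in ROUTING.items():
        if keyword in text:
            priority = "high" if keyword == "cancel" else "normal"
            return {"label": keyword, "queue": queue, "priority": priority}
    return {"label": "general", "queue": "general_inbox", "priority": "normal"}

print(label_and_route("I found a bug in the export feature"))
print(label_and_route("I want to cancel my subscription"))
```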
(16:39):
Massively decreasing the number of agents, I have not seen that happen. I think there's just a lot of work in CX that can absorb this extra capacity. And I've seen other interesting jobs being created, and people go work on knowledge management. Knowledge is an area that doesn't get enough attention, because in a world where bots can reply to customers automatically, those bots need a super strong knowledge base that's up to date, maintained, and complete, and humans need to be involved in that. So the knowledge manager will play a super important role in the future. Also, there are now bot trainers and bot supervisors, and people optimizing bots, setting things up, making sure the interactions have quality, optimizing the setup, and so on. So I would say there's still a very important role for humans in this world, but I'm very happy we can just automate the mundane tasks that shouldn't exist anyway.
Jake Burns (17:58):
Well, it's a very optimistic view.
Jake Burns (18:01):
By the way, I'm seeing the same thing: making people's existing jobs better because they don't have to do work that's maybe boring or non-differentiated, right? They're not actually building something, they're not actually creating something. They're doing a repetitive task over and over again. Let's let the bots do that. But then the other thing is these new jobs that are being created and these new skills that are needed within organizations to manage AI. You mentioned a couple: knowledge managers and the people who write and maintain the bots. What other skill sets would you recommend people learn in order to prepare for this and to be more valuable in the workplace, given this AI revolution that we're undertaking?
Cristina Fonseca (18:46):
I think in this era where things change really fast, the best thing you can learn is to be open, to be empathetic, to try to really understand the problems of your organization. And I think this is the same with being a product manager: it's being obsessed with solving the customer's issue. Maybe I need to dig deeper: what makes a good bot trainer? What makes a good knowledge manager? What makes a good support agent? It's the empathy and the desire to really understand the customer's problem and go the extra mile to solve it. I would say these functions that are appearing in support take advantage of the same types of skills, which is, well, can I ensure a positive customer experience, while making it maybe a little bit more automated?
(19:58):
I think in the past, maybe we tried to offload these AI systems to technical people, and their mission would be, let's automate as much as possible, and that's the wrong way to look at the task. The way to look at the task is: how can I ensure a very good customer experience while still taking advantage of the technology? But also, I think it's not just the users of the systems; whoever develops the technology needs to make it easy to understand and use. I don't think a person working in customer service, like an agent or a bot trainer or a knowledge manager, should have to become technical; they should just get technology that works. You shouldn't need to know how our model works, or what accuracy means, or how to guarantee the quality of the system at the technical level. And I think lots of the tools that were designed in the last decade would ask for that.
(21:15):
And I think that was partially responsible for the fact that we have bad implementations of AI. I think software companies also have a very important role to play in making AI simple to understand and use. AI is just embedded, intelligence is just embedded, in the way these people work and in the software they use in their daily jobs. I don't know if you're expecting me to say, "Oh no, they need to have low-code skills, or maybe they have to go to engineering school." I think the job of the software company is making it very, very easy for them to use.
Jake Burns (21:58):
Yeah, it's a great point. For this to be truly transformative for business, it's going to have to be accessible. If we're expecting everyone to learn how the models work and all of that, as it's kind of been in the past, as you said, that's too much of a barrier. So I think it's a great point. Fortunately it's happening, right? You can see the work that AWS and others, like Zendesk, are doing to make it more accessible to people who don't have that deep technical background, so they can actually take advantage of this technology at scale, again, to help customers as much as possible.
Cristina Fonseca (22:34):
And just to add to that, I think in the last decade, because AI used to be complex to understand and implement, only the big enterprise companies could afford it and allocate the time and the money to make it happen. And I think through this new wave of models and technologies available, AI is basically accessible to SMB, to mid-market, to enterprise customers, and that's where the revolution starts. So I'm very, very excited about that.
Jake Burns (23:15):
I totally agree.
Cristina Fonseca (23:16):
It's a similar revolution to the cloud. Before, only big companies could set up servers and have their own infrastructure deployments, and then the cloud changed that. And I think right now it's the same for AI.
Jake Burns (23:31):
Yeah, I see it exactly the same way. It's democratizing it. What I see that resulting in is much more innovation in the world, because it's not just limited, to your point, to these large companies that have all these resources, that could develop their own models, that could have huge data centers full of servers. With the cloud, and with these AI services specifically, any company, any person with an idea, with very little overhead, with very little capital, with very little time, can be productive with this technology and actually build something, which is very exciting.
Cristina Fonseca (24:15):
For sure.
Jake Burns (24:18):
Let's talk a little bit more about AI in general, because there's some questions that I'm sure our audience wants to know about your opinion. Do you think that AI has the potential to offer an existential danger to companies and people? And if so, how do we steer it in the right direction and keep it doing net good?
Cristina Fonseca (24:39):
Look, I think the answer to that is, as with any other technology, it has the potential to do good and bad depending on what humans do with it. So a lot of concerns, the global concerns with AI, have a point, right? I'm not very pessimistic, as you can tell, but I think it all comes down to what we are going to do with this technology. Even for models that have the potential to generate data and go a little bit further, there's lots of effort in making sure those are tweaked and that we put the right foundations in place, so they stay within the use cases they were designed to cover. So I think that's very important, and I think there has been lots of investment in this area.
(25:56):
Of course, when we teach machines based on data that might have biases and so on, the machines will just mimic the human behavior, and we know we need to act in some cases. So I think it's interesting that in New York there's a recent law that says if you're in HR and you use AI to screen candidates, you need an independent audit that assesses the system you're using for bias, because of course we don't want to introduce bias in an HR system. I mean, I would say humans are biased anyway in screening candidates, but maybe machines can be less biased if trained properly. And then I would say there will maybe be regulation, but I don't believe in global regulation. I think industry by industry, there are things that make sense. For example, if we think banking--
Cristina Fonseca (27:03):
There will be regulation in the sector that will say, "Well, the credit score models need to have a couple of characteristics," so I'm not just introducing bias into the system and denying people credit because an algorithm was not properly trained or the data was not balanced. So I think that will exist more and more, but I don't believe in global regulation. I think that will be very, very hard to apply. And being an actor in an industry that's leveraging AI more and more, what I can tell you is that we are all concerned with deploying AI right. We need to work on making it safe, secure, and private. But again, I think we're going to get there.
Jake Burns (28:10):
I think so too. It's interesting, the topic of bias because you kind of alluded to it, but what the AI is doing, what these models are doing is actually putting a mirror in front of us. So any bias we see in there, I mean, it's trained on our data, so perhaps we should be looking to ourselves more than to the technology to fix that. But it's interesting also.
Cristina Fonseca (28:31):
Do you have more faith in humans or in the technology to solve that?
Jake Burns (28:36):
Personally, I have faith in us working together. I think the technology has a role to play, humans have a role to play, and I think as long as we can learn to amplify each other's strengths for a common good, I think we'll be fine.
Cristina Fonseca (28:50):
I like that.
Jake Burns (28:53):
I feel cautiously optimistic.
Cristina Fonseca (28:57):
I think that's the perfect case.
Jake Burns (28:58):
Well, it's up to us, right? We need to be active participants in this. Everybody in the world needs to be active participants in this. We can't just sit back and watch what it does. We have the ability to shape what it does. And I would say that's true with this more than previous transformations just because it is a reflection of us in a large sense, for many reasons. So, that's my advice for people is to get involved in this. This is our industrial revolution. This is our ability to shape the future, and we really have a lot of leverage right now in these early days.
Jake Burns (31:16):
So it sounds like you're probably in agreement with me on, for example, the way we fight bias in AI: one of the greatest use cases of AI is to discover bias. It's very, very powerful in that way. So it seems like instead of trying to pause this technology or stop its progress, let's utilize it to solve some of these problems that we're seeing, because if we don't, then again, we're just sitting back and watching others do whatever they're going to do with it. The good guys need to be as active as possible in this.
Cristina Fonseca (31:48):
Yeah, completely agree. Completely agree. And again, it's very easy to find the believers, but it's also very easy to find the ones that say, "Oh no, we are going to be killed by AI." I think if we focus on how we can leverage this technology to solve a million problems we have in the world, then everyone wins. We could stay here for hours just mentioning the potential in lots of industries, in research, in medicine, in climate change, beyond CX and technology, in deep problems we have in the world. So let's keep developing it, let's pay attention to the negative aspects and try to overcome those, and use AI for the greater good.
Jake Burns (32:44):
Absolutely. So, can you tell me a little bit about what Zendesk is working on now? Anything interesting? Any projects that you'd like to share with us?
Cristina Fonseca (32:54):
A million cool things. That's my job, and I'm very passionate about it. We really believe in the power of AI for customer service, so we've been investing a lot in building more and more automation tools for our customers while democratizing access to AI. So from SMB to enterprise customers, with Zendesk, you click a button and you have AI helping you label everything that's coming in, informing you what you should automate, helping you escalate to agents, and making agents more productive with assistance that helps them get better at their jobs. Of course, generative AI has been quite popular in the industry, and I can tell you that we are going to launch big things in the area, so our customers can also leverage it. But I would say lots of things in the automation space; generative AI for productivity and automation are maybe the big ones I would highlight.
Jake Burns (34:13):
Cristina, thank you so much for joining us today. This was fascinating. I really appreciate it.
Cristina Fonseca (34:18):
Thanks, Jake. Really appreciate the invite. Thank you so much.