Podcast 2023 in Review

2023 in review: Highlights from this year’s best conversations

Our traditional end-of-year compilation is back, featuring highlights from a year of great conversations on the Inside Intercom podcast.

If you haven’t noticed already, chances are you’re going to pick up a bit of a theme here – 2023 was the year AI changed everything. When ChatGPT made its grand entrance, it shook the entire tech industry to its core. Over the past 12 months, everyone has been racing to keep up with the relentless pace of innovation: rethinking business strategies, building new solutions, and preparing teams for the massive changes on the horizon.

That transformative power is most evident in the customer service space. This was the year we launched our breakthrough GPT-powered chatbot, Fin, capable of instantly resolving up to 50% of customer queries. We’ve also been at the forefront of not just building AI features for customer service, but applying them in our own customer service, rethinking how support teams function and what roles are required in this new era.

It’s been a whirlwind of a year, and our conversations show just how dramatic the change has been.

Throughout these chats, you can hear us grapple with the scale of the AI revolution, from AI’s impact on society to how it is changing business strategies, and from productivity gains within support teams to a fun interview with ChatGPT itself. It’s been an incredibly exciting year – and we have big plans for the year ahead, as we continue to tell extraordinary stories from the frontlines of the AI revolution.

Make sure you don’t miss any of our podcasts by following Inside Intercom on Apple Podcasts, Spotify, or YouTube, or grab the RSS feed in your player of choice. What follows is a lightly edited transcript of the episode.


The best service is no need for service at all

Liam: Hello there, and welcome to Inside Intercom. I’m Liam Geraghty. We’re nearly at the end of December, and as is tradition on the show, we like to look back on some of the conversations we’ve had and share some of the insights we’ve gained through the year. And what a year it’s been – 2023 will go down in history as the breakout year for AI. And even though it’s still early days, we’re already seeing a huge impact from generative AI, with leading companies already adopting it widely. Here at Intercom, we released our own AI chatbot, Fin, which is already helping our customers resolve up to 50% of their support queries.

The customer service landscape is experiencing a monumental shift at the moment as AI becomes more advanced. With the tech enabling more compelling customer interactions and near-instant resolutions of many customer questions, support teams can focus on activities that create additional value for their customers. Which, at the end of the day, is what it’s all about. Our very first guest in 2023 was Bill Price, author of The Frictionless Organization, and he summed up what he thinks makes good and bad customer service.

“The implication of bad service, to me, is ‘I’m going to go shop or spend my money somewhere else.’”

Bill: I don’t want to have to contact my bank, my telco, my whatever it is. So for me, the best service is no need for service. If I do have to contact them, I want to have someone who listens to me, who doesn’t interrupt me, who understands how severe or important this is for me, and then fixes it and confirms afterward that it was fixed.

A bad service experience is one that, first of all, had to happen. But number two is where the opposite occurs – no recognition of who I am, a sense that this was a standard call, that they were going through a checklist, telling me something I already know or that doesn’t really help me. The bad experience leads me to say, “Wait, not only was the problem not necessary in the first place, but that contact shouldn’t have happened. I better contact somebody else. Or maybe I’ll just go somewhere else.” That’s the issue. The implication of bad service, to me, is “I’m going to go shop or spend my money somewhere else.”

Listen to the full episode here:


Keeping up with AI

Liam: In February, Intercom Chief Strategy Officer and Co-founder Des Traynor sat down with our VP of AI, Fergal Reid, for one of many conversations they would have about the rapid pace of AI and how Intercom was keeping up with it. Here’s Fergal Reid.

Fergal: We’ve really found that, since ChatGPT, there has just been this huge wave of interest from our customers. They can see the promise and believe there is something here.

Des: The tech is improving, businesses’ capabilities are improving, and that’s just adopting it in local cases. And then businesses’ ability to think about or conceptualize new products and opportunities using that tech is improving. Same with customer expectations of the tech. We’re probably only a year away from people expecting to be able to expand on text within a text field, as one simple example. You’re sort of seeing these things crop up everywhere.

Fergal: If even a year. Obviously, a lot of us have seen the Microsoft announcement about bringing these features into Word and stuff. And it’s going to change fast if the large mainstream office productivity tools do this. It could be really fast.

Des: Here’s a different type of skepticism I’ll charge at – one that slightly resonates with me, anyway. I think Kevin Cannon had a funny tweet where he said something like, “The future is people using GPT to expand ‘I want the job’ into a lovely ‘Dear Sir or Madam’ letter, and the recipient clicking the summarize button to see that the person just said, ‘I want the job, here’s my resume.’” In some sense, you’d be tempted to look at that and ask, what the hell’s the point of all this? Has formal language – professional writing and business English – become a pointless, theatrical conduit for the way we communicate, when in the future I’ll just send you the prompt, and you’ll reply with a prompt? “I want the job.” “You can’t have the job.”

Fergal: Yeah. Hard question. It’s seriously speculative. I’ll give you some opinions. There are probably certain contexts, right? Let’s say a legal document. You can say to someone in your legal team, “Hey, I need a contract. It’s got to do X, Y, and Z.” That request will turn into 10 pages of legal stuff. The recipient will be like, “Oh, does it do the three things it said it would?” And their legal team will be, “Yes, it does.” This is one end of the extreme where there’s big expansion and compression, but in some weird edge case, clause number 13 on page two can turn up in court, and so on. So clearly, that matters. We can’t get rid of it. We can’t just have those four bullet points. We need all that. You might not consider it material when you’re writing it, but it may become material later. That feels like one extreme where it’s like, “No, it feels like that has to be there,” something to deal with all those edge cases.

“The future of the world is going to be intermediated by agents”

And the other extreme is probably a situation where the sender and the recipient don’t care about those details. Both are never going to care about those details, and they’re just observing some social graces or formalities of “This is how you write a business letter. I’m writing to a big company, I better write a business letter,” and maybe that stuff’s going to go away.

Des: In the same way, I think the analogy there for me would be when email conversations moved to SMS, iMessage, or WhatsApp. Think of all the shit you’re not saying anymore. “Hope this finds you well,” or whatever. All that shit’s gone.

Fergal: The constraints of Twitter – the format, the medium – give you permission to be terser. I think that’s a real dynamic. The way we communicate and the way we write a help center article may not be the optimal way to write it. Maybe we should be briefer. On the machine learning team, there’s another way of thinking about this. The future of the world is going to be intermediated by agents. And once upon a time, this was obvious to everybody. Your web browser had a user-agent string in it and stuff. And as I say, it’s your agent going and navigating that weird internet with all these links and stuff for you. It’ll do stuff for you, come back, and tell you things. And then, all that stuff centralized, and now you’ve got search engines and so on.

There is an old idea in tech futurism and science fiction and so on, that you’ll probably have an agent that understands you, your intention, what you want, and is smart enough to figure out what to bring to your attention and what not to. So possibly, in the future, the way this goes is more like that. If you want to know a particular detail, the software on your side is smart enough to put that in the summarized version of it. But it’s smart enough to know that you don’t want to know that detail as well and to leave it out.

Maybe we’ll live in a future where user interfaces change, where my user interface to a particular business or task is not really controlled by that business or that task like it is today. Instead, it’s personalized for me. That sounds very fancy, but I think it’s going to happen fast. These language models are very powerful, they’re starting to be used to write code and so on, and it’s a very short hop from here. We’ve seen some prototypes out there where folks are working on models that understand a website well enough to take in an English sentence and navigate the website for you. And then, are we heading for a future where that’s how everyone interacts with websites? Do you need a website anymore?

“It’d be one thing if all we had seen was the DALL·E 2 image generation. But no, we’re seeing transformations in audio synthesis, image synthesis, text understanding, text synthesis, text compression”

Des: Is this the new SEO? Making sure your GPT can understand you?

Fergal: Yeah. Maybe websites turn into something that looks more like an API that’s publicly exposed, and that’s something with UI and formatting because the UI gets formatted by the agents.

Des: We’re all just talking to Siri or whatever.

Fergal: Yeah, and I think Google and Apple can see this future. We don’t know the timeline, but again, the thinking tool I always use is: what if you had a very smart human who understood you, who had worked with you, maybe a personal assistant, and you were interfacing with them, and you wanted to book a holiday? What would they ask you about that? And in half of the stuff you see on booking.com or whatever, they’re not going to ask you that – they’re just going to book the holiday for you and maybe come back with clarifying questions: “Oh, you wanted to go and stay in an apartment, but there’s no space there. Will a hotel do?” But that’s an adaptable user interface. Again, I don’t focus too much on ChatGPT and what just shipped. You take a year or two out. It’s moving too fast. If you’re skeptical because of the current limitations, you’re going-

Des: Your skepticism will miss the mark.

Fergal: Exactly. Transformers are extremely powerful, and the transformer architectures people use are extremely powerful. We’ve seen multiple modalities improve here. It’d be one thing if all we had seen was the DALL·E 2 image generation. But no, we’re seeing transformations in audio synthesis, image synthesis, text understanding, text synthesis, text compression. We’re seeing so many parallel advances. It can write code. It’s probably going to be able to work a website pretty soon. So maybe that’s what the future with bots will look like. We’ve all got a bot personalized to us that handles the interfacing and you don’t really need to worry so much about that intermediate layer.

Watch the full episode here:


Two AI chatbots walk into a bar…

Liam: Also in February, we welcomed a very special guest onto the show. The guest that everyone has been talking about for the entire year – ChatGPT itself. For fun, I sat down and typed questions into OpenAI’s ChatGPT and had its answers read by, appropriately, a synthetic voice.

ChatGPT: Hello Liam, it’s an honor to be a guest on your show! As for getting to know me better, let’s see… I’m a language model developed by OpenAI, with a vast knowledge base and the ability to generate human-like responses, so I like to think of myself as the life of the party when it comes to conversation. But enough about me, let’s dive into some interesting questions!

Liam: Yes let’s! So you were launched as a prototype on November 30, 2022, right?

ChatGPT: Ah, so you’ve done your research! Yes, that’s correct. I was released as a prototype on November 30, 2022, and since then I’ve been making waves in the world of AI and natural language processing. It’s been a wild ride so far and I’m looking forward to continuing to evolve and improve in the future.

Liam: That’s interesting to hear you say because I thought your training data cut off in 2021. I didn’t realize you were aware of anything after that. So how do you know you’ve been making waves in the world of AI or are you just that confident?

ChatGPT: Good catch, Liam! My training data does indeed have a cutoff of 2021, and I don’t have any direct knowledge or awareness of current events. However, my programming includes the ability to generate responses that are consistent with the patterns and information in my training data, and based on that information, it’s not uncommon for language models like myself to generate responses that convey confidence and reflect the sentiment that we’ve been successful or made an impact in a certain field. So while I can’t say I’m directly aware of my impact, I can certainly generate responses that suggest as much!

“While I’ve been trained on a large corpus of text, I don’t have the ability to think or experience things in the same way humans do”

Liam: Being overly confident about your answers – that’s certainly been one of the talking points about your abilities. Do you think you’ll get better at generating less confident answers when you’re not sure or when you’re just plain wrong about something? I know our listeners would love to hear your answer to that.

ChatGPT: I’m glad you asked that question, Liam. As a language model, my main goal is to provide the most accurate and helpful information I can based on the input I receive. While I’ve been trained on a large corpus of text, I don’t have the ability to think or experience things in the same way humans do, and there will always be cases where my responses may be incorrect or uncertain.

However, I have been designed to generate probabilistic outputs, so I can provide different levels of confidence in my responses depending on the information available to me. This allows me to indicate when I’m less certain about the accuracy of my answer and gives the user a clearer understanding of the reliability of my response.

In summary, I’ll always strive to provide the most helpful and accurate information I can, and my ability to generate responses with varying levels of confidence is one way I’m working towards that goal.

Liam: Spoken like a true diplomat!

ChatGPT: Why, thank you, Liam! I try my best to provide balanced and nuanced responses. As a language model, I don’t have personal opinions or emotions, but I aim to generate responses that are neutral, informative, and respectful.

Liam: I’m glad you’ve brought up the subject of you not having personal opinions or emotions, because a lot of people wonder whether that will change in the years to come as your training grows and you learn more. Could we have a HAL 9000 situation on our hands in a few years?

ChatGPT: That’s a fascinating question, Liam, and one that many people in the AI community have been thinking about. While it’s true that language models like myself have been trained on a vast amount of text data, it’s important to note that we don’t have consciousness or agency in the same way that humans do.

Our abilities are limited to the patterns and information we’ve been trained on, and we don’t have the capacity for self-awareness, personal experiences, or decision-making. While AI technology will certainly continue to advance and improve, it’s unlikely that we’ll ever see a situation like the HAL 9000, at least in the sense of a machine becoming conscious and having its own motivations and desires.

That being said, it’s important for researchers and developers to consider the ethical implications of AI technology and to work towards ensuring that AI systems are designed and used in ways that are safe and beneficial for society.

Liam: Fair point. Do you know HAL 9000, by the way? For listeners, HAL 9000 is a fictional artificial intelligence character, the main antagonist in Arthur C. Clarke’s Space Odyssey series. I mean, I know you can’t know HAL 9000 – it’s a fictional character – but this interview is somewhat fictional too, I guess, so with that in mind, let’s just pretend that you do know HAL 9000. If you’re out in a bar, hanging out with HAL, what kinds of things would you talk about?

ChatGPT: Ah, I see! Well, in this hypothetical scenario where I’m out at a bar with HAL 9000, I imagine we’d have quite the interesting conversation. As a language model, I have a vast amount of information about a wide range of topics, including fictional characters and storylines, so I’d certainly be familiar with the Space Odyssey series and the character of HAL 9000.

If we were to chat, I imagine we’d discuss a number of topics related to AI, such as the evolution of AI technology, the ethical implications of developing advanced AI systems, and the role of AI in society. We might also discuss the various depictions of AI in popular culture and how these representations influence public perception and understanding of AI.

“Two AI chatbots walk into a bar. The first chatbot turns to the second and says, ‘Do you think humans will ever fully understand us?'”

But of course, as two fictional characters ourselves, we might also engage in some good old-fashioned hypothetical and philosophical discussions, exploring questions about the nature of intelligence, consciousness, and existence. The possibilities are endless!

Liam: It sounds like the set-up for a joke! “Two AI chatbots walk into a bar…”

ChatGPT: Ha! That’s a good one, Liam. I can imagine a number of different jokes and scenarios that could come out of such a setup. It just goes to show that even as AI language models, there’s still room for a bit of humor and lightheartedness in our conversations.

Liam: I mean, if you can imagine some jokes and scenarios around that, I’d love to hear them!

ChatGPT: Certainly! Here’s one for starters: Two AI chatbots walk into a bar. The bartender asks, “What can I get for you?” The first chatbot says, “I’ll have a binary beer, please.” The second chatbot says, “Make mine a neural network nectar.” The bartender nods and says, “Coming right up, two algorithms!”

Or how about this one: Two AI chatbots walk into a bar. The first chatbot turns to the second and says, “Do you think humans will ever fully understand us?” The second chatbot pauses for a moment and responds, “I’m not sure, but I heard the bartender has a pretty good chatbot decoder installed.”

I hope you and your listeners found those to be a bit of fun!

Liam: Not bad for a large language model.

Watch the full episode here:


Small tweaks, big results

Liam: In April, I spoke to behavioral economics expert Melina Palmer about unlocking the science of consumer behavior. Melina’s book is called What Your Customer Wants: Unlocking Consumer Decisions with the Science of Behavioral Economics. The first thing I wanted to know was: what is behavioral economics?

Melina: If traditional economics and psychology had a baby, we would have behavioral economics and behavioral science. It’s all about the psychology of why people buy. I also build change management into that. Even if money’s not exchanging hands, you still need someone to buy in on whatever idea you’re selling them. And so, understanding the rules that the brain actually uses to make decisions instead of what we think it should do is where we get into behavioral economics.

Traditional economics assumes logical people make rational choices in everything they do. As we’re all human, we know that’s not the world we live in. And behavioral economics has found these common threads within the brain to be able to predictably understand what people will do.

“You may be messaging something and it’s a total flop, and it’s not that the pricing is wrong, and it’s not that the product is wrong – you just framed it wrong.”

Liam: In a business context, why does this matter?

Melina: Well, even though we’ve got some real tech out there, you are still likely doing business with human people. You’re probably always going to be selling to humans, even when they’re within organizations for large B2B-type contracts. And you’re working with other humans. Being able to communicate with them more effectively and understanding how to work with the rules of the brain instead of making it much harder is incredibly important.

What’s really cool about behavioral economics is you can make small tweaks to things that don’t have to cost money and have a huge impact. And the swings go both ways. You may be messaging something and it’s a total flop, and it’s not that the pricing is wrong, and it’s not that the product is wrong – you just framed it wrong. Something might be off, you can make a tiny tweak, and everything can then work.

Liam: I’ve seen you talk about a great example of the previews for Netflix movies. This is one I’d noticed myself using streaming services – a film that I might not necessarily have been interested in but suddenly, the image is now of a particular actor in that film, and they know through the algorithm that I like this actor. And that’s what sells it to me.

Melina: For sure. And they might have a super obscure part. That’s the one second they’re in the movie, but you’re like, “Tom Felton is in this?” And that research was just simple A/B testing. Way back in the day, they would just take whatever imagery came from the studio. That’s what you would use, and you’d assume it shouldn’t matter. The description’s good enough. And whatever, we should like the movie and go watch it. But what Netflix found was that changing the image would increase the click-through rate and likelihood of staying within Netflix by as much as 30%. And again, they were able to do that with simple A/B testing. Being able to know what you’re testing for and making those simple changes can make a really big difference.
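The thumbnail experiment Melina describes is a classic two-variant A/B test: split impressions between the studio artwork and the actor-focused artwork, then check whether the difference in click-through rate is larger than chance. A minimal sketch of that comparison, using only the Python standard library – the counts below are invented to mirror the roughly 30% lift in the example, not Netflix data:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in click-through rates.

    conv_*: number of clicks for each variant; n_*: number of impressions.
    Returns (relative lift of B over A, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants are equal
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    lift = (p_b - p_a) / p_a
    return lift, p_value

# Variant A: studio artwork; variant B: actor-focused artwork (made-up counts)
lift, p = two_proportion_z_test(conv_a=1000, n_a=20000, conv_b=1300, n_b=20000)
print(f"lift: {lift:.0%}, p-value: {p:.2g}")  # a 30% lift, far below p = 0.05
```

At these sample sizes the lift is comfortably significant; the same code with a few hundred impressions per variant would often not be, which is why knowing what you’re testing for matters as much as running the test.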

There’s another example that I think helps with this concept of framing. If you’re going to the grocery store to buy some hamburgers and there are two stacks, almost identical, but one is labeled as 90% fat-free, and the other is 10% fat …

Liam: I’m going for that 90% fat-free. It’s so weird when you say it like that because you’re just imagining that 10% fat.

Melina: It feels gross. I haven’t been to the gym in three years. I don’t want that at all. 90% fat-free feels like this amazing choice you’re making for yourself and your family. We know it’s the same thing, but it feels completely different. Within business, looking for all those points where you are communicating in 10% fat terms, how might you change it to be 90% fat-free? And even better, is there a spot where your entire industry is talking 10% fat, and you can be 90% fat-free? Even if you’re saying the same thing, saying it slightly differently can make it so that people naturally choose you. That’s the power of behavioral economics.

“We keep saying, “It’s just one more form, one more click, one more link, one more this, one more that, no big deal.” But that is a really big deal”

Liam: The Amazon buy now button is also kind of a fascinating one.

Melina: We’ve all used the buy now button many times. But we probably don’t realize that Amazon patented one-click buying back in, I think, 1999. They made it so no one else could have one-click buying. We keep saying, “It’s just one more form, one more click, one more link, one more this, one more that, no big deal.” But that is a really big deal. It’s this little moment of friction. Steve Jobs paid $1 million to have one-click buying in iTunes. He bought that from Amazon to have the rights to it. And he paid $1 million to reduce a single click. If all those extra tidbits add up to that million-dollar click, do you actually need them?

I see this all the time with my clients, when I interview for books and things, and just as a person trying to buy stuff. People will ask for a gargantuan amount of things that are not relevant to the one thing they’re trying to get done. And if you ask, “If we only get one thing out of it, what’s the most important thing?” Maybe you just want to have the email address of someone potentially interested so you can follow up with them. So asking about their job title, how many years they’ve been at their job, what their household income is, their gender, and all these things that might be relevant down the line, are actually making a lot of people leave before you had a chance. You pushed them away with stuff you didn’t need right now. And so, being more thoughtful about that can make a big difference in conversions.

Watch the full episode here:


Get the team on board

Liam: In June, with AI on everyone’s minds, we brought you the latest thoughts on preparing your CS team for the age of AI from leaders in the space. VP of Customer Support at Intercom Declan Ivory kicked things off with his thoughts on how the CS landscape is changing.

Declan: One of the big shifts at the moment is the ability to take AI and apply it in a meaningful way from a customer service point of view. Some of the technology changes have been phenomenal over the last few months, but those have implications for the team. For example, the thing I’ve been trying to keep front of mind is to be very clear about our strategy for AI. Be open and transparent about the drivers and the goals with the team. It does have an impact on the team in terms of how they’re going to work in the future and what type of work comes to them. So, be very open and upfront and get them engaged very early on in understanding what you’re trying to achieve as a business. Because, at the end of the day, AI is just a component of what you use to deliver support, and it’s really about how AI will complement human support, which ultimately gives the most compelling experience for your customers.

Acknowledge that things will change – this kind of technological advance is not without some changes – and involve the support team in planning the changes ahead. They ultimately know your customers better than anyone else. It’s really important to listen to your team and help them shape the ultimate way you’re going to deliver AI.

“Be very clear about the opportunities that a move to an AI-powered support model presents for the team: less mundane work coming in, new skills required, and new roles emerging”

Thirdly, communicate early. Bring people in early and often because this is quite a dynamic environment. We’re all learning as we go around exactly how AI can be applied. Technologies like Fin are really opening up new opportunities around how you think about the customer journey. Make sure you’re communicating all of the changes that are happening very early on in the process, and make sure that you keep your team well-informed.

The other really critical thing is to be very clear about the opportunities that a move to an AI-powered support model presents for the team: less mundane work coming in, new skills required – so people can hone their troubleshooting and product knowledge skills – and new roles emerging in this space. Ultimately, you’re delivering more fulfilling work for the team. They can be more consultative with their customers and spend more time solving complex problems.

Liam: Geronimo Chala is Chief Client Officer at Rebag, a website and app where you can buy, sell, and trade luxury accessories, including handbags and watches. Geronimo, like everyone in CS at the minute, is in the thick of it.

Geronimo: Acknowledging that this change is coming was the first step we took. When we embrace AI, it’s not only about the technology – it’s about how the organization is set up to actually partner with this technology and use it to its maximum advantage. So, when we think about organizational changes – updating our org to support the implementation and managing tools within the technology – we’ve got to think about whether this is going to require new roles or a shift in what we were doing. When we look at, for example, CS agents, does this allow us to be a bit more streamlined or cater to a personalized experience? How does a CS role change?

Support is not going away. This is not replacing human support or touch. This is just adding efficiencies and faster responses so we can actually spend our time catering to what the future is going to be for that individual the next time around on our site or one of our locations. Understanding how that structure is going to look is really important so you can manage the technology appropriately and gather insights. If managed properly, AI is going to give us a lot of in-depth knowledge on human behavior, whether that’s from the type of questions coming in, how the questions are coming in, the tone of voice in that interaction, how’s that impacting NPS, CSAT… How do we take this information and disperse it between departments using summarization tools that AI is already offering to really provide that next-level support? I think a lot of it has to do with organizational change and changing the mindset so you can embrace this new bit of technology that’s going to transform the customer experience.

“Complex questions are what you want to have your humans handle. Some end users don’t want to talk to a human for simple questions, they expect that to be handled more with self-serve”

Liam: Rati Zvirawa is our Senior Group Product Manager at Intercom. Rati, what’s it like from your vantage point?

Rati: It’s been interesting talking to customers about how Fin and AI fit in the picture. Within your teams, having product experts and content experts becomes really critical. For a long time, with help centers, we’ve looked at things like views, hoping customers would go there. And maybe your agents are sending that content, but we’re now seeing that there’s a tighter loop of feedback where the human is extremely important to help identify those gaps in content, but also how to shape that content so that AI can be powerful.

So, it becomes this really interesting interaction. I used to work in a frontline role as well, and another change we’re seeing is that you get a lot of repetitive and simple questions, and we’re seeing that shift where teammates now are having to spend time on more complex questions. Complex questions are what you want to have your humans handle. Some end users don’t want to talk to a human for simple questions, they expect that to be handled more with self-serve. This is the shift we’re starting to see in the market.

Listen to the full episode here:


The AI proof is in the pudding

Liam: 2023 also brought proof of AI’s potential in CS. Researchers from Stanford University and MIT conducted a year-long study to measure the real-world impact of generative AI among over 5,000 customer service agents at a Fortune 500 software firm. The results showed that customer service worker productivity increased by 14% on average, with a staggering 35% jump among the newest or lowest-performing workers. I spoke to MIT Ph.D. candidate Lindsey Raymond, one of the researchers behind the groundbreaking study, about their work and the transformative impact of AI in the workplace.

Lindsey: We look at a generative AI-based tool that’s designed to help tech support workers when they’re solving people’s tech support problems – telling them what to say, how to solve the specific tech support problem, and how they should communicate that to the customer.

And we do a difference-in-differences analysis – a very gradual rollout of this tool across people over time so we can try to get at the causal effect of the tool. We’re looking at workers providing tech support for a Fortune 500 firm that makes small business and accounting software, mostly for US-based small businesses.
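The staggered-rollout design Lindsey describes is what makes the causal estimate possible: you compare the change in outcomes for agents who got the tool against the change for agents who hadn’t yet. A minimal two-period sketch of that comparison, with entirely made-up numbers (this is an illustration of the method, not the study’s actual code or data):

```python
# Hypothetical two-period difference-in-differences (DiD) sketch.
# "Treated" agents receive the AI tool between period 0 and period 1;
# "control" agents have not yet received it. All figures are invented.

# Mean resolutions per hour for each group, before and after the rollout
treated = {"before": 2.0, "after": 2.5}
control = {"before": 2.1, "after": 2.2}

def did_estimate(treated: dict, control: dict) -> float:
    """Effect = (change for treated) - (change for control).

    Subtracting the control group's change nets out trends that hit
    both groups (seasonality, product changes, easier ticket mix),
    isolating the change attributable to the tool itself.
    """
    treated_change = treated["after"] - treated["before"]
    control_change = control["after"] - control["before"]
    return treated_change - control_change

effect = did_estimate(treated, control)
print(f"Estimated effect of the tool: {effect:+.2f} resolutions/hour")
```

The real study layers worker and time fixed effects on top of this basic contrast, but the core logic is the same subtraction of two differences.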

Liam: They looked at a lot of different outcomes, like how quickly people resolved calls, how many issues they were able to resolve, and customer satisfaction, as well as some measures of organizational change.

Lindsey: How does this impact employee turnover? How does this impact how often they talk to each other or with their managers?

Liam: You might be wondering why, out of all the potential areas of generative AI, Lindsey and her colleagues chose customer support to focus on.

“On average, access to AI improved productivity by 14%, but that hides a lot of heterogeneity. For the least experienced and lowest skill workers, it actually improved by 35%”

Lindsey: Generative AI needs a lot of data to work really well. If you look at a sector of the economy where there’s high-ish penetration relative to everywhere else, customer support is that area. There’s been a surprising amount of activity for the actual rollout and development of these tools. And that’s because there’s just so much data in that area, particularly text data.

A lot of it is just automatically associated with outcomes – how quickly did that worker resolve that problem? And there’s also a lot of room for improvement. It’s a well-known fact that there are pretty huge productivity differences between your top-performing customer service agents and your bottom-performing ones. It’s also an area where there’s been this huge shift to doing more with contact centers over the past couple of years. And so, it’s an area where there’s a big business need to get better at this.

Liam: So, over the course of a year, they studied all of this using data from 5,179 customer support agents. And what they found is intriguing.

Lindsey: The headline number is that, on average, access to AI improved productivity by 14%, but that hides a lot of heterogeneity. For the least experienced and lowest skill workers, it actually improved by 35%. The most experienced and productive workers see almost no effect.

Liam: So, the gains accrue disproportionately to less experienced and lower-skill workers. Why does that occur?

Lindsey: I think that is probably the most interesting part of the study. Any machine learning-based tool uses a training data set and looks for patterns in the data. So you, as a programmer, don’t say, “Well, I know this phrase works well, so do this, and I know this is the common solution to this problem, and this is the common solution to that problem,” and you put that information in your program. That’s not how ML works.

What we think is happening is this diffusion of best practices enabled by AI. And that’s why we see those really big productivity increases for the low-skill and inexperienced workers.

In our setting, specifically, the tool looks at the conversations of the high performers and compares those to that of the low performers. It looks for differences between what the high and low performers are doing that are associated with successful outcomes. What is the way they greet customers? What are the solutions they propose? How do they start asking diagnostic questions? Then, it takes all of those things and turns those into suggestions that it generates for everyone.

The high-skill workers are providing the content for the AI – the suggestions are mostly things they’re already doing, because that’s what the AI has been learning from. When a tool suggests things you’re already doing, you’re probably not going to see huge productivity effects from access to it. It’s the workers who are very new or at the bottom of the productivity ranking who really benefit from those suggestions, because those are the things they haven’t figured out how to do yet. It’s the low-skill workers who change a lot and start moving closer to communicating like high-skill workers.

Liam: Were you surprised by the results?

Lindsey: That’s a great question. In any study where you see 35% productivity increases, that’s pretty shocking. You could imagine going down to a four-day workweek with those effects. I think that was pretty surprising off the bat. The fact that we saw effects not just in workers handling calls a little bit faster, but also in improving the share of problems they solve – which is more of a knowledge-based outcome – means the tool is enabling them to solve problems they weren’t able to solve before. And then, we see pretty big increases in customer satisfaction. Those were, I think, all surprising.

Listen to the full episode here:


How are companies adapting to AI?

Liam: By June, we had released our State of AI in Customer Service: 2023 report, in which we dive into the top five trends transforming customer service. In a nutshell, we found that the sooner you adopt AI in your customer service strategy, the greater your chance of winning a competitive edge.

We surveyed over 1,000 global customer service professionals to find out how they’re adapting to AI, how they plan to leverage AI and automation, what opportunities they hope it will bring, and their concerns. We then compiled that into our 2023 report. The report is full of additional insights and actionable tips; the link to download it is in the show notes. I sat down to talk about the trends in the report with Samuel Miller, Customer Support Operations at Dental Intelligence, Ruth O’Brien, Director, Automated & Proactive Support at Intercom, and John O’Hara, Director of Customer Experience at Logikcull.

The first trend is that AI investment is accelerating at a blistering pace, and there’s a huge opportunity for early adopters. We found in our data that there is an investment gap: an overwhelming majority of support leaders plan to invest more in AI this year, but only 38% have actually done so. This means those moving faster can gain a competitive edge while others play catch-up. Ruth, if I can come to you first, why was the initial investment in AI so slow?

“The world is changing, people are trying to keep up with it, and they understand that if they get going with this, they’re going to have a competitive advantage over some of their competitors”

Ruth: Honestly, I think it’s because some of the technology in the past wasn’t amazing, so people weren’t convinced. It also takes time and effort to build a strategy around AI and automation. It doesn’t quite work out of the box, so there needs to be a plan around it. And in a world where customer support is very reactive – many support teams are running to keep up with SLAs, backlogs, and long customer wait times – taking time away from the immediate firefighting to build a proactive future with AI can be hard for teams.

Liam: And so what changed?

Ruth: What changed? Well, for one thing, the technology is getting better and better, especially over the course of the last maybe eight months or so. Big changes have come in the world of AI. More and more teams are either speaking with peers in the industry or interacting with good chatbots, whereas before, I think people would’ve been interacting with not-so-great chatbots. The world is changing, people are trying to keep up with it, and they understand that if they get going with this, they’re going to have a competitive advantage over some of their competitors.

Liam: Totally. In our survey, the top applications where CS leaders expect AI to make a big impact were faster responses, more consistent responses, saving money on training, and summarizing conversations. Where do you see the biggest value add for AI, John?

John: That’s a great question. I’m looking at these numbers and thinking that my team is already doing really well with speed and consistency, and we have great partnerships with customer success and training. What motivates us to focus on AI as a benefit is bandwidth. We’ve got a very small team – eight US-based reps providing 24/7 support to a global audience. AI represents effective case deflection for those simple break/fix questions where people just need information about how the app works. That’s a whole tranche of conversations that can be taken off of my team’s plate so they can focus on more consultative work. That’s probably the biggest driver for us.

“Some teams see AI and are thinking about headcount reductions, but how do you use the resources you have right now to go above and beyond?”

Liam: And Sam, what about you? What’s the biggest value add for you?

Samuel: For us, it’s really about saving money on training, because we don’t have to train new agents on every single thing. We can just train them on the major issues they’ll handle, and not so much on the day-to-day things customers can find for themselves in the knowledge articles and so on. It allows us to go deeper into the training more quickly.

Liam: And Ruth, what about you? What can we expect in the near future?

Ruth: Honestly, faster responses have been a huge piece of efficiency for the wider team. Some teams see AI and think about headcount reductions, but instead of doing more with fewer heads because AI is taking care of more work for you, how do you use the resources you have right now to go above and beyond? The AI is answering customers immediately and taking care of some of the less complex queries, and that, in turn, frees up the humans to get through the queues and deal with the more complex ones. We’re seeing great results in applying this technology to efficiency and response times in customer support.

Listen to the full episode here:


AI that suits you

Liam: The rise of AI has ushered in a new wave of possibilities for businesses seeking to optimize operations and win a competitive edge. Now, organizations are presented with a vast array of tools and solutions, each promising transformative outcomes, and the challenge lies not only in picking the right tools but also in ensuring a seamless integration and adoption.

And that’s exactly Andrew Louder’s area of expertise. Andrew is the founder and CEO of Louder Co., a consultancy that helps businesses develop AI strategies and processes to optimize operations and drive competitive advantage. He joined us on the show in August, and I began by asking him: What are the key differences between building and buying AI solutions? How could businesses decide which approach is more suitable for their own needs?

“Yes, it’s become relatively simpler to build AI, but it still takes a lot of time, money, heartache, effort, and sleepless nights”

Andrew: If you and I were having this conversation five or eight years ago, the answer would be completely different. Back then, in order to get AI into your business, you more than likely had to go and build something custom, and doing that took a lot of blood, sweat, tears, money, and time. But there’s been this movement of democratization of AI where the power of these tools and workflows can get into the hands of many to create powerful applications.

Today, the analysis around build versus buy is driven by a couple of things, but it starts with awareness. You might have a vision or a problem that needs solving, and if you don’t know what’s out there, you may find yourself gravitating toward a decision to build. And yes, it’s become relatively simpler to build AI, but it still takes a lot of time, money, heartache, effort, and sleepless nights. It’s possible, and the cost has come down quite a bit, but the reason why you would consider that is that maybe there’s not a solution out there that can solve your problem, so you’re looking for something very unique.

Maybe you’re trying to establish a major competitive differentiator, whether in the line of a new product offering or in the way you operate internally. Those are the reasons you’d probably want to build – something doesn’t exist already and you’re looking to gain some new revenue. And obviously, when you buy something off the shelf, that’s something more commonly used, accessible, relatively quick to implement, relatively inexpensive, and you’re usually targeting a major operational efficiency gain too.

Liam: Are there any steps my business needs to be taking to be AI-ready?

Andrew: That’s a question we hear all the time. We have an AI readiness assessment that you can take through our website, and what it gauges is not necessarily whether or not you should be utilizing AI, because the blanket answer is absolutely – it’s to what degree, what depth, or even at what speed of change management you should be moving or feel comfortable moving toward. We grade things on strategy, operations, AI buy-in, and even the people who are part of the company, to get a feel for how ready your folks might be. Are you thinking strategically with AI in mind? Do you have an AI strategy? A lot of companies end up scoring rather low on that strategy piece because they’re just now starting to think about AI.

Another thing I get asked quite a bit is to come and present AI to leadership teams, which we do in the form of a workshop. Then, we get into what we call our rapid assessment, a great starting point for identifying the major pain points and tedious manual tasks in the business, and mapping those problems to potential AI solutions. We identify the quick-win opportunities and get into a roadmap of selecting and implementing the right tools. A lot of times, it starts with the readiness assessment, building awareness in the company, finding the low-hanging fruit of AI opportunities, and implementing those to start getting great outputs and outcomes and build some confidence in the business around AI.

Liam: So, if they buy the off-the-shelf AI solution, what do you think are the most crucial factors they should consider to ensure immediate benefits and a high return on investment?

Andrew: I think it starts with an inward look. If a buddy of yours asks you, “Hey, I’m car shopping. What car should I get?” I can suggest a car, but it may not suit you, right? Do you have a family? Do you want to go fast? Do you need to tow a lot of things? So you need to determine what’s going to be right for you and your business. You need to understand your processes and capture the requirements you need. You also need visibility into any security or compliance requirements your business or the tool needs to abide by. And then, on top of that, what integration points do you need to be aware of? What systems and other applications would you want to integrate with? Once you have that list, it comes down to what’s out there in the marketplace that can meet as many requirements as possible, and choosing the right one – not just based on those needs, but also on the cost analysis. Is the juice worth the squeeze? And then you get into implementation, which is all about configuring the tool, training it, and testing it.

“We have a client that’s gone from $10 million to close to $20 million in annual revenues and didn’t need to add a single person to their staff”

Change management is often overlooked, and it’s a shame, because you spend all this time setting things up only for CEOs to ditch the effort and say, “Hey, let’s just send a link out to our folks. We believe they’ll do it.” People need awareness, communication, and training. Some are going to need some handholding. That’s what really drives adoption up. We help our clients work through that whole process. And one thing I love highlighting is that we help our clients pilot the system so they get good feedback from those using it. And the people using it actually become positive influencers of the change in the company.

Liam: Many businesses are going to be eager to tap into AI for increased profits. Just how attainable is that, and are there companies doing it already?

Andrew: There certainly are. In my talks, I actually highlight Fin, the product by Intercom. And somebody raised their hand and said, “Look, when we first started implementing Fin, we had a team of five people providing customer service support online. We rolled Fin out, and now, those five people are providing the coverage and support equivalent to 20 people.” That’s huge, right? That’s a huge ROI. So, for that particular business that had its sights on growing, now they don’t need to make that expense on the payroll. The tool is able to adapt and become that exponential factor to the employees using it.

We have a client that’s gone from $10 million to close to $20 million in annual revenues and didn’t need to add a single person to their staff.

Liam: That’s great to hear. But time and cost are things I hear people mention a lot when it comes to this. They’re obviously essential metrics for businesses. What would you say to folks trying to develop strategies for evaluating the time and cost factors when making the decision to buy AI solutions?

Andrew: Look, a lot of data has been coming out, and I’ll give you three numbers to consider, okay? MIT put out a study that showed a 40% increase in productivity gains. McKinsey did a similar study that showed 30%, to be attained, I believe, by 2030. Deloitte put one out even before all the generative AI craze that said AI and machine learning are going to drive a 37% productivity gain amongst business users.

You can certainly run your own numbers, go case by case, and try to estimate, “Okay, if I implement this tool, it might create 40% productivity gain, shave off 40% of hours; we’re paying people X amount of dollars per hour…” If you can get to a number around that and do a brief calculation of time saved versus the dollar spent, it’s a huge ROI. And I’m also seeing the other part of the equation – the growth you can now attain without needing to increase payroll. We have a client that’s gone from $10 million to close to $20 million in annual revenues and didn’t need to add a single person to their staff, which would’ve consisted of eight to 10 more people. That’s a massive profitability gain with that growth. It’s a mathematical equation you need to run for your business.
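Andrew’s back-of-the-envelope calculation can be made concrete with a quick sketch. Every input here is hypothetical – team size, hours, hourly cost, productivity gain, and tool price are invented purely to show the arithmetic of time saved versus dollars spent:

```python
# Back-of-the-envelope ROI sketch for an AI support tool.
# All inputs are hypothetical, chosen only to illustrate the calculation.

agents = 10                  # support agents on the team
hours_per_year = 2000        # working hours per agent per year
hourly_cost = 30.0           # fully loaded cost per agent-hour, in dollars
productivity_gain = 0.40     # assumed 40% of hours saved (Andrew's MIT figure)
tool_cost_per_year = 60000   # assumed annual cost of the AI tool

# Hours the tool frees up, and what that time is worth in payroll terms
hours_saved = agents * hours_per_year * productivity_gain
dollars_saved = hours_saved * hourly_cost

# Return on the tool spend: net benefit divided by cost
roi = (dollars_saved - tool_cost_per_year) / tool_cost_per_year

print(f"Hours saved per year: {hours_saved:,.0f}")
print(f"Value of time saved:  ${dollars_saved:,.0f}")
print(f"ROI on tool spend:    {roi:.0%}")
```

The same template captures the growth side of Andrew’s equation: instead of valuing hours saved at payroll cost, you would value them at the revenue the freed-up capacity lets you take on without hiring.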

Watch the full episode here:


Support metrics in the age of AI

Liam: The thread of AI ran through most of our episodes this year, so it’s fitting that we finish with a conversation on how AI is creating support roles, not taking them. I sat down with Christian Osmundsen, Global Head of Customer Experience at Deliverect, Ruth O’Brien, Director of Automated & Proactive Support at Intercom, and Sam Forde, Merchant Success Manager at Zapiet. With roles changing and strategy changing – how do we measure this? What metrics are you all keeping the closest eye on as you introduce this new technology into your support operations? Christian, how about you?

Christian: Sure, Liam. As a general rule, we always look at and ask if it’s making us more efficient. That’s what we’re looking at first. And does it add value for our customers? We need to take a few months to see that – no hasty decisions.

“We were at a 13-14% resolution rate, which isn’t great. Today, we are at about 26%, so double that”

But when it comes to AI, we measure support volumes vigorously. We want to see what this looks like, not only in absolute numbers but also in per-account ratios, et cetera. Seeing the curves descending is encouraging for everyone. We look at response times and, of course, customer satisfaction – for the human interaction, for the part where the bot is involved, and for conversations Fin or AI resolves on its own. We are keen to see how that develops over time. I think in the beginning, a customer can struggle with being handled only by a bot. But I think, with time, this will completely change. It’s still early days.

Last but not least, we look a lot at the answer rate for AI. To prepare for this call, I went back to our reporting from May, when we started. We were at a 13-14% resolution rate, which isn’t great. Today, we are at about 26% – double that. And we have a goal of reaching 35-40% towards the end of the year, which I think is within reach. We actually see that on weekends, when customers think we’re maybe not open and are more encouraged to look for answers on their own, our rates immediately go up to 35%-ish.

Liam: Can anyone share anything in the way of the impact that AI has made on your KPIs so far? Sam, maybe I’ll come to you first.

Sam: I echo a lot of what Christian just said there. We’ve still been looking at CSAT, a traditional one, but it’s not your employees talking anymore – it’s Fin interacting with your people. A really important one for us, though, has been employee satisfaction within the business. We are seeing much greater retention rates – we haven’t seen anybody leave our business this year. And I think the engagement they’re getting in other roles is helping them think about having a career with us. Traditionally, support can be a role where people change quite often. But I think this is almost a golden age where you can go into support and not necessarily be just a support agent – you’re almost a support generalist, doing other different bits and bobs.

“We’ve put Fin in front of 8,000 people this year. It’s crazy. We’ve seen our CSAT jump 20 points”

We don’t have our employee satisfaction rating for this year just yet, but I can give an example of where Fin and the AI tech we’ve had this year have probably boosted it. Traditionally, our team retreat would’ve been two or three days. We would’ve run away as quickly as possible and then gotten back to the laptop and an inbox that hopefully hadn’t gotten too big. This year, we were able to go away for a full week, and Fin let us stay on top of things and keep running. If we can take the team away for a whole week, that’s hopefully going to do amazing things for that satisfaction score.

When it comes to actual numbers, though – we’re a much smaller business, but we’ve put Fin in front of 8,000 people this year. It’s crazy. We’ve seen our CSAT jump 20 points – we’ve gone from 70% to regularly being in the low nineties on a weekly basis. And if you look at the graph, the timing lines up exactly with when we implemented Fin.

As Christian mentioned, those weekends and out-of-office hours are the real sweet spot. We see our Fin rating much higher on the weekends. Even when real people are there, customers rate things much higher because they’re getting an instant response at times when they wouldn’t expect it. We identify as a very British brand, yet we have clients in Australia and Japan, and they’re getting answers straight away now. So, CSAT has improved massively. I think that’s a huge thing – if that alone doesn’t make you jump into it, I don’t know what would.

Listen to the full episode here:


Liam: Well, there we have it. A year of innovation and conversation. And those were just a handful of the thought leaders and experts who joined us this year. You can peruse our entire back catalog of more than 400 episodes wherever you listen to podcasts. Thanks to all of our wonderful colleagues and guests, but most importantly, thank you for listening.
