Intercom on Product: The new dawn of Machine Learning

We've been here before. Between media buzz, overstated claims, and the work on the ground, sometimes it's hard to distinguish fantasy from reality when dealing with Machine Learning. As neural networks mature and stand out from the pack, can the tech live up to the hype?

In the past five years, we’ve seen neural network technology really come into its own. GPT-3 can create human-like text on demand, and DALL-E, a machine learning model that generates images from text prompts, has exploded in popularity on social media, answering the world’s most pressing questions such as, “what would Darth Vader look like ice fishing?” or “what would Walter White look like if he were in Animal Crossing?”

We wanted to know what’s behind this surge, so we asked our Director of Machine Learning, Fergal Reid, if we could pick his brain for today’s episode. Despite the work still being very much a balancing act between what’s possible and what’s feasible, things, it appears, are just starting to scale. The tech landscape is changing, the business applications are (potentially) game-changing, and, spoiler alert, Fergal very much believes the hype.

In today’s episode of Intercom on Product, Paul Adams, our Chief Product Officer, and I sat down with Fergal Reid to talk about the recent buzz surrounding neural networks, how machine learning is powering businesses, and what we can expect from the technology in the next few years.

Here are some of our favorite takeaways from the conversation:

  • Neural networks have made significant headway in the past five years, and they’re now the best way to deal with unstructured data such as text, images, or sound at scale.
  • In CX, neural networks will likely be used with more traditional machine learning methods to choose actions that provide the best interaction possible with the customer.
  • Building ML products requires balance – it’s pointless to start with the problem if the solution is unattainable, but you shouldn’t start with the tech if it can’t meet real customer needs.
  • AI has been quite overhyped in the past. While it’s likely that more realistic claims close fewer accounts, it pays off in customer retention.
  • ML teams tend to invest a fair share of resources in research that never ships. Match it as much as possible with projects that have an actual impact on the customer’s experience.
  • If you want to invest in ML, hire someone with experience on both the tech and the operational side so they can start working with the product team from day one.

If you enjoy our discussion, check out more episodes of our podcast. You can follow on iTunes, Spotify, YouTube or grab the RSS feed in your player of choice. What follows is a lightly edited transcript of the episode.

The hype strikes back

Des Traynor: Welcome to Intercom On Product, episode 18. Today, we have an interesting topic to discuss. It’s all about artificial intelligence and machine learning. I’m joined, as ever, by Intercom’s Chief Product Officer, Mr. Paul Adams. Paul, how are you?

Paul Adams: I’m good, Des. Thank you.

Des Traynor: And today, we have a special guest, Mr. Fergal Reid, who’s our Director of Machine Learning. Fergal, how’s it going?

Fergal Reid: It’s good, Des. I’m really delighted to be on the podcast today. Looking forward to getting into it.

Des Traynor: Excellent. I think you’re our first or second ever guest, so you should feel very, very grateful.

Fergal Reid: I feel very privileged, indeed.

“We’ve seen sustained progress of something new and exciting – neural-network-driven technology – really starting to come into its own and be useful”

Des Traynor: Well, let’s start at the end, in a sense. It feels like the AI hype machine is once again in overdrive. This happens every few years from my vantage point, but what I can genuinely see happening is people creating a lot of art. The DALL-E generation has kicked off, and some of the created imagery is breathtaking. The other day, I saw there was a marketplace for DALL-E prompts where you can literally buy prompts that create images for you, which is as meta as it gets. In a more practical sense, GitHub Copilot can now augment your code as you’re writing, which is pretty incredible; I’ve played with GPT-3 from OpenAI, asked it questions and let it craft little paragraphs and stories for me, and it’s been pretty impressive. If we zoom back up a little bit, what’s actually going on? Has something happened in the last while? Has this got to do with any particular chain of events? What’s up?

Fergal Reid: It’s a complex thing to unpack – there’s a lot going on. There’s so much investment into this area of AI and machine learning across companies, so it’s difficult to unpack exactly what’s happening. If you look at arXiv, where people put their machine learning papers, there’s a torrent of new stuff every day. So, it’s difficult to thread a narrative through that. In my opinion, for the past five years, we’ve seen sustained progress of something new and exciting – neural-network-driven technology – really starting to come into its own and be useful. You mentioned GPT-3, OpenAI, and that’s what we call a large language model, which is a big neural network trying to predict the next word in a sequence of words it’s seen. And they are just scaling that up. They just added more and more compute to it, and it started to do amazing things.

Des Traynor: So, maybe just a couple of dictionary definitions. So, adding more compute, is that more CPU power?

Fergal Reid: Yeah, exactly. To go back a long way, the CPUs in our computers, the brain of our computers, were really, really fast at doing general-purpose things. And maybe in the mid-to-late nineties, primarily driven by video games and stuff, we had this mass-market adoption of GPUs, or graphics processing units.

Des Traynor: Like video cards and shit like that?

Fergal Reid: In video cards and your 3dfx card and everything. And they were really good at making graphics for computer games. Then, in the early 2000s, people were like, “Oh, the sort of operations we do for video games are really good for matrix multiplication.” And it turns out that sort of stuff is also really useful for the operations you need to do when you’re training a neural network. And so, after a long time, Nvidia’s stock value goes through the roof because there’s an AI and a crypto mining revolution.

The rise of neural networks

Des Traynor: You referenced a new uptake in work on neural networks. I feel like I heard about them when I was in college, back in the day. Has there just been more work put into them? Have they emerged as a primary way to do machine learning? Is there an alternative that we’ve moved away from?

Fergal Reid: Yeah, I would say that there is an alternative we’ve moved away from. Now, I don’t want to oversell neural networks. Neural networks are the new hotness, and almost all the breakthroughs you’ve seen in the last five years have come from neural networks. However, that is a subset of machine learning. In the machine learning team here at Intercom, neural networks are maybe 30% of what we do; we still use logistic regression stuff for predicting what someone’s going to do next.

When there’s unstructured data like masses of text or images or sounds, neural networks are now definitely the best way to deal with that data. For the breakthroughs you’re seeing – the visual stuff, the sound stuff, the text synthesis – you need a massive model that can really capture a lot of dependencies in that data, and neural networks are the main way to do that. People have invested a lot in making them scale, and you can run them much bigger. Some of the models you’re reading about might cost $10 million worth of compute just to train that model.

“In the past, for any unstructured text or image data, we’d look at it from a machine learning perspective and be like, ‘Don’t know what to do here’”

There are a number of things going on. We’re getting better at training them at scale. We’re getting better at expressing the problem in a way that we can make progress on and make sense of. Nvidia is continuing to improve GPU performance. So, there have been a lot of technological revolutions. It’s a confluence of a whole lot of different trends.

Des Traynor: To transition into the product aspect, what’s possible now that wasn’t before? DALL-E can take a prompt and produce an image; GPT-3 can produce pretty realistic-looking generated text. If you wanted to analyze a load of text and work out what it’s saying, reduce it down or simplify it or check for sentiment or whatever, is there some sort of list of capabilities we now have? The reason I’m asking is that I’m trying to tie this closer to how PMs should think about this.

Fergal Reid: Yeah, there are a few different ways to think about this. In the past, for any unstructured text or image data, we’d look at it from a machine learning perspective and be like, “Don’t know what to do here. The dimensionality here, the number of possible paragraphs of text that could be in my document, is crazy high. I don’t know how to deal with that with traditional machine learning.” And you can do stuff like extract features and say, “I’m going to break this into a bag of words and extract stuff from that.” But what’s different now is that your methods for working with that data are just going to work much better than they would’ve in the past. And you don’t need as much hand engineering of features. You can use a neural network.
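The “bag of words” idea Fergal mentions can be sketched in a few lines – a minimal illustration, using a made-up support query, of how free text gets collapsed into countable features a traditional model can consume:

```python
from collections import Counter

def bag_of_words(text: str) -> Counter:
    # Discard word order; keep only how often each token appears.
    return Counter(text.lower().split())

# Hypothetical support query; each count becomes one numeric feature
# a traditional classifier (e.g. logistic regression) can take as input.
features = bag_of_words("How do I reset my password My account is locked")
```

The obvious loss is context: “my” appears twice here, but the representation can’t tell which words it modified – exactly the limitation that hand-engineered features tried to patch and neural representations largely sidestep.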

We’re starting to see middle steps, middle layers emerging. There’s this thing we call embeddings, where you can take one of these big neural networks that has been trained on a ton of text data – released by Google or one of the big players who spent that $10 million on training – and use it to convert any text you give it into a vector of numbers. Then, you can do stuff with that vector of numbers. So, there’s been breakthrough technology, but it has given us building blocks that startups can actually work with to make products.
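As a toy illustration of “doing stuff with that vector of numbers”: once each text is an embedding, semantic closeness reduces to vector geometry. The three-dimensional vectors below are made-up stand-ins for what a pretrained model would actually return (real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 = same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up embeddings for three customer messages.
refund_a = [0.9, 0.1, 0.0]   # "I want my money back"
refund_b = [0.8, 0.2, 0.1]   # "Can I get a refund?"
shipping = [0.0, 0.1, 0.9]   # "Where is my parcel?"

same_topic = cosine_similarity(refund_a, refund_b)
diff_topic = cosine_similarity(refund_a, shipping)
```

Under a decent embedding model, the two refund messages score far closer to each other than either does to the shipping one – which is what makes clustering, deduplication, and neural search fall out of this one building block.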

“If you’re in any startup ecosystem dealing with a lot of unstructured data, particularly large volumes, where you’re maybe trying to make decisions with it, you should definitely be paying attention”

Des Traynor: So, the first X percent is done for you by the bigger companies?

Fergal Reid: Exactly. Or an open consortium, as well. There are people forming consortia that put together a lot of money to train something big that is then released.

Des Traynor: So, if your product involves lots of human-written text, either creating replies, writing it, parsing it, or understanding it, you should assume that the ground has shifted under your feet over the last couple of years?

Fergal Reid: Yeah, I think that that’s a fair assumption. If you’re in any startup ecosystem dealing with a lot of unstructured data, particularly large volumes, where you’re maybe trying to make decisions with it, you should definitely be paying attention. The capabilities landscape has changed. 10 years ago, there was nothing you had to worry about, but now, maybe there’s something cool you can build that you couldn’t before. We’re starting to see a change in stuff as simple as search. Six, seven years ago, you would get Elasticsearch or something like that and use these tried-and-true algorithms to handle your searching. Now you can use neural search. And we’re starting to see emerging technology and products in that space.

In search of the next best action

Paul Adams: One thing I’d love to ask you about is the products that promise the next best action. I think this is important for product teams for two reasons. One is just products in that space – if you have a customer communication product or a product for sales teams, there’s a lot of promise around telling the salesperson what the next best action is. And product teams are often trying to get their customers and users to do more and engage more, so it’s a tool for them to drive growth. How much of that is hype? How much is real?

Fergal Reid: There’s always a problem with these machine learning products, and I say this as someone who builds machine learning products for a living, which is that it’s very difficult to tell how much is hype and how much is real from the outside. I can’t speak about specific products unless I’ve analyzed and benchmarked them. I would say that the next best action stuff is actually less likely to be neural networks. Or if they’re there, they’ll be there as a component of it. To put it in an Intercom context, I’ll take the text of the conversation that’s been happening between the support rep and the end user and use embeddings to try and make sense of that. But then, I’ll probably put that together with a bunch of other signals about what’s going on, maybe the value of the account or where the customer’s at in their customer journey, and use a more traditional machine learning classifier or regressor to try and predict, “Okay, what’s the next best thing I could do?”
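The pattern Fergal describes can be sketched by hand: concatenate the conversation embedding with structured account signals and score the combined feature vector with a logistic-regression-style classifier. Every name and number below is illustrative – a real system would learn the weights from labeled outcomes rather than hard-code them:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def next_best_action_score(conversation_embedding, account_value,
                           journey_stage, weights, bias):
    # Concatenate unstructured-text features (the embedding) with
    # structured business signals into one feature vector.
    features = list(conversation_embedding) + [account_value, journey_stage]
    # Classic logistic regression: weighted sum pushed through a sigmoid.
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return sigmoid(z)

# Illustrative inputs and weights only.
score = next_best_action_score(
    conversation_embedding=[0.4, -0.2, 0.7],
    account_value=1.0,      # e.g. normalized account spend
    journey_stage=0.5,      # e.g. normalized onboarding progress
    weights=[0.8, 0.1, 0.5, 1.2, 0.3],
    bias=-1.0,
)
```

In practice you’d produce one such score per candidate action and suggest the highest-scoring one – the embedding does the heavy lifting on the text, while the classifier on top stays simple and interpretable.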

“As the accuracy increases, increases, increases, it crosses a critical threshold where it’s like, ‘It’s not always right, but it’s useful, and I don’t have to think. It helps’”

And this stuff works pretty well. We have features in our products that use more traditional machine learning methods that attempt to predict, say, what someone’s about to ask when they come to a website and open up the messenger. We do that based on all the data and signals we have about that user, and that works pretty well. If you can make good predictions with that, it’s a short step from there to something more general that’s the next best action.

I bet that stuff works pretty well. I would have reasonable expectations of accuracy. All these things work well when they’re augmenting and helping somebody. If the accuracy is too low, it’s like, “Oh, this is annoying. It’s crappy. It’s not worth paying attention to it.” But then, as the accuracy increases, increases, increases, it crosses a critical threshold where it’s like, “It’s not always right, but it’s useful, and I don’t have to think. I can just look at it and recognize that it helps.” That’s what we’re looking for with these products, and I’m sure there are people in the industry that have things like that.

Des Traynor: Yeah. I feel like Gmail’s auto-complete has crossed that perceptual cliff where I wouldn’t turn that feature off. You’re typing in a reply, it guesses the next two things you’re going to say, and you can hit tab, and maybe you would change a sentence or a word or something like that, but it’s directionally more valuable than not.

“I see a future where we can learn what suggestions prompt teammate behavior that gives better CSAT or better lifetime value of the customer in a win-win way”

Paul Adams: It’s funny, though. I think it changes behavior. I look at the suggestion and go, “I wouldn’t quite say that, but it’s close enough.” Tab, tab, tab. Enter, send.

Fergal Reid: I wonder if they ever do experiments where they measure the suggestions and the sentiment of suggestions they produce, and how they changed the real world. Facebook infamously did some experiments like this back in the day. If you look at something like Intercom, I see a future where we’re starting to make smart reply recommendations within the inbox. I see a future where we can learn what suggestions prompt teammate behavior that gives better CSAT or better lifetime value of the customer in a win-win way. Those low-friction prompts. I think about that whenever I write “I love you” to my wife. Sometimes I get the suggestion for “I love you,” and I’m like, “I’m typing that myself.”

Des Traynor: Yeah. There is something Marshall McLuhan-y about it – we shape our tools, and our tools shape us. You could imagine that a CS rep newly onboarded to a team that uses Intercom will actually end up talking and typing a lot like their colleagues based on the fact that Intercom’s telling them that this is the behavior that seems to perform best. It’s almost like a school for customer support.

Fergal Reid: We talked to some customers who loved the idea of a low-friction training ramp for new reps, which seems like the best practice. That’s what the system nudges you to do in a good way.

Problem vs. tech

Des Traynor: If we go back up a level, I feel that a lot of the narrative, even when, say, DALL-E came out, the most popular threads on it were things like, “Can anyone name a good use case for this?” Or, “Here’s my best idea.” Obviously, everyone’s mind goes, “Oh, you could build a t-shirt company,” or whatever. My best stab at what it could be useful for is annotating a children’s storybook. Imagine a tool where a child types a story and images appear to augment it. You could also see how it could be a plugin for Squarespace or Mailchimp to replace stock photography. Keynote or Google Slides would be similar.

I do feel like we’re approaching this backward, though. We’re saying, “Given that we can now take text and produce images, let’s build a company out of it,” which is not where the best companies come from. Usually, they tend to want to solve a problem in the world. What’s the best way for a founder or a PM to think about this space? Generally speaking, they probably obsess about a problem, not about a particular piece of new neural tech.

Fergal Reid: This is a very complex question. A lot of the time, the standard advice is that if you’re building some new technology startup, you never want to be a solution looking for a problem. You want to find a real concrete problem and then approach a solution. I think that’s generally good advice. At Intercom, we’ve got a principle around starting with the problem. But I think there are exceptions to that. With genuinely disruptive technology, where you’re like, “Something’s changing the world, it’s changing the landscape, there’s new capability here, and I don’t know what it’s good for, but I know it’s going to be revolutionarily good for something,” I think it’s okay to start with the solution and then look for problems.

“There’s no point in starting with the problem if you’re trying to build a technology solution that just isn’t capable yet”

I believe the hype about ML and AI at the moment. I’d say that this time it’s real, and so it’s fair game to say, “Look, we’ve got a revolutionary capability here. Where are all the great opportunities where this can be applied?” Then, obviously, there’s an interplay. When you think you’ve found an opportunity, you probably want to go and start with the problem.

The machine learning team here at Intercom is a little unusual compared to other teams. We adapt the product principles a little bit more than other teams do because we have to be in this grey space between starting with the problem and the technology. There’s no point in starting with the problem if you’re trying to build a technology solution that just isn’t capable yet. So, we’ve got to start with the technology a little bit, do some prototyping, get some idea of what’s possible, and then go and really sweat the problem and ask, “Is it useful or not?”

Des Traynor: It’s almost like you have to look through both the demand and supply side of innovation, in a sense. Out of all the problems we can solve and the capabilities we have, where is there a good fit, an intersection? If we take our product Resolution Bot, how would you articulate that as a problem/solution pairing?

“With Resolution Bot, we didn’t use neural networks or anything for our version one, but we had the conviction that it was possible to build something good here”

Fergal Reid: When we started out, we were aware that there was a move in the technology and the product landscape where bots had been really poor, and they were starting to give compelling experiences in very limited circumstances where, “Okay, there’s something here.” And then it was like, “Okay, can we take our particular domain, can we take chat and conversations and see if there’s that marriage, that match between the problem and the technology that’s going to give great customer experiences?”

With Resolution Bot, we didn’t use neural networks or anything for our version one, but we had the conviction that it was possible to build something good here. We made a minimal technology investment, validated that a crappy knocked-together prototype would actually help customers and that people would actually want it, de-risked it, and then iterated and iterated and iterated. We’re now on version three or version four of our technology, and it uses very modern, fancy neural networks, and it gets best-in-class performance and accuracy. But the first version was Elasticsearch off-the-shelf just to validate that this would actually help people.
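The spirit of that off-the-shelf first version can be sketched as simple lexical matching – this is not Intercom’s actual implementation, just an illustration of the kind of keyword retrieval an Elasticsearch-backed v1 relies on before any neural ranking exists:

```python
def keyword_overlap(query: str, answer_text: str) -> int:
    # Score an answer by how many distinct words it shares with the query --
    # crude lexical matching in the spirit of off-the-shelf search.
    return len(set(query.lower().split()) & set(answer_text.lower().split()))

# Hypothetical mini knowledge base of canned answers.
answers = {
    "reset-password": "how to reset your password",
    "billing": "update your billing details",
}

query = "I need to reset my password"
best = max(answers, key=lambda key: keyword_overlap(query, answers[key]))
```

A baseline like this is enough to test the product hypothesis – “will people accept a suggested answer at all?” – before spending on neural models that handle paraphrases the lexical version misses (“I’m locked out” shares no words with “reset your password”).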

You want to guide that search. You want to be like, “I know there’s something good in this general direction of the product space.” You don’t want to end up with validated demand for a product that is impossible to deliver. You also don’t want to be like, “I have an amazing algorithm that will definitely move the needle for something nobody cares about.” You’ve got to iterate on both sides of that equation and find some landing zone in the middle.

Too good to be true?

Paul Adams: There’s actually a third leg of the stool. There’s a problem, there’s a solution, and then there’s the story, or what you can say about it. One of the things I’ve struggled with when it comes to AI and machine learning is what you feel good about saying externally and what other people are saying externally. At its worst, it’s a tragedy of the commons where all companies come out and make huge claims, and the people who actually know what they’re talking about say, “They’re ludicrous claims.” But there’s this competitive dilemma. If our competitor says 80%, and there’s no way we think they can get that, but ours is 50, what do you do with that? What do you think about the claims you can make and the balance between the problem, solution, and story?

“I come across products in the market and assess their claims, and I’m like, ‘Does it actually do it? How do you evaluate that?'”

Fergal Reid: I mean, it’s very difficult. I think I would separate the internal product development from the success in the market. With internal product development, and this is true in Intercom, if I come and say, “Hey, guys, I’m pretty sure we can give a good enough product experience,” I’m at least accountable if it turns out that that’s not the case at all. So, internally, you’ve got to work with people and explain things well, but at least the incentives are aligned.

Externally, when people are competing in the market with machine learning products, it is really hard. I come across products in the market and assess their claims, and I’m like, “Does it actually do it? How do you evaluate that?” Even if I see a new research paper promising something amazing, and it’s got examples of “we said this to the AI, and this is what it said back,” my first question is always like, “Well, was that a cherry-picked example? Does it do that 9 times out of 10 or one time out of 10?” Because it’s very different depending on which. There’s always this implicit, “well, what’s the performance, actually?” You can’t really tell unless you do some sort of head-to-head where you sit down and play with it. Our customers are doing more head-to-head proofs of concept and evaluations, and I love that. That’s wonderful. That’s what we want to see.

“You can definitely overpromise, underdeliver, and then watch the account churn”

In terms of the space in general, I think you’re seeing people make demos publicly available more and more. With DALL-E 2, independent researchers got access earlier. Or they write stuff in the papers saying, “this is what it produces in one run on a standard prompt.” That helps people get their heads around it.

Des Traynor: There is a question of what sort of revenue you want because you can definitely overpromise, underdeliver, and then watch the account churn. Or you can say, “Here’s what we think we can do for you,” risk losing the deal, but know that if they convert, they will get what they converted for. I think it’s a dangerous world to be in – taking the high road versus the lower road; taking the customers who are going to get exactly what they thought they were going to get versus getting a lot of angry customers in month 11 because they didn’t get anywhere near what they hoped for. It’s a challenge.

Fergal Reid: It is a challenge, and there are so many facets to that challenge. We have to manage expectations, too. Machine learning is getting way better, but it’s still not perfect. We sometimes have customers who buy our Resolution Bot, and it’s good, best in class, but it still makes mistakes. Every software product still makes mistakes. So you’ve got to manage expectations on all sides to have that positive relationship.

Des Traynor: How do you think about resourcing machine learning? At Intercom, we have a team, led by yourself, that is separate from all the other teams and partners with them to deliver machine learning functionality. Do you think it’ll stay that way? Do you think teams should have embedded ML engineers? Every team at Intercom has its own designer – we don’t have a design team floating around looking for bits of design to add. Does it make sense the way it is? For our listeners out there, how would they dip their toe in? Would they start with a dedicated ML sort of pod, or would they have a person? How should startups start to bring ML in?

Fergal Reid: I have a strong opinion that a centralized machine learning team is better for organizations of our size or smaller at this point in the technology development. We’re dealing with immature technology here. The technology is hard to use and easy to get wrong. There’s a set of skills that overlaps with the skills of software engineering, data science, or analytics, but they’re not the same. I think it makes a lot of sense to have a centralized team who can work and hone that set of skills and learn the pitfalls because machine learning products have unique pitfalls. They’re probabilistic. As we mentioned, they sometimes get it wrong. And so, when you’re designing or building a machine learning product, you’ve got to really sweat whether the accuracy rate is good enough to provide a good customer experience. That’s hard.

“I think that a centralized model that then goes in to help on a project-per-project basis is the right model at the moment”

When you talk to a designer, one thing we often see is it’s difficult for them at the start to get their head around the idea that you can’t just think of the golden path where everything goes right. You have to consider all the paths where things go wrong and errors can accumulate. That’s difficult.

We’re at this weird intersection of software engineering, and we have to be able to deploy these products with data science or research. We’ve got to run a product team. We’ve got to be lean and efficient, but we also have to run a bit like a research team where we create space for innovation. Spent two weeks working on something and it went nowhere? That’s fine. We’ve got to be willing to invest in that. So, I think that a centralized model that then goes in to help on a project-per-project basis is the right model at the moment.

Keeping it real

Des Traynor: How do you deal with the fact that somebody like Fergal says, “Hey, Paul, we’re going to have a go at a product that could transform the nature of our customer support product, but it might not work, and you might not see anything for a while”? At the same time, someone like me is saying, “Hey, we need to hit the roadmaps, and we need to tell the company what we’re building and tell the sales team what to sell.” How do you resolve that complexity?

Paul Adams: As someone who worked for years on products that never shipped, I have deep, deep scar tissue about any sniff of a thing that isn’t going to ship as soon as possible, as small as possible.

Des Traynor: This would be your former employer, to be clear, right?

Paul Adams: Yes. In my former employment, yes. But from day one at Intercom, Des and I have always been obsessed with shipping and starting small. We obsess about scoping and getting something out as fast as possible, the smallest, quickest solution to the problem we’ve identified. So I always want that to be the case.

“I come from academia, and anyone who’s had time in academia has probably seen so many projects that promised the moon on a stick and then never do anything”

Now, obviously, this is different. One question I’d love Fergal to answer, though – a bit of a side note, but I think it’s important – when you were answering Des’ question earlier about how you resource a machine learning team, you were talking about the ML engineers. For almost all of our ML team history here, it’s been ML engineers. But we recently hired an ML designer. Can you tell us briefly about that, too? Because I think that’s an important part of the answer here. What does an ML designer do, and what’s the difference?

Fergal Reid: So, that’s a hard question. This is the start of her week three, so I don’t want to be talking on the podcast about what she’s going to do before talking to her…

Des Traynor: At a higher level. What do you think of machine learning design versus regular design?

Fergal Reid: Let me invert the order back again, and I’ll come back to this. I hate working on things that don’t ship. I have a Ph.D., I come from academia, and anyone who’s had time in academia has probably seen so many projects that promised the moon on a stick and then never delivered anything. And part of it is necessary waste, right? You’ve got to try a lot of things because it’s so risky. But part of it was always never going to work. And so teasing those two things apart is absolutely key here. I want the machine learning team to be as exploratory and as risky as necessary, and no more exploratory and no more risky than that.

We try to straddle two worlds here. We try and keep these extremely firm Intercom principles: if you’re going to fail, fail fast; start with the problem; start small, move in small steps. We try very hard to follow those principles. But we do the research and the risky stuff when we need to if we’re pretty convinced someone would want this. We want to be very, very clear about the risk we’re trying to eliminate at every phase in the development. So yeah, that’s how we operate. I would say we are more research-y than the average Intercom team, but probably more thoughtful about moving in small steps and about exactly the risk we’re trying to reduce than the vast majority of ML teams in the world. Certainly much more so than a research lab team would tend to be.

With that in mind, we’ve recently hired a designer for the first time in the five years we’ve had a machine learning team. That’s partly because the team is getting a bit bigger and can support that, and since our team has grown, our work is touching more and more parts of the Intercom product. We can do better than handling the product design ourselves and having an ML engineer figure out the product design envelope. We’ll be able to increase the scope of things we can work on, with less disruption to the teams in whose area we’re operating.

“Even when you’re doing something as simple as testing a ML system, what if it’s not working the way you expected? To unpack that, you’ve got to be willing to engage with a lot of complexity”

Des Traynor: Is it a different type of design?

Fergal Reid: There is definitely a certain type of design specific to ML and ML products. And we can see this when we’re interviewing or interacting with designers in this space. There’s sort of a product design or a systems design, and a lot of the time, its aspects are closer to API design than UI design. And obviously, there’s a big spectrum here. We have a great design team at Intercom. We have people used to working in all different parts of that spectrum. But there’s definitely a difference.

And also, you’re looking for quantitative skills to make progress in this space. It’s very immature technology. There will come a time, I’m sure, in five, 10 years, I don’t know, when Amazon and Google and everyone will have figured out the best API. And it’s going to have really nice docs and explain to you all the premises of that space, but we’re not there yet. We’re a very, very long way away from there. There are just so many trade-offs. Even when you’re doing something as simple as testing a machine learning system, what if it’s not working the way you expected? To unpack that, you’ve got to be willing to engage with a lot of complexity. Some designers are great at that, but for other designers, that wouldn’t be how they’d like to work. And so, you’re looking for someone who threads all those needles at once.

The future of conversations

Des Traynor: We’re coming towards the end here. I would like to try a little quick-fire round that I haven’t prepared you for at all.

Fergal Reid: Sounds good.

Des Traynor: Here’s my proposal. I name a product or a product space, and you tell me something that you think is possible that people don’t think about.

Fergal Reid: Oh my God. Okay. This is going to be futuristic and speculative and wrong, but let’s do it.

Des Traynor: Let’s start. Issue tracking.

Fergal Reid: A lot like customer support. You can probably do a lot more with clustering issues and detecting when a bunch of issues have one root cause. You can probably do much better with next-best-action suggestions and resolutions for common issues you’ve seen before.
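The issue-clustering idea Fergal mentions can be illustrated with a toy similarity-based grouping. This is purely a hypothetical sketch, not Intercom’s actual approach — a production system would use learned text embeddings rather than raw token overlap — but it shows how issues sharing a root cause can surface as one cluster:

```python
def tokens(text):
    """Crude tokenizer: lowercase, whitespace-split, dedupe."""
    return set(text.lower().split())

def jaccard(a, b):
    """Token-set overlap between two issue descriptions (0.0–1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_issues(issues, threshold=0.25):
    """Greedily group issues whose token overlap with a cluster's
    first member exceeds `threshold` -- a rough proxy for
    'these probably share one root cause'."""
    clusters = []  # each cluster is a list of issue strings
    for issue in issues:
        t = tokens(issue)
        for cluster in clusters:
            if jaccard(t, tokens(cluster[0])) >= threshold:
                cluster.append(issue)
                break
        else:
            clusters.append([issue])
    return clusters

issues = [
    "login page returns 500 error",
    "500 error on login page after update",
    "cannot export report to CSV",
    "CSV export fails with timeout",
]
grouped = cluster_issues(issues)
# The two login-500 reports land in one cluster; the export
# issues split or merge depending on the threshold chosen.
```

With the default threshold the two “500 error on login” reports group together, hinting at a single underlying incident — the kind of signal that could drive the suggested resolutions Fergal describes.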

“We’re heading for a future where any suggestions, smart replies in our inbox, or the Gmail-style type-ahead suggestions are starting to become an agent”

Des Traynor: Cool. This is going well. Project management apps. Say Basecamp or Asana or something like that.

Fergal Reid: Project management apps. Probably an insights layer, and you can build on top of that stuff. People always say that. It’s always easy to say and extremely difficult to make an insights product work, but it’s probably an insights layer. There’s probably a lot you can do with unstructured assets that are part of the project you’re tracking, and you can start to make sense of those in ways you couldn’t in the past. It definitely still feels research-y, but there’s probably something there.

Des Traynor: Cool. Paul, do you have any?

Paul Adams: Well, one hot space we mentioned earlier is communication products. I’m thinking about Gmail, for example. Gmail was the same for the best part of a decade. And now, suddenly, it feels like there’s an explosion in all sorts of cool things happening in Gmail.

Fergal Reid: This is going to sound very self-serving, but I think that the sort of domain we’re in at Intercom is going to drive a lot of extremely exciting innovation that is going to percolate out to broader, more horizontal products like Gmail. To give an example of why I think this: if you look at our customers as they use Intercom, they’re having so many of the same conversations again and again. There’s so much structure in that domain. It’s this mix of unstructured text that they’re writing, but then there’s so much structure to the conversation, whereas Gmail is broad and horizontal. If you look at my inbox, any email could be completely different from the last. So, I feel that companies like us working in areas like customer support are going to be in a great position to innovate because there’s just so much structure to take advantage of there. You’re going to have suggestions, recommendations, and automation. And I think that will percolate out. That will go out to these broader, more horizontal products after it has proven its business value. We’re heading for a future where any suggestions, smart replies in our inbox, or the Gmail-style type-ahead suggestions are starting to become an agent. It’s starting to move in that direction, and we’re going to see more and more of that.

Paul Adams: It’s easy for me to imagine a world where a WhatsApp exchange between any one of us is literally just tap, reply. Tap, reply. Tap, reply.

Des Traynor: We’re trying to go for dinner. Here are four places. Bang. What time? Bang.

Paul Adams: But literally, you’re just inserting the recommended content.

“If you ever wonder what the future looks like, put the two humans doing the task where one person’s allowed to touch the computer and the other isn’t, and see how those two humans interface”

Fergal Reid: Now, is the future of the inbox something where you’re sitting almost looking at this dashboard of a conversational tree unfolding and your job is to guide the AI to answer questions for you? Maybe the future of coms and conversations looks a lot less like typing on a keyboard. You’re still going to do that sometimes, but maybe 80% of what you do will be much more structured, like guiding the branches on a tree.
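One way to picture the “guiding the branches on a tree” interaction Fergal describes is a tree of AI-suggested replies where the human’s only job is to pick a branch. This is a hypothetical sketch of the interaction model — the node type, `guide` function, and sample replies are all invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ReplyNode:
    """One AI-suggested reply; children are follow-up suggestions."""
    text: str
    children: list["ReplyNode"] = field(default_factory=list)

def guide(root, choices):
    """Walk the suggestion tree by human-picked branch indices,
    returning the sequence of replies actually sent."""
    sent, node = [], root
    for i in choices:
        node = node.children[i]
        sent.append(node.text)
    return sent

tree = ReplyNode("start", [
    ReplyNode("Thanks for reaching out!", [
        ReplyNode("Could you share a screenshot?"),
        ReplyNode("I've escalated this to engineering."),
    ]),
    ReplyNode("That's covered in our docs."),
])

# The agent taps branch 0, then branch 1 -- two taps, two replies sent.
path = guide(tree, [0, 1])
```

The human never types; they steer. Most of the work moves into generating good branches, which is where the ML sits in this picture.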

Des Traynor: The AI stops augmenting you, you start augmenting it.

Fergal Reid: Exactly, which is good. It’s almost like you’re managing somebody who’s answering your conversations. And that’s always my number one thinking tool for AI disruption. Imagine you had a very well-paid, smart human doing the task, and you told them what to do. What would that look like? People were like, “Oh, there’ll never be software that automatically programs, or it will never have visual programming. We’ve tried it a whole bunch of times.” Well, I used to do programming competitions, and if you have a highly skilled developer, you don’t say the exact keystrokes. You say, “Oh, now flip the list around. Okay. And then filter out those two things.” If you ever wonder what the future looks like, put the two humans doing the task where one person’s allowed to touch the computer and the other isn’t, and see how those two humans interface. That’s a great starting point for where AI could take us.

Des Traynor: That’s fascinating. Okay, last question. You mentioned that AI is a disruptive force. If our listeners haven’t already embarked upon an AI/ML journey, and let’s say they have the budget for a couple of heads, what’s the best thing to do?

Fergal Reid: Like any disruptive tech change, the first question is always: Is it still worth ignoring? And how long can you ignore it for? And I’m not saying to race and do something now. Maybe you’re in a space where you should continue to ignore this as it matures for another few years. If you can’t ignore it, or if you want to do something in this space, I would say to concentrate your resources and your budget and get someone who’s got some deep experience at the tech end and enough of a product head on their shoulders that they can go and work with your product team productively. And start exploring opportunities from there. Start building relationships with the designers in the company. Start sketching and figuring out, “Okay, what would be possible to design here?” And start mocking up some designs. Do they look exciting? Do they look good? Show them to your customers. Get customer feedback and take it from there.

After that, you’re into the standard product development – how do you know whether to resource something or not? But yeah, concentrate your resources and get someone who knows what they’re doing as much as possible, and then have them pair with your existing organizational assets. Don’t go and say, “We’re going to do some blue sky thinking. We’re going to start a new lab and put it off-site, and they’re going to go and build nothing for two years, and it’ll be very impressive.” No, definitely tread that balance. That’s what it’s all about. It’s about marrying and balancing technology and design.

Des Traynor: Fergal Reid, thank you very much for joining us. Thank you, Paul. And we’ll see you all again for Intercom on Product. Take care, everyone.