Intercom on Product: Riding the AI wave in 2024

Join us as we dive deep into the highs and lows of the past year – and what awaits us in the months ahead.

Generative AI took 2023 by storm. It dominated every podcast, presentation, strategy meeting, and end-of-year wrap-up – including this one. Because the truth is: one, we haven’t seen everything yet, even with this current advanced state of technology. We haven’t, as our own Emmet Connolly puts it, extracted all the juice there is.

And two, it’s all just getting started. Much like the arrival of smartphones and the mobile revolution in the past, we’re on the cusp of a new technological transformation – and it goes beyond generative AI and software. We’re talking hardware, devices, wearables. We’re eagerly awaiting the next moves of OpenAI and Anthropic, as well as tech giants like Apple, Meta, and Google. We’re starting to see AI products – some exciting hits and some unfortunate misses – not to mention the beginning of a wave of new, AI-first companies. And we couldn’t be more excited to see what’s in store for 2024.

And that’s what we’re going to be delving into on today’s episode of Intercom on Product. I sat down with Paul Adams, our Chief Product Officer, and Emmet Connolly, our VP of Design, to chat through the good, the bad, and the ugly of 2023, as well as our expectations and predictions for the year ahead.

Here are some of the key takeaways:

  • Identifying optimal use cases for deploying AI is key. It can easily be misapplied by trying, for example, to simplify structured, infrequent tasks with precise outputs that need to be reviewed by a human anyway.
  • As we step into 2024, we can expect to see increasing market readiness, deeper investments in AI, and the first truly AI-first companies and products coming out to market.
  • In the world of wearable tech, timing is crucial, and design is king. Integrating seamlessly into users’ lifestyles usually trumps creating “status symbols” that no one wants to use.
  • Much like with design, we can expect customer service to evolve into a continuous, infinite model that emphasizes ongoing relationship-building over a set of finite, ticket-solving tasks.
  • Designing for the future involves understanding a tool’s capabilities and AI’s suitability, and potentially incorporating under-the-hood flexibility while automating simpler tasks.
  • In an AI-driven world, there’s a risk that relying on automated tools and magic wand solutions may potentially compromise critical thinking and judgment.

If you enjoy our discussion, check out more episodes of our podcast. You can follow along on Apple Podcasts, Spotify, or YouTube, or grab the RSS feed in your player of choice. What follows is a lightly edited transcript of the episode.


AI takes center stage

Des Traynor: Welcome to Intercom on Product. I’m joined by Paul, our Head of Product, our Chief Product Officer. Hey, Paul.

Paul Adams: Hey, Des.

Des: And because this is a special roundup, we’ve brought in Emmet, our VP of Design. Hey, Emmet.

Emmet Connolly: Hey, folks.

Des: Today, we’re going to talk about 2023. I know we’re a bit late doing that. We’ll look back on the highs, lows, and weird shit that happened. And then, we’re going to look at 2024 and set some expectations, predictions, and just general ideas about what might be happening.

“Every presentation, and end-of-year wrap-up, it’s all generative AI”

We’ll start with 2023, a year when obviously generative AI ruled the roost. There was very little airtime for any other topic. Every failing startup strapped generative AI to its name and every boring incumbent decided to lean heavily on it to try and make themselves sound interesting. Paul, what made you excited this year?

Paul: Yeah. It’s hard to get away from AI. Every presentation, and end-of-year wrap-up, it’s all generative AI. The end of ’22 was ChatGPT, and then at some point in 2023 – I can’t remember exactly when – OpenAI launched their Vision API. When I first saw it, I think it was a demo, it was incredible – it could recognize photos and answer questions. It was just such an obvious step forward. A massive step forward in capability.

It just changed how I think about it. I ended up having what I was calling my first AI holiday, where myself, my wife, and my kids went to Seville in Spain. And it was the first time I was actually just walking around the city like, “What’s this? Translate that.” Some of that stuff was possible before with Google Translate and so on. But I literally had a camera on my phone that could now do AI things, you know? And it literally changed how we moved about the city, the choices we made, and the restaurants we went to. It’s the beginning of something huge.

Des: Were you at Google when Glass was launched?

Paul: No. I think it launched after I left.

Des: Why didn’t Glass work like this?

Paul: Well, it’s an interesting conversation. We should unpack it a little bit. I think Glass was maybe the right form at the wrong time. You see Facebook with the Ray-Bans now, and I think that could be big. But back then, it was like, “Buy this kind of Google nerdy glasses.” It might’ve been the right idea, but just way ahead of its time.

Des: All I really remember from the demo was like, “Okay, Google take a picture,” or some shit like that. I don’t think the software was there.

Emmet: I was at Google X when Glass was being developed, for what it’s worth. I mean, the software was there – it had things like directions, you could search the web, get answers, things like that. A lot of what you can do on your watch today – or could a few years ago, at least – or on your smart speaker with a small display, you could do.

“It’s a bunch of fairly foundational breakthroughs in terms of how physics systems work and all of the very smart puzzles they were able to do”

Des: What about you? What was your breakout moment for 2023?

Emmet: In an effort to not give Paul’s answer and the obvious answer, which is AI and all the associated stuff, I’m actually going to give my real answer. The most excited and enthralled I got from software this year was a video game, Tears of the Kingdom, by Nintendo. It’s a sequel, and first of all, they spent five years on the first version of the game and then seven years on the sequel, even though it had the same fundamental model and system. It’s fantastically well-designed, but it’s amazing to see what seven years of polish actually does for something. And it’s not just the craft – it’s the depth and complexity and the entire system of how all the different things you have in your inventory work together in perfect balance and harmony. It’s a bunch of fairly foundational breakthroughs in terms of how physics systems work and all of the very smart puzzles they were able to do. Doing this on years-old, already underpowered hardware is a massive technical achievement.

Des: Give us an example of the physics puzzle.

Emmet: Maybe you could imagine a rope bridge or something like that – these are things that are notoriously hard… If you pick up a rope in the middle, the end of it acts very differently, and it’s very hard to simulate all that physics computationally. They’ve managed to figure out how to do it because they spent seven years perfecting the engine on top of the previous five, and that was on the shoulders of giants as well.

“Maybe the reason it stood out to me is it’s just super refreshing to see software that isn’t iteratively done”

Des: What engine is it on?

Emmet: Proprietary. I have watched Japanese language game developer conference videos trying to learn more because I became mildly obsessed with the game and how they designed and built it. Lots of cool in-house stuff they use for designing the levels and things like that.

Des: It’s a very different way of building software to what we typically do and what’s expected in our industry, which is like ship to learn, iterate fast –

Paul: They should be on Zelda 11 by now.

Emmet: Yeah, 100%. Or a Zelda release patch 11 or something like that, which would totally take the magic out of the thing. It’s a fundamentally different type of thing, although lots of software and games now exist with monthly updates the way you’re describing. But maybe the reason it stood out to me is it’s just super refreshing to see software that isn’t iteratively done. I’m obviously a huge fan of the benefits of iterative software development, but it’s nice to see the exact opposite approach executed so well.

The good, the bad, and the useless

Des: What about if we go the other direction? Maybe lowlight is too strong a word. What’s the downside of AI? Are there areas where we’ve gone too far?

Emmet: Personally, I think it’s a bit too early to say we’ve gone too far. But I will say – and this is about our own experience as a design and product team working with it, not a comment on the industry at large or other launches – I’ve noticed it’s very easy to come up with both great applications and misapplications of AI. Some of the misapplications, which I think we’ll see in a lot of products and features coming out this year or next year, are the ones where maybe it tries to take an infrequent but quite structured task and make it easier with AI. So, instead of going through lots of clicking to set up a very deterministic workflow in your product, you could type a description and have the bot build a workflow for you.

The problem with something like that, because the output is deterministic and very precise, is you have to go and check the work. It’s also an infrequent task, so you don’t get a lot of time-saving at all from doing that in the first place. I think we’re still figuring out what it’s good for and what it’s not. Do you remember, five or six years ago, the previous hype cycle of bots and conversational commerce? There was lots of “check the weather” or “order flowers”, and that’s not really what I need to use a bot for. Even within products or applications of AI, I think we’ll find the good and bad applications.

Des: Out of curiosity, what’s the calculus there with “check the weather”? You’re basically writing the words “check the weather,” and that’s more characters than typing weather.com. I always felt it was optimizing the wrong part of the experience.

“Current AI tools have really opened up the possibility space for user input. But I think we have a lot of work left on the product side, on the output”

Emmet: And many fewer taps than tapping through to the weather app. And that’s a daily thing I tend to do as well. That’s a very basic one, but the number of taps is probably a good starting point.

Des: So, is the anatomy of misapplication of AI about the determinism of the output? Is it about the precision of the output? Where do you not want to see your designers applying AI?

Emmet: Yeah, probably in that thing… The current AI tools have really opened up the possibility space for user input. You can throw a bag of words or a couple of sentences at the thing, and it can do a lot with it. But I think we have a lot of work left on the product side, on the output. If it’s anything more than just a couple of paragraphs of text, which is often not super useful for workflows… We need to do more product work to actually create that output experience. I think you saw that with previous voice assistants as well. In the end, things like Alexa, Siri, and the Google Home assistant are for what? Alarms, checking the weather-

Des: Play a song.

Emmet: That’s pretty much it. I bet that’s 80% of the usage. And that’s because those are the outputs that worked really well. The feedback is instant, and you understand precisely what was done.

“The areas to avoid are probably the ones where you need the output to be extremely correct, precise, deterministic, with no mess-ups – ‘if anything goes wrong, it’s a disaster’”

Des: It’s also pretty unambiguous input as well.

Emmet: I think those systems can handle a lot more input and more complicated demands than setting an alarm, but a lot of the work that needs to be done there is probably still on the output side. To answer your question, Des, the areas to avoid are probably the ones where you need the output to be extremely correct, precise, deterministic, with no mess-ups, “if anything goes wrong, it’s a disaster.” Those are the areas where you’ll still see people doing a lot of manual clicking to get the right results.

Des: I’ve seen a lot of tools like Kittl where you give a text description of the image you want generated, and it will produce a pretty high-end SVG of an image that you can then play with. That works when you have a relatively broad spectrum of acceptability, but as soon as you get into, “No, this really has to look like the rest of my product,” before you know it, you’re back to drawing rectangles and changing colors.

Emmet: Anyone who’s played with Midjourney will tell you it’s awesome if you’re like, “panda on a skateboard,” and it gives you images you didn’t have in your head. But if you have a precise image in your head and you’re trying to get the thing to create it, i.e. a deterministic output you’re trying to create, it’s really frustrating and doesn’t work.

“If the cost of validation is effectively the same as the cost of creation, what are you doing? The AI is not helping you a lot”

Des: I had Victor Riparbelli from Synthesia on, and he was describing another frustration, which is the slot machine experience of generative AI. You get a panda on a skateboard and you’re happy with it, but you wanted a red skateboard, and now you get a totally different panda on a totally different skateboard. The skateboard might be red, but everything else is gone.

There’s an interesting kill zone, at least in B2B-targeted AI features. I often give the Workday example. It’s a well-known fact that I’m not a fan of Workday. I don’t like using it. I don’t like using it to book time off. The Workday version of “Play that song” is, for me, “Book October 14th off” or whatever. Again, it’s precise input, pretty easy to verify output, and not susceptible to misinterpretation. When you’re saying something like, “Design me a chatbot that asks the user to…” there are too many ways where that goes wrong. And if the cost of validation is effectively the same as the cost of creation, what are you doing? The AI is not helping you a lot. If you have to go and read the entire screen to work out if it did the thing you wanted it to do, that starts to eat into the productivity gain.

Emmet: I’ll tell you what’s a good example of what you’re just talking about. You mentioned the specific example of telling a chatbot to create a chatbot for you, which is how custom GPTs, the OpenAI product, works, right? The first time I used it, I was like, “Oh my god, it’s a natural language bot training UI.” You’re chatting to the bot about the bot you want it to be, basically, but then you flip to the other tab and you realize it was a bit of a Wizard of Oz switcheroo because there’s a load of form fields underneath for what you’re actually creating that are a lot more structured and broken out. As soon as I realized that, I was like, “Oh, to hell with chatting to the bot, I’m just going to go straight to the actual output the bot is creating.” Sometimes you’re like, “You know what? It’s quicker to know what I’m creating and just create it myself.”

Cautionary tech tales

Des: Paul, where do you think we lost ourselves in 2023 in terms of either over-hyping or over-criticizing or just too much extreme opinion?

Paul: To set the context for what I’m going to say: I think we’re at the start of a new S-curve. Technology happens in waves every five to 10 years, and the last cycle was mainly smartphones. Looking at smartphones, I think it’s really clear we’re at a plateau. And whatever we end up calling this next wave – generative AI, AI – it’s really clear to me that we’re at the bottom of a new S-curve.

And I think the S-curve includes not just generative AI, but also different types of devices. This year we saw Rewind, we mentioned the Ray-Bans from Facebook earlier, and then the Humane pin came out. It was just fascinating for me to watch the launch of the Humane pin. For anyone unfamiliar, Humane is a company that had been in stealth mode for a long, long time, and they’ve come out with this wearable pin that you put in a pocket or attach to your top or whatever. And the amount of hate that generated from our industry was sad to see. Now, maybe Humane didn’t get the form factor right, but a lot of people were very quick to criticize it, and I think there’s a lot to this space. It was kind of a downer. There was a two or three-week moment where-

Des: The OpenAI dev day.

Paul: The dev day, the Humane pin came out, the Rewind pendant came out at the same time-

Des: Tab. This guy, I can’t remember his name, produced a product called Tab, which is effectively a far cheaper version of Humane.

Paul: So, I think this new S-curve that’s taken off includes AI, of course, but I want to see all sorts of new types of devices and wearables and all sorts of stuff. Those few weeks were incredible to watch. I’m absolutely certain we’re at the start of this new S-curve. We saw the good, the bad, and the ugly, and the Humane one was kind of the ugly. It’s a sign of the times too, but good for them, they tried to do something new. If you look back at any of these S-curves, it took Google two or three versions of an Android hardware device to really get it right. The first one was this flip phone thing, and BlackBerry totally lost its way and ended up being the company that kind of failed. Innovator’s dilemma and all that. But that was, for me, a very insightful moment because I think there are different ways for people to approach these types of things, and there’s something cool going to come out of that space.

Des: I think that’s true. People took the piss out of Google Glass, which we spoke about earlier, but it’s pretty clear we’re all going to end up with some sort of technology-enabled headwear. And similarly, people took the piss out of the Segway, and yet you go to any European or American city and you’ll see a lot of people on scooters. Often, it’s not that these ideas are bad – they might just be early. Or they might be the right idea even at the right time, and it might just be that the V1 form factor wasn’t what was needed, and V2 is where you need to get to.

“I remember we were tapping the glass, and everyone’s like, ‘This thing’s a piece of shit. Amazing keynote, Steve Jobs, but no one’s going to want this’”

And you have to remember that we remember the iPhone launch differently from what the actual reaction was at the time. Literally, everyone was taking the piss out of this $700 phone that didn’t have a hardware keyboard, didn’t have 3G, didn’t have GPS… A lot of shit was missing. And everyone was just like, “What the hell is the point of this device?” And they were all accurate criticisms that were very quickly addressed in future revisions, and then it became the most dominant device in the world.

Paul: Totally. I worked in the mobile team at Google when the iPhone came out, and we were literally running down the street trying to buy them the first day you could. I remember getting that first iPhone back to Google, and this sounds stupid now, but phones before that had a keyboard. The BlackBerry was phenomenally successful. And I remember we were tapping the glass, and everyone’s like, “This thing’s a piece of shit. Amazing keynote, Steve Jobs, but no one’s going to want this. It feels horrible. The glass is cold and hard, and people want buttons.” And again, two or three years later, what does Android look like? An iPhone. There are versions of this that’ll play out for sure.

Emmet: Did you see the iPhone case that someone released this week that has a keyboard in it? Lots of people who basically never saw a BlackBerry in their lives are like, “Oh my god, amazing! A hardware keyboard for the iPhone.” Timing is everything on these things as well. It’s hard to tell unless you’re a couple of years removed.

Paul: I think timing and perseverance. There’s an interesting contrast between the things we’ve talked about so far. Zelda, software that took years and years and years, and hardware obviously takes a lot longer than software to build and get right. Humane launched their V1, it’s going to take them a year at least, I’m sure, to have a V2 that might be different. It’s interesting to think about the next version of these things and what shape and form they’ll take. I think we’ll see all sorts of wearables – pins, pendants, glasses, necklaces, watches. The watch is another one.

Emmet: If you were to use the iPhone and ChatGPT as the instigating moments of the S-curve you’re talking about, where would you say we are? Are we at the iPhone 4S stage? I think we’re still in flashlight app territory with the custom GPTs.

Paul: Very early start. All of us are guilty of looking at something and thinking it’s a bit shit, and God knows I’ve slagged off the Segway many times. But I think there’s something to the idea that we should be extremely open-minded. I think we’ll look back at this in three, four, five years, and the skepticism and criticism of today will look quite naive.

Des: So much tech just takes a while to find a home. Paul, I think Facebook are on to a total winner with the Ray-Bans, and the reason is the LLM they have is good. The ability to see a scenario and do something useful, like “Translate the menu,” is one obvious application. I also think sunglasses are a thing people already wear, so unlike many other wearables, where you have to convince people to wear them, sunglasses are something you’re going to wear a lot of the time anyway. And these take very little hit in aesthetics for being a tech-enabled pair of sunglasses. They still look and work like sunglasses. They released the second version just before Christmas – a camera twice as good, sound twice as good. They have AI, et cetera. It’s at one end of the spectrum, where I think it will be an everyday, wearable, AI-powered piece of tech.

And then the other end of the spectrum is, say, maybe Vision Pro. I think Vision Pro is going to be quite expensive. It’s going to probably be something you will not leave your house with. It’s going to be quite an immersive experience. I don’t even know if you’d call it a wearable. You’ll probably call it a different type of computing form. When you think about wearables, and I know you have a lot of background in the area, is the watch done for in a world post all this, or do you think the watch will have a resurgence? Do you see watches with cameras where you can point at things? Where do you think all this will go?

“Google Glass looked like a computer on your face, and something that’s incredibly important here is Ray-Bans are timeless”

Emmet: I think the watch has a good chance of being an important form factor. My reaction to the Humane pin was like, “Wow, cool.” But I really expect the next version of the Apple Watch, if they’ve got their skates on, to do 100%-

Des: Do you see watches with cameras?

Emmet: Yeah, we prototyped them back in the day. There’s no technical barrier, you know what I mean? This is the timing thing. And it is funny to hear you talking about sunglasses with compute because I am still living in the past in which Google Glass was a complete failure, fully mistimed. And by the way, they didn’t look like sunglasses or glasses, and that was a very important misstep.

Paul: I think Google Glass looked like a computer on your face, and something that’s incredibly important here is Ray-Bans are timeless. There are certain kinds of things in fashion where people are like… Adidas Samba? Timeless. Converse? Timeless. Some things are timeless, and Ray-Bans are timeless. The classic Ray-Bans are timeless, and no matter what’s happening, Ray-Bans will always be there. It’s an incredibly smart partnership and one reason why it might succeed.

Emmet: The original vision, for what it’s worth, for the glasses was a contact lens in which you could see information directly, and they kind of compromised their way back to the glasses. You realize now that a chunky pair of hipster sunglasses is a way better place to start because you can fit a computer in them.

Des: And people are already wearing them. I think people overlook that. When Apple launched the watch, they partnered with actual watchmakers to get a strap that made it look like a real, classic watch. Fashion matters when you’re asking somebody to wear something 24/7, and the Google Glass looked like lab goggles with a computer attached to one corner. It was an odd decision.

Paul: I think a mistake a lot of companies make in this space is they get wrapped up in their own ego and belief, and they like to think the devices they’re going to ship will become a status symbol. So, it intentionally looks different, and people want to turn their nouns into verbs, like, “I’m Hoovering the sitting room,” or, “I’m Dysoning the sitting room.” There is this ego-driven aspiration, and I think that’s a mistake. For something like this, it’s much better to try and fit into people’s habits. Your question earlier about why I did not use Google Translate… I’m now used to using ChatGPT on my phone. It’s an app I use quite a lot. And so, it was a habit I’d formed already; it wasn’t a new habit I had to form.

Des: What’s your prediction for Vision Pro?

Emmet: I bet when you try it, you’ll be like, “Oh,” similarly to the experience you had with the iPhone. But the keynote looked excellent to me. It looks like they’ve done a really good technology job. I’m looking forward to trying it out. As for the social questions around it, the fact that you’ll more than likely use it on your own at home, in your home office or whatever, probably gets them out of jail in that regard. I think it’ll be interesting to see. With something like that, you clearly have to let Apple be Apple and get back to me in three years, and we’ll see what they’ve managed to turn it into in terms of getting the cost down and getting the applications to be more applicable to regular people and so on. I have a hard time believing it’ll be as broadly adopted as the iPhone, but they might have another Watch or AirPods on their hands, in which case, great for them.

Des: There’s one manufacturer we haven’t mentioned – and they haven’t released any hardware, that’s why. But have you heard the rumors about OpenAI?

“We’ll see if [OpenAI] can pull it off because the likes of Meta and Apple who have all that vertical integration and have their own awesome AI labs and everything probably start from a stronger place longer-term”

Emmet: I think OpenAI are in an interesting position right now in that it seems like they’re still in the midst of trying to figure out what they want to be. What OpenAI might want to be is a hardware company, to your point, or a platform services company providing ChatGPT as a service, or a consumer company – that app is on the home screen of your phone now – and so on. And they’ll probably start to, in the next-

Des: And a lab as well.

Emmet: And a lab, to try to bring about general AI, which is probably their ultimate, top-level reason for existing. I think they’ll see lots of competition. With Apple, it’ll be very interesting to see what happens when Siri becomes properly AI-enabled. You were saying you’ve got your habit now and ChatGPT is ingrained, but most people don’t have that habit yet, and that habit will be a lot more ingrained when it’s an OS-level integrated thing.

I think they’ll see a lot more competition on the consumer side from startups, things like plugins, and even their custom GPTs haven’t quite, for me, at least, captured what I expected them to be. I think they’ll have to figure out what they want to be in the next year. But naturally, they’re the hottest company in the world right now, their ambitions are expanding and pointing in many different directions at once. We’ll see if they can pull it off because the likes of Meta and Apple who have all that vertical integration and have their own awesome AI labs and everything probably start from a stronger place longer-term.

Paul: I think one to watch here is Meta. OpenAI are an incredible company, and they’ve already changed the world. The future is so open, and there are so many opportunities. It will be interesting to see which ones they take. And there are other providers here, of course, like Anthropic and so on, but I think Meta have flown under the radar a bit, and if you start adding up the pieces, there’s obviously the glasses we’ve talked about, they also have LLaMA and they open-sourced it. Open-sourcing LLaMA and giving it to everyone is a completely different way to play this game. And as you said, they have the integration and Oculus and all sorts of different pieces of the puzzle.

Des: And WhatsApp. There’s an interesting set of tools they have. We all have this experience when we go home: the home tech, whatever you want to call it, is all fragmented to shit. You’ve got your Ring doorbell, your Nest camera, your Hoover… I just wonder if it’ll end up like that, where your glasses talk to Facebook, your phone talks to OpenAI, your watch talks to Apple… Will we have that problem, or will somebody actually just nail the full kit?

Emmet: Apple will definitely want you to be all in on their ecosystem and see it as yet another thing, like the Watch and the AirPods, that hooks you into buying an iPhone every year. I’ve heard this expressed as Dunbar’s number for bots, which is to say: how many bots do you have room for in your life? For every single product you go to, do you want a different copilot in the sidebar saying, “Hey, I’m your Workday copilot,” and you switch over to Intercom, and you have one there, and then you have one at the OS level, and another on your phone… Or is the Dunbar’s number for bots like one or two, where there’s one that sits on your operating system and one for the tool you use for your job all day, and that’s it? How many messaging apps would you say you use every week? Three, four, five? There’s some kind of dynamic that will play out there, and maybe you’ll have different bots for your personal life, work life, etc.

From thin wrappers to deep dives

Des: Let’s talk about 2024. What do we think the future holds – the future being the next 50 weeks?

“We’ll probably see some of the first truly AI-first companies and products that required a year and a half to build start coming out this year”

Emmet: A lot more, and maybe a lot more of a continuation of last year. I think we definitely haven’t extracted all the juice there is from this at all. In fact, I think there’s an AI overhang in a couple of dimensions. One is that, even if the current models didn’t change, we would still have loads of work to do to work with them and optimize them. More companies will get into training their own models and things like that as well. There’s probably also an overhang in terms of consumer readiness to adopt things. Last November, we were all excited about AI. You were saying OpenAI have changed the world. I think what they did is slowly rolling out across the world and changing it gradually. Most people doing their jobs, even knowledge workers, don’t use AI tools all day long to do them.

And then, the last one is that we have a lot more product to build. A lot of the stuff we saw coming out last year was features you could put together in a few weeks or a few months, and we’ll probably see some of the first truly AI-first companies and products that required a year and a half to build start coming out this year.

Des: ChatGPT dropped in November 2022, and loads of YC startups and companies got funded in Q1 or Q2. We might start seeing the fruits of the AI wave landing in the market this year. The AI-native startup born because of ChatGPT is probably realistically coming to market now.

Emmet: All those .io zombie startups are probably ready to pivot to being .ai startups as well.

“Whether you’re a startup doing a thin wrapper or some bigger company with some kind of token investment in AI, you’re going to start to really see companies realizing they have to go deep”

Paul: I think there’s an investment question as well. In 2023, a lot of companies – not Intercom, we actually went deep – went shallow. The attitude was, “AI is a thing, we’re at the beginning of this new S-curve, we’re unsure, but hey, we’d better put it on our marketing page.” And so people built thin wrappers over ChatGPT or just surface-level stuff. And I think what’s quickly going to happen is companies are going to realize that’s not enough, that this is a fundamental change.

I think it’s really useful to draw parallels back to the beginning of the last S-curve, which was smartphone, mobile, et cetera. But with phones, there were a lot of times when people said things like “Ok, that will never work on a phone. No one will ever do that on a phone.” And lo and behold, two, three, four years later, everyone does it on their phone and not on their laptops at all anymore. It totally transformed the behavior.

I think we’re going to see something similar this year in 2024, where companies that stay shallow with thin wrappers will start to really struggle. Whether you’re a startup doing a thin wrapper or some bigger company with some kind of token investment in AI, you’re going to start to really see companies realizing they have to go deep. They need to train all their staff on AI. This isn’t a specialism. Yes, you should have a specialized machine learning team and all sorts of stuff like that, but PMs, designers, everyone needs to be fluent in the language of AI, and you’ll start to see deep investments and deep products come out of that.

Des: I’m quite bullish on whenever Apple ship something in the AI space. iOS became the sort of standard for what software people are used to and raised the bar for design across many aspects of our lives in general, but certainly our software. And I think because of that, everyone had to go and get good. In my opinion, it kind of gave birth to post-web 2.0 UX – raw product design, the emergence of, say, Dribbble, and people caring about aesthetics in a deep way and all that. I think that all flowed from the iPhone. It just got to the point where the Stocks app on your phone was better designed than every piece of software on your desktop, and people started to try and change that.

“That will kind of change market readiness, which will make it so that every B2B SaaS provider’s like, ‘Oh shit, turns out people are now used to talking to their products. We better get on board’”

I think Apple will launch something in AI, Siri will become AI-powered or something like that, and it’ll probably be quite good because the LLMs are already quite good. Talking to ChatGPT is already pretty impressive. So, you can imagine it when it’s able to do things on your phone, all the stuff you would’ve loved to have been able to do with Siri, and I think that will change consumer readiness for AI. It’ll change the expectations for AI where it’ll just feel really stone age when you have to do all the pointy clicky shit when you just want to say, “Order me a pepperoni pizza,” or whatever.

There’ll be a load of cases where it’s very easy to say what you want to happen, it’s very precise, and you don’t feel the need to validate the result. I think it could give rise to talk or text as the new core input to software. But I think Apple are going to be the largest driver here. Google, to some degree, as well. I think that will kind of change market readiness, which will make it so that every B2B SaaS provider’s like, “Oh shit, turns out people are now used to talking to their products. We better get on board.”

Paul: In the same way that 15 years ago, they were like, “Turns out design matters, and this can’t look like dog shit.”

Customer service as an infinite game

Des: Emmet and Paul, what about our own world of customer support? What do we think will change in the post-AI era – or during AI?

Emmet: None of this is a prediction – it’s a fool’s game, really, doing predictions. But on the change around design, if you’ll allow me to go on a riff here: if you go further back, 30 years ago, before desktop publishing software, it was X-Acto knives and pots of paint, and that was how you did publishing. And then that process got completely upended by desktop publishing software. Design has already been reinvented completely once, and I’m sure a new iteration of the tools will come along and do the same.

“The game becomes less about building a brilliant thing than building the most likely directional thing the fastest – and trading speed off against your likelihood of being correct”

But it changes the nature of the work as well. It’s back to the Zelda conversation, where we’re working towards that point of hitting publish and then the thing is out there in the world forever, versus working iteratively. I think customer service, to answer your question, might undergo a similar change. There’s a book called Finite and Infinite Games by James Carse, and it’s this model for thinking about systems as games. There are games you play that are finite, where the rules are externally defined, there’s an end, and there’s a win state for the game you’re playing or the model you’re interacting with. And that’s similar to publishing something and putting it out there into the world forever.

Design changed to an infinite game especially once we started shipping things to the web and getting much more iterative about the work we were doing. And the game becomes less about building a brilliant thing than building the most likely directional thing the fastest – and trading speed off against your likelihood of being correct and so on. I think customer service will shift to more of an infinite game from, let’s say, resolving as many tickets in an hour as you can – that very top-down, externally defined success criteria where the end state is to close the ticket and get rid of it.

And then you think about that shifting, and all those things get sorted, crudely speaking. Then, the job of customer service becomes about building an ongoing relationship with this person, who is a customer on an ongoing basis because they’re paying you a subscription every month or whatever. Is CSAT, or the number of tickets resolved, as important in that world? I would suggest not. Maybe we’ll have new stats or new ways of measuring success. But broadly speaking, I like to think about it shifting from this finite, bounded game towards much more of an infinite game.

“Everything changes there – not just the CSAT or measurements, but the culture within the team as well”

Des: Paul, any ideas?

Paul: Just on that one real quick. I think you gave me an example before of a soccer match as a finite game, and this is probably really stretching the analogy, but tennis is a finite game. You’ve got two players back and forth, and the match ends. Customer service is a bit like that. You’ve got a customer asking questions, or many customers asking questions – they’re hitting tennis balls over and someone is literally trying to hit them back, hit them back, hit them back, and then the shift ends. And most customer service workers, I’d imagine, post-shift, don’t think a lot about all those shots they had to play back. Everything changes there – not just the CSAT or measurements, but the culture within the team as well.

You’ll probably end up with two split roles: people playing the infinite game, building relationships, and people designing the game. The system will need to be designed to orchestrate it, so a lot of people are going to start doing things like making sure there’s good quality control and all sorts of stuff like that, which is a much higher-impact, higher-level job, I think. I think it’s cool.

Emmet: For what it’s worth, in business as well, there’s a lot of this finite thinking where we are going to win and beat the competition. And well, you’ve only beaten them until they beat you again. Neither of you is going out of business, so it’s a useful mental model for thinking about things other than customer support.

Critical thinking in the age of AI

Des: How do we think the disciplines of design and product will change over the next year, given the nature of the software we expect to be developed? Emmet, you lead a large team of designers – how do you think design changes post-AI?

Emmet: Again, it’s very difficult to make precise predictions here. The tools and capabilities of the tools will obviously lead the conversation. People will follow the tools that serve them best, and that will drive the changes. But I have thought about this a bit and chatted with people on the team. We have a design principle on the Intercom design team that says we should build things simple by default, but flexible under the hood – make it easy to do the obvious common things, but if you really want the power, there’s some progressive disclosure you can dive into. And I think we’ll probably try and figure out something similar for AI. This is a bit back to what you were asking about what it’s suitable for, what it’s good at, and what it’s not. “Automated by default and manual under the hood” might be a principle we adopt for a lot of those things.

“Where designers need to get good is understanding the capabilities of AI so they don’t end up producing beautiful designs for things that shouldn’t exist in a post-AI world”

Maybe I’m just living in the past, or I’m not able to detach myself mentally from the current model, but I still think we’ll have plenty of manual UI. I don’t think every UI is going to turn into a chat box with a blinking cursor in it, because a lot of the precise, deterministic stuff we were talking about is better served by clicking on a precise point or choosing an exact item from a list or whatever it might be. But we will see a lot of that stuff. The obvious prediction here is everything goes a bit more chat-based. And I think that will happen. But I think it’ll be more additive than a replacement for the existing GUI.

Des: First of all, yes, you want precision input – no one wants to wrestle with a text area to try and explain what they mean when they can just click on the area and drag or whatever they need to do. I think that bit’s true. I think where designers need to get good is understanding the capabilities of AI so they don’t end up producing beautiful designs for things that literally shouldn’t exist in a post-AI world. Some stuff can literally be done by, say, picking the winner from a set of adverts we run based on LTV:CAC data.

You could imagine a workflow where you drag them around and sort the table and so on, or you can imagine a decision the designer made, noted somewhere in Figma as a little bit of text or an asterisk, which is to pick the winner automatically. I just wonder, in a post-AI world, if designers truly understand what AI is capable of doing reliably. Is there stuff that actually doesn’t need to exist? If they don’t understand AI, they might end up designing things that, in practice, are irrelevant.
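[As a purely illustrative aside – not Intercom’s actual approach, and with made-up names and numbers – the “pick the winner automatically” decision Des describes could be as small as a rule that selects the ad variant with the best LTV:CAC ratio, something automation can own without any designed UI:]

    # Hypothetical sketch: automate the "pick the winner" decision by choosing
    # the ad variant with the highest LTV:CAC ratio. Names and figures are invented.
    from dataclasses import dataclass

    @dataclass
    class AdVariant:
        name: str
        ltv: float  # average lifetime value of customers this ad brings in
        cac: float  # customer acquisition cost for this ad

    def pick_winner(variants: list[AdVariant]) -> AdVariant:
        """Return the variant with the highest LTV:CAC ratio."""
        return max(variants, key=lambda v: v.ltv / v.cac)

    ads = [
        AdVariant("hero-video", ltv=1200.0, cac=400.0),    # ratio 3.0
        AdVariant("static-banner", ltv=900.0, cac=250.0),  # ratio 3.6
        AdVariant("carousel", ltv=1500.0, cac=600.0),      # ratio 2.5
    ]
    print(pick_winner(ads).name)  # prints "static-banner"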

Emmet: The barrier to building products is probably way lower, not just because GPT-style coding assistants like Copilot can write a lot of the code for you and you can be a lot faster, but because, if you’re a startup, you can probably just build the text version of the thing and not have the manual fallback. You’ll probably see it approached from lots of different sides there.

Des: Paul, what about product management?

“There’s a world where AI accidentally turns a lot of these otherwise brilliant product managers into the bad type, where you’re literally like, ‘Click, magic wand, fill, send, next problem’”

Paul: Throughout this conversation, we’re clearly all in on AI, we believe in its potential power, and it’s been a very positive conversation. But I have a concern. What makes a good product manager? I think the best product managers have three core attributes. One, they’re extremely strong at critical thinking. They can look at something objectively, take in all the input, and really think about what’s true and what the implications might be. Two, they’re extremely progress-oriented. And obviously, communication is the third. So, what makes a bad product manager? Glorified project management – not critically thinking about anything and just kind of moving the trains along the tracks. Brilliant product managers are great at critical thinking, and maybe that’s the defining characteristic. And I kind of worry a bit that in an AI world, where you don’t listen to interviews directly and instead go, “Hey AI, please summarize these 12 interviews,” or you don’t write the product strategy or the product brief yourself-

Des: Click the magic wand.

Paul: …click the magic wand, and that will generate the thing you then send to the team. Suddenly, there’s a world where AI accidentally turns a lot of these otherwise brilliant product managers into the bad type, where they’re doing something more like project management. You’re literally like, “Click, magic wand, fill, send, next problem.” And because these people are progress-oriented, they’re probably going to want to click the magic wand a lot. So, there’s something there to keep an eye on. This probably applies to lots of jobs – not losing the ability to think critically.

Des: There’s a part there where it’s just that the details matter. I could argue a CS person might say, “Well, you’re okay with me clicking a magic wand to answer this question, and you don’t seem to see an issue with that.” I think details matter. Generally speaking, there’s one answer to a customer support question. As a result, the input/output matching is pretty tight. But something like our strategy for redesigning Messenger should be a pretty open-ended thing. And if the AI can guess it, we’d have to wonder if it’s a strategy at all.

I don’t have a great degree of understanding of how large language models work, but I know they aim for something that looks like an answer as opposed to a really strong, opinionated piece. I don’t really believe LLMs are going to be particularly compelling fiction writers because they lack the ability to properly surprise you – they’re not trying to surprise you. They’re trying to do things that look like things that should happen. So, I do worry about magic wand creation for product managers, for project managers, and for any other case where the details really, really matter. I worry that, in general, if the path to PM efficiency is not caring about the details, either in the output or in the summarization of all the user feedback, you’re just going to end up with some real vanilla shit, right?

“There’s a bunch of PM docs and meeting notes where it’s like, ‘Awesome, give me the AI-generated version, no problem.’ It’s the thinking that you don’t want to lose the ability to do”

Paul: I think it applies to both cases, though. In customer service, it’s absolutely the case that lots and lots of questions have one correct answer. I think the future bots will answer all those questions anyway, and we will have an interim period where people could use magic wands and pre-fill, copilots, and so on, but ultimately, those will be answered by bots. But the other type of query that shows up in customer service a lot is the more complex query, the troubleshooting query. People write, “My thing doesn’t work.” And that can be anything. And you’re trying to work through the problem, but it could be all sorts of stuff. I think, in lots of cases, you don’t want customer service people to get the magic wand either. And you can apply the reverse to both sides. I’m sure in PM land, there are plenty of times when AI can help people and accelerate work. But I go back to the critical thinking part, and I go back to judgment. Great product managers have great judgment. Where does great judgment come from? Experience, listening directly to customers, and details; your brain is incredibly good at synthesizing and summarizing.

Emmet: The other thing, though, is that writing is an incredibly good method of thinking. You might think you know what you think, and then you try and write it down and realize, “Shit, I didn’t at all.” But then you struggle through for an hour and actually manage to get your thoughts out. And then, when writing is free, you’re less forced to do that. On the other hand, as the cost of generating writing goes to near zero, maybe the value does as well. There’s a bunch of PM docs and meeting notes where it’s like, “Awesome, give me the AI-generated version, no problem.” It’s the thinking that you don’t want to lose the ability to do. And writing is intrinsically tied up in thinking, especially the more remote we go.

Paul: That’s a great example. The three I have are critical thinking, communication, and progress. The meeting notes thing is communication. And that’s where AI can really help. When I think about the best product managers I’ve worked with, they weren’t the ones who were the best at taking meeting notes. They’re typically good at that too, but that’s not what makes them great. What makes them great is the critical thinking piece.

Des: Thank you, Paul. Thank you, Emmet. This has been Intercom on Product, and thank you all for listening.
