ChatGPT is here to stay. Where will it take us?

GPT is unfolding before us, and we’re not missing a beat. As we explore the potential and limitations of this new tech, will it live up to the hype?

Since it launched in late November, ChatGPT has captured the hearts and minds of people across the world with its ability to generate human-like responses. However, as with any new technology, there’s often a gap between hype and reality. And generative AI, in particular, is quite the natural hype machine.

We understand the skepticism – large language models are great at appearing plausible, even when they’re wrong. But truth be told, we’re very optimistic. Over the past eight weeks, we’ve built AI-powered features and shipped them to 160 beta customers, and the feedback has been overwhelmingly promising. So much so that earlier this week, our Director of Machine Learning, Fergal Reid, and I released a special episode on the podcast to share our insights on what we’ve built and what we’ve learned.

This time around, we’re going beyond Intercom, and even beyond customer service. In today’s episode, we’re delving into all things GPT – from the skepticism to its role in business communications, from the disruption of creative work to the future of user interfaces.

Here are some of the key takeaways:

  • It’s not just the tech that’s improving – we’re getting better at understanding what to build, how to customize it and integrate it, and all of these dynamics are driving the industry forward.
  • Perhaps the future of human-machine interfaces will be facilitated by personalized AI-powered bots that understand the users’ intentions and mediate their interactions.
  • These models have unlocked many new capabilities and can fool someone at a glance, but they still lack the common sense reasoning needed to pass the Turing test.
  • GPT may disrupt the customer service industry, but if automation increases the agent’s productivity, it can ultimately unlock capabilities that enhance their value to the business.

Make sure you don’t miss any highlights by following Intercom on Product on Apple Podcasts, Spotify, YouTube, or grabbing the RSS feed in your player of choice. What follows is a lightly edited transcript of the episode.

Beyond the hype

Des Traynor: Hi, and welcome to the Intercom podcast. I’m joined, again, by Fergal, and we’re going to talk about all things GPT. Fergal, it’s been eight whole weeks since ChatGPT launched. Already, people have been building useful products on top of it, and already we’ve had a wave of skeptics saying that this is a toy, it’s immature, it’s not ready for anything – which is just the classic reaction to new tech. Where’s your head at? Is the skepticism founded? Have we crossed some perceptual cliff that really matters?

Fergal Reid: Yeah, I think there’s a certain legitimacy to the skepticism. These things are almost natural hype machines. It’s just so easy to look at one and see it do something that seems good. Unless you really dig into it, it looks wonderful. And then, when you dig in, you’re like, “Ah, it’s wrong, I feel misled.” These machines are designed to generate plausible things, and if plausible-but-wrong is a problem for you, they can be very disappointing. So I understand the default position of skepticism that a lot of people bring to this tech.

“ChatGPT, as it is on the internet today, might be a toy, but that doesn’t mean we can’t take the underlying tech and build incredibly valuable business features from that”

However, we’ve always been very bullish on generative AI, and since we last spoke, we have built some features and shipped them to beta – we’ve had 160-something customers on our beta using these features: summarization, plus some more experimental features in the composer to try to make people faster. And then we have a wave of other features in prototype form that are not quite there yet – big-ticket value things – but we think we can see a line of sight to them. So we understand the skepticism; we were optimistic, and now we’ve got data – real customer usage, real customers telling us that there’s a particular job they do all day every day, and that this is transformative for it. And that, to me, makes the position of skepticism start to look a little shaky.

Des: The fact that people are actually using the thing to do parts of their job on a daily basis.

Fergal: Yeah. That’s the ultimate arbiter of this. It’s very difficult to be skeptical when people are actually using it, and not just as a toy. There’s a line of logic out there. We’ve seen some articles in The Atlantic and places like that where people are saying, “Hey, look, this thing is more of a toy than a real valuable product.” And ChatGPT, as it is on the internet today, might be a toy, but that doesn’t mean we can’t take the underlying tech and build incredibly valuable business features from that.

My team has probably been one of several teams to do that over the last few months, to be like, “Wow, this is delivering real value,” and I think we’re probably one of the bigger companies in customer service that has successfully had a development cycle where we’ve actually got this in the hands of hundreds of customers and gotten feedback from it. And they’ve really told us, “Yes, this is really saving me time; this is making my job faster.” Yeah, the skepticism becomes harder to sustain. That’s one argument. I think you can also attack the skepticism from a different position: we’ve seen a lot of tweet threads and articles skeptical about this tech because the previous generation didn’t cross the chasm and wasn’t transformative. And they’re skeptical because new technology is always overhyped. But those are not good arguments. Those are arguments that are right for a while and then become terribly wrong. You can fail here by being overly optimistic, but you can fail just as much by being-

Des: Blank and pessimistic.

Fergal: Exactly. Blank and pessimistic.

“There are so many dynamics happening at the same time that are going to feed back and magnify each other”

Des: The parallel you see is people who rubbish every new startup idea they’ve ever heard of. And the thing is that 90% of the startups don’t work out. So 90% of the time, you’re spot on and look really smart. Then you rubbish one that goes on to be a trillion-dollar business, and everyone’s like, “Right, it turns out…” I think it was Nat Friedman who said, “Pessimists sound smart, optimists get rich,” or something like that. And there’s some truth to it when you actually put weight behind each opinion: the degree to which you’re wrong when you’re wrong bulldozes the degree to which you are minorly right just by being tech skeptical.

Fergal: Yeah, 100%. Look, I’m a believer in AI and its value. I think we have enough evidence of real value. Over the last decade, we’ve seen the trend of machine learning and AI in general increase and increase. We’ve got new capabilities. I feel my team has enough evidence that there are at least some capabilities unlocked by GPT-3.5 and other large language models that weren’t there six months ago. I believe there’s an overhang – there’s way more product that could be built now than has been built yet. So yeah, we’re bullish, and we’re starting to see customers we shipped the beta to tell us, “Yeah, this works, this is great.”

We haven’t quite crossed the last piece of it. We don’t yet know whether this is transformative in terms of core value for the tasks our customers spend 99% of their time doing. So far, we’ve shipped the summarization feature and other features to save time in the inbox. But there are big things coming here that we haven’t yet built – things we’re working on internally but haven’t seen out there in a marketplace. And so, we think the real excitement of this is still coming.

Des: There’s some hierarchy of transformation that I’m sure someone will correct me on in the comments, but we have already put live features that transform specific workflows – and by transform, we mean reduce the cost of doing something to 5% of what it once was, in the case of, say, summarization. Then it might be transforming very common workflows, then transforming the job, then the org, and at the very top, transforming the business. It’s pretty clear, as we identify more and more use cases where we can deliver a lot of value, that we’re weaving our way up through this – to me, inevitable – transformation of the world of customer service.

“We went internally to our customers and had to close the beta recruitment much earlier than we wanted because it was one of the biggest responses we’ve ever got”

Fergal: Absolutely. There are so many ways that this is changing in parallel. There are so many dynamics happening at the same time that are going to feed back and magnify each other. The first one is kind of obvious: the technology available is getting better. That’s not stopping. OpenAI and other players like Anthropic are building new models all the time, and they’re exciting. That’s not stopping. So, that’s one dynamic. And there’s another dynamic, which is we’re getting better at building products around those. We’re getting better at taking those models and figuring out the type of thing they’re good at. And there’s another dynamic, which is we’re getting better at customizing them, building the right prompts, and integrating them into our existing systems. And then our customers’ expectations are getting higher and higher.

We’ve really found that, since ChatGPT, there has just been this huge wave of interest from our customers. They can see the promise and believe there is something here. With the beta, we went internally to our customers and had to close the beta recruitment much earlier than we wanted because it was one of the biggest responses we’ve ever got. People wanted to be on it. So, all of these things together are going to magnify much more than any one of them on their own, in my opinion.

Des: It’s interesting how you break that down. The tech is improving, businesses’ capabilities are improving, and that’s just adopting it in local cases. And then businesses’ ability to think about or conceptualize new products and opportunities using that tech is improving. Same with customer expectations of the tech. We’re probably only a year away from people expecting to be able to expand on text within a text field, as one simple example. You’re sort of seeing these things crop up everywhere.

Fergal: If even a year. Obviously, a lot of us have seen the Microsoft announcement about bringing these features into Word and stuff. And it’s going to change fast if the large mainstream office productivity tools do this. It could be really fast.

The rise of the AI assistant

Des: Here’s a different type of skepticism I’ll charge at – one that slightly resonates with me, anyway. I think Kevin Cannon had a funny tweet along the lines of: “The future is people using GPT to expand ‘I want the job’ into a lovely ‘Dear Sir or Madam’ cover letter, and the recipient clicking the summarize button to see that the person just said, ‘I want the job, here’s my resume.’” In some sense, you’d be tempted to look at that and ask, what the hell’s the point of all this? Has formal language – professional writing, business English – become a pointless, theatrical conduit for the way we all communicate, when in the future I’ll just send you the prompt and you’ll reply with a prompt? “I want the job.” “You can’t have the job.”

Fergal: Yeah. Hard question. It’s seriously speculative. I’ll give you some opinions. There are probably certain contexts, right? Let’s say a legal document. You can say to someone in your legal team, “Hey, I need a contract. It’s got to do X, Y, and Z.” That request will turn into 10 pages of legal stuff. The recipient will be like, “Oh, does it do the three things it said it would?” And their legal team will be, “Yes, it does.” This is one end of the extreme where there’s big expansion and compression, but in some weird edge case, clause number 13 on page two can turn up in court, and so on. So clearly, that matters. We can’t get rid of it. We can’t just have those four bullet points. We need all that. You might not consider it material when you’re writing it, but it may become material later. That feels like one extreme where it’s like, “No, it feels like that has to be there,” something to deal with all those edge cases.

And the other extreme is probably a situation where the sender and the recipient don’t care about those details. Both are never going to care about those details, and they’re just observing some social graces or formalities of “this is how you write a business letter. I’m writing to a big company, I better write a business letter,” and maybe that stuff’s going to go away.

Des: In the same way, I think the analogy there for me would be when email conversations moved to SMS, iMessage, or WhatsApp. Think of all the shit you’re not saying anymore. “Hope this finds you well,” or whatever. All that shit’s gone.

Fergal: The constraints of Twitter – the format, the medium – give you permission to be terser. I think that’s a real dynamic. The way we communicate and the way we write a help center article may not be the optimal way to write it. Maybe we should be briefer. On the machine learning team, there’s another way of thinking about this. The future of the world is going to be intermediated by agents. And once upon a time, this was obvious to everybody. Your web browser had a user agent string and so on – the idea being that it’s your agent, navigating that weird internet with all its links for you. It’ll do stuff for you, come back, and tell you things. And then all that stuff got centralized, and now you’ve got search engines and so on.

“It’d be one thing if all we had seen was the DALL·E 2 image generation. But no, we’re seeing transformations in audio synthesis, image synthesis, text understanding, text synthesis, and text compression”

There is an old idea in tech futurism and science fiction and so on, that you’ll probably have an agent that understands you, your intention, what you want, and is smart enough to figure out what to bring to your attention and what not to. So possibly, in the future, the way this goes is more like that. If you want to know a particular detail, the software on your side is smart enough to put that in the summarized version of it. But it’s smart enough to know that you don’t want to know that detail as well and to leave it out.

Maybe we’ll live in a future where user interfaces change, where my user interface to a particular business or task is not really controlled by that business or that task like it is today. Instead, it’s personalized for me. That sounds very fancy, but I think it’s going to happen fast. These language models are very powerful, they’re starting to be used to write code and so on, and it’s a very short hop from there to taking actions for me. We’ve seen some prototypes out there where folks are working on models that understand a website well enough to take in an English sentence and navigate the website for you. And then, are we heading for a future where that’s how everyone interacts with websites? Do you need a website anymore?

Des: Is this the new SEO? Making sure GPT can understand your website?

Fergal: Yeah. Maybe websites turn into something that looks more like an API that’s publicly exposed, and that’s something with UI and formatting because the UI gets formatted by the agents.

Des: We’re all just talking to Siri or whatever.

“Maybe that’s what the future with bots will look like. We’ve all got a bot personalized to us that handles the interfacing and you don’t really need to worry so much about that intermediate layer”

Fergal: Yeah, and I think Google and Apple can see this future. We don’t know the timeline, but again, the thinking tool I always use is: what if you had a very smart human who understood you, who had worked with you – maybe a personal assistant – and you were interfacing with them, and you wanted to book a holiday? What would they ask you? Half the stuff you’d see on a booking site or whatever, they’re not going to ask you about – they’re just going to book the holiday for you and maybe come back with clarifying questions: “Oh, you wanted to go and stay in an apartment, but there’s no space there. Will a hotel do?” But that’s an adaptable user interface. Again, I don’t focus too much on ChatGPT and what just shipped. Take a year or two out. It’s moving too fast. If you’re skeptical because of the current limitations, you’re going-

Des: Your skepticism will miss the mark.

Fergal: Exactly. Transformers – and the architectures people are building with them – are extremely powerful. We’ve seen multiple modalities improve here. It’d be one thing if all we had seen was the DALL·E 2 image generation. But no, we’re seeing transformations in audio synthesis, image synthesis, text understanding, text synthesis, text compression. We’re seeing so many parallel advances. It can write code. It’s probably going to be able to work a website pretty soon. So maybe that’s what the future with bots will look like. We’ve all got a bot personalized to us that handles the interfacing, and you don’t really need to worry so much about that intermediate layer.

Des: One super prototype scene I saw on Twitter was somebody who trained a bot to speak in his own voice, I believe, and had it call a number and navigate a banking phone tree – effectively get through to an agent and request to have all their foreign exchange transactions refunded, or something like that. The kind of request where you just need to ask and they do it. It got all the way through to the end. And literally, they just said, “Go,” and walked away. It was obviously super contrived, maybe cherry-picked, but it was still a very real use case executed end-to-end automatically.

Fergal: I think it’s a really interesting area. We talk a lot about how customer service will change, and where our heads always go is what will happen when you have a bot like ChatGPT but customized to your business and really great at answering questions and the cost of customer service issues will drop. But there’s another side to it. How will customer service change when the user has a bot that can handle customer service interactions and doesn’t give up or get tired? And there’s potentially a big change there.

Des: The new B2B is going to be bot-to-bot, basically.

Fergal: Maybe. It might be a while before users have that sort of tech, but it’s an interesting thing to consider.

The authenticity conundrum

Des: How do you generally think about this double-sided world of creation and ultimately, what could be seen as deception – this looks like a painting, but it’s not a painting, it was generated – versus detection, the idea that people can say, “Hey, I really hand wrote that code, that’s not generated code.” How important is it for humanity to navigate this world? There’s a famous scene in Westworld, the series about a western theme park populated by robots, where a guy wants to-

Fergal: It’s a remake of an old thing.

Des: Oh, is it? News to me. I didn’t know it was a remake of an old thing. But in Westworld, there’s a scene where a guy bumps into a woman, has a conversation with her, and then, in the end, he says, “I have to ask, are you real?” Her response is, “If you have to ask, why does it matter?” And I think there’s something there. Will outpacing our detection capability be seen as the definition of authentic? Is authenticity even a thing anymore? In the banking example, how will a person say, “Hey, Fergal, is this actually you, or is this a bot you’ve trained?” Especially if you’ve trained it how to answer that question.

Fergal: There are some big questions there – at least five; I’ve probably lost track. You could talk about the Turing test. Turing’s very prescient paper asked how we’d tell when a computer has become properly intelligent, and he proposed a functional test: if a human judge can reliably discern between the computer and a human via a text interface, we say it’s not intelligent; when it passes that, we should accept that it’s functionally intelligent. The paper gets misrepresented a lot – it was more like, “It’s doing something really interesting if it gets to that point.” But that’s one way of approaching things functionally.

“Tell a story where there’s a lot going on and then ask it a sort of complex, consequential question on that, and it’ll still get tripped”

Des: And we’re past that, I would say. Thereabouts.

Fergal: There are always headlines about something passing the Turing test. I think the original formulation is something like a skilled interrogator or something. We’re not at that point yet. If someone’s trained to ask the right questions, this stuff will quickly break down. It doesn’t have the deep world model that a human does.

Des: Right. How would you do the trick of asking a self-referential question? You’d ask someone to trip it up linguistically?

Fergal: The tech is getting better at that. But it is more like setting up a complex domain. Tell a story where there’s a lot going on, then ask a complex, consequential question about it, and it’ll still get tripped up – in a way that a kid wouldn’t. But I think the right way to think about this is that you’re dealing with an alien intelligence. You want to call it intelligence, but it’s shaped differently. It’ll be able to do a bunch of things a seven-year-old kid couldn’t – it can write a computer program – but what you might call common sense reasoning is not there yet. That said, philosophically, that’s veering into whether this thing is alive and sentient. And no, clearly not, by most definitions most people would use. But that’s getting into the philosophy of AI questions.

To go back to your original point, what if you want to build CAPTCHAs for these systems? What does that look like? Yeah, people have ways of watermarking and detecting if that text is produced by these models, but I don’t know if that’ll be reliable if you have a model that’s really good at injecting the right amount of noise.
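To make the watermarking idea concrete, here is a toy sketch loosely modeled on the published “green list” scheme from academic work on LLM watermarking: hash each token’s predecessor to split the vocabulary in half, have the generator favor the “green” half, and have the detector simply count how often text lands there. The word lists, function names, and 50/50 split here are hypothetical illustration, not any vendor’s actual method:

```python
import hashlib

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically partition the vocabulary using a hash seeded by the
    previous token; a watermarking generator would bias sampling toward
    this 'green' half at each step."""
    ranked = sorted(
        vocab,
        key=lambda t: hashlib.sha256((prev_token + "|" + t).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detector side: measure how often each token falls in the green list
    seeded by its predecessor. Unwatermarked text should hover near the
    split fraction (0.5); watermarked text drifts significantly higher."""
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)
```

This also illustrates Fergal’s caveat: a generator that deliberately injects the right amount of “noise” (picking non-green tokens often enough) pulls the measured fraction back toward 0.5 and defeats the detector.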

One thing I would caution anyone in this field about: there’s building a machine learning system of good enough quality to hit, say, a 99% detection threshold in real life. That’s one standard. And then there’s this whole other standard: building a machine learning system that holds up against adversarial input. That’s a whole different ballgame.

“For a while, at least, any big player with a large language model will try and stop you if you’re using it for nefarious tasks like that”

Des: Defensive design.

Fergal: Defensive design. How do I defend against adversarial input? And in general, that’s really hard. If you told me, “Oh, I have a new fancy machine learning system that will detect fraud or guard my system in a complex environment,” I would be extremely skeptical. There are fraud detection systems, but that’s different from someone trying to attack the machine learning system.

I think this whole issue of detecting when you’re talking to a bot – when you’re talking to a large language model that doesn’t want you to know – is going to be hard. And if we end up in a future where customer service is getting flooded by bots pretending to be users, that’s going to be tricky to deal with. But I imagine that, for a while at least, any big player with a large language model will try to stop you from using it for nefarious tasks like that. And there’ll be some control there, because models of such high quality are difficult to host and run yourself on consumer hardware. There can be some accountability.

Looming disruptions

Des: If we zoom out a bit, we’re probably not too far away from being able to generate plausible-sounding music. Lobby music, that type of thing.

Fergal: Yeah. Muzak.

Des: Muzak, exactly. And to some degree, songs can be formulaic – I think something like 65 number-one songs, or top-charting songs anyway, share the same four chords. And, obviously, Dan Brown’s novels all follow a simple format. That doesn’t mean they’re bad, but to what degree does society change when anything that’s expressed in a somewhat formulaic way can be replicated, and ultimately, you can get a $0 version of it? The Da Vinci Code is still The Da Vinci Code – a pretty good book by any normal definition – but you can now get the bargain-basement version of it for $0 or five cents or whatever. And then you think of that happening across every type of creativity. Again, this isn’t to say the outputs of these technologies will be comparable, but they’ll potentially be 1% of the price. How do you think the world changes in that sort of future?

“It’s the Jevons paradox – sometimes, making something cheaper means you end up doing a lot more of it. Those dynamics are really hard to predict”

Fergal: I think in a lot of different ways. You can look at analogies from the past. You can look at painting, and then photography came along, and suddenly it was easy to capture a landscape image but-

Des: I’m sure the painters didn’t like it, right?

Fergal: I don’t know the history well enough, but generally, there is some incumbent who is upset whenever there’s disruption.

Des: I think it was the same with radio or cassette tapes – live musicians were like, “Well, yo, this was our gig.”

Fergal: Yeah. Every cinema used to have a pianist who would play the soundtrack, and that went away. Gramophones and piano tuners, the loom and the Luddites… There are countless examples of this. And I do think there are other areas facing imminent disruption, and there are going to be hard conversations about what’s valuable. Likewise, with customer support, you’ve got to be sensitive. There are always better outcomes and worse outcomes. People might look at large language models getting better at writing code and be like, “Hey, as a programmer, this valuable skill I’ve invested years in acquiring – gosh, it’s not going to be useful anymore.”

“Will there be fewer customer support representatives in a world with a lot of automation, or will there be more because the value they can bring to a business has been magnified?”

There are different ways to think about this. You can think about it in terms of AWS. We use Amazon a lot at Intercom, and if we had to do everything we do without AWS, it would probably cost us orders of magnitude more programmer time. Does that mean we hire fewer programmers as a result? Well, it probably means we wouldn’t be feasible as a business without that enabling tech. It’s the Jevons paradox – sometimes, making something cheaper means you end up doing a lot more of it. Those dynamics are really hard to predict. Will there be fewer customer support representatives in a world with a lot of automation, or will there be more because the value they can bring to a business has been magnified?

Des: When you take away all the rote stuff, you actually see the value they bring, and you’re like, “I want more of that.”

Fergal: You want more of that; you need more of that. Suddenly it’s like, “Wow, what could we unlock in terms of value to our business if we had a lot of those reps?” Each one could do 10 times more than they currently do. You never know. I think that’s something that gets missed sometimes. People always respond to technological disruption by talking about how you can climb up the value ladder and get a better job – “You can become a product manager if that’s where you want to go.” And that’s potentially one way. But another way is that just getting much more productive at what you currently do can transform the amount of it that you need to do.

Des: Or more businesses become possible because of it.

Fergal: More businesses become possible. That’s the best thing. I think this is all going to unfold with things like AI art. Obviously, there’s a debate around plagiarism and copyright infringement. If someone trains DALL·E 2 on a whole bunch of pictures, was that copyright infringement? And what if it learned the style of an artist and then you asked it to produce work like theirs? Is that copyright infringement? There’s probably a lot of stuff that legal systems and society need to figure out there. But one thing I think is sometimes missing from the debate: even if you decide that training the current models was copyright infringement – and we don’t hold humans to that standard; humans are allowed to look at things and copy their style – someone will still build models trained on openly licensed, permissible work, and they’re going to get pretty good at generating images. I think that ship has sailed, to a certain extent.

How big can it go?

Des: To needle you a bit here: you referenced AWS as one example where we don’t have a massive server team – we don’t have racks full of servers. Is your AI team smaller because of the existence of OpenAI, Anthropic, and all that? If they didn’t exist, would you be building an AI version of the server team?

Fergal: Yeah, I mean, that’s a hell of a question. There are different ways of looking at it. Is the AI team getting disrupted? We’ve gone back and forth on this. Let’s look at the current version of large language models. I was playing with GPT recently as a movie recommender: “Hey, I like watching X and Y. Suggest me something?” And it’s not bad. I’m sure it’s not as good as a well-tuned recommender with all the best data, but it’s doing a lot better than randomly picking movies. And it’ll spit out its reasoning, which – like everything it does – is plausible, and it’s not bad. So again, even if the tech’s not great now – I wouldn’t rush out to productize a movie recommender from it or anything – what if it gets 10 times or 100 times better? What if you feed it way more training data or a way better training regime?

“There are tons of opportunities with the marriage of this new capability in specific things it’s good at with a lot of scaffolding, a lot of product work around that”

Des: Just wait for GPT-4.

Fergal: Yeah, GPT-6, whatever that looks like, right? Whatever 10 billion dollars of compute and reinforcement learning from human feedback buys you, if that is, in fact, what happens. And what if that happens? Do you still go and build a recommender system? Someone asks you for a recommender system – do you go do that? Sam Altman has given talks on this. Imagine we could get this to human level. If you had something that was human-level general intelligence, would you need a machine learning team anymore? Or would you just sit down like, “Hi, how’s it going? Today I’m going to teach you how to be a movie recommender.” You’d have to give it lots of examples, and it would have to be able to consume a dataset about movies. But maybe it’s just, “Hey, AI system, write the code to consume the dataset about movies.” I don’t know.

Fergal: You’re getting into big questions, Des. And maybe that’s just where all our heads are going at the moment. But you can get into big questions about, like, by the time that’s disrupted, what percentage of current human economic activity is disrupted?

Des: Yeah, totally correct.

Fergal: But that’s a very bullish case. Maybe we hit some asymptote before then, and I certainly don’t think we’re near that point at the moment. I think you still need your machine learning team. And I think we’re certainly in this happy Jevons Paradox for a while where a lot of stuff is unlocked, and maybe we’re doing slightly different work than we were before – we’re certainly doing a lot more prompt engineering – but these systems are not yet good enough to just train-

Des: Yeah. To outsource the whole thing to OpenAI, and they’ll solve our problems.

“If you put 100X more resources into model training or dataset creation, what return do you get? Is it 10X, a 100X, a 1000X? I don’t know if anyone knows that”

Fergal: Right, yeah. I really hesitate to speculate about when. Just to give you one super concrete limitation. All these models have a prompt size. The amount of context you can pass to it with a prompt is limited. And that limit is baked in pretty low down. And so, a lot of the stuff the team is doing at the moment is around, “Hey, how do we work around that? How do we give them a relevant article?” And we’re using more traditional machine learning techniques – traditional as in, invented five years ago. The classic stuff.

There are tons of opportunities with the marriage of this new capability in specific things it’s good at with a lot of scaffolding, a lot of product work around that. I think there will be disruption, and it feels like extremely disruptive tech to me, particularly when you project a few years out. But we don’t know how big it’ll be. And I don’t think anyone knows how big it’ll be yet. Maybe the folks in OpenAI do. But if you put 100X more resources into model training or dataset creation, what return do you get? Is it 10X, a 100X, a 1000X? I don’t know if anyone knows that. There’s certainly no consensus on it.
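The prompt-size workaround Fergal describes – using “traditional” machine learning to select which help article to pass into a limited prompt – can be sketched with a simple TF-IDF retriever. This is an illustrative sketch only, not Intercom’s actual implementation; all names and the character budget are made up for the example.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split text into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def tf_idf_vectors(docs):
    """Build a sparse TF-IDF vector (dict of term -> weight) per document."""
    counts = [Counter(tokenize(d)) for d in docs]
    df = Counter()  # document frequency of each term
    for c in counts:
        df.update(c.keys())
    n = len(docs)
    return [
        {t: tf * math.log((n + 1) / (df[t] + 1)) for t, tf in c.items()}
        for c in counts
    ]

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_article(question, articles, char_budget=2000):
    """Return the most relevant article that still fits the prompt budget."""
    vecs = tf_idf_vectors(articles + [question])
    qvec = vecs[-1]
    ranked = sorted(range(len(articles)),
                    key=lambda i: cosine(qvec, vecs[i]), reverse=True)
    for i in ranked:
        if len(articles[i]) <= char_budget:
            return articles[i]
    return None
```

The selected article would then be spliced into the prompt ahead of the user’s question, keeping the whole thing under the model’s context limit. Real systems typically use embeddings rather than TF-IDF, but the shape of the scaffolding is the same.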

Des: There was that quote from Sam Altman where he was asked something – I think it was some irrelevant question about challenges in San Francisco or something like that – and his answer was, “When you believe that artificial general intelligence is as close as I do, you struggle to think about any other problem.” When I read that, I was like, “Okay, well, he’s certainly leaned a certain way.” Now, he could still be thinking in 20 years, but some societal problems are kind of irrelevant against the greater potential wave of what could be happening here.

“There’s clearly a pitfall to avoid and an attractive pitfall to fall into”

Fergal: Yeah. Full disclaimer mode now. I think there’s a lot of merit to that style of thinking, personally. I remembered there were times in the history of computation when it was like, “Oh, if you’ve got a million dollars to solve a computing problem and you need to solve it as soon as possible, what you need to do is sit with the million dollars for two years and then buy the fastest computer that the million could buy.”

Des: I remember my own career. In 2006 or 2007, mobile websites were all the rage. Pre-iPhone, right? And people were talking about WAP and JMI files or JNI files, and everyone hyped up their mobile strategy. And literally, by the time I finished working out what I thought was the right recommendation for a client, the iPhone had launched. And I was like, “You know what? Don’t worry about it. Sit on your hands. Apple’s going to solve this entire problem.” And sure enough, two months later, “Hey, it turns out all our websites are mobile-ready.” Sometimes, a tech wave can be so big that any temporal thing you do will just be irrelevant against the magnitude of what’s going to happen.

Fergal: Yeah, if you believe AGI is close, I guess I can logically see that position. Now, clearly, it seems like there’s a terrible mistake to make there where-

Des: Yeah, where we’re wrong, and you’ve probably just been sitting on your hands.

Fergal: You’ve given yourself a license to ignore terrible, terrible things. So obviously, you’ve got to weigh that, and I’m not making any judgment on it. But yeah, there’s clearly a pitfall to avoid and an attractive pitfall to fall into. I think it’s very hard to bet against increasingly general intelligence. And I don’t know timelines and stuff, but I think there are big questions for people to think about. Now, that’s definitely way outside customer support or customer service.

Des: No, yeah. Well, look, thank you very much. We’ll check in in six weeks to find out that this podcast is yet again out of date. We’ll see where we’re at again. But for now, thank you very much.

Fergal: Thanks, Des.
