The emergence of superintelligent AI

We’re starting to build AI systems that can understand things. Where does it stop?

Superintelligent AI isn’t a new topic. But for decades, it was mostly confined to hypothetical scenarios in academic papers or the pages of science fiction, with its futuristic tales of robot uprisings and world-ending catastrophes. And yet, as AI technology moves forward, this once-amusing concept, in which an AI system recursively improves its own intelligence until it eventually surpasses human intelligence, is tiptoeing ever closer to the threshold of reality. Suddenly, the conversation is much more serious.

Some researchers from organizations like the Max Planck Institute for Human Development, the Machine Intelligence Research Institute, and even OpenAI claim that once we reach that point, it will be extremely challenging – or even outright impossible – to contain such systems. And that point may not be that far off.

Whenever and in whatever form it arrives, superintelligent AI will be truly revolutionary, impacting everything from the labor market and the economy to biology and medicine. It also, however, poses a profound existential risk to humanity, which raises some serious questions – how close is it to happening? Are we equipped to deal with it? And how do you even begin to regulate it?

In this episode, our Senior Director of Machine Learning, Fergal Reid, joins Emmet Connolly, our VP of Product Design, to take superintelligence head-on – the knowns and the unknowns, from the ethics to the threats and the challenges of regulation.

Here are some of the key takeaways:

  • Integrating AI in areas like self-driving cars raises ethical dilemmas, but it’s important to distinguish between that and superintelligence, which is potentially much more dangerous.
  • Balancing our focus between immediate AI concerns and superintelligence threats is crucial – the risks of today shouldn’t eclipse potential future harms, even if they’re harder to grasp.
  • It’s incredibly challenging to regulate AI. Overregulation can curtail its benefits, but it’s important for frontier labs to engage with regulators to foster responsible development and adoption.
  • While applications of non-dangerous AI should keep the “move fast and break things” approach, research labs training frontier models must be closely regulated.
  • Overlooking AI’s potential benefits while overemphasizing risks contributes to an unproductive debate that can hinder progress. On balance, the technology is a net positive.

If you enjoy our discussion, check out more episodes of our podcast. You can follow along on Apple Podcasts, Spotify, or YouTube, or grab the RSS feed in your player of choice. What follows is a lightly edited transcript of the episode.


Into the unknown

Fergal Reid: This is another of our freeform conversations. We were doing a little bit of prep, and you were like, “Superintelligence is the most interesting–”

Emmet Connolly: What about the framing that superintelligence could change our society completely, perhaps overnight, and that we would be totally unequipped to deal with it? You were saying nobody knows, and that it might happen within the next couple of years. That’s the biggest thing we could possibly choose to talk about. But the difficulty I have with it is that it’s all opaque. It could be two years away, it could be no amount of years away, or 200 years away. It’s the biggest thing, but it’s the biggest unknown.

Fergal: Okay, let’s get into it. Let’s do this superintelligence thing head-on. Let’s try and talk about what we know and what we don’t know. We’re going to be wrong a lot. How would I frame it? This has been on my mind a bit. Now, it is intractable. It’s difficult to grab hold of. What do we know? There’s all this hype, right? “Oh, it’s going to be crazy; superintelligence is coming soon.” And the so-called doomers claim that, by default, it’s going to kill us all. And we’ve touched on this a little bit in the past. And then, there are these other people who are like, “Oh, that’s all bullshit. That’s like worrying about overpopulation on Mars.”

“We’ve built stuff that starts to understand things. Where does it stop?”

Andrew Ng, the famous researcher, made that overpopulation-on-Mars comparison, and then, a week or two ago, he seemed to change his perspective. He has this newsletter, and in the newsletter, he didn’t touch on superintelligence, but he was like, “Oh, these systems are starting to understand and reason about stuff.” He mentioned Othello-GPT, which we’ve talked about before, where you train this thing on snippets of Othello games – just the sequences of moves in a board game – and it seems to start to learn fundamental things about the board game and the layout of the board. He found that convincing. And I find that convincing, too. And so, some people who were saying superintelligence is a million miles away are now changing their tune a little bit. People who were saying AI doesn’t understand anything are changing their tune a little bit.

We’ve built stuff that starts to understand things. Where does it stop? One thing I strongly feel is that it’s not ridiculous to talk about things getting more and more intelligent and getting smarter than humans. And that’s a change. Four or five years ago – two years ago? – I’d totally be in the Andrew Ng camp, “Yes, this is fun stuff to think about, and I like reading science fiction, but there’s no path. We don’t see any path there. There’s no evidence that any path we see will work.” Now, there’s a bunch of evidence that it might get there. That’s something we’ve learned. That’s something that’s changed in the world.

Emmet: And it sounds like something we can track. What I’m getting from what you’re implying is that reasoning is a necessary step to superintelligence or artificial general intelligence (AGI) or whatever, and that the more reasoning capability these systems exhibit, the more likely it is that we’ll get to AGI. And so, for one dimension of my inability to grasp onto anything tangible, you’re saying you think that confidence will collectively build over time?

“For the first time, we don’t really know what the trick is. And there may not be a trick. It may be real. This is intelligent, or, at least, the trick is similarly good to human intelligence”

Fergal: Yeah, I mean, you see more data, and you update your worldview. If you live in a world with no intelligence systems at all, you should be more skeptical of ever building superintelligence. And as the level of intelligence of systems in the world increases, you should become more open to the idea that we’ll get to something superintelligent. That’s pretty simple.

I remember learning about how chess AI works. I remember the Deep Blue and Kasparov chess match in 1997 (there’s an Arcade Fire song about this). My granddad used to play chess and taught me to play chess as a kid. He was really shocked and surprised by Deep Blue because Kasparov was this towering human mental genius, and Deep Blue beat him. “How could this be? Does this mean the machines are as smart as us?” And that was a big technical accomplishment. But it was fundamentally a good trick. When you learn how a chess AI works deep down underneath the hood, it’s Minimax or Negamax. It’s a pretty simple search algorithm. It has got to have some idea whether a board state is better or worse for it – if I have a lot of pawns and a queen and I’m not in check, it’s a simple calculation of the value of the chess pieces.

But then, it just does a huge amount of search. If I move here and you move there and I move there… There are some clever techniques to make the search more efficient, but it’s basically brute computation. It’s just doing a ton of calculations. And so, when you learn that, suddenly, it’s disappointing. Suddenly, it’s like, “Deep Blue wasn’t a thinking machine. It was just a really clever-”

Emmet: Calculator.

Fergal: … calculator, right. A big mechanical system that calculated en masse. With the history of AI, you always get really impressed until you learn the trick. But now, for the first time, we don’t really know what the trick is. And there may not be a trick. It may be real. This is intelligent, or, at least, the trick is similarly good to human intelligence.
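
For readers curious just how simple that “trick” is, here is a minimal, purely illustrative minimax (“negamax”) search in Python. It plays tic-tac-toe rather than chess so it stays a few lines and actually runs end to end; Deep Blue applied the same basic idea to chess, with a piece-value evaluation function, alpha-beta pruning, custom hardware, and an enormous amount of raw search, none of which is shown here.

```python
# Bare-bones minimax ("negamax") search: the classic brute-force game-AI "trick".
# Tic-tac-toe keeps the example tiny and runnable; a chess engine swaps in a board
# representation, a material/positional evaluation, and vastly more computation.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player):
    """Return (score, move) for `player`: +1 forced win, 0 draw, -1 forced loss."""
    opponent = "O" if player == "X" else "X"
    if winner(board) == opponent:          # the opponent's last move won the game
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:                          # board full with no winner: draw
        return 0, None
    best_score, best_move = -2, None
    for move in moves:
        child = board[:move] + player + board[move + 1:]
        score = -negamax(child, opponent)[0]   # opponent's best reply, negated
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

if __name__ == "__main__":
    # From an empty board, perfect play by both sides is a draw (score 0).
    print(negamax(" " * 9, "X"))
```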

“Deep Blue wasn’t smarter than Kasparov, but in quite a clever way, they deployed the brute force calculation aspect of its intelligence to a task that made it better to the point where it could win”

Emmet: Or reflective of some underlying, almost physics-like universal law of information theory or how information works when you put large amounts of it together.

Fergal: I don’t know if I would go with information theory, but I guess the idea is that maybe a simple system, given enough expressive power and enough information, just starts to get smart.

Emmet: So Deep Blue wasn’t smarter than Kasparov, but in quite a clever way, they deployed the brute force calculation aspect of its intelligence to a task that made it better to the point where it could win. So I think you can both say, “Deep Blue is not smarter than Garry Kasparov,” and you can say, “Deep Blue can beat Garry Kasparov at chess,” or “It’s better than humans at a task.”

AI takes the wheel

Emmet: That leads me to think about how we think about those things. The emotional reaction you were describing your granddad had… If we look ahead and see self-driving cars, which have been bubbling away in the background throughout or even before any of the LLM stuff really came to the forefront. There’s that emotional thing of, “What do we want from these systems?” Maybe I’m getting away from the AGI stuff here, but let’s look at cars. What do we want from them, or how are we going to respond to the integration there? Because we have an imperfect system right now – more than a million road deaths per year caused by human error. If we replace that with an AI system that only led to half a million road deaths per year, would we… I mean, it’s very difficult, from a purely ethical, number-crunching point of view, to not…

It strikes me that this is like the AI trolley problem writ large. The AI trolley problem is a local “What decision do you make?” But we have this societal-level decision of, “Are we going to accept the ick feeling that comes with cars being driven by robots, even if they’re fallible, even if that fallibility is less fallible than human drivers?”

“If – and this is a huge if – we get self-driving cars to the point where a machine now kills you where a human used to kill you, but it kills you a lot less, I think most people would be okay with that”

Fergal: I think we probably will. There’s obviously a potential for regulatory and political failure, but people accept those trade-offs all the time. They just don’t talk about them very much because it’s politically unwise to. Every developed country has a medical system that will weigh the cost and benefit of a drug. And for better or worse, they do that. And I think that’s a good thing to do. And I understand that’s politically difficult to defend, but if you don’t do it, more people die.

Emmet: People debate it, but very few people choose to live in a country where those regulations don’t provide guardrails and safety barriers for all sorts of aspects of their lives. So yeah, I agree with you.

Fergal: Yeah, that’s it. And hopefully, they do a good job, but they do have to make these sorts of life-or-death decisions about what drugs are made available. Societies do that. And I have to imagine that, absent some political failure, if – and this is a huge if – we get self-driving cars to the point where, yes, a machine now kills you where a human used to kill you, but it kills you a lot less, I think most people would be okay with that.

Emmet: I don’t know. From a purely logical point of view, if you crunch the numbers, it’s hard to find a logical flaw in what you just said. But I think, as opposed to drug regulation, which is kind of like the background hum of life and living, you choose to get into a self-driving or non-self-driving car. And I think that decision will be a lot more personal for a lot of people than some of those regulations that are less tangible in your day-to-day life. I think that’s going to be the stuff that might end up overly politicized, making regulation really hard to achieve and hampering us more than anything.

Fergal: I mean, look, I know this is going to get political, but look at nuclear power. Nuclear power emits less carbon than other things, and the downside is a complicated discussion: when there’s a nuclear meltdown, it’s extremely costly, and that’s a hard statistical problem. But a lot of technologists would feel that, in retrospect, the discussion around nuclear power wasn’t the most rational. When I was a kid, I used to be scared of nuclear power. A lot of us in the eighties were scared of nuclear stuff. But maybe that’s an example of a political failure.

There’s definitely a big constituency of technologists who are like, “The debate around superintelligence is going to cause overregulation of AI, and there’s going to be a lot of human benefit that we’re going to lose.” That’s knee-jerk regulation. Maybe something similar will happen with self-driving cars where we get to the point where they’re better, but then there’s a political failure that ends up stopping them from being deployed until they’re perfect, and they’re never perfect. But I guess I’m optimistic that people will be able to have that debate.

Emmet: Well, this is why I struggle to personally and emotionally connect with the doomer angle on it because a certain amount of this is going to happen no matter what you say or do from here on out.

Fergal: For superintelligence? Because I’m happy to represent the doomer angle for superintelligence.

“There’s no scenario where self-driving cars will kill everybody”

Emmet: Oh, okay. So let’s distinguish between self-driving cars, which do not require superintelligence, and some superintelligent sci-fi moment. I perceive a doomer mentality around even the non-superintelligent aspects of this.

Fergal: Yeah. You’ve got to differentiate between the two. There are a lot of people who are incredibly skeptical and cynical and pessimistic about, say, self-driving cars. And then there are other people who will be cynical and pessimistic about general intelligence or superintelligence, but I think you have to separate the two. Everyone thinks they’re rational, but I would argue that there’s a rational argument to be cautious about superintelligence. There’s no scenario where self-driving cars will kill everybody. But there are, I think, plausible scenarios where people invent superintelligence, it all goes wrong, and it literally kills everybody. I’m not saying that’s likely. There are people who say that is the default scenario, and they have cogent arguments, in my opinion, but I’m not saying it’s likely. But I don’t think you can say it’s impossible. Superintelligence is potentially super dangerous. And so, I think that’s just a mad state to be in. Even the last minute of this conversation is mad.

Emmet: Yeah, it’s wild. And to go back to what we said at the start, it’s the biggest thing, yet it’s so amorphous. We’re just trying to step our way into it.

Fergal: It’s scary to even think or talk about. I think people self-censor a lot. Academics self-censor.

Emmet: Right, for fear of being wrong.

Fergal: Fear of being wrong and the incentives around that. But also, even at a human level, it’s like, “Shit, really? You’re talking about a technology that’s that big.” You mentioned Oppenheimer in your previous podcast. If you believe that stuff will happen, and if you believe in superintelligence, it’s clearly a technology at a similar level of power to nuclear weapons. Or, I would argue, substantially above that power. There’s a doomer argument about self-driving cars, and I, personally, don’t take that seriously. Maybe that’s arrogant of me, but there’s no scenario where self-driving cars kill everybody.

Untangling AI ethics

Emmet: I think what we’re getting to here is the whole conversation around AGI or superintelligence is very different from the conversation around bog standard, what we have or-

Fergal: AI ethics.

Emmet: … or what we have getting a bit better but not, even in the next five years, getting to the next thing. That is more core AI ethics – all the stuff about misinformation and things like that that we have to deal with regardless.

Fergal: I think it’s an accident of history and maybe a dangerous accident of history that these two conversations get muddled a lot. Now, that’s just one perspective. There are alternate perspectives. People who work in AI ethics today will say, “Oh, no, this superintelligence thing is sucking all the oxygen out of the room. Future theoretical harms are distracting from real harms today.” But if you believe that the future harms are not theoretical and that the magnitude is so much bigger, it should suck a lot of attention, you know?

“I’m not saying we shouldn’t try and get ahead of it in any way, but it’s going to be way harder to address than some of the more tangible, immediate problems we have”

Emmet: Let me try and present an alternate case. Yes, of course, that is the big prize/threat/danger. Yet it’s so unknown and unknowable to us… Let me get Rumsfeldian for a moment – there are known knowns, known unknowns, and unknown unknowns. And when I think about superintelligence, it’s unknown unknowns, “Let me go read some sci-fi and dream about what might happen.” Whereas we do have a bunch of work ahead of us that we could do, and maybe the muddling of the conversation among those levels of AI is very counterproductive. Although that’s the big scary thing, it’s kind of binary as to whether or not it’s going to happen. And I’m not saying we shouldn’t try and get ahead of it in any way, but it’s going to be way harder to address than some of the more tangible, immediate problems we have.

Fergal: Yes, it is intractable, but there’s a massive threat, maybe 1,000X or 10,000X the magnitude, and I think most people would agree with that, just with a lower probability, maybe 10X or 100X lower. If we’re aligned on that, it’s irresponsible not to talk about it. Just because it’s intractable doesn’t mean it’s not going to come and kill you.
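
To make the arithmetic behind that point concrete (the numbers here are purely illustrative, echoing the rough multipliers above): if the familiar harm has probability $p$ and magnitude $M$, while the superintelligence scenario is 100 times less likely but 1,000 times larger, the expected harms compare as

$$
\frac{\mathbb{E}[\text{harm}_{\text{superintelligence}}]}{\mathbb{E}[\text{harm}_{\text{familiar}}]}
= \frac{(p/100)\cdot 1000\,M}{p \cdot M} = 10,
$$

so even at the pessimistic end of those guesses, the larger, less likely threat still dominates in expectation.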

Emmet: Or do we start the ball rolling on actually building a muscle to regulate and discuss regulation and enact regulation so we’re better prepared for the intractable thing if and when it comes?

Fergal: Maybe, but maybe that’s too slow. We talked about self-driving cars and the ethics of that, and with everything, there are costs and benefits. And so, there are a lot of people in the world who would be like, “All this AI ethics stuff about our systems today is important but small compared to the utility of the systems today.” But the shape gets very different if you’re talking about superintelligence because that’s a next-level dangerous technology.

Emmet: I get you, but I still think there’s a path we should avoid, and it’s also a plausible path. We keep going down the path we’re on and, hey, a thought experiment: imagine there’s a big, important election next year, and AI and misinformation become the boogeyman for all of this. People have a Donald Trump chatbot on their phones that they can talk to, and that’s a compelling and entertaining product. I bet we’ll see something like that. And the public opinion, the media storm, and the politicization of whether this is good or bad, as a binary, just swamp the whole thing and make it a lot more difficult to have the bigger, important conversation.

“The idea is that we do both and we’re not in a position where we’re like, ‘Hey, the short-term real harms are the only thing to focus on because the other thing’s nebulous’”

If, in the next year, we get to a stage where serious misinformation has real consequences for lots of people – not the global-meltdown consequences that are possible with AGI but, not to minimize it, a single accident somewhere due to a robot car or something – we have to set ourselves up to have increasingly mature conversations about this stuff. There’s a narrow path that I can imagine to us getting there, and probably lots of others where we mess it up significantly.

Fergal: I guess you’ve got to do both, right? The idea is that we do both and that we’re not in a position where we’re like, “Hey, the short-term real harms are the only thing to focus on because the other thing’s nebulous,” or that the future nebulous thing is so big that we spend zero time on the short term harms. Presumably, there’s some optimally balanced portfolio of effort.

Imagine there’s an organization in the world that really cares about AI ethics and AI safety. That organization should probably split its focus across both of those things. But how do you make that split? There’s this idea of Pascal’s mugging, where the claimed stakes are so big that someone can say, “Hey, I’ve got an AGI over here. Now, you probably don’t believe me, but you should give me all your money just in case. Because otherwise, it’s going to destroy the world.” It’s like, “Oh, wow, that consequence is so big, you should just do whatever I say.” That’s Pascal’s wager, right? Weaponized. You want to avoid that scenario, but you also want to avoid a scenario where you ignore the legitimate threat.

This is hard stuff. I think good, well-meaning people trying to make the world better, both long and short-term, are going to disagree. You mentioned recently the discussion is divisive. Sometimes, the most divisive discussion is when really good, well-meaning people disagree on tactics and end up fighting each other. That’s really hard to get around.

Emmet: The obvious thing there is we’ve got to do both things.

Regulating the AI landscape

Emmet: You mentioned EU regulations that are coming on stream. Are we well set up to put smart, right, effective regulations in place for the short-term stuff I’m talking about?

Fergal: I don’t know. I’m not the right expert for this. I’ve read one of the draft EU AI Act things. And there’s a lot of good stuff in there and a lot of sensible stuff. But it’s still at a political level, it’s still being debated. I read some amendments that seemed overblown. I don’t know. Look at GDPR. Some of the GDPR is great, right? Protection. And then some of it is over the top, like the cookie consent stuff, or the GDPR consent stuff you see everywhere. Has that really made the world better? You might say, “Well, the theory was good, but the implementation was just bad.” Yeah, but that matters. Part of the job of good law is that the consequences aren’t bad.

“Whatever you think about this stuff, surely it’s better to get it out there into the world while the stakes are low so we can make mistakes and find things out”

How does this play out with AI? I think we’ve got to wait and see. I’m encouraged by what I’m seeing in the US, where they’ve engaged with a lot of the labs. There’s been two-way engagement and a real discussion, and they’re starting to put some voluntary oversight of frontier training runs in place. So it doesn’t feel knee-jerk so far. That feels good.

I might be naive. I kind of think that the people in the frontier research labs are being fairly responsible about this stuff. They care about not blundering into superintelligence by mistake. They’re engaging with regulators. You can imagine some other world where they’re deliberately polluting the public debate and setting up in jurisdictions that can’t be regulated. I think there’s a lot of good stuff happening, and cynics will be like, “Oh, this is just the AI lab trying to do regulatory capture.” I’m sure it’s not. The benefit of regulatory capture is not lost on some people in this space, but I think they’re mostly good actors.

Emmet: I would imagine good regulation also allows for a sliding scale or progress from step to step. Whatever you think about this stuff, surely it’s better to get it out there into the world while the stakes are low so we can make mistakes and find things out. The self-driving car autonomy levels probably provide something like this, where you can say, “If you’re in this area, it’s fine for this level of autonomy. In this other area, it’s not, or driver intervention is required.” I’m out of my area of expertise here as well, but you can imagine it in other areas. Let’s start off with X-rays or scans of broken toes – the AI can take that. But for anything like looking at a tumor, we’ll have multiple layers, including human involvement. And maybe you just gradually work through those stages.

“The doomer position or the dangerous general intelligence position is that you accidentally build something much more powerful before you expect it”

Fergal: This is what we do day to day when we build AI products. We’re literally working on AI products in the Inbox at the moment, trying to do something big, and now we’re realizing, “No, that was a little bit too ambitious.” That’s fine; there are smaller cases where it will definitely give value safely. This is sort of our day job. You’re looking for ways to roll out where it’s definitely net positive and then expand from there. And I’ve got to imagine that a lot of good AI adoption will go like that.

Again, the doomer position or the dangerous general intelligence position is that you accidentally build something much more powerful before you expect it. You run some longer training run that’s 10 times longer than you ever ran before, and you discover you’ve trained something that’s 1,000 times more intelligent because it didn’t scale the way you thought or because you got some new algorithm or whatever. That’s the scenario that people are really worried about. And I think that’s a scenario to be cautious of.

But again, even the EU AI Act people are starting to talk about, “Oh, if you’re training the best, most frontier models, maybe you’ve got to register that. You’ve got to regulate it at least.” How do you regulate that? God, I don’t know. That would be a tough job. But at least they’re thinking about it.

Sounding the alarm

Emmet: Imagine you’re working in OpenAI or something like that, and you do this massive GPT-8 run, trained on some insanely huge corpus, and it passes this threshold you’re describing where it’s scary how powerful it is. What do you think happens in that scenario? First of all, is it most likely to happen inside a large lab, the type you’re already describing, that wants to behave responsibly and is engaged with the government, so that at that point, they go, “Hey, hit the big red button, it happened. Let’s have a talk about where to take this next”? Does it become broadly accessible to a wide array of individuals quickly and suddenly, or could it be contained within a lab once it’s happened, so we can talk about how to very carefully handle this material?

“Someone’s trained the model, what happens next? I’ve got to hope that the people training these things will raise the alarm”

Fergal: There’s a lot in there, and this is definitely a deep shit scenario where you ended up training something. That’s a science fiction scenario, but maybe it’s not so far away. Who knows? But it’s like you’ve trained something that’s maybe vastly more intelligent than you. It’s sitting in your lab. What happens next? I think people want to avoid this scenario, and I guess the game here is that as people get close to training these things, they can see this coming. They see that this is getting close to human intelligence. Hopefully probably not conscious. There are all sorts of ethical things if you thought it was conscious, but let’s assume you’ve got an unconscious but intelligent entity. What happens next?

If this is the way the technology plays out, a private lab feels like a better place to encounter this than some military black-ops project. For the people who are like, “We need to stop all training runs now,” that means you’re going to encounter this in a military context first. And I don’t know if that would be better for people, genuinely. Maybe that’s my skepticism. But yeah, someone’s trained the model, what happens next? I’ve got to hope that the people training these things will raise the alarm. And I think they will. A nice thing about industrial research labs doing this is that I think they’ll leak stuff. And that’s why it’s really important that it’s not done in a military context, because industry doesn’t have that culture of secrecy. Even Apple, the most secretive of the big tech companies, I suspect that if they had some crazy AI thing, you’d hear about it, right? Google definitely would.

Emmet: When you say leak, do you mean whistleblower-style, like, “We have created this?”, or do you mean the model-

Fergal: No, I don’t mean leak the model itself.

“Imagine you live in a world with dangerous AGI proliferation. How do you control proliferation? That’s a whole different world”

Emmet: That did happen, right? Facebook released their open source model to some researchers, someone leaked it, and it became available on the internet.

Fergal: Yeah. Well, this is a whole other conversation. Imagine you live in a world with dangerous AGI proliferation. How do you control proliferation? That’s a whole different world. Suddenly it’s like, “Man, I wish it was in a military context.” But even if you look at the Manhattan Project, the Russians had spies that did tech transfer from the Manhattan Project to their weapons program. Surely nation states are thinking about this sort of stuff at the moment.

Emmet: Absolutely. There is some positive to be taken from the fact that, against all odds, almost 80 years ago or something like that, someone was sitting around having a version of the conversation we’re having now, and I think they would probably be pleasantly surprised to hear that, if you fast-forward to 2023, it has gone no further than that original use.

Fergal: There’s also anthropic bias and anthropic shadow. There are all sorts of interesting statistical properties. This doesn’t really apply to nuclear exchange, but things that can destroy the world never happen in your timeline, as far as you can tell.

Emmet: Right. This is the Drake equation applied to living on Earth. Why has the world never ended while I’ve lived on it?

Fergal: Yeah. And anthropic bias shows up in the Drake equation. Humans tend to underestimate the prevalence of planet-destroying asteroids because the other species that got hit by planet-destroying asteroids are not in our timeline of history. If you want to start reasoning at that level…

Emmet: And bring that back to the AI stuff for me then.

“Maybe AI is some crazy new technology and we are going to count on individuals, people in public policy, and people in AI labs to step up to manage this well”

Fergal: Well, maybe not the AI stuff, but the nuclear stuff. I don’t think it’s too big of an effect there, but when you look at a whole bunch of close calls for nuclear exchange, the fact is it hasn’t actually happened. If it actually happened, would we be here to talk about it?

Emmet: I get it. You’re like, “Emmet, we’ve been lucky on a few coin tosses. I wouldn’t take that as evidence that everything’s going…”

Fergal: Now, on the other hand, there hasn’t been limited nuclear exchange. But I guess if your model is that nuclear exchange naturally spirals out of control, there’ll be very few cases of limited nuclear exchange. The fact that things didn’t all go wrong shouldn’t leave us enormously reassured about our ability to deal with that crazy technology. That’s just one argument. And a counterargument is like, “Oh no, there was a lot of work done to try and pull the Cold War back from the brink.” And you read about the Cuban Missile Crisis, and there were individuals who stepped up when it counted. Maybe AI is some crazy new technology and we are going to count on individuals, people in public policy, and people in AI labs to step up to manage this well. There’s a flip side of the coin, too. If we’re talking about generally intelligent systems, there are insane benefits for humanity. It’s important not to lose sight of that.

Emmet: And the opportunity to solve some of our biggest known immediate problems as well, right? And lots of climate issues and overpopulation.

Fergal: Yeah, if overpopulation is even a problem.

Emmet: But I will say one thing. Even this conversation has made me a little bit more optimistic because it’s allowed me to take this miasma of concern and compartmentalize it a little bit. I’m much more optimistic than I was at the start of the conversation about our short-term likelihood of successfully managing, regulating, continuing to grow, and enabling the right level of innovation without letting it go crazy. And obviously, we have this big potential moment on the horizon.

Fergal: You must have been very pessimistic at the start of the conversation.

Move fast and break things

Emmet: Well, I still think all that regulation stuff is going to be super hard work, take many years, and we’ll probably get it wrong. And one of the concerns that I have is we…

Fergal: I think regulation is going to be fast and hard.

Emmet: Fast and effective, you think?

“No one wants to be the person looking back who didn’t speak to the risks, but there is a danger of over-indexing on that. We need to try and more heavily highlight both sides of it if we’re to have a good debate”

Fergal: This is a conversation between two technologists, right? And this area is mad. “Hey, people might build AGI, and it might have a massively negative consequence for loads of people in the world who don’t normally pay attention to technology.” And technology skirts regulations often, and it moves fast and breaks things. And that’s tolerated, mostly, as long as it’s smartphone apps. But if you get to the point where people are credibly discussing even the low chances of massively negative outcomes, a lot of people who are in politics and civil society will be like, “Hey, are these guys serious?” Right now, we’re at a point where those people are looking at it, and they’re seeing all this debate. And a lot of the credible people in AI are kind of like, “Oh, it’s like overpopulation on Mars.”

I think that’s switching, and I think that will switch. I think that in a year’s time, a lot of the credible people in AI will be like, “I’m not sure. I can’t really explain this system’s performance. It’s doing really amazing things, yet I don’t know where it’s going to go.” And as they sit down in those closed-door briefing sessions, the people in politics and stuff will be like, “Really? There is a major threat here, and you can’t explain it?” I personally predict hard and fast regulation of frontier training runs. Overall, I think we’ll be glad to see that. And I really hope we don’t throw the baby out with the bathwater. We do industrial work. We are not using superintelligence. There’s a vast amount of benefit to be gained from automating drudgery. I think, overall, technology weighs in as a net positive.

Emmet: I guess that’s part of the role that we have to play in all this. Even as we’re talking about this stuff, it strikes me that 80% of our conversation is actually skewed towards danger mitigation, and so on. I think we may take for granted a lot of the potential positives and upsides here. No one wants to be the person looking back who didn’t speak to the risks, but there is a danger of over-indexing on that. We need to try and more heavily highlight both sides of it if we’re to have a good debate.

“Software gets built by teams, where, traditionally, you build the thing, put it out in the market, find out if it works, fail fast, and your bugs will be revealed to you”

An element that I’m uncomfortable with as well is that this conversation we’re having is also super speculative. And we’re like, “Hey, who knows?” Nobody can know, but it’s still very valuable and worthwhile to put yourself out there a little bit and have a bit of a guess about it. But it’s very different from how we’ve done software in the past.

Fergal: How humanity as a whole has done software. Is that what you mean?

Emmet: I mean how software gets built by teams, where, traditionally, you build the thing, put it out in the market, find out if it works, fail fast, and your bugs will be revealed to you. How do bugs get fixed? They get fixed by being allowed to break to a certain extent and then being fixed.

Fergal: Not just software. This is a new class of threat because you’ve got to get it right the first time. In the past, we made a big mistake – the first nuclear meltdown – and then we took it much more seriously. We can’t do that this time, right?

Emmet: This means that software makers need to get from the “move fast and break things” mentality to something that is a bit more engaged… I guess it depends on what you’re doing as well. If you’re doing text auto-complete on your blog post authoring tool, go for it. But in certain areas, not so much. There are already software industries around health and finance and the military that are all super regulated. Hopefully, we can pull on enough of that experience.

“Overall, for society as a whole, the benefit of rapid development and patching things and fixing issues as they crop up is probably worth it”

Fergal: Oh gosh, I don’t know. Firstly, I think that applies to people training frontier models. I think we should have a “move fast and break things” approach for people building customer support AI chatbots and things of that class, right? That’s very different from a frontier lab building the most intelligent system humans have ever built for the first time. Frontier models need to be regulated, but there’ll be a bunch of stuff that is pretty intelligent and is a useful tool, but it’s not going to try and seize control of anything. Overall, for society as a whole, the benefit of rapid development and patching things and fixing issues as they crop up is probably worth it.

Emmet: Yeah, what I’m saying is we need to inherit the norms of the industries that we’re talking about. Yes for the example you gave, but less so for medical software or something like that.

Fergal: I mean, it’s a complicated thing. There are many different bits here I need to separate out. I think research labs training frontier models probably need to inherit norms of regulated industries at some point in the future when the models start to get close to a point where they’re dangerous. Some people would say that’s now. I would say probably not quite yet, but at some point soon.

Emmet: And they seem pretty open to that. That’s pretty plausible to me.

Fergal: I think so. But what are those norms? Those norms need to be at least as stringent as the norms for people doing gain-of-function research on pathogens, and they don’t seem to be doing a great job there, by the way. They will need to be the most stringent norms humanity has ever had. I would vote in that direction. Then, separately, there are people at the applications layer applying non-dangerous AI, and I would say it depends on what you’re doing. If you’re going to detect cancers from images, you probably need to be regulated, and I think the EU AI Act is going to regulate things like that.

Then, there’s a tier below, such as Netflix movie recommendations. There are issues of bias and other ethical issues there, but I probably wouldn’t bother regulating that, or I’d only regulate it extremely lightly, because while there are real, meaningful issues, and Netflix recommends certain things based on some stuff I’d rather they didn’t do, the benefit to society of moving fast probably weighs heavily on the scales against those harms. Other people will disagree with that, and that’s their right. I think that’s reasonable, but that’s how I frame it. I would put crazy stuff like superintelligence in a new category, or the category of biological weapons research. Maybe I’m wrong. Maybe it’ll turn out that I was over-pessimistic.

Between optimism and caution

Emmet: To cool our jets a little bit, there’s no reason to suddenly get worried about the Netflix algorithm just because it’s underpinned by slightly different technology. And the fact that bias and datasets have been part of the conversation about AI systems from the start probably bodes a lot better than the random stuff that gets put into your algorithmic feeds, which is just as opaque to everyone outside of those companies and probably a lot more prone to bias or mistakes. This has been a good reminder to compartmentalize some of these different ideas, so I’m glad we’ve had it.

Fergal: Yeah, it’s an interesting conversation.

“You don’t want to go too far, too speculative into the future, but you don’t want to ignore the low-probability, high-magnitude stuff either. It’s a hard space for everyone to get their head around”

Emmet: I have a feeling we could come back in a couple of months and almost have a follow-on conversation. You know what I mean?

Fergal: Yeah. Well, it’s percolating in all our heads. We’re all trying, as humans, to get our heads around technology change here. It’s hard. You don’t want to go too far, too speculative into the future, but you don’t want to ignore the low-probability, high-magnitude stuff either. It’s a hard space for everyone to get their head around. Reasonable people can disagree in this space and I think it’s really good that there’s this discussion about it. I wish the discussion was a little less muddled. I find a lot of the media discussion very shallow.

Emmet: Do you know someone who you think is doing a good job at it? Publications, journalists, some rando on Twitter…

Fergal: There’s a lot of really great stuff on the technical side. There are a lot of good research papers being written around this. The academic research discussion, I feel, is still muddled, but overall, it’s making progress, and I think this is just how we deal with stuff. You’ve got to get to it in layers. The journalists will get there. We saw this with COVID. I’d done a little bit of epidemiology, so I could understand a little bit of it, but it was a time of adaptation. Even experts were getting things wrong in the first month or two, and then people adapted. I’m optimistic that this will happen. I hope the timeline is okay. That’s it.

Emmet: In conclusion, Fergal, are we so back, or is it so over? That’s the fundamental question of all this.

Fergal: I’m incredibly optimistic about AI and the power it brings, and I’m really cautious. Up to the point of human-level AI, I’m incredibly optimistic. It’s going to be great for people overall. With an unconscious but intelligent system that gets progressively more useful and powerful up to human level, people can use it to do bad things, but overall, I think it’ll be net positive. There is a threshold – I don’t know where it is – where we have to start being cautious, but I think as long as that doesn’t happen way faster than people expect, as long as it doesn’t happen by accident, I see a lot of really positive directions. I’m optimistic, but I do think it’s time to take the negative outcomes seriously, too. That’s where my head is at.

Emmet: At some level, from the very first tool – if you watch 2001: A Space Odyssey, where the ape picks up the bone – tools have had the potential to be used to make something or to hurt someone. And we’re potentially coming up to the point where a tool more powerful than anything before it comes along. It’s just the sharpest articulation of the duality that anything can be put to positive or negative uses. It’s up to us. Or have we finally been outsmarted by the tools, such that we can’t keep them under wraps?

Fergal: I guess it’s up to us as citizens and democracies. I don’t know if the little democracy we’re in is going to have much of a role here. It’s a lot up to the people working in the research labs. I think that civil society needs to get its head around this stuff. This is a different type of tool. It’s, at least, in the trickiest, most dangerous class of tools. And there are certain things that could play out here that are pretty wild. I look at kids starting school, and I don’t know what world they’re going to live in if those wild scenarios play out. Again, we should acknowledge there are other scenarios. But yeah, I do expect a big reaction from wider society soon.

“There’s ethical stuff for them to weigh, but there’s a very coherent position where you can’t just ignore this. This is coming”

Emmet: Would you be slightly relieved if it all just tapered off?

Fergal: I mean, I would be disappointed. I would love it if it went further before it tapered off. But I do think it would give us all more time to adapt to the threats. At this point, if anyone was watching this, they’d be like, “Oh my god, we need to outlaw all AI stuff.” And that’s a very reasonable reaction. But I do think there’s this kind of crazy theoretical strategic benefit to encountering this stuff early and fast. It’s driving a lot of the research lab folk to pull the technology into the present so we can deal with it while it’s early. And I do think this stuff is almost unbannable. I don’t always agree with them, but I have a lot of respect for a lot of the actors in research labs like OpenAI who think about this, because it could so easily be an incredibly irresponsible thing to do. There’s ethical stuff for them to weigh, but there’s a very coherent position where you can’t just ignore this. This is coming. You’ve got to choose when you’ll deal with it.

Emmet: And being part of solving the problem from the inside rather than putting your head in the sand. I mean, more power to those people and the people responsible for regulating. We should both support them and hold them accountable.

Fergal: The raw material here is computation, and computation is everywhere. We have computers everywhere. It’s not like uranium, where you can control the uranium. You do want to encounter this stuff while it still needs a lot of centralized computation. Then, it’s at least regulatable for some period of time.

Emmet: I definitely want to check back in on it. Thanks, Fergal.

Fergal: Good chat. Thanks, Emmet.
