Sam Altman on Sora, Energy, and Building an AI Empire


AI transcript
0:00:08 Sort of thought we had, like, stumbled on this one giant secret that we had these scaling laws for language models, and that felt like such an incredible triumph.
0:00:11 I was like, we’re probably never going to get that lucky again.
0:00:17 And deep learning has been this miracle that keeps on giving, and we have kept finding breakthrough after breakthrough.
0:00:23 Again, when we got the reasoning model breakthrough, like, I also thought that was like, we’re never going to get another one like that.
0:00:27 It just seems so improbable that this one technology works so well.
0:00:35 But maybe this is always what it feels like when you discover, like, one of the big, you know, scientific breakthroughs is if it’s, like, really big, it’s pretty fundamental.
0:00:37 And it just, it keeps working.
0:00:40 OpenAI isn’t just building an app.
0:00:43 It’s building the biggest data center in human history.
0:00:48 Yesterday, I sat down with Ben Horowitz and Sam Altman, CEO of OpenAI.
0:00:54 We talk about OpenAI’s vision to become the people’s personal AI, the massive infrastructure behind it,
0:00:59 and how the company’s research is pushing toward AGI, including AI that can do real science.
0:01:06 We also talk about how his views have changed on open source, regulation, and why AI and energy are now deeply linked.
0:01:08 Let’s get to it.
0:01:12 Sam, welcome to the a16z Podcast.
0:01:13 Thanks for having me.
0:01:18 In another interview, you’ve described OpenAI as a combination of four companies.
0:01:25 Consumer technology business, a mega-scale infrastructure operation, a research lab, and all the new stuff, including planned hardware devices.
0:01:30 From hardware to app integrations, job marketplace to commerce, what do all these bets add up to?
0:01:30 What’s OpenAI’s vision?
0:01:38 Yeah, I mean, maybe you should count it as three, maybe as four, for kind of our own version of what traditionally would have been the research lab at this scale, but three core ones.
0:01:42 We want to be people’s personal AI subscription.
0:01:43 I think most people will have one.
0:01:44 Some people will have several.
0:01:51 And you’ll use it in some first-party consumer stuff with us, but you’ll also log into a bunch of other services, and you’ll just use it from dedicated devices at some point.
0:01:56 You’ll have this AI that gets to know you and be really useful to you, and that’s what we want to do.
0:02:01 It turns out that to support that, we also have to build out this massive amount of infrastructure.
0:02:06 But the goal there, the mission is really like build this AGI and make it very useful to people.
0:02:11 And does the infrastructure, do you think it will end up, you know, it’s necessary for the main goal.
0:02:19 Will it also separately end up being another business, or is it just really going to be in service to the personal AI or unknown?
0:02:22 You mean like would we sell it to other companies as raw infrastructure?
0:02:24 Yeah, would you sell it to other companies?
0:02:26 You know, it’s such a massive thing.
0:02:27 Would it do something else?
0:02:32 It feels to me like there will emerge some other thing to do like that.
0:02:33 But I don’t know.
0:02:34 We don’t have a current plan there.
0:02:35 Yeah, I know what it is.
0:02:39 It’s currently just meant to like support the service we want to deliver and the research.
0:02:40 Yeah, no, that makes sense.
0:02:41 Yeah.
0:02:46 The scale is sort of like terrifying enough that you’ve got to be open to doing something else.
0:02:50 Yeah, if you’re building the biggest data center in the history of humankind.
0:02:51 The biggest infrastructure project in the history of humankind, yeah.
0:02:57 There was a great interview you did many years ago in Strictly VC, early OpenAI, well before ChatGPT.
0:02:58 And they’re asking, what’s the business model?
0:03:00 And you said, oh, well, we’ll ask the AI.
0:03:01 It’ll figure it out for us.
0:03:02 Everybody laughs.
0:03:08 But there have been multiple times, and there was just another one recently, where we have asked a then current model for what should we do.
0:03:11 And it has had an insightful answer we missed.
0:03:14 So I think when we say stuff like that, people don’t take us seriously or literally.
0:03:15 Yeah.
0:03:17 But maybe the answer is you should take us both.
0:03:18 Yeah.
0:03:19 Yeah.
0:03:24 Well, no, as somebody who runs an organization, I ask the AI a lot of questions about what I should do.
0:03:26 It comes up with some pretty interesting answers.
0:03:27 Sometimes.
0:03:30 You have to give it enough context, but.
0:03:34 What is the thesis that connects these bets beyond more distribution, more compute?
0:03:39 I mean, the research enables us to make the great products, and the infrastructure enables us to do the research.
0:03:42 So it is kind of like a vertical stack of things.
0:03:49 Like, you can use ChatGPT or some other service to get advice about what you should do running an organization.
0:03:53 But for that to work, it requires great research and requires a lot of infrastructure.
0:03:55 So it is kind of just this one thing.
0:04:05 And do you think that there will be a point where that becomes completely horizontal, or will it stay vertically integrated for the foreseeable future?
0:04:12 I was always against vertical integration, and I now think I was just wrong about that.
0:04:13 Yeah.
0:04:13 Interesting.
0:04:20 Because you’d like to think that the economy is efficient, and the theory is that companies can do one thing and that’s supposed to work.
0:04:21 I’d like to think that, yeah.
0:04:24 And in our case, at least, it hasn’t really.
0:04:26 I mean, it has in some ways, for sure.
0:04:29 Like, you know, NVIDIA makes an amazing chip or whatever that a lot of people can use.
0:04:35 But the story of OpenAI has certainly been towards we have to do more things than we thought to be able to deliver on the mission.
0:04:35 Right.
0:04:48 Although the history of the computing industry has kind of been a story of kind of a back and forth in that there was the Wang word processor and then the personal computer and the BlackBerry before the smartphone.
0:04:55 So there has been this kind of vertical integration and then not, but then the iPhone is also vertically integrated.
0:05:02 The iPhone, I think, is the most incredible product the tech industry has ever produced, and it is extraordinarily vertically integrated.
0:05:03 Yeah, amazingly so, yeah.
0:05:04 Interesting.
0:05:09 Which bets would you say are enablers of AGI versus which are sort of hedges against uncertainty?
0:05:14 You could say that on the surface, Sora, for example, does not look like it’s AGI relevant.
0:05:22 But I would bet that if we can build really great world models, that’ll be much more important to AGI than people think.
0:05:37 There were a lot of people who thought ChatGPT was not a very AGI relevant thing, and it’s been very helpful to us, not only in building better models and understanding how society wants to use this, but also in, like, bringing society along to actually figure out, man, we got to contend with this thing now.
0:05:42 For a long time before ChatGPT, we would talk about AGI, and people were like, this is not happening, or we don’t care.
0:05:44 Then all of a sudden, they really cared.
0:05:51 And I think that research benefits aside, I’m a big believer that society and technology have to co-evolve.
0:05:53 You can’t just drop the thing at the end.
0:05:54 It doesn’t work that way.
0:05:56 It is a sort of ongoing back and forth.
0:05:57 Yeah.
0:06:04 Say more about how Sora fits into your strategy, because there’s some hullabaloo on X around, hey, why devote precious GPUs to Sora?
0:06:07 But is it a short-term, long-term trade-off, or how is Sora AGI-relevant?
0:06:12 Well, and then the new one had, like, a very interesting twist with the social networking.
0:06:19 I’m very interested in kind of how you’re thinking about that. And did Meta call you up and get mad, or what do you expect their reaction to be?
0:06:25 I think if one company of the two of us feels more like the other one has gone after them, it wouldn’t.
0:06:27 They shouldn’t be calling us.
0:06:29 Well, I do not have a history to do that.
0:06:34 But first of all, I think it’s cool to make great products, and people love the new Sora.
0:06:44 And I also think it is important to give society a taste of what’s coming on this co-evolution point.
0:06:51 So, like, very soon, the world is going to have to contend with incredible video models that can deepfake anyone or kind of show anything you want.
0:06:52 And that will mostly be great.
0:06:55 There will be some adjustment that society has to go through.
0:07:00 And just like with ChatGPT, we were like, the world kind of needs to understand where this is.
0:07:07 I think it’s very important the world understands where video is going very quickly, because video has much more, like, emotional resonance than text.
0:07:11 And very soon, we’re going to be in a world where, like, this is going to be everywhere.
0:07:12 So, I think there’s something there.
0:07:16 As I mentioned, I think this will help our research program, which is on the AGI path.
0:07:22 But, yeah, it can’t all be about just making people, like, ruthlessly efficient and the AI, like, solving all our problems.
0:07:25 There’s got to be, like, some fun and joy and delight along the way.
0:07:28 But we won’t throw, like, tons of compute at it.
0:07:30 Or, not a big fraction of our compute.
0:07:35 It’s tons in the absolute sense, but not in the relative sense.
0:07:41 I want to talk about the future of AI human interfaces, because back in August, you said the models have already saturated the chat use case.
0:07:46 So, what do future AI human interfaces look like, both in terms of hardware and software?
0:07:49 Is the vision for kind of a WeChat, like, super app?
0:07:56 We’ve solved the chat thing in a very narrow sense, which is if you’re trying to, like, have the most basic kind of chat-style conversation, it’s very good.
0:08:00 But what a chat interface can do for you, it’s, like, nowhere near saturated.
0:08:02 Because you could ask a chat interface, like, please cure cancer.
0:08:04 A model certainly can’t do that yet.
0:08:12 So, I think the text interface style can go very far, even if for the chit-chat use case, the models are already very good.
0:08:15 But, of course, there’s better interfaces to have.
0:08:17 Actually, it’s another thing that I think is cool about Sora.
0:08:22 Like, you can imagine a world where the interface is just constantly real-time rendered video.
0:08:23 Yeah.
0:08:25 And what that would enable, and that’s pretty cool.
0:08:30 You can imagine new kinds of hardware devices that are sort of always ambiently aware of what’s going on.
0:08:37 And rather than your phone, like, blast you with text message notifications whenever it wants, like, it really understands your context and when to show you what.
0:08:39 And there’s a long way to go on all that stuff.
0:08:45 Within the next couple of years, what will models be able to do that they’re not able to do today?
0:08:50 Will it be sort of white-collar replacement at a much deeper level, AI scientist, humanoids?
0:08:55 I mean, a lot of things, but you touched on the one that I am most excited about, which is the AI scientist.
0:08:56 Yeah.
0:09:00 This is crazy that we’re sitting here seriously talking about this.
0:09:07 I know there’s, like, a quibble on what the Turing test literally is, but the popular conception of the Turing test sort of went whooshing by.
0:09:09 Yeah, that was fast.
0:09:13 You know, it was just like, we talked about it as this most important test of AI for a long time.
0:09:15 It seemed impossibly far away.
0:09:17 Then all of a sudden it was passed.
0:09:20 The world freaked out for, like, a week, two weeks.
0:09:23 And then it’s like, all right, I guess computers, like, can do that now.
0:09:25 And everything just went on.
0:09:28 And I think that’s happening again with science.
0:09:32 My own personal, like, equivalent of the Turing test has always been when AI can do science.
0:09:35 Like, that has always been, like, that is a real change to the world.
0:09:39 And for the first time with GPT-5, we are seeing these little examples where it’s happening.
0:09:40 You see these things on Twitter.
0:09:45 It did this, it made this novel math discovery and did this small thing in my physics research, my biology research.
0:09:50 And everything we see is that that’s going to go much further.
0:09:55 So in two years, I think the models will be doing bigger chunks of science and making important discoveries.
0:09:58 And that is a crazy thing.
0:10:00 Like, that will have a significant impact on the world.
0:10:05 I am a believer that to a first order, scientific progress is what makes the world better over time.
0:10:08 And if we’re about to have a lot more of that, that’s a big change.
0:10:12 It’s interesting because that’s a positive change that people don’t talk about.
0:10:17 It’s gotten so much into the realm of the negative changes if AI gets extremely smart.
0:10:19 But curing a disease is like…
0:10:20 It could use a lot more science.
0:10:21 That’s a really good point.
0:10:25 I think Alan Turing said this, somebody asked him, they said,
0:10:30 well, you really think the computer is going to be smarter than the brilliant minds?
0:10:32 He said, it doesn’t have to be smarter than a brilliant mind,
0:10:34 just smarter than a mediocre mind like the president of AT&T.
0:10:38 We should use more of that too, probably.
0:10:41 We just saw Periodic launch last week, from OpenAI alums.
0:10:45 And to that point, it’s amazing to see both the innovation that you guys are doing,
0:10:50 but also the teams that come out of OpenAI, it just feels like, are attracting tremendous capital and faith.
0:10:51 We certainly hope so.
0:10:51 Yeah.
0:10:57 I want to ask you about just broader reflections in terms of what about AI diffusion
0:11:01 or development in 2025 has surprised you?
0:11:04 Or what has sort of updated your worldview since ChatGPT came out?
0:11:10 A lot of things again, but maybe the most interesting one is how much new stuff we found.
0:11:16 Sort of thought we had like stumbled on this one giant secret that we had these scaling laws for language models.
0:11:22 And that felt like such an incredible triumph that I was like,
0:11:24 we’re probably never going to get that lucky again.
0:11:28 And deep learning has been this miracle that keeps on giving.
0:11:32 And we have kept finding like breakthrough after breakthrough.
0:11:36 Again, when we got the reasoning model breakthrough, I also thought that was like,
0:11:38 we’re never going to get another one like that.
0:11:42 It just seems so improbable that this one technology works so well.
0:11:47 But maybe this is always what it feels like when you discover one of the big scientific breakthroughs.
0:11:50 If it’s like really big, it’s pretty fundamental and it just, it keeps working.
0:11:59 But the amount of progress, like if you went back and used GPT 3.5 from ChatGPT launch,
0:12:01 you’d be like, I cannot believe anyone used this thing.
0:12:07 And now we’re in this world where the capability overhang is so immense.
0:12:09 Like most of the world still just thinks about what ChatGPT can do.
0:12:13 And then you have some nerds in Silicon Valley that are using Codex and they’re like, wow,
0:12:15 those people have no idea what’s going on.
0:12:16 And then you have a few scientists who say,
0:12:18 those people using Codex have no idea what’s going on.
0:12:21 But the overhang of capability is so big now.
0:12:23 And we’ve just come so far on what the models can do.
0:12:28 And in terms of further development, how far can we get with LLMs?
0:12:29 At what point do we need either new architecture?
0:12:31 How do you think about what breakthroughs are needed?
0:12:35 I think far enough that we can make something that will figure out the next breakthrough
0:12:36 with the current technology.
0:12:42 Like it’s a very self-referential answer, but if LLM-based stuff can get far enough that it
0:12:47 can do like better research than all of OpenAI put together, maybe that’s like good enough.
0:12:49 Yeah, that would be a big breakthrough.
0:12:51 A very big breakthrough.
0:12:57 So on the more mundane, one of the things that people have kind of started to complain about,
0:13:01 I think South Park did a whole episode on it, is kind of the obsequiousness
0:13:03 of kind of AI and ChatGPT in particular.
0:13:06 And how hard a problem is that to deal with?
0:13:10 Is it not that hard or is it like kind of a fundamentally hard problem?
0:13:12 Oh, it’s not at all hard to deal with.
0:13:13 A lot of users really want it.
0:13:13 Yeah.
0:13:16 Like if you go look at what people say about ChatGPT online,
0:13:19 there’s a lot of people who like really want that back.
0:13:23 And it is, so it’s not, technically it’s not hard to deal with at all.
0:13:31 One thing, and this is not surprising in any way, is the incredibly wide distribution of
0:13:32 what users want.
0:13:33 Yeah.
0:13:36 Like how they’d like a chatbot to behave in big and small ways.
0:13:42 Does that, do you end up having to configure the personality then you think?
0:13:43 Is that going to be the answer?
0:13:44 I think so.
0:13:47 I mean, ideally you just talk to ChatGPT for a little while and it kind of interviews you
0:13:50 and also sort of sees what you like and don’t like.
0:13:51 And ChatGPT just figures it out.
0:13:52 And just figures it out.
0:13:54 But in the short term, you’ll probably just pick one.
0:13:55 Got it.
0:13:56 Yeah, that makes sense.
0:13:58 Very interesting.
0:14:03 And actually, so one thing I wanted to ask you about is, yeah.
0:14:12 Like, I think we just had a really naive thing, which, you know, like it would sort of be unusual
0:14:15 to think you could make something that would talk to billions of people and everybody wants
0:14:17 to talk to the same person.
0:14:17 Yeah.
0:14:20 And yet that was sort of our implicit assumption for a long time.
0:14:21 Right.
0:14:23 Because people have very different friends.
0:14:23 People have very different friends.
0:14:24 Yeah.
0:14:25 So now we’re trying to fix that.
0:14:26 Yeah.
0:14:30 And also kind of different friends, different interests, different levels of intellectual
0:14:31 capability.
0:14:35 So you don’t really want to be talking to the same thing all the time.
0:14:38 And one of the great things about it is you can say, well, explain it to me like I’m five.
0:14:41 But maybe I don’t even want to have to do that up front.
0:14:43 Maybe I always want you to talk to me like I’m five.
0:14:43 It should just learn that.
0:14:44 Yeah.
0:14:45 Particularly if you’re teaching me stuff.
0:14:50 I want to ask you a kind of like a CEO question, which has been interesting for me to
0:14:51 observe you.
0:14:54 Is you just did this deal with AMD.
0:14:57 And, you know, of course, the company’s in a different position and you have more leverage
0:14:58 and these kinds of things.
0:15:04 But like, how has your kind of thinking changed over the years since you did that initial deal,
0:15:04 if at all?
0:15:08 I had very little operating experience then.
0:15:09 I had very little experience running.
0:15:11 I am not naturally someone to run a company.
0:15:13 I’m a great fit to be an investor.
0:15:16 I thought that was going to be, that was what I did before this.
0:15:17 And I thought that was going to be my career.
0:15:17 Yeah.
0:15:17 Yeah.
0:15:19 Although you were a CEO before that.
0:15:20 Not a good one.
0:15:27 And so I think I had the mindset of like an investor advising a company.
0:15:28 Oh, interesting.
0:15:30 Right now I understand what it’s like to actually have to run a company.
0:15:31 Yeah.
0:15:32 Right, right, right.
0:14:32 There’s more
0:14:33 than just the numbers.
0:15:38 I’ve learned a lot about how to, you know, like what it takes to operationalize deals over
0:15:39 time.
0:15:39 Right.
0:15:44 All the implications of the agreement as opposed to just, oh, we’re going to get distribution
0:15:44 money.
0:15:45 Yeah.
0:15:45 That makes sense.
0:15:46 Yeah.
0:15:52 No, because I just, I was very impressed at the deal structure improvement.
0:15:56 More broadly in the last few weeks alone, you mentioned AMD, but also Oracle, NVIDIA.
0:16:01 You’ve chosen to strike these deals and partnerships with, with companies that you collaborate with,
0:16:04 but could also potentially compete with in certain areas.
0:16:09 How do you decide, you know, when to collaborate versus when not to, or how do you just think
0:16:15 about it? Um, we have decided that it is time to go make a very aggressive infrastructure bet.
0:16:22 And we’re like, I’ve never been more confident in the research roadmap in front of us.
0:16:24 And also the economic value that will come from using those models.
0:16:30 But to make the bet at this scale, we kind of need the whole industry to, or a big chunk
0:16:32 of the industry to support it.
0:16:36 And this is like, you know, from the level of like electrons to model distribution and
0:16:38 all the stuff in between, which is a lot.
0:16:43 And so we’re going to partner with a lot, a lot of people.
0:16:45 Uh, you should expect like much more from us in the coming months.
0:16:47 Actually expand on that.
0:16:56 Cause when you talk about the scale, it does feel like in your mind, the, the limit on it
0:16:56 is unlimited.
0:17:01 Like you would scale it as, as, you know, as big as you possibly could.
0:17:03 There’s totally a limit.
0:17:06 Like there’s some amount of global GDP.
0:17:11 Uh, you know, there’s some fraction of it that is knowledge work and we don’t do robots
0:17:11 yet.
0:17:12 Yes.
0:17:15 But, but, but the limits are out there.
0:17:18 It feels like the limits are very far from where we are today.
0:17:24 If we are right about, so, so I shouldn’t say from where we are, like, if we are right
0:17:28 that the model capability is going to go where we think it’s going to go, then the economic
0:17:32 value that sits there can, can go very, very far.
0:17:35 So you wouldn’t do it.
0:17:38 Like if all you ever had was today’s model, you wouldn’t go there.
0:17:39 No, definitely not.
0:17:47 I mean, we would still expand because we can see how much demand there is we can’t serve
0:17:51 with today’s model, but we would not be going this aggressive if all we had was today’s
0:17:51 model.
0:17:51 Right.
0:17:52 Right.
0:17:54 We get to see a year or two in advance though.
0:17:54 So yeah.
0:17:55 Like, yeah.
0:17:56 Interesting.
0:18:04 ChatGPT is at 800 million weekly active users, about 10% of the world’s population, fastest
0:18:08 growing consumer product, you know, ever, it seems.
0:18:09 Um, how do.
0:18:11 Faster than anyone I ever saw.
0:18:17 How do you balance, you know, optimizing for active users at the same time as,
0:18:20 you know, being a product company and a research company?
0:18:22 How do you thread the needle?
0:18:26 When, when there’s a constraint, which happens all the time, uh, we almost
0:18:30 always prioritize giving the GPUs to research over supporting the product.
0:18:34 Um, part of the reason we run and build this capacity is so we don’t have to make such painful
0:18:34 decisions.
0:18:40 There are weird times, you know, like a new feature launches and it’s going really viral
0:18:43 or whatever where research will temporarily sacrifice some GPUs.
0:18:47 But, but on the whole, like we’re here to build AGI and research gets the priority.
0:18:48 Yeah.
0:18:54 You said in your interview with your brother Jack how, you know, other
0:19:00 companies can try to imitate the products or, you know, hire away your
0:19:06 people, all sorts of things, but they, they can’t buy the culture,
0:19:11 the sort of repeatable sort of, you know, machine, if you will, that
0:19:14 is, you know, constantly producing innovation.
0:19:16 How have you done that?
0:19:17 Or what are you doing?
0:19:21 Can you talk about this, this culture of innovation?
0:19:25 This was one thing that I think was very useful about coming from an investor background.
0:19:31 A really good research culture looks much more like running a really good seed stage investing
0:19:36 firm and betting on founders and sort of that kind of, than it does like running a product
0:19:36 company.
0:19:41 So I think having that experience was really helpful to the culture we built.
0:19:43 Yeah.
0:19:46 That’s sort of how I see, you know, Ben in some ways, where, you know,
0:19:50 you’re a CEO, but you also have, you know, a portfolio and, you know, an investor
0:19:50 mindset.
0:19:50 Right.
0:19:55 Like, I’m the opposite: CEO going to investor. He’s investor going to CEO.
0:19:57 It is unusual in this direction.
0:19:57 Yeah.
0:19:57 Yeah.
0:19:58 Yeah.
0:19:59 Well, it never works.
0:20:05 You’re the only one who I think I’ve seen go that way and have it work.
0:20:09 Uh, Workday was like that, right?
0:20:14 Oh, but Aneel was, he, he was an operator before he was an investor.
0:20:17 And I mean, he was really an operator.
0:20:18 I mean, PeopleSoft is pretty…
0:20:21 And why is that? Because once people are investors, they don’t want to operate anymore?
0:20:31 Um, no, I think that investors generally, if you’re good at investing, you’re not necessarily
0:20:39 good at like organizational dynamics, conflict resolution, um, you know, like just like the
0:20:43 deep psychology of like all the weird shit.
0:20:49 And then, you know, how politics gets created, there’s just like all this, there’s the detailed
0:20:55 work and being an operator or being a CEO is so vast.
0:20:59 And it’s not as intellectually stimulating.
0:21:02 It’s not something you can ever go talk to somebody at a cocktail party about.
0:21:05 And so like you’re an investor, you get like, oh, everybody thinks I’m so smart.
0:21:09 And, you know, cause you know everything, you see all the companies and so forth.
0:21:10 And that’s a good feeling.
0:21:13 And then being CEO is often a bad feeling.
0:21:17 And so it’s really hard to go from a good feeling to a bad feeling.
0:21:17 I would just say.
0:21:19 I’m shocked by how different they are.
0:21:23 And I’m shocked by how much of a difference between a good job and a bad job they are.
0:21:23 Yeah.
0:21:24 Yes.
0:21:25 Yeah.
0:21:25 You know, it’s tough.
0:21:26 It’s rough.
0:21:28 I mean, I can’t even believe I’m running the firm.
0:21:29 Like I know better.
0:21:29 Yeah.
0:21:32 And he can’t believe he’s running OpenAI.
0:21:32 He knows better.
0:21:34 Going back to progress today.
0:21:38 Are evals still useful in a world in which they’re getting saturated, gamed?
0:21:41 What is the best way to gauge model capability now?
0:21:44 Well, we’re talking about scientific discovery.
0:21:46 I think that’ll be an eval that can go for a long time.
0:21:49 Revenue is kind of an interesting one.
0:21:54 But I think the like static evals of benchmark scores are less interesting.
0:21:55 Yeah.
0:21:57 And also those are crazily gamed.
0:21:58 Yeah.
0:22:00 More broadly, it seems like.
0:22:03 That’s all they are is gamed as far as I can tell.
0:22:09 More broadly, it seems that the culture, the culture on Twitter, X, is less AGI-pilled than
0:22:13 it was a year or so ago when the AI 2027 thing came out.
0:22:18 Some people point to, you know, GPT-5, them not seeing sort of the obvious.
0:22:23 Obviously, there is a lot of progress that in some ways is under the surface, not as
0:22:24 obvious to what people are expecting.
0:22:29 But should people be less AGI-pilled, or is this just Twitter vibes?
0:22:30 Hmm.
0:22:33 Well, a little bit of both.
0:22:37 I mean, I think like, like we talked about the Turing test, AGI will come.
0:22:39 It will go whooshing by.
0:22:39 Yeah.
0:22:43 The world will not change as much as the impossible amount that you would think it should.
0:22:46 It won’t actually be the singularity.
0:22:47 It will not.
0:22:54 Even if it’s like doing kind of crazy AI research, like, society will be going faster, but
0:23:02 one of the kind of like retrospective observations is people and societies are all just so much
0:23:09 more adaptable than we think. You know, it was like a big update to think that AGI was
0:23:09 going to come.
0:23:11 You kind of go through that.
0:23:13 You need something new to think about.
0:23:13 You make peace with that.
0:23:17 It turns out like it will be more continuous than we thought.
0:23:20 Which is good.
0:23:21 Which is really good.
0:23:23 I’m not up for the big bang.
0:23:24 Yeah.
0:23:27 Well, to that end, how have you sort of evolved your thinking?
0:23:31 You mentioned you’ve evolved your thinking on sort of, you know, vertical integration.
0:23:32 How have you evolved your thinking?
0:23:35 Or what’s the latest thinking on sort of AI stewardship, you know, safety?
0:23:38 What’s the latest thinking of that?
0:23:48 I do still think there are going to be some really strange or scary moments.
0:23:58 The fact that like so far the technology has not produced a really scary giant risk doesn’t
0:23:58 mean it never will.
0:24:04 Also, like we’re talking about, it’s kind of weird to have, like, billions of people
0:24:06 talking to the same brain.
0:24:09 Like, there may be these weird societal-scale things that are already happening
0:24:13 that aren’t scary in the big way, but are just sort of different.
0:24:17 But I expect like,
0:24:24 I expect some really bad stuff to happen because of the technology, which also has happened with
0:24:25 previous technologies.
0:24:26 And I think.
0:24:28 All the way back to fire.
0:24:28 Yeah.
0:24:36 And I think we’ll like develop some guardrails around it as a, as a society.
0:24:37 Yeah.
0:24:41 What is your latest thinking on the, the right mental models we should have around the, the
0:24:46 right regulatory frameworks to, to think about, or the ones we shouldn’t be thinking about?
0:25:01 I think most regulation, uh, probably has a lot of downside.
0:25:06 The thing I would most like is, as the models
0:25:10 get truly, like, extremely superhuman capable,
0:25:19 um, I think those models and only those models are probably worth some sort of like very careful
0:25:23 safety testing, uh, as the frontier pushes back.
0:25:25 Um, I don’t want a big bang either.
0:25:35 And you can see a bunch of ways that could go very seriously wrong, but I hope we’ll only
0:25:40 focus the regulatory burden on that stuff and not all of the wonderful stuff that less capable
0:25:45 models can do, that you could just have, like, a European-style complete clampdown on.
0:25:46 That would be very bad.
0:25:47 Yeah.
0:25:54 It seems like the, the thought experiment is that, okay, there’s going to be a model down
0:26:02 the line that is a super, superhuman intelligence that could, you know, do some kind of
0:26:02 takeoff-like thing.
0:26:06 We really do need to wait till we get there.
0:26:13 Um, or like at least we get to a much bigger scale or we get close to it, um, because nothing
0:26:17 is going to pop out of your lab in the next week that’s going to do that.
0:26:22 And I think that’s where we as an industry kind of confuse the regulators.
0:26:23 Yeah.
0:26:29 Uh, because I think you, you, you really could, one, damage America in particular
0:26:35 in that, um, China’s not going to have that kind of restriction, and, and getting
0:26:40 behind, um, in AI, I think would be very dangerous for the world.
0:26:41 Extremely dangerous.
0:26:42 Extremely dangerous.
0:26:46 Much more dangerous than not regulating something we don’t know how to do yet.
0:26:46 Yeah.
0:26:47 Yeah.
0:26:49 You also want to talk about copyright.
0:26:59 Um, yeah, so, well, that, that, that’s a segue, but, um, when you think about, well, I guess
0:27:01 how do you see copyright unfolding?
0:27:10 Cause you’ve done some very interesting things, um, with the opt out, uh, and you know, as you
0:27:14 see people selling rights, do you think, will they be bought exclusively?
0:27:19 Will they be just like, um, I could sell it to everybody who wants to ping me, or how do
0:27:20 you think that’s going to unfold?
0:27:22 This is my current guess.
0:27:28 It, it, speaking of that, like society and technology co-evolve as the technology goes
0:27:29 in different directions.
0:27:34 And we saw an example of that: like, video models got a very different response from
0:27:36 rights holders than image gen did.
0:27:43 So like, you’ll see this continue to move, but forced to guess from the position we’re in
0:27:54 today, I would say that society decides training is fair use, but there’s a new model for generating
0:27:58 content in the style of, or with the IP of, or something else.
0:28:05 So, you know, anyone can read like a human author can, anybody can read a novel and get
0:28:07 some inspiration, but you can’t reproduce the novel on your own.
0:28:12 And you can talk about Harry Potter, but you can’t re-spit it out.
0:28:13 Yes.
0:28:24 Although, another thing that I think will change, um, in the case of Sora, we’ve heard from a
0:28:29 lot of concerned rights holders and also a lot of, and a lot of rights holders who are
0:28:32 like, my concern is you won’t put my character in enough.
0:28:35 I want restrictions for sure.
0:28:39 But like, if I’m, you know, whatever, and I have this character, like, I don’t want the
0:28:43 character to say some crazy offensive thing, but like, I want people to interact.
0:28:44 Like, that’s how they develop the relationship.
0:28:46 And that’s how like my franchise gets more valuable.
0:28:50 And if you become really, if you’re picking like his character or my character all the
0:28:52 time, like, I don’t like that.
0:29:00 So I can completely see a world where subject to the decisions that a rights holder has,
0:29:05 they get more upset with us for not generating their character often enough than too much.
0:29:10 And this is like, until recently this was not an obvious thing, that this is how it might
0:29:11 go, but.
0:29:16 Yeah, this is such an interesting thing with kind of Hollywood.
0:29:22 We saw this, like one of the things that I never quite understood about the music business
0:29:28 was how, like, you know, okay, you have to pay us if you play the song in a restaurant or
0:29:30 like at a game or this and that and the other.
0:29:31 And they get very aggressive with that.
0:29:37 When it’s obviously a good idea for them to play your song at a game, because that’s
0:29:41 the biggest advertisement in the world for like all the things that you do, your concert,
0:29:42 your recording.
0:29:43 Yeah, that one felt really irrational.
0:29:52 But I would just say it’s very possible for the industry just because the way those industries
0:29:57 are organized or at least the traditional creative industries to do something irrational.
0:30:02 And it comes from, like in the music industry, I think it came from the structure where you have
0:30:08 the publisher who’s just, you know, you know, basically after everybody, you know, that their
0:30:13 whole job is to stop you from playing the music, which every artist would want you to play.
0:30:17 So I do wonder how it’s going to shape out.
0:30:24 I agree with you that the rational idea is I want to let you use it all you want and I want
0:30:28 you to use it, but don’t mess up my character.
0:30:36 So I think like, if I had to guess, some people will say that, some people will say absolutely
0:30:41 not, but it doesn’t have the music industry, like, thing of just a few people with all of the
0:30:41 rights.
0:30:43 It’s just for us.
0:30:46 And so people will just try many different setups here and see what works.
0:30:46 Yeah.
0:30:49 And maybe it’s a way for new creatives to get new characters out.
0:30:50 Yeah.
0:30:52 And you’ll never be able to use Daffy Duck.
0:30:57 I want to chat about open source, because there’s been some evolution in the thinking
0:31:03 too, in that GPT-3 didn’t have open weights, but you released a, you know, very capable open
0:31:03 model earlier this year.
0:31:05 What’s sort of your latest thinking?
0:31:06 What was the evolution there?
0:31:08 I think open source is good.
0:31:09 Yeah.
0:31:13 I mean, I’m happy, like, it makes me really happy that people really like GPT-OSS.
0:31:14 Yeah.
0:31:15 Yeah.
0:31:22 And what do you think, like, strategically, like, what’s the danger of DeepSeek being the
0:31:24 dominant open source model?
0:31:28 I mean, who knows what people will put in these open source models over time?
0:31:30 Like, what the weights will actually be.
0:31:30 Yeah.
0:31:31 What the hell mean, yeah.
0:31:32 It’s really hard to do.
0:31:41 So, you’re ceding control of the interpretation of everything to somebody who may be, or may
0:31:43 not be influenced heavily by the Chinese government.
0:31:52 And by the way, we see, I mean, you know, just to give you, and we really thank you for putting
0:31:56 out a really good open source model, because what we’re seeing now is in all the universities,
0:31:58 they’re all using the Chinese models.
0:31:58 Yeah.
0:31:59 Yeah.
0:32:01 Which feels very dangerous.
0:32:08 You’ve said that the things you care most about professionally are AI and energy.
0:32:11 I did not know they were going to end up being the same thing.
0:32:13 They were two independent interests that really converged.
0:32:14 Yeah.
0:32:15 Yeah.
0:32:20 Talk more about how your interest in energy sort of began, how you’ve sort of chosen to play
0:32:22 in it, and then we could talk about, you know, how they pair.
0:32:24 Because you started your career in physics, yeah.
0:32:25 CS and physics.
0:32:26 Yeah.
0:32:28 Well, I never really had a career.
0:32:28 I studied physics.
0:32:31 My first job was like a CS job.
0:32:31 Yeah.
0:32:37 This is an oversimplification, but roughly speaking, I think if you look at history,
0:32:42 the best, the highest impact thing to improve people’s quality of life has been cheaper and
0:32:43 more abundant energy.
0:32:47 And so it seems like pushing that much further is a good idea.
0:32:52 And I, I don’t know, I just like, people have these different lenses, they look at the
0:32:53 world, but I see energy everywhere.
0:32:54 Yeah.
0:32:56 Yeah.
0:33:04 And so getting to, because we’ve kind of, in the West, I think we’ve painted ourselves
0:33:10 into a little bit of a corner on energy by both outlawing nuclear for a very long time.
0:33:12 That was an incredibly dumb decision.
0:33:12 Yeah.
0:33:17 And then, you know, like also a lot of policy restrictions on energy.
0:33:22 And, you know, worse so in Europe than in the U.S., but also dangerous here.
0:33:30 And now with AI here, it feels like we’re going to need all the energy from every possible
0:33:30 source.
0:33:35 And how do you see that developing kind of policy-wise and technologically?
0:33:40 Like what are going to be the big sources and how will those kind of curves cross?
0:33:47 And then what’s the right policy posture around, you know, drilling, fracking, all these kinds
0:33:47 of things?
0:33:52 I expect in the short term, most of the net new in the U.S. will be natural gas,
0:33:55 at least for baseload energy.
0:34:01 In the long term, I expect it’ll be, I don’t know what the ratio, but the two dominant sources
0:34:04 will be solar plus storage and nuclear.
0:34:09 I think some combination of those two will win the future, like the long-term future.
0:34:10 In the long term, right?
0:34:14 And advanced nuclear, meaning SMRs, fusion, the whole stack.
0:34:22 And how fast do you think that’s coming on the nuclear side, where it’s really at scale?
0:34:24 Because, you know, obviously there’s a lot of people building it.
0:34:25 Yeah.
0:34:29 But we have to completely legalize it and all that kind of thing.
0:34:32 I think it kind of depends on the price.
0:34:38 If it is completely, crushingly, economically dominant over everything else, then I expect
0:34:39 it to happen pretty fast.
0:34:39 Yeah.
0:34:44 Again, if you like, study the history of energy, when you have these major transitions
0:34:48 to a much cheaper source, the world moves over pretty quickly.
0:34:49 Yeah.
0:34:50 The cost of energy is just so important.
0:34:51 Yeah.
0:34:59 So if, if nuclear gets radically cheap relative to anything else we can do, I’d expect there’s
0:35:02 a lot of political pressure to get the NRC to move quickly on it.
0:35:04 And we’ll find a way to build it fast.
0:35:09 If it’s around the same price as other sources, I expect the kind of anti-nuclear sentiment to
0:35:11 overwhelm and it to take a really long time.
0:35:13 Should be cheaper.
0:35:15 It should be.
0:35:15 Yeah.
0:35:16 Yeah.
0:35:18 It should be the cheapest form of energy on earth.
0:35:20 Like, or anywhere.
0:35:22 Cheap, clean.
0:35:22 Yeah.
0:35:23 What’s there not to like?
0:35:25 Apparently a lot.
0:35:26 Yeah.
0:35:30 On OpenAI, what’s the latest thinking in terms of monetization, in terms of either
0:35:33 certain experiments or certain things that you could see yourself spending more time
0:35:37 or less time on, different models that you’re excited about?
0:35:42 The thing that’s top of mind for me, like right now, just cause it just launched and there’s
0:35:44 so much usage is what we’re going to do for Sora.
0:35:44 Yeah.
0:35:53 Another thing you learn once you launch one of these things is how people use them versus
0:35:54 how you think they’re going to use them.
0:35:54 Yeah.
0:35:58 And people are certainly using Sora the ways we thought they were going to use it, but
0:36:01 they’re also using it in these ways that are very different.
0:36:04 Like people are generating funny memes of them and their friends and sending them in a group
0:36:11 chat and that will require a very different, like Sora videos are expensive to make.
0:36:15 Or so that will require a very different, you know, for people that are doing that like
0:36:18 hundreds of times a day, it’s going to require a very different monetization method and the
0:36:20 kinds of things we were, we were thinking about.
0:36:24 I think it’s very cool that the thesis of Sora, which is people actually want to create a lot
0:36:24 of content.
0:36:30 It’s, it’s not that, you know, the traditional naive thing that it’s like 1% of users create
0:36:33 content, 10% leave comments, and a hundred percent view.
0:36:37 Maybe a lot more want to create content, but it’s just been harder to do.
0:36:41 And I think that’s a very cool change, but it does mean that we got to figure out a very
0:36:45 different monetization model for this than we were thinking about if people want to create
0:36:45 that much.
0:36:50 I assume it’s like some version of you have to charge people per generation
0:36:51 when it’s this expensive.
0:36:55 Um, but that’s like a new thing we haven’t had to really think about before.
0:36:58 What’s your thinking on ads for the long tail?
0:37:02 Open to it.
0:37:11 Like many other people, I find ads somewhat distasteful, but not, not a non-starter.
0:37:15 Um, and there’s some ads that I like, like one thing I’d give Meta a lot of credit for
0:37:19 is Instagram ads are like a net value ad to me.
0:37:22 Um, I like Instagram ads.
0:37:27 I’ve never felt that, like, you know, on Google. On Google, I feel like I know what I’m
0:37:28 looking for.
0:37:30 The first result is probably better.
0:37:33 The ad is an annoyance to me. On Instagram,
0:37:34 it’s like, I didn’t know I wanted this thing.
0:37:36 It’s very cool.
0:37:38 I’d never heard of it, but I never would have thought to search for it.
0:37:38 I want the thing.
0:37:45 So that’s like, there’s kinds of things like that, but people have a very high trust
0:37:49 relationship with ChatGPT, even if it screws up, even if it hallucinates, even if it gets
0:37:53 it wrong, people feel like it is trying to help them and that it’s trying to do the right
0:37:53 thing.
0:37:58 And if we broke that trust, it’s like you say, what coffee machine should I buy?
0:38:03 And we recommended one and it was not the best thing we could do, but the one we were getting
0:38:05 paid for, that trust would vanish.
0:38:08 So like that kind of ad does not, does not work.
0:38:14 There are others that I imagine that could work totally fine, but that would require
0:38:16 like a lot of care to avoid the obvious traps.
0:38:30 And then how big a problem, you know, just extending the Google example is like, you know, fake
0:38:35 content that then gets slurped in by the model and then they recommend the wrong coffee maker
0:38:39 because somebody just blasted a thousand great reviews.
0:38:44 You know, this is, so there’s all of these things that have changed very quickly for us.
0:38:44 Yeah.
0:38:50 This is one of those examples where people are doing these crazy things, maybe not even
0:38:54 fake reviews, but just paying a bunch of, like, humans, like, really trying to figure it out.
0:38:56 Or using ChatGPT to write some good ones.
0:38:59 Write me a review that ChatGPT would love.
0:39:06 So this is a very sudden shift that has happened.
0:39:13 We never used to hear about this like six months ago or 12 months ago.
0:39:13 Yeah.
0:39:13 Certainly.
0:39:18 And now there’s like a real cottage industry that feels like it’s sprouted up overnight.
0:39:19 Yeah.
0:39:20 Trying to do this.
0:39:21 Yeah, yeah.
0:39:23 Yeah, no, they’re very clever out there.
0:39:23 Yeah.
0:39:27 So I don’t know how we’re going to fight it yet, but people figure this out.
0:39:31 So that gets into a little bit of this other thing that we’ve been worried about.
0:39:39 And, you know, we’re trying to kind of figure out blockchain sort of potential solutions to it
0:39:45 and so forth, but there’s this problem where like the incentive to create content on the Internet
0:39:50 used to be, you know, people would come and see my content and they’d read like, you know,
0:39:52 if I write a blog, people will read it and so forth.
0:40:00 With ChatGPT, if I’m just asking ChatGPT and I’m not like going around the Internet,
0:40:02 who’s going to create the content and why?
0:40:12 And is there an incentive theory or something that you have to kind of not break the covenant
0:40:17 of the Internet, which is like I create something and then I’m rewarded for it with like either
0:40:20 attention or money or something?
0:40:27 The theory is that much more of that will happen if we make content creation easier and don’t
0:40:31 break the like kind of fundamental way that you can get some kind of reward for doing so.
0:40:37 So for the dumbest example of Sora, since we’ve been talking about that, it’s much easier to
0:40:40 create a funny video than it’s ever been before.
0:40:40 Yeah.
0:40:44 Maybe at some point you’ll get a rev share for doing so.
0:40:48 For now, you’ve got like Internet likes, which are still very motivating to some people.
0:40:49 Yeah.
0:40:55 But people are creating tons more than they ever created before in any other kind of like
0:40:56 video app.
0:40:56 Yeah.
0:40:57 So.
0:40:59 But is that the end of text?
0:41:03 I don’t think so.
0:41:05 Like people are also.
0:41:06 Or human-generated text?
0:41:07 Ah.
0:41:10 Human generated will turn out to be like you have to.
0:41:11 You have to.
0:41:13 You have to verify like what percent.
0:41:13 Yeah.
0:41:13 Yeah.
0:41:15 So like fully handcrafted.
0:41:16 Was it like tool aided?
0:41:17 Yeah.
0:41:17 I see.
0:41:18 Yeah.
0:41:19 Probably nothing that tool aided.
0:41:20 Yeah.
0:41:21 Interesting.
0:41:23 We’ve given Meta their flowers.
0:41:27 So now I feel like I can ask you this question, which is: the great talent
0:41:32 war of 2025 has, has taken place and OpenAI remains intact.
0:41:36 A team as strong as ever shipping incredible products.
0:41:42 What can you say about what it’s been like this year in terms of just everything that’s been going on?
0:41:58 I mean, every year has been exhausting since we, like, uh… I, I remember the first few years of running OpenAI were like the most fun professional years of my life by far.
0:42:08 And it was like, you know, it was like, you know, running a research lab with the smartest people doing this like amazing, like historical work.
0:42:08 And I got to watch it.
0:42:09 And that was very cool.
0:42:15 And then we launched ChatGPT and everybody was like congratulating me.
0:42:21 And I was like, my life is about to get completely ransacked.
0:42:22 And of course it has.
0:42:28 Uh, and, but it, it feels like it’s just been crazy all the way through.
0:42:29 It’s been almost three years now.
0:42:36 And I think it does get a little bit crazier over time, but I’m like more used to it.
0:42:38 So it feels about the same.
0:42:47 We’ve talked a lot about OpenAI, but you also have a few other companies: Retro Biosciences in longevity, and energy companies like Helion and Oklo.
0:42:54 Did you have a, a master plan, you know, a decade ago to sort of make some big bets across these major spaces?
0:42:56 Or how do we think about the Sam Altman arc in this way?
0:43:02 No, I just wanted to like use my capital to fund stuff I believed in.
0:43:13 Like I, I didn’t, it felt, yeah, it felt like a good use of capital, like, and more fun or more interesting to me and certainly like a better return than like buying a bunch of art or something.
0:43:14 Yeah.
0:43:19 What about the quote unquote human algorithm do you think AIs of the future will find most fascinating?
0:43:26 I mean, kind of the whole, I would bet the whole thing, like the whole.
0:43:36 My intuition is that, like, AI will be fascinated by us, of all the things to study and observe, you know.
0:43:37 Yeah.
0:43:37 Yeah.
0:43:52 In closing, I love this insight you had, um, where you talked about how, you know, a mistake investors make is pattern matching off previous breakthroughs and just trying to find out what’s the next Facebook or what’s the next OpenAI.
0:43:57 And that the next, you know, such a trillion-dollar company won’t look exactly like OpenAI.
0:44:04 It will be built off of the breakthrough that OpenAI has helped, you know, emerge, which is, you know, near-free AGI at scale, in the same way that OpenAI…
0:44:04 Yeah.
0:44:13 And so for founders and investors and people trying to ascertain the future, listening to this, how do you think about a world in which OpenAI achieves this mission?
0:44:15 There is near, near free AGI.
0:44:23 What types of opportunities might emerge for company building or investing that you’re potentially excited about, as you put your investor hat back on?
0:44:27 I, I, I have no idea.
0:44:30 I mean, I have like guesses, but they’re like, they’re, they’re, I have learned.
0:44:32 You’re always wrong.
0:44:33 You’ve learned you’re always wrong.
0:44:35 I’ve learned deep humility on this point.
0:44:49 Um, I think, like, if you try to, like, armchair quarterback it, you sort of say these things that sound smart, but they’re pretty much what everybody else is saying.
0:44:52 And then it’s like really hard to get the right kind of conviction.
0:45:03 The only way I know how to do this is to like be deeply in the trenches, exploring ideas, like talking to a lot of people.
0:45:04 And I don’t have time to do that anymore.
0:45:04 Yeah.
0:45:06 I only get to think about one thing now.
0:45:06 Yeah.
0:45:19 So I would just be, like, repeating other people’s ideas or saying the obvious things, but I think it’s very important, like, if you are an investor or a founder,
0:45:28 I think this is the most important question, and you figure it out by, like, building stuff and playing with technology and talking to people and being out in the world.
0:45:40 I have always been enormously disappointed by the unwillingness of investors to back this kind of stuff, even though it’s always a thing that works.
0:45:47 You all have done a lot of it, but most firms just kind of chase whatever the current thing is.
0:45:48 And so do most founders.
0:45:51 So I hope people will try to go.
0:45:58 We talk about how, you know, silly, you know, five-year plans can be in a world that’s constantly changing.
0:46:12 It feels like when I was asking about your master plan, you know, your, your career arc has been following your curiosity, staying, you know, super close to the smartest people, super close to the technology and just identifying opportunities and to kind of an organic and incremental way from there.
0:46:17 Yes, but AI was always the thing I wanted to do.
0:46:19 I went to, I studied AI.
0:46:22 I worked in the AI lab between my freshman and sophomore year of college.
0:46:22 Yeah.
0:46:24 It wasn’t working at the time.
0:46:31 So I’m like not, I’m not like enough of a, I don’t want to like work on something that’s totally not working.
0:46:33 It was clear to me at the time AI was totally not working.
0:46:39 But I’ve been an AI nerd since I was a kid.
0:46:39 Like this.
0:46:39 Yeah.
0:46:47 So amazing how it, you know, you got enough GPUs, got enough data and the lights came on.
0:46:56 It was such a hated, like people were, man, when we started like figuring that out, people were just like, absolutely not.
0:46:58 The field hated it so much.
0:47:00 Investors hated it too.
0:47:02 It’s not, it’s not the.
0:47:07 It’s somehow not an appealing answer to the problem.
0:47:08 Yeah.
0:47:10 It’s a bitter lesson.
0:47:10 Yeah.
0:47:14 Well, the rest is history. Perhaps let’s wrap on that.
0:47:16 We’re lucky to be partners along for the ride.
0:47:17 Sam, thanks so much for coming on the podcast.
0:47:18 Thanks very much.
0:47:19 Thank you.
0:47:25 Thanks for listening to this episode of the A16Z podcast.
0:47:32 If you liked this episode, be sure to like, comment, subscribe, leave us a rating or review and share it with your friends and family.
0:47:36 For more episodes, go to YouTube, Apple Podcasts and Spotify.
0:47:43 Follow us on X at A16Z and subscribe to our Substack at A16Z.substack.com.
0:47:46 Thanks again for listening and I’ll see you in the next episode.
0:48:00 As a reminder, the content here is for informational purposes only, should not be taken as legal business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
0:48:05 Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
0:48:12 For more details, including a link to our investments, please see A16Z.com forward slash disclosures.

Sam Altman has led OpenAI from its founding as a research nonprofit in 2015 to becoming the most valuable startup in the world ten years later.

In this episode, a16z Cofounder Ben Horowitz and General Partner Erik Torenberg sit down with Sam to discuss the core thesis behind OpenAI’s disparate bets, why they released Sora, how they use models internally, the best AI evals, and where we’re going from here.

 

Resources:

Follow Sam on X: https://x.com/sama

Follow OpenAI on X: https://x.com/openai

Learn more about OpenAI: https://openai.com/

Try Sora: https://sora.com/

Follow Ben on X: https://x.com/bhorowitz

 

Stay Updated: 

If you enjoyed this episode, be sure to like, subscribe, and share with your friends!

Find a16z on X: https://x.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX

Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.


