First Time Founders with Ed Elson – This Physicist Is Building AI Droids

AI transcript
0:00:02 Support for this show comes from Shopify.
0:00:07 With Shopify, it’s easy to create your brand, open up for business, and get your first sale.
0:00:13 Use their customizable templates, powerful social media tools, and a single dashboard for managing it all.
0:00:20 The best time to start your new business is right now, because "established in 2025," it has a nice ring to it, doesn't it?
0:00:26 Sign up for a $1 per month trial period at shopify.com slash voxbusiness, all lowercase.
0:00:31 Go to shopify.com slash voxbusiness to start selling with Shopify today.
0:00:33 Shopify.com slash voxbusiness.
0:00:43 Support for this show comes from the Audible original, The Downloaded 2, Ghosts in the Machine.
0:00:51 Quantum computers, the next great frontier of technology, offering endless possibilities that stretch the human mind.
0:01:00 But for Roscoe Koudoulian and the Phoenix Colony, quantum computing uploads the human mind with life-altering consequences.
0:01:11 Audible's hit sci-fi thriller, The Downloaded, returns with Oscar winner Brendan Fraser, reprising his role as Roscoe Koudoulian in The Downloaded 2, Ghosts in the Machine.
0:01:14 This thought-provoking sequel from Robert J. Sawyer takes listeners on a captivating sci-fi journey.
0:01:21 A mind-bending must-listen that asks,
0:01:24 What are you willing to lose to save the ones you love?
0:01:27 The Downloaded 2, Ghosts in the Machine.
0:01:30 Available now, only from Audible.
0:01:43 Support for this show comes from the Audible original, The Downloaded 2, Ghosts in the Machine.
0:01:46 The Earth only has a few days left.
0:01:52 Roscoe Koudoulian and the rest of the Phoenix Colony have to re-upload their minds into the quantum computer.
0:01:57 But a new threat has arisen that could destroy their stored consciousness forever.
0:02:06 Listen to Oscar winner Brendan Fraser reprise his role as Roscoe Koudoulian in this follow-up to the Audible original blockbuster, The Downloaded.
0:02:12 It’s a thought-provoking sci-fi journey where identity, memory, and morality collide.
0:02:17 Robert J. Sawyer does it again with this much-anticipated sequel that leaves you asking,
0:02:20 What are you willing to lose to save the ones you love?
0:02:24 The Downloaded 2, Ghosts in the Machine.
0:02:26 Available now, only from Audible.
0:02:40 Welcome to First Time Founders.
0:02:42 I’m Ed Elson.
0:02:45 Seven and a half billion dollars.
0:02:50 That is how much money has poured into AI coding startups in just the past three months.
0:02:53 And it’s not that hard to see why.
0:02:58 Across the industry, developers are embracing generative AI to speed up their work.
0:03:04 It’s efficient, it’s impressive, but it’s still under the careful watch of human engineers.
0:03:08 Well, my next guest wondered if AI could do more.
0:03:14 What if it could handle routine tasks like debugging or migrations on its own?
0:03:16 What if it could be autonomous?
0:03:22 To turn that idea into reality, he launched an AI startup which uses agents to handle the mundane work
0:03:24 that developers would rather skip.
0:03:34 With a $50 million investment from Sequoia, JP Morgan, and NVIDIA, his company is reshaping the future of software development.
0:03:39 This is my conversation with Matan Grinberg, co-founder and CEO of Factory.
0:03:42 All right, Matan Grinberg, thank you for joining me.
0:03:43 Thank you for having me.
0:03:44 How are you?
0:03:45 I’m good.
0:03:49 We should probably start off by saying we go way back.
0:03:50 We do indeed, yes.
0:03:51 We’re friends from college.
0:03:54 I knew you back in college.
0:03:56 I knew you when you were studying physics.
0:03:57 You were a budding physicist.
0:04:06 I mean, just for those listening, Matan was basically the smartest guy I knew in college.
0:04:10 And then you go on and you’re, I know you were getting your PhD in physics.
0:04:15 And then eventually you tell me, no, I’m actually starting an AI company.
0:04:22 And now here you are, and you’re running one of these top AI agentic startups, figuring out how to automate coding.
0:04:25 Let’s just start with like, how did we get here?
0:04:32 How did we go from Princeton physics, going to be a physicist, and then now you’re an AI person?
0:04:37 Yeah, so obviously that was not the arc that I think I was expecting either.
0:04:43 Probably goes back to eighth grade, which is why I got into physics in the first place.
0:04:47 Spite is a very big motivator for me.
0:04:51 And in eighth grade, my geometry teacher told me to retake geometry in high school.
0:04:53 And I was like, screw that.
0:04:53 Like what?
0:04:54 Like I’m good at math.
0:04:55 I don’t need to do that.
0:05:01 And so in the summer between eighth and ninth grade, my first order on Amazon ever was
0:05:08 textbooks for Algebra 2, Trigonometry, Precalculus, Calculus 1, 2, 3, Differential Equations.
0:05:09 A true nerd.
0:05:09 Yeah, exactly.
0:05:13 And so I spent the whole summer studying those textbooks.
0:05:21 And going into freshman year of high school, I took an exam to pass out of every single one of those classes.
0:05:22 So I had credit for all of them.
0:05:25 And then I went to my dad and I was like, what’s the hardest math?
0:05:30 And he said, he said, string theory, which is actually physics.
0:05:31 It’s not math.
0:05:33 And I was like, okay, I’m going to be a string theorist.
0:05:37 And then basically for the next like 10 years of my life, that was all I really cared about.
0:05:42 I didn’t really pay attention much to anything about like finance, entrepreneurship, like anything
0:05:42 like that.
0:05:44 Went to Princeton because it was great for physics.
0:05:47 Then did a master’s in the UK, came to Berkeley to do a PhD.
0:05:49 And at Berkeley, it finally dawned on me.
0:05:50 Wait a minute.
0:05:55 I was just studying for 10 years, like 10 dimensional black holes and quantum field theory and all
0:05:55 this stuff.
0:06:00 Originally because of this like spite and obviously I came to love it, but I realized
0:06:03 that I didn’t really want to spend my entire life doing that.
0:06:08 Taking 10 years to realize that is a little bit slow, but I had a bit of an existential crisis
0:06:09 of, you know, like, what is it?
0:06:10 What should I do?
0:06:16 Almost joined Jane Street in a classic, like, ex-physicist move of, what should I do?
0:06:18 Decided not to, because I feel like that’s the thing.
0:06:22 Like, you know, once you go there, you kind of don’t move on from that.
0:06:28 So I ended up taking some classes at Berkeley in AI, really fell in love in particular with
0:06:29 what was called program synthesis.
0:06:31 Now they call it code generation.
0:06:36 Um, and the math from physics made it such that like jumping into the AI side was relatively
0:06:42 straightforward, did that for about a year and then realized that the best way to pursue
0:06:46 code generation was not through academic research, but through starting a company.
0:06:50 And so then the question was like, okay, well, I know nothing about entrepreneurship.
0:06:51 I’ve been a physicist for 10 years.
0:06:52 What should I do?
0:06:57 And this was, uh, just after COVID.
0:07:04 But I remember on YouTube in my recommended algorithm, I saw a podcast on zoom with this
0:07:07 guy whose name I remembered from a paper that I wrote at Princeton.
0:07:12 This guy used to be a string theorist, but it was a podcast where this, uh, Sequoia
0:07:16 investor, like, talks, you know, everything from, like, crypto to physics.
0:07:17 And I was like, what the hell is this?
0:07:21 And I remember watching the interview and the guy seemed relatively normal, like had social
0:07:25 skills, which is rare for someone who had published in string theory.
0:07:32 That was the other interesting thing about you is you’re a social person who’s also this
0:07:37 physics genius, which again is quite rare, but so you found someone in common.
0:07:37 Yeah.
0:07:40 So, so I found someone, like, okay, you know, maybe there was someone else
0:07:42 who has this similar background.
0:07:45 And I, I remembered the name correctly.
0:07:49 And so I looked him up and saw that he was a string theorist who ended up, uh, you know,
0:07:54 getting his degree, then joining Google ventures, being one of the first checks into Stripe.
0:07:58 Then one of the first checks into like SpaceX on the way he had built and sold a company for
0:08:00 a billion dollars to Palo Alto Networks.
0:08:03 And I was just like, this is an insane trajectory.
0:08:06 So sent him a cold email and I was just like, Hey, I'm Matan.
0:08:11 I studied physics at Princeton, wrote a paper with this guy named Juan Maldacena, who's
0:08:12 like a very famous string theorist.
0:08:15 Um, and I was like, would love to talk.
0:08:22 And that day he immediately replied and was like, Hey, come down to our office in Menlo
0:08:22 Park.
0:08:22 Let’s chat.
0:08:25 What was supposed to be a 30 minute meeting ends up being a three hour walk.
0:08:29 And we walked from Sand Hill all the way to, to Stanford campus and then back.
0:08:32 And funny enough on the, on the walk.
0:08:36 So we realized that we had a lot of like very similar reasons for getting into physics in
0:08:39 the first place, similar reasons for wanting to leave as well.
0:08:42 Um, and this was in April of 2023.
0:08:49 So just after the Silicon Valley bank crisis, and also very soon after the Elon Twitter acquisition
0:08:56 and after the conversation, um, he was basically like, Matan, you should 100% drop out of your
0:09:03 PhD and you should either join Twitter right now, because if you voluntarily go to, uh, Twitter
0:09:05 of all times, now, that's just bad-ass.
0:09:08 It looks great, you know, on a resume or you should start a company.
0:09:11 And I knew what the answer was, but didn’t want to like corrupt what was an incredible
0:09:11 meeting.
0:09:12 So I was like, okay, thank you so much.
0:09:13 I’m going to go think about it.
0:09:17 That's good advice for meetings.
0:09:18 Don’t give your answer right away.
0:09:18 Yeah.
0:09:19 Yeah.
0:09:19 Take some time.
0:09:20 Come back.
0:09:22 And so crazy thing.
0:09:26 The next day I go to a hackathon in San Francisco in this hackathon.
0:09:30 I run into someone, you know, we recognized each other at this hackathon and we're like, oh, hey,
0:09:32 like, you know, I remember you.
0:09:37 Um, we ended up chatting and realizing that we were both really obsessed with coding for AI.
0:09:40 And then that day we started working on what would become Factory.
0:09:42 He had a job at the time.
0:09:44 I was a PhD student, so I could spend whatever time I wanted on it.
0:09:49 Um, and over the next 48 hours, we built the demo for what would become Factory.
0:09:53 Um, called up Sean and I was like, Hey, I was thinking about what you said.
0:09:54 I have a demo I want to show you.
0:09:56 And so we got on a zoom.
0:09:56 I showed it to him.
0:09:58 He was like, this is all right.
0:09:59 And I was like, all right.
0:10:00 Like, I think this is pretty sick.
0:10:01 Like, I don’t know.
0:10:03 He’s like, okay, would you work on it full time?
0:10:04 And I was like, yeah, a hundred percent.
0:10:07 And he was like, okay, drop out of your PhD and send me a screenshot.
0:10:10 And I was just like, fuck it.
0:10:10 Okay.
0:10:15 So I go to the, like, Berkeley portal, fully unenroll and withdraw.
0:10:21 Didn’t tell my parents, uh, obviously, um, send him a screenshot and he’s like, okay,
0:10:23 you have a meeting with the Sequoia partnership tomorrow morning.
0:10:24 Like be ready to present.
0:10:32 So now backed by Sequoia, you just raised your Series B, uh, you are one of the top AI coding
0:10:36 startups, but there are a lot of AI coding companies.
0:10:41 Uh, we spoke with one a while ago, which was Codium, which eventually became Windsurf.
0:10:47 It got folded into Google in this kind of controversial situation. Point being, there are people who are
0:10:51 doing this, um, what makes Factory different?
0:10:54 What made it different from the get go and what makes it different now?
0:11:02 Our mission from when we first started is actually the exact same as it is today, which is to bring autonomy to software
0:11:02 engineering.
0:11:07 Um, I think when we first started in April of 2023, we were very early.
0:11:13 And what I’ve come to realize is that, and this is kind of a, a little bit of a trite statement, but being early is the same as being wrong.
0:11:22 And we were wrong early on in that the foundation models were not good enough to have fully autonomous software development agents.
0:11:30 Um, and so in the early days, I think the important things that we were doing were building out an absolutely killer team, which we do have.
0:11:37 And everyone that we started with is still here, which has been incredible and having a deeper sense of how developers are going to adopt these tools.
0:11:40 So that was kind of in the early days.
0:11:49 And I think something that we learned that still to this day, I don’t really see any other companies focus on is the fact that coding is not the most important part of software development.
0:12:07 In fact, as a company gets larger and the number of engineers grows, the amount of time that any given engineer spends on coding goes down, because there's all this organizational molasses of needing to do documentation and design reviews and meetings and approvals and code review and testing.
0:12:12 And so the stuff that developers actually enjoy doing, namely the coding, is actually what you get to spend less time on.
0:12:18 And then there’s these companies emerging saying, hey, we’re going to automate that one little thing that you sometimes get to do that you enjoy.
0:12:19 You don’t get to do that anymore.
0:12:29 So your life as a developer is just going to be reviewing code or documenting code, which it just, I think, really misses the mark on what developers in the enterprise actually care about.
0:12:41 And I think the reason why this happens is because a lot of these companies have, in their composition, like, the best engineers in the world graduating from, you know, the greatest schools, and they join startups.
0:12:44 And at a startup, if you’re an engineer, all you do is code.
0:12:48 And so there’s kind of this mismatch in terms of empathy of what the life of a developer is.
0:12:53 Because, you know, if you’re a developer at one of these hot startups, yes, coding, speed that up, great.
0:12:59 But if you’re a developer at some 50,000 engineer org, coding is not your bottleneck.
0:13:00 Your bottlenecks are all these other things.
0:13:10 And with us focusing on that full like end-to-end spectrum of software development, we end up kind of hitting more closely to what developers actually want.
0:13:18 Microsoft, I know, I think Satya Nadella said something like 30% of code at Microsoft is being written by AI right now.
0:13:26 I think Zuckerberg said that he’s shooting for, I think, half of the code at Meta to be written by AI.
0:13:38 You’re basically saying what software developers want is not for someone to be doing the creative part, but they want someone or an agent or an AI to be doing the boring drudge work.
0:13:43 What does that drudge work actually look like?
0:13:47 Like you said, sort of reviewing code, documenting code.
0:13:52 In what sense is factory addressing that issue?
0:13:57 Even the idea of like 30% of code is AI written, I think is a very non-trivial metric to calculate.
0:14:04 Because if you have AI generated like 10 lines and you manually go adjust two of them, do those two count as AI generated or not?
0:14:05 So there’s some gray there.
0:14:08 You think that they’re kind of just throwing numbers out there a little bit?
0:14:11 It’s just a very hard, it’s hard to calculate.
0:14:16 And so even if you were trying to be as rigorous as possible, I don’t know how you come up with a very concrete number there.
0:14:24 But regardless, I think that directionally it's correct that the number of lines of code that's AI generated is, you know, strictly increasing.
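That 10-lines-edited-2 example can be made concrete with a quick sketch. This is purely illustrative (it is not Microsoft's, Meta's, or Factory's methodology): the same codebase yields two defensible percentages depending on whether a human-edited line still counts as AI-written.

```python
# Hypothetical attribution for 10 lines: the agent generated all of them,
# and a human then hand-edited 2. How much of this code "is AI written"?
lines = ["ai"] * 8 + ["ai_then_human_edited"] * 2

# Optimistic definition: any line the AI originally produced counts.
upper = sum(tag.startswith("ai") for tag in lines) / len(lines)

# Conservative definition: a human-edited line no longer counts.
lower = sum(tag == "ai" for tag in lines) / len(lines)

print(f"AI-written share: between {lower:.0%} and {upper:.0%}")
# Same 10 lines, two defensible answers: 80% vs 100%.
```

The gap between the two definitions is exactly the "gray" being described: any headline number implicitly picks one of them.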
0:14:26 The way that Factory helps.
0:14:34 So I guess generally, like, the software development lifecycle at a very high level looks like, first, understanding, right?
0:14:39 So you’re trying to figure out what is the like lay of the land of our current code base, let’s say, or our current product.
0:14:46 Then you’re going to have some planning of whether it’s like a migration that we want to do or a feature or some customer issue that we want to fix.
0:14:48 Then you’re going to plan it out, create some design doc.
0:14:50 You’re going to go and implement it.
0:14:51 So you’re going to write the code for it.
0:14:56 Then you’re going to generate some tests to verify that it, you know, is passing some criteria that you have.
0:14:58 There’s going to be some human review.
0:15:00 So they’re going to check to make sure that this looks good.
0:15:07 And then you might update your documentation and then you kind of push it into production and, you know, monitor to make sure that it doesn’t break.
0:15:17 In an enterprise, all of those steps take a really, really long time because, you know, the larger your org, if it's 30 years old, there are all these different interdependencies.
0:15:21 And like, like imagine you’re a bank and you want to ship a new feature to your like mobile app.
0:15:26 There are so many different interdependencies that any given change will affect.
0:15:34 So then you need to have meetings and you need to have approvals from this person and this person needs to find the subject matter expert for this part of the code base.
0:15:36 And it ends up taking months and months.
0:15:43 And so where Factory helps is a lot of the things that don't seem like the highest leverage are what they spend a lot of time on.
0:15:48 So like that testing part or the review process or the documentation or even the initial understanding.
0:15:55 I cannot tell you how many customers of ours have a situation where there was like one expert who’s been there for 30 years who just retired.
0:15:59 And so now there’s like literally no one who understands a certain part of their code base.
0:16:03 And so getting some new engineer to go in and do that, there’s no documentation.
0:16:11 So now that engineer has to spend six months writing out docs for this legacy code base. And, you know, engineers spend years of their lives becoming experts.
0:16:16 The highest leverage use of their time is not writing documentation on existing parts of the code base.
0:16:22 In this world where like an org has factory fully deployed, that engineer can just send off an agent.
0:16:23 Our agents are called droids.
0:16:35 So send off a droid to go and generate those docs, ask it questions, get the insight as if it was a subject matter expert that’s been there for 20 years so that he can go and say, OK, here’s how we’re going to design a solution.
0:16:37 Here’s how we’re going to fix whatever issue is at hand.
0:16:45 These droids, your agents that you call droids, I think one of the big differentiators that I’ve seen is that they are fully autonomous.
0:16:56 They’re doing it basically everything on their own in contrast to something like Copilot, which is by definition working alongside you to help you figure things out.
0:17:02 You guys are saying, no, these things can be completely on their own, totally autonomous.
0:17:05 Literally, you’ve got robots just doing the work for you.
0:17:08 Why is that the way to go with AI?
0:17:09 At a high level.
0:17:12 So this is true for code, but I would also say for knowledge work more broadly.
0:17:21 But for code in particular, we’re going from a world where developers wrote 100% of their code to a world where developers will eventually write 0% of it.
0:17:34 And we’re basically changing the primitive of software development from like writing lines of code, writing functions, writing files to the new primitive being a delegation, like delegating a task to an agent.
0:17:45 And so the new kind of important thing to consider is, you know, you can delegate a task, but if it’s very poorly scoped, the agent will probably not satisfy whatever criteria you had in your head.
0:18:02 And so if this new primitive is delegation, your job as a developer is to get good at how can I very clearly define what success looks like, what I need to get done, what testing it should do, like what our organization's contributing guidelines are, let's say.
0:18:14 And so with this as the new primitive, the job of the developer is now, okay, if I set up the right guidelines and I tell this agent to go, it now has the information it needs to succeed on its own.
0:18:17 And this is very similar to like human engineer onboarding.
0:18:20 Like when you onboard a human engineer into your organization, what do you do?
0:18:22 You don’t just throw them into the code base.
0:18:24 You’ll say, hey, here’s what we’ve built so far.
0:18:26 Here’s how we build things going forward.
0:18:29 Here’s our process for deciding on what features to build.
0:18:32 Here’s our coding standards.
0:18:41 So you have like a long onboarding process, then you give them a laptop so they can actually go and write the code and test it and run it and mess around with it before they actually submit it.
0:18:45 And so we need to do similar things with agents where we give them this thorough onboarding process.
0:18:50 You give it an environment where it can actually test the code and, you know, mess around with the code to see if it’s working.
0:18:58 And having that laptop, it now has this, like, autonomous loop that it can go through where it tries out some code, runs it.
0:18:58 Oh, that failed.
0:19:00 Let me go iterate based on that.
0:19:04 Now we do have, like, fully autonomous droids.
0:19:10 But the point is that giving people access to this, they can set up droids to fully generate all of their docs for them.
0:19:14 So now as an engineer, that’s just something you don’t need to worry about because that’s not the highest leverage use of your time.
0:19:17 Thinking about instead this behavior change towards delegation.
0:19:21 That’s like the kind of biggest thing that we work with enterprises on.
0:19:36 I think delegation is the right word, but it’s also kind of a scary word because delegation implies, I mean, the way that we work today, you delegate to other people whose jobs are to do all of the things that you’re describing.
0:19:40 There are some companies that say AI is going to be your partner and work alongside you.
0:19:48 You’re saying, actually, no, this is just going to do the work, i.e. it would replace people.
0:19:56 And this is obviously a big debate in AI, the automation debate, what happens to the four and a half million software engineers.
0:20:05 What is your viewpoint on this automation debate and the idea that AI is going to take your job?
0:20:09 I think at a high level, I will say AI will not replace human engineers.
0:20:13 Human engineers who know how to use AI will replace human engineers who don’t.
0:20:27 And I think the reason AI will not replace human engineers is because basically there’s like a bar for how big a problem needs to be in order for it to like be economically viable for someone to implement a software solution to it.
0:20:33 And suppose it used to be a billion dollars and then slowly it’s gone down to a hundred million dollars or ten million dollars.
0:20:40 Like these are, like, the TAMs of the problem that make it economically worthwhile to build up a team of software engineers to work on a problem.
0:20:42 What AI does is it lowers that bar.
0:20:48 So now in a world where before you could only economically viably solve a problem that’s worth ten million dollars.
0:20:49 Now maybe it’s a hundred thousand dollars.
0:20:55 Now maybe it’s like large enterprises can actually make a lot of custom software for any given customer of theirs.
0:20:58 It means that the leverage of each software developer goes up.
0:21:02 It does not mean that the number of software engineers goes down.
0:21:06 It would mean that if there was only one company in the world that had access to AI.
0:21:08 Because then they have access to AI.
0:21:09 They can use AI while no one else does.
0:21:13 And now they have way more leverage so they can beat their competitors while having fewer humans.
0:21:13 Right.
0:21:16 But the reality is now is if there are two companies and they’re competing.
0:21:18 One has a thousand engineers.
0:21:19 The other has a thousand engineers.
0:21:21 They both get AI.
0:21:24 So now they have the equivalent output of a hundred thousand engineers.
0:21:28 They’re not going to start firing engineers because now one company is going to be way more productive than the other.
0:21:33 They’ll deliver a better product, better solution, lower cost to their customers, and then they’re going to succeed.
0:21:37 So then this other company is going to be incentivized then to have more engineers, right?
0:21:37 Yes.
0:21:40 So I think that’s one side of it.
0:21:48 I think the other is, like, we're really bad at predicting what we can do with these tools.
0:21:56 Because right now like humanity has only seen what loosely a hundred thousand software engineers working together can build.
0:21:58 That might be like, let’s say, the cloud providers.
0:22:03 Those are some of like the largest engineering orgs, something that took, let’s say, a hundred thousand engineers to build.
0:22:09 We don’t even know what the equivalent of 10 million human software engineers could build.
0:22:15 Like we can’t even conceive of like what software is so intricate and complicated that it would take that many engineers to build.
0:22:18 And I kind of refuse to believe that a hundred thousand is the limit, that there's no interesting software after that.
0:22:27 Yeah, I’m really glad you brought up the point of the danger here is that one company would own all of the AI.
0:22:31 Like the problem isn’t value creation.
0:22:45 I mean, what we’re describing is technology bringing the costs down and therefore creating more incentives to build, more value creation, which can only be a good thing unless it is in some way hijacked.
0:22:53 And you don’t have a system of capitalism where companies are really competing with each other and forcing each other to iterate.
0:22:58 And also that includes many different players who can participate in that value creation.
0:23:13 And when I look at the AI space right now, just as an example, when we interviewed and spoke with what is now Windsurf, and I asked the founders this question of like, how do you compete with big tech?
0:23:17 And they explained how they’re going to do it and how they’re going to take big tech on.
0:23:19 And then what do you know, Google buys them.
0:23:25 And I look at the same thing with like scale AI, which was, you know, one of the biggest AI startups.
0:23:29 Alexander Wang was this incredible thought leader.
0:23:37 And then what do you know, he gets, I mean, they get an investment, which turns into him now being an employee at Meta.
0:23:42 And now he’s building, you know, like Meta social media AI.
0:23:53 And it all seems as though AI is being kind of overridden or taken over by big tech through these investments, which then turn into these sort of aqua hires.
0:23:59 And it does make me concerned that all of the power and all of the value is accruing into one place.
0:24:02 And it’s the same place that we’ve had over the past 20 years.
0:24:04 So how do you think about that?
0:24:12 Do you think about this possibility that maybe big tech comes in and says, we need your software, we need your people, we’re going to acquire you?
0:24:15 And do you worry about that concentration of power in AI?
0:24:21 I think it’s a very like top of mind thing for people is like, even from the investor side, is it going to be the incumbents that win?
0:24:29 Or will it be, you know, the insurgents, or however you want to put it, the startups that can come and, you know, kind of claw their way into surviving without acquisition.
0:24:34 I think the answer is always founder and company dependent.
0:24:38 Like, I think some examples that come to mind are like Airbnb and Uber.
0:24:46 These are companies where there wasn’t a very obvious gap in the market such that anyone could start a company like Airbnb or Uber and just, you know, succeed.
0:24:56 Like, I think it took a lot of very intentional and very relentless work in the face of tons of adversity to actually make those companies viable and successful.
0:25:08 Um, and I think in a lot of these cases, it is the choice of the founders or the companies to either continue or, you know, proceed to, to joining, um, big tech.
0:25:14 And I think at the end of the day, just, it really does depend on like, how relentless are you willing to be to actually fight that fight?
0:25:18 Because I think both of those acquisitions were optional.
0:25:21 Like, I don’t think they were like back against the wall, had no other choice.
0:25:31 I think it was like, for whatever reason, and I don’t know the exact details of either of these situations, but it was like, you know what, based on the journey so far, let’s elect to do this.
0:25:34 Presumably because they were offered so much money.
0:25:44 I mean, when I look at Meta hiring all of these AI geniuses, and I assume this is probably a concern for Factory and many other AI startups, what if Meta just hires our people?
0:25:52 And I wonder if it’s because these companies are so dominant, they have so much money, that they’re like, here, here’s a billion dollars, and it’s hard to say no to that.
0:25:52 Totally, yeah.
0:25:57 But I think if you, like, went back in time and you offered, like, let’s say Travis Kalanick a lot of money.
0:25:58 He’d say no.
0:25:58 Yeah.
0:26:01 Because he was, like, that was the mission.
0:26:06 And I think similarly at Factory, we are super focused on people that are very mission-driven.
0:26:10 If you want to make a ridiculous amount of money, you can go to Meta, you can go to, you know, one of those places.
0:26:17 The people who have joined our team have chosen this mission with this team in particular because of that reason.
0:26:21 And I think that’s what it takes ultimately at the end of the day because we do not want to be acquired.
0:26:26 We do not want to be part of big tech because I think they don’t have the tools to solve the problem in the way that we want to solve it.
0:26:38 Yeah, it sounds like what AI needs in order for there to be, like, real competition is you need a founder who wants to go to bat and who wants to fight, essentially.
0:26:42 Who doesn’t want to get, I guess, in bed with big tech.
0:26:55 But, I mean, one of the big themes that we’ve been seeing with AI recently is, of course, this circular financing stuff where these companies are investing and then the money comes back to them when they buy their products.
0:27:04 And it’s hard to see the competition actually happening when you see everyone kind of collaborating with each other.
0:27:07 How do you think about that?
0:27:15 And how do people in Silicon Valley, I mean, you’re very tapped into Silicon Valley, Sequoia, one of the top firms, one of your investors.
0:27:18 How do people view that in Silicon Valley right now?
0:27:20 And are they concerned about it?
0:27:24 People definitely make a lot of jokes about, like, the circular investing and that sort of thing.
0:27:34 I mean, on one hand, I get it because there is a lot of interdependency of all these companies and there is a lot that they can do together, which I think on one hand is a good thing.
0:27:43 On the other hand, it’s a little bit inflationary to some, like, valuations or, like, revenue numbers or these types of things.
0:27:48 I think on the net, AI will be so productive that it won’t matter that much.
0:27:52 But short term, it is a little bit, like, eyebrow raising, I guess.
0:28:00 But at the end of the day, it’s like, if you’re, let’s say, a foundation model company, you need to get the direct deal with NVIDIA because you want the GPUs.
0:28:17 So it's just one of those things that you kind of have to do, and I guess I'm not sure what an alternative would look like in a dynamic where you have four or five foundation model companies (let's ignore Google, because they can make their own stuff) who are really competing over the GPUs in order to make the next best models.
0:28:20 We’ll be right back.
0:28:31 Support for the show comes from Shopify.
0:28:34 If you run a small business, you know there’s nothing small about it.
0:28:37 As a business owner myself, I get it.
0:28:40 Every day, there’s a new decision to make and even the smallest decisions can feel massive.
0:28:46 What can help the most is a platform with all the tools you need to be successful, a platform like Shopify.
0:28:52 Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the U.S.
0:28:57 From household names, including Mattel and Gymshark, to those brands just getting started.
0:29:00 That’s why they’ve developed an array of tools to help make running your small business easier.
0:29:04 Their point of sale system is a unified command center for your retail business.
0:29:08 It gives your staff the tools they need to close the sale every time.
0:29:13 And it lets your customers shop however they want, whether that’s online, in-store, or a combination of the two.
0:29:20 Shopify’s first-party data tools give you insights that you can use to keep your marketing sharp and give your customers personalized experiences.
0:29:25 And, at the end of the day, that’s the goal of any small business owner, keeping your customers happy.
0:29:28 Get all the big stuff for your small business right with Shopify.
0:29:33 Sign up for your $1 per month trial and start selling today at shopify.com slash propg.
0:29:36 Go to shopify.com slash propg.
0:29:39 Shopify.com slash propg.
0:29:46 Support for the show comes from Gruns.
0:29:49 Even when you do your best to eat right, it’s tough to get all the nutrition you need from diet alone.
0:29:52 That’s why you need to know about Gruns.
0:29:55 Gruns isn’t a multivitamin, a greens gummy, or a prebiotic.
0:29:58 It’s all of those things, and then some, at a fraction of the price.
0:30:00 And bonus, it tastes great.
0:30:12 All Gruns daily gummy snack packs are vegan, nut-free, gluten-free, and dairy-free, with no artificial flavors or colors, and they’re packed with more than 20 vitamins and minerals, made with more than 60 nutrient-dense ingredients and whole foods.
0:30:19 Gruns’ ingredients are backed by over 35,000 research publications, and the flavor tastes just like sweet, tart green apple candy.
0:30:24 And for a limited time, you can try their Gruny Smith Apple flavor, just in time for fall.
0:30:28 It’s got all the same snackable, packable, full-body benefits you’ve come to expect.
0:30:35 But this time, these taste like you’re walking through an apple orchard in a cable-knit sweater, warm apple cider in hand.
0:30:39 Grab your limited edition Gruny Smith Apple Gruns, available only through October.
0:30:40 Stock up because they will sell out.
0:30:47 Up to 52% off when you go to Gruns, G-R-U-N-S, dot co, and use the code PROPG.
0:30:54 Support for this show comes from Betterment.
0:30:56 Nobody knows what’s going to happen in the markets tomorrow.
0:31:02 That’s why when it comes to saving and investing, it helps to have a long-term approach and a plan you can stick to.
0:31:06 Because if you don’t, it’s easy to make hasty decisions that could potentially impact performance.
0:31:11 Betterment is a saving and investing platform with a suite of tools designed to prepare you for whatever is around the corner.
0:31:14 Their automated investing feature helps keep you on track for your goals.
0:31:20 Their globally diversified portfolios can smooth out the bumps of investing and prepare you to take advantage of long-term trends.
0:31:24 And their tax-smart tools can potentially help you save money on taxes.
0:31:29 In short, Betterment helps you save and invest like the experts without having to be an expert yourself.
0:31:36 And while you go about your day, Betterment’s team of experts are working hard behind the scenes to make sure you have everything you need to reach your financial goals.
0:31:38 So be invested in yourself.
0:31:39 Be invested in your business.
0:31:41 Be invested with Betterment.
0:31:43 Go to Betterment.com to learn more.
0:31:47 That’s B-E-T-T-E-R-M-E-N-T dot com.
0:31:48 Investing involves risk.
0:31:50 Performance not guaranteed.
0:32:01 We’re back with First Time Founders.
0:32:16 In terms of AI legislation, there seems to be a lot of debate right now on how to regulate AI, and California is trying to be a leader in regulating.
0:32:20 What are your views on AI regulation?
0:32:25 Are people going over the top, trying to regulate?
0:32:27 Is it warranted?
0:32:28 How do you think about that?
0:32:30 Maybe just to draw some parallels.
0:32:36 In my mind, I view things like climate regulation, nuclear regulation, and AI regulation to be similar in that they are global.
0:32:39 And local regulation doesn’t really matter.
0:32:41 Like, for example, pick any one of those three.
0:32:54 If you make rules that, in California, you can’t have a gas car, or you can’t build, like, nuclear weapons, or, in the extreme, you can’t build AI, that doesn’t really matter, because it says nothing about the rest of the world.
0:32:59 And if the rest of the world does it, it affects what happens in California for climate, for nuclear, for AI.
0:33:12 And so, I think for AI in particular, the regulation that is interesting is less local. Like, I think for California, it just doesn’t matter regulating AI state by state, at least at the macro level.
0:33:21 Maybe it’s, like, in terms of usage for, like, interpersonal things, sure, but in terms of, like, training models, the relevant stage there, in my mind, is the global stage.
0:33:37 And how does it play out across, like, U.S. regulation versus European regulation versus China, let’s say? From what I’ve seen thus far, the time spent on, like, state regulation is kind of wasted, at least as it relates to foundation models.
0:33:42 I think there is a concern, probably, in Silicon Valley that everyone’s so afraid of AI.
0:33:51 I mean, I’ve seen these surveys that, you know, I think more than half of Americans are more worried about AI than they are excited.
0:33:58 Um, I guess that’s something to philosophically tackle on your end, because you’re building it.
0:34:09 Um, but then I would imagine that in Silicon Valley, there’s this feeling of everyone’s just too scared because they’ve watched all these movies and they’ve watched The Terminator.
0:34:18 And so, these people are getting too worried about it, to the point that we’re regulating in a way that actually doesn’t make sense.
0:34:19 It’s pretty interesting.
0:34:20 I think two things come to mind.
0:34:28 So, one, there’s the classic phenomenon of, you know, you’re a startup, you want no regulation, then you become big, then suddenly you want regulation.
0:34:28 Yes.
0:34:33 Um, and we’ve seen that happen with, I think, basically every foundation model company, um, which is always a shame to see.
0:34:38 And then the second, and this is more just, like, a comment on the Silicon Valley and some of the culture there.
0:34:44 I know so many people who work at the foundation model labs who don’t have savings.
0:34:48 Like, they just do not believe in, like, putting any money in their 401k.
0:34:52 They, like, spend it all because of this, like, vision of, like, something’s coming.
0:34:52 Wow.
0:34:53 Yeah, which is very weird.
0:34:59 Um, but then there are equally as many who work at these labs who are, like, you know, these guys kind of drank too much of the Kool-Aid.
0:35:10 It’s really important to have these conversations and think about these things because I think it’s, actually, it reminds me a lot of thinking about, like, in theoretical physics, like, thinking about the Big Bang and, like, black holes in the universe.
0:35:14 The first time you think about it, it’s kind of, like, scary existential crisis.
0:35:15 What is everything?
0:35:18 If we’re in such a large universe, nothing has meaning, whatever.
0:35:24 I think thinking about AI, like, getting exponentially better kind of leads to similar, like, existential questions.
0:35:29 Like, what are we, like, what value do humans have if there’s going to be something that’s smarter than any one of us?
0:35:33 And then you have the maturity of, like, wait, intelligence is not why humans have value.
0:35:35 That’s not the source of intrinsic value.
0:35:37 We don’t think someone’s more valuable because they’re smarter.
0:35:44 So having these conversations and thought processes is, I think, very important for both people working in AI and people who aren’t.
0:35:51 But, yeah, there’s some pretty weird people who kind of, like, are really, really in the bubble, inundated in it,
0:35:55 and who kind of get these interesting worldviews of, like, you know, the singularity is coming.
0:35:58 So I want to, you know, spend everything that I have now.
0:36:02 Yet at the same time, if they think it’s not going to be good, they remain working on it.
0:36:09 So these AI engineers who are not saving any money, they’re doing it because they think, like, the end of the world is coming
0:36:14 or because they think that there’s going to be some transformative event that will make them really rich?
0:36:16 Like, is it more of a doomer perspective, or?
0:36:17 It’s a pretty big mix.
0:36:20 Like, some people think we will just end up in a world where we’re, like, post-economic.
0:36:22 And just, like, money will be irrelevant.
0:36:27 And, like, for anyone, there’s some, like, base level, whether it’s, like, some UBI type thing
0:36:30 or some have, like, the doomer perspective.
0:36:31 It’s pretty bizarre.
0:36:33 It sounds irrational to me.
0:36:34 Yes, I would agree.
0:36:35 Okay, you’d agree.
0:36:36 Yes.
0:36:36 Yeah.
0:36:46 And I think that it brings up an interesting thing in AI, which is there’s this incredibly transformative once-in-a-generation technology that has come along.
0:36:52 And it causes humans, when that happens, to act strangely.
0:36:52 Yes.
0:37:04 That behavior, not saving while you’re building AI because you think that it’s going to mean some event that, you know, could either end the world or, you know, dismantle the system.
0:37:06 Maybe they’re onto something.
0:37:07 To me, it seems irrational.
0:37:20 And I also think it says something about the potential of a bubble that is emerging, one that a lot of people in the last few weeks have been getting more and more concerned about, and that more and more people seem to believe in.
0:37:25 I mean, you know, I think Sam Altman himself said the word bubble.
0:37:27 There have been other tech leaders who are saying that.
0:37:33 As someone who is building in this space, how do you think about that?
0:37:40 Does it concern you or is it something that you’re not too worried about?
0:37:49 Obviously, just to be a responsible CEO, I need to have priors that there is some chance that something like that happens in, like, the broader economy, where, you know,
0:37:59 there’s some correction. But yeah, my priors are very low, in particular because, like, the ground truth utilization of GPUs is just fully, fully saturated.
0:38:08 Now, it would be one thing if we’re building out all these data centers for like the dream of, okay, we’re going to saturate this compute at some day, but like we’re doing that today.
0:38:11 And it’s like people are still hungry for more of that compute.
0:38:17 Now, I think there’s a good argument that a lot of compute is subsidized.
0:38:25 So, like, NVIDIA might subsidize the foundation model companies, the foundation model companies subsidize companies like us and maybe give us discounts on their inference.
0:38:27 And we might subsidize new growth users.
0:38:33 And there’s a little bit of that, and I think that’s the part where there’s a concern of, like, actually drawing a similar comparison to Uber.
0:38:38 I don’t know if you remember when Uber first came out, rides were super cheap because it was very much subsidized.
0:38:46 VCs were paying for it, VCs and so their LPs, all the, like, pension funds, were basically subsidizing people’s Ubers in a very indirect way.
0:38:51 And like people kind of, you know, sometimes can make jokes about that, even as it relates to LLMs.
0:39:05 The reason I’m less concerned is that the ROI is just so massive, and, like, the productivity gains, in particular from coding. It’s like, the fact that we have built Factory with basically less than 20 engineers.
0:39:08 That is something that we just would not have been able to do.
0:39:14 And so I think the leverage that people are getting is what makes me less concerned.
0:39:15 And also the speed of adoption.
0:39:21 Like, I think even some of these enormous enterprises that we’re speaking with, they missed like mobile by like five years.
0:39:30 But for AI, they are on it because they know if we have 50,000 engineers, we need to get them AI tools for engineering because of how existential it is.
0:39:42 If there is a correction, the way I see it is there will be a correction that won’t wipe out AI like some people seem to think, but it’ll be similar to the internet.
0:39:51 There’s a correction, valuations come down, there is some pain, and then long term, you will see massive adoption and massive value creation.
0:39:53 That’s just my perspective.
0:39:58 Say there is a correction, who wins in that scenario?
0:40:01 Like, what happens to OpenAI?
0:40:05 What happens to startups like yourself?
0:40:11 Like, who are going to be the winners and losers in that scenario where we do see some sort of pullback?
0:40:14 So one core principle is Jensen always wins.
0:40:17 So for the next few years, Jensen’s going to stay winning.
0:40:20 So that’s, I think, you know, not going to change.
0:40:21 And why do you say that?
0:40:24 Because he’s just at the very base of the value chain?
0:40:25 Yes, yes, yes.
0:40:29 And at the end of the day, like all of these circular deals, they all come back to NVIDIA.
0:40:33 Anytime anyone announces, hey, we’re doing like free inference, that’s free.
0:40:35 But, you know, someone’s paying Jensen at the end of the day.
0:40:38 So I think that’s kind of one baseline there.
0:40:59 I think another, and this actually maybe relates to what we were talking about earlier about, you know, these companies and the acquisitions, is, as it relates to, like, startups and how many there are: there was a period, which I think has been dying down at least a little bit in San Francisco, where if you’re an engineer who, like, worked in AI for a month, you basically just get a term sheet stapled, like, onto your forehead.
0:41:10 And the second you leave and, you know, you show up to a VC, which I think is not good, because you don’t get, like, the Travis Kalanicks or the Brian Cheskys in a world where you’re encouraged to do things like that.
0:41:14 Like, anytime anyone asks me, like, hey, Matan, you know, I’m thinking about starting a company.
0:41:21 I will always say no, always, because if me saying no discourages you from starting a company, then you absolutely should not have done it.
0:41:26 And I think like, there’s almost like too much help and too much like, yeah, you know, go do it, go start it.
0:41:32 Because then it leads to some of these things we were talking about, where the second the going gets tough, it’s like, all right, acquisition time.
0:41:35 And this is maybe my localized view, because I live in San Francisco.
0:41:39 And that’s like, you know, what I see more day to day than some of like the more macro trends.
0:41:51 But I think the first place we would see a correction like that is in, I mean, coding, for example. There are, like, 100 startups in the coding space; you know, perhaps there’ll be fewer that are funded, because it’s like, hey, you know what, at this point, maybe it’s not as relevant.
0:41:57 Or, you know, the nth AI personal CRM, like, that’s another one where there’s been, like, a million companies.
0:42:04 The correction might look like, at least at that level, you know, funding being a little more difficult, let’s say.
0:42:13 And then the way that that relates to the foundation model companies is I think eventually you’ll get to a point where they can subsidize inference less, which just means growth probably slows.
0:42:21 Like OpenAI and Anthropic, their revenue has been, you know, ridiculously large, but also the, you know, margin on that has been pretty negative.
0:42:26 And so it’s basically like, how long can you subsidize and like deal with that negative margin?
0:42:28 They’re obviously a legendary company.
0:42:29 Uber is a great example.
0:42:37 Amazon’s a great example where you can like operate at a loss for a period of time in order to build an absolute monster of a company and then just turn on margin whenever you’re ready.
0:42:40 The question is, how long can you sustain that?
0:42:42 And so if there were a correction, I think that would affect that.
0:42:48 Yeah, it does feel increasingly that AI, the danger of AI isn’t adoption or technology.
0:42:50 It’s a timing and financing problem.
0:42:54 And, you know, I look at OpenAI and the amount that they’re spending.
0:43:02 I’m starting to believe that the AI companies who are going to win are the ones who manage their balance sheets the best.
0:43:11 And it’s really going to be a question of financial management because of the thing that you say there where all of this money is being plowed in.
0:43:22 And it is a question of how long can you go at an operating loss, which, you know, Uber crushed it, Amazon crushed it.
0:43:26 There were many other companies that died that did not crush it from that perspective.
0:43:30 So it will be really interesting to see how that plays out.
0:43:42 As someone who is building in Silicon Valley in San Francisco, you’ve built this incredible company that’s generated a ton of heat and press.
0:43:47 Like you are in AI, what does that feel like?
0:43:51 Like, what does it feel like to be one of the AI people?
0:43:55 Does it feel like you’re in some special moment in time?
0:43:56 Like, what is it like?
0:44:01 It feels very much like we are still in the trenches because there is a ton that we want to do and that we need to get done.
0:44:06 I think for me, the most surreal thing is the team that we’ve assembled.
0:44:14 Like, every day, coming in person to our office in San Francisco, it is such a privilege working with, now, 40 of the smartest people that I’ve ever met in my life.
0:44:15 You know, we’re in New York right now.
0:44:17 We’re starting to open up an office here.
0:44:24 I think that’s where it’s a little bit like, whoa, we now, you know, we have two offices, on opposite sides of the country.
0:44:35 It’s more just like, I think it’s just really cool to see over the last two and a half years how dedicated effort can actually like build something that is concrete and meaningful.
0:44:51 And some of the largest enterprises that we’re working with, it’s just kind of crazy to sometimes stop for a bit when it’s not like the nonstop grind to think like this organization now doesn’t have to deal with these problems because of something that we built because of this random cold email, because of this random hackathon that I met Eno at.
0:44:59 I think it’s just, um, it’s a very cool visceral reminder that you can do things that affect things.
0:45:00 Yes.
0:45:07 Um, and if you are really driven by a good mission, you can make people’s lives better in relatively short order.
0:45:08 And I think that’s a really empowering thought.
0:45:19 What is something that you think the American population sort of gets wrong about AI and also about AI founders and the people building this technology?
0:45:25 Most of the world only knows ChatGPT; very few people know more than that. Like, in San Francisco, everyone’s like, oh, which model is better?
0:45:28 Like, OpenAI, Anthropic, Google Gemini? The rest of the world,
0:45:31 it’s just, like, it’s basically just ChatGPT, which I think on one hand is interesting.
0:45:41 Um, I think on the other hand, it is really important for basically every profession to kind of rethink your entire workflow.
0:45:51 And it is in fact, I would say it’s almost an obligation to like basically take a sledgehammer to everything that you’ve set up as like your routine and how you do work and rethink it with AI.
0:46:03 For me, this is actually something that’s really important, because I’m, like, the most habit-oriented, routine person, and, like, constantly, you know, every few months, being like, let me try and see how I could do this differently with AI, in a way
0:46:11 that’s not, like, oh, technology is taking over, but more just, like, it makes things more efficient and faster and more convenient.
0:46:21 So I think that’s one thing: there is so much time that can be saved by spending a little bit of time to, you know, try out these different tools, whether it’s something like ChatGPT or, you know, if you’re an engineer, trying out, you know, something like Factory.
0:46:30 I think regarding AI founders, it’s hard to say because there’s so many, uh, tropes that unfortunately can be really true sometimes.
0:46:38 And sometimes it’s even frustrating to me because like I grew up in Palo Alto and hated startups, like hated it.
0:46:43 Like I grew up like in middle school, we would spend time like, you know, walking around downtown Palo Alto.
0:46:52 And I remember, I have a very concrete memory when Palantir moved into downtown Palo Alto, there were all these people in there, like Patagonias with like the Palantir logo.
0:46:58 And I remember looking so like scornfully at all these people walking by with these Patagonias.
0:47:07 Um, but yeah, I mean, I think, maybe actually, the thing is less about what the rest of the world gets wrong about AI founders and more about some of these AI companies.
0:47:19 It’s really important to leave San Francisco, like exit the bubble, like it’s a cliche, but like touch grass, go to see the real world because while San Francisco is very in the future, you know, I’ve taken a Waymo to work for the last like two years.
0:47:24 The rest of the world is still like kind of how it was in San Francisco five years ago.
0:47:39 And I think it’s important to have that grounding because if you don’t leave and if you don’t have that grounding, you could do things like not put money in your 401k and things like, not that you need to put it in your 401k, but you kind of get these little bit warped perspectives sometimes.
0:47:40 That is really interesting.
0:47:49 Does that, I mean, this idea that there is a bubble, maybe that’s the wrong word, but there is this echo chamber.
0:48:00 Um, and the fact that you’re building and you’re saying, you know, we’re building these offices in New York and the thing that is important for AI.
0:48:17 And I think it’s probably really true is to kind of go out into the world and understand like, what are some real use cases where this was really going to provide value for people, not just in your enterprise SaaS startup in San Francisco, but anywhere else throughout America.
0:48:24 Does that worry you as you go further up the chain of power and command in Silicon Valley?
0:48:29 Does it worry you perhaps that people at the very top aren’t doing that enough?
0:48:35 They’re not getting out there and understanding what this technology really needs to be and do for America?
0:48:36 I would say yes.
0:48:52 And I think that’s also just a very common problem, just generally as organizations scale or as organizations get more powerful, the people running those organizations inherently get separated from the ground truth of like, let’s say the individual engineers or individual people who are going and delivering that product to people.
0:48:55 And I think similarly, they lose touch with their customers as well.
0:49:00 I think the best leaders have really good communication lines towards the bottom.
0:49:05 Yeah, towards the people, the customers they’re serving or the people who are like kind of in the trenches, like hands on doing the work.
0:49:13 And I think you probably end up seeing this in results of a lot of these companies, because I think it’s hard to be a successful company if you don’t have some of that ground truth.
0:49:23 Any good leader, I think, should be concerned about that and should always be paranoid of like, you know, am I surrounded by yes men or am I in an echo chamber and I’m not getting the real like ground truth?
0:49:23 Yeah.
0:49:24 Yeah.
0:49:26 So that is something that’s going on.
0:49:28 We’ll be right back.
0:49:39 Vox Creative.
0:49:44 Support for this show comes from the AWS Generative AI Accelerator Program.
0:49:47 My name is Tom Elias.
0:49:50 I’m one of the co-founders at Bedrock Robotics.
0:49:53 Bedrock Robotics is creating AI for the built world.
0:49:58 We are bringing advanced autonomy to heavy equipment to tackle America’s construction crisis.
0:50:05 There is a tremendous demand for progress in America through civil projects, yet half a million jobs in construction remain unfilled.
0:50:10 We were part of the 2024 AWS Gen AI Accelerator Program.
0:50:12 As soon as we saw it, we knew that we had to apply.
0:50:19 The AWS Gen AI Accelerator Program supports startups that are building ambitious companies using Gen AI and physical AI.
0:50:26 The program provides infrastructure support that matches an ambitious scale of growth for companies like Bedrock Robotics.
0:50:32 Now, after the accelerator, about a year later, we announced that we raised about $80 million in funding.
0:50:36 We are scaling our autonomy to multiple sites.
0:50:39 We’re making deep investments in technology and partners.
0:50:47 We have a lot more clarity on what autonomy we need to build and what systems and techniques and partners we need to make it happen.
0:51:01 It’s the folks that we have working all together inside Bedrock Robotics, but it’s also our partners like Amazon really all trying to work together to figure out what is physical AI and how do we affect the world in a positive way.
0:51:08 To learn more about how AWS supports startups, visit startups.aws.
0:51:20 As a founder, you’re moving fast towards product market fit, your next round, or your first big enterprise deal.
0:51:29 But with AI accelerating how quickly startups build and ship, security expectations are also coming in faster, and those expectations are higher than ever.
0:51:35 Getting security and compliance right can unlock growth, or stall it if you wait too long.
0:51:46 Vanta is a trust management platform that helps businesses automate security and compliance across more than 35 frameworks like SOC 2, ISO 27001, HIPAA, and more.
0:51:58 With deep integrations and automated workflows built for fast-moving teams, Vanta gets you audit-ready fast and keeps you secure with continuous monitoring as your models, infrastructure, and customers evolve.
0:52:06 That’s why fast-growing startups like Langchain, Rider, and Cursor have all trusted Vanta to build a scalable compliance foundation from the start.
0:52:17 Go to Vanta.com slash Vox to save $1,000 today through the Vanta for Startups program and join over 10,000 ambitious companies already scaling with Vanta.
0:52:25 That’s V-A-N-T-A dot com slash Vox to save $1,000 for a limited time.
0:52:31 Support for the show comes from Hims.
0:52:34 Hair loss isn’t just about hair, it’s about how you feel when you look in the mirror.
0:52:39 Hims helps you take back that confidence with access to simple, personalized care that fits your life.
0:52:47 Hims offers convenient access to a range of prescription hair loss treatments with ingredients that work, including chews, oral medications, serums, and sprays.
0:52:50 You shouldn’t have to go out of your way to feel like yourself.
0:52:57 It’s why Hims brings expert care straight to you with 100% online access to personalized treatment plans that put your goals first.
0:53:01 No hidden fees, no surprise costs, just real, personalized care on your schedule.
0:53:09 For simple online access to personalized and affordable care for hair loss, ED, weight loss, and more, visit Hims.com slash PropG.
0:53:12 That’s Hims.com slash PropG for your free online visit.
0:53:14 Hims.com slash PropG.
0:53:16 Individual results may vary.
0:53:27 Based on studies of topical and oral minoxidil and finasteride. Featured products include compounded drug products, which the FDA does not approve or verify for safety, effectiveness, or quality.
0:53:28 Prescription required.
0:53:32 See website for details, restrictions, and important safety information.
0:53:42 We’re back with first-time founders.
0:53:45 Who is like AI Jesus right now?
0:53:46 Is it Jensen?
0:53:48 Is it Sam Altman?
0:53:50 Is it Mark Zuckerberg?
0:53:53 Like in San Francisco, who’s the guy?
0:53:55 Who do people revere?
0:53:56 I mean, it’s got to be Jensen.
0:54:00 Like Sam, you know, a lot of wins, some losses.
0:54:02 Zuck, a lot of wins, a lot of losses.
0:54:05 Jensen, that guy just grinded for 30 years.
0:54:09 I remember when I built a computer at home to play like video games on a PC.
0:54:11 I bought an NVIDIA chip.
0:54:14 And in my mind, it was like NVIDIA, you know, they’re the video game graphic card company.
0:54:18 Now they’re the most valuable company in human history with no signs of stopping.
0:54:21 And he just grinded it out for 30 years.
0:54:24 Like it is the most respectable thing.
0:54:25 He’s also the nicest dude.
0:54:27 And he has no, like he doesn’t have enemies.
0:54:28 He’s so.
0:54:29 You’ve met with him, right?
0:54:29 I have.
0:54:31 He’s extremely generous with his time.
0:54:34 He also, this guy, like, knows every little detail about Factory.
0:54:38 Like, I don’t know how he has the time to do these things, but he’s, he is a killer.
0:54:39 He’s, he’s really good.
0:54:45 When you think about sort of the long-term future of AI, and there was, you know, for many years,
0:54:49 it was, AGI is coming, and think about all the things it can do.
0:54:52 Um, think about how it could solve diseases.
0:54:54 Think about how it could cure cancer.
0:55:06 Um, and then I see, like, erotica GPT, and I see the Sora AI TikTok feed.
0:55:10 I’m sort of like, what happened to the, to the big vision?
0:55:16 We’re back to sort of Pornhub meets TikTok, but it’s got AI.
0:55:21 How do we expand the vision of AI?
0:55:24 What is the, what is the grand vision for AI?
0:55:26 And do you think it’s going to really come true?
0:55:31 Well, so I think, like, you know, the pure slop that is these AI feeds, like Sora
0:55:34 or the one that Meta announced,
0:55:40 I think on one hand, in a certain weird sense, it is beautiful
0:55:43 in that it is just like pure human nature.
0:55:46 Like, what do we do when we have really good technology?
0:55:47 Like, let’s make porn.
0:55:49 Like that’s the first thought.
0:55:53 And in a certain sense, it’s like, okay, I’m glad that even though we’re generating all this
0:55:55 technology, we’re still humans at our core.
0:55:58 We overestimated ourselves when we thought we were going to cure cancer.
0:56:02 But then on the other hand, there are still people who are doing really great work.
0:56:07 Like one of my friends, Patrick Su, who runs ARC Institute, they’re doing AI for, uh, biotech
0:56:08 research and biology.
0:56:10 And I think they’re doing a lot of really cool work.
0:56:14 And maybe this actually relates to something we were talking about earlier, which is, you know,
0:56:18 people kind of at a first glance might have a little bit of an existential crisis of,
0:56:20 you know, intelligence is now commoditized.
0:56:25 So there’s now, like some people are saying, you know, we both live in a world where if we
0:56:30 have children at some point, our children will never be smarter than AI, right?
0:56:34 Like we both grew up in a world where we are smarter than computers for at least a period of
0:56:34 time.
0:56:40 And our kids would never know that world, which is a little bit crazy because, you know, a
0:56:43 huge part of growing up is going to college, becoming really smart in some certain area.
0:56:48 Um, and so when I think now we’re having a little bit of a decoupling of human value
0:56:53 being attributed to intelligence, but then there’s a natural question of like, okay, well,
0:56:57 you know, we were sold this vision about, you know, let’s say even the American dream of
0:57:01 like, if you work really hard, get really good at this one thing, then you’ll have a better
0:57:01 life.
0:57:04 But now it’s like, you’re never going to beat the intelligence of this computer.
0:57:06 So what is the thing to strive for?
0:57:11 And I think this actually relates to like the AI porn versus the AI curing cancer, which
0:57:17 is in my mind, the new primitive or the new, uh, maybe like North star for humans is agency
0:57:20 and which humans have the will to act on it.
0:57:25 Like, yes, you can like hit the hedonism and just watch AI porn and play video games all day.
0:57:29 But who has the agency to say, no, I’m going to work on this hard problem that doesn’t give me as
0:57:33 much dopamine, but like, because of the will and agency that I have, I’m choosing to work on this
0:57:34 instead.
0:57:40 And I think that might be the new valuable thing that if you have that in large quantities, that maybe
0:57:42 that’s kind of what brings you more meaning.
0:57:47 Why do you go to agency versus many other things?
0:57:54 For example, you know, uh, you mentioned your, your friend who’s working on issues in biotech.
0:58:02 Maybe that is a question of like having the right values or, or, um, I mean, not to get like mushy,
0:58:06 but maybe a value would be kindness or a value would be creativity.
0:58:09 There are lots of things out there that you could pick and choose from.
0:58:11 Why is it agency in your mind?
0:58:16 I guess the way that I think about it, it’s like the agency to go against maybe like the easiest
0:58:21 path for dopamine or like the, like the natural, like human nature, like just give me like the
0:58:26 good tasting food, the porn, the video games, the like, you know, easy, fun stuff.
0:58:30 Um, and I think maybe part of agency has to do with values.
0:58:35 Like if you value creativity and if you value kindness and, you know, I think that is something
0:58:40 that might motivate more agency, but agency is basically, at least the way I think about
0:58:45 it, it’s just like the will to endure something that is more difficult for maybe a longer term
0:58:50 reward, whether that’s the satisfaction of, you know, bringing this, you know, better healthcare
0:58:53 to people or, or, uh, satisfying that curiosity.
0:59:01 It’s interesting because you say the word agency and you are building agents and there’s like a
0:59:02 parallel there.
0:59:09 And it, it’s almost as if the people who are really going to win are the people who can
0:59:16 have some level of command and directive agency over these AI agents.
0:59:24 It’s the person who isn’t just going to do what they’re told by the guy who controls the AI agent
0:59:27 and says, okay, create this code.
0:59:30 It’s the person who can actually tell the agents what to do.
0:59:35 And that’s the direction that you believe humanity and work should be headed.
0:59:36 A hundred percent.
0:59:39 And I think that’s also like, if you think back to like the people that you’ve met in
0:59:45 your life that come across as like particularly intelligent or like, you know, remarkable in
0:59:48 whatever capacity, oftentimes it’s not raw IQ horsepower.
0:59:53 Like you’ll note that when you meet someone with high IQ, it’s pretty easy to tell, but growing
0:59:57 up in the Bay Area, there are so many that are very high IQ, but aren’t that, aren’t that
0:59:59 like high agency or like independent minded.
1:00:04 And I think those are the people that oftentimes really leave a mark when you remember
1:00:07 them, like, oh, that person was, you know, maybe they weren’t even that high IQ, but they
1:00:09 were very, like, independent, high agency.
1:00:14 And I think that now is going to be much more important because, great, you know, you might
1:00:15 be born with, you know, high IQ.
1:00:19 Everyone has access to the AI models that have this intelligence.
1:00:21 So it’s not really a differentiator anymore.
1:00:25 The differentiator is, do you have the will to use those in a way that no one has thought
1:00:29 of before or in a way that’s difficult, but to get some longer term task done.
1:00:34 It’s really interesting because what you’re describing is like, how do you, what can I do that
1:00:35 AI cannot do?
1:00:38 And what you’re saying is AI cannot think for itself.
1:00:42 It cannot be an independent, creative minded creature.
1:00:45 It can be a math genius.
1:00:52 It can solve problems within seconds, but it can’t have the willpower to decide this is what
1:00:53 I want to do.
1:00:54 This is what is important to me.
1:00:57 This is what has value, which I think is definitely right.
1:01:00 We have to wrap up here.
1:01:09 I just want to note, I saw a tweet, I think from yesterday that you put out there and it
1:01:15 shows this competition of all the different coding agents.
1:01:23 So you’ve got Cursor and you’ve got Gemini and you’ve got OpenAI’s, um, coding agent.
1:01:24 You are number one.
1:01:25 That’s right.
1:01:27 In the agent performance.
1:01:27 That’s right.
1:01:29 What does that mean?
1:01:34 What does it mean to be number one and how are you going to take that moving forward?
1:01:39 This is a benchmark that basically does like head to heads of coding agents and they use
1:01:40 like an Elo rating system.
1:01:44 So it’s like chess where, um, you know, at a high level you could have in chess, let’s
1:01:51 say, uh, if you have a hundred losses against, you know, someone that’s equal skill to you,
1:01:55 but then you beat Magnus Carlsen, you can have an incredibly high chess rating.
1:02:00 So this is like an Elo rating system where it gives these agents two tasks and then it
1:02:03 just has humans go and vote which solution they liked better.
1:02:06 Like the one from, let’s say, Factory versus OpenAI or Anthropic.
1:02:08 Um, and we have the highest Elo rating.
1:02:11 So in these head to heads, um, we beat them, which is pretty exciting.
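The Elo mechanics Matan describes can be sketched in a few lines. This is a minimal illustration of the standard Elo update, not the benchmark's actual code; the constants (the 400 scale factor, K = 32) and function names are assumptions chosen for clarity:

```python
# Minimal sketch of an Elo-style head-to-head rating update: two agents
# each produce a solution, a human votes for one, and both ratings move
# based on how surprising the outcome was under the Elo model.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return the new (rating_a, rating_b) after one head-to-head vote."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    # A's gain is exactly B's loss, so the update is zero-sum.
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# An upset (a 1200-rated agent beating an 1800-rated one) moves ratings
# far more than a win over an equal opponent, which is the chess point
# made above: beating a much stronger player yields a big rating jump.
print(update_elo(1200, 1800, a_won=True))
```

This is why raw win counts matter less than who you beat: a hundred losses against equals barely move the expected-score term, while one upset against a far stronger opponent produces a large update.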
1:02:13 I think it’s exciting on a couple of fronts.
1:02:17 One we’ve raised, obviously very little money compared to a lot of the competitors that are
1:02:17 on that.
1:02:25 And I think that goes to show that, um, in a lot of these cases being too focused on the
1:02:29 fancy, like train the model, let’s do the RL, let’s do the fancy fine tuning and all
1:02:30 this stuff.
1:02:32 Sometimes it doesn’t get, give you the best like ground truth.
1:02:36 Like what is the best performing thing for an engineer’s given task?
1:02:38 Um, benchmarks are very flawed.
1:02:42 You know, they’re not fully comprehensive of everything that it can do, but I think it’s
1:02:47 helpful when developers have a lot of choices out there to try and say, okay, well, like
1:02:48 which one should I use?
1:02:52 This one is nice because it’s pretty empirical of, like, developers seeing two options and picking
1:02:55 one, and then consistently, um, our droids win, which is, which is pretty fun.
1:02:56 Final question.
1:02:59 What does the future of factory look like?
1:03:02 What do you, what do you think about when you look at the next 10 years?
1:03:04 10 years is very hard because AI is pretty crazy.
1:03:08 And I think humans are bad at reasoning around exponentials.
1:03:13 I would say in the next few years, um, bringing about that mission of, you know, that world of
1:03:18 developers being able to delegate very easily and just have a lot more leverage, um, developers
1:03:21 not needing to spend hours of their time on code reviews or documentation.
1:03:29 And I think more broadly that turns software developers into like more cultivators or orchestrators
1:03:35 and allows them to use what they have trained up for so many years, which is like their systems
1:03:35 thinking.
1:03:39 That’s what makes engineers so good is they’re really good at reasoning around systems, reasoning
1:03:43 around constraints from their customers, from the business, from the underlying technology
1:03:47 and synthesizing those together to come up with some optimal solution.
1:03:52 Um, and with factory, they get to use that to its fullest extent, much more frequently in
1:03:53 their day to day.
1:03:58 Um, and I think that is a net good for the world because that means there will be more software
1:04:03 and better software that is created, which means we can solve more problems and solve problems
1:04:06 that weren’t solved before, which I think on the net is just better for the world.
1:03:09 Matan Grinberg is the founder and CEO of Factory.
1:04:11 This was awesome.
1:04:12 Thank you.
1:04:12 Thank you, Ed.
1:04:21 This episode was produced by Alison Weiss and engineered by Benjamin Spencer.
1:04:25 Our research associates are Dan Chalon and Kristen O’Donoghue and our senior producer is Claire Miller.
1:04:28 Thank you for listening to First Time Founders from Prof. G Media.
1:04:31 We’ll see you next month with another founder story.
1:04:42 Support for this show comes from Delta.
1:04:48 When you unlock your full potential, you get to meet a version of yourself you might have
1:04:49 never met otherwise.
1:04:55 And as the official airline of the National Women’s Soccer League, Delta Airlines is there
1:04:57 to help connect you to your full potential.
1:05:03 to help you grow and recognize the power you have to change yourself, your team, and the
1:05:03 world around you.
1:05:09 Delta is dedicated to helping you get to where you need to be, from season kickoff to the championships.

Ed speaks with Matan Grinberg, co-founder and CEO of Factory, an AI company focused on bringing autonomy to software engineering. They discuss the long-term future of AI, the role of regulation, and whether or not he’s concerned about an AI bubble.

