AI transcript
0:00:06 – The other sort of scary thought
0:00:08 is the thought that maybe AI one day
0:00:10 will be able to break encryptions.
0:00:12 – We’ve got artificial super intelligence right now.
0:00:14 There’s seven billion agents on earth
0:00:16 that are focused on this problem
0:00:18 and like nobody has cracked those encryptions yet.
0:00:19 I think it’s much more likely
0:00:22 that there’s like some physical hardware innovation
0:00:25 that allows supercomputers to really take off
0:00:26 and break some of those things.
0:00:28 (upbeat music)
0:00:30 – Hey, welcome to the Next Wave podcast.
0:00:31 My name is Matt Wolf.
0:00:34 I’m here with my co-host, Nathan Lanz
0:00:37 and we are your chief AI officers.
0:00:39 Our goal with this podcast is to keep you looped in
0:00:42 on the latest news and everything you need to know
0:00:43 in the AI world.
0:00:46 And today we have an amazing guest for you.
0:00:48 Today we’re talking to Logan Kilpatrick.
0:00:51 Now, Logan used to be the head of developer relations
0:00:53 over at OpenAI.
0:00:56 Now he’s the lead of product over at Google’s AI studio.
0:00:58 We actually recorded this episode a few weeks ago
0:01:00 and we happened to catch him in this in between phase
0:01:02 where he’s not at OpenAI.
0:01:03 He’s not at Google yet.
0:01:07 And we got some really, really cool insights from him.
0:01:08 Now, what he says here
0:01:11 isn’t the opinion of Google or OpenAI,
0:01:12 ’cause he wasn’t with either company
0:01:13 at the time of recording.
0:01:15 But by the time you hear this,
0:01:18 he’s now working for Google in this new role.
0:01:21 (upbeat music)
0:01:23 – When all your marketing team does is put out fires,
0:01:24 they burn out.
0:01:27 But with HubSpot, they can achieve their best results
0:01:29 without the stress.
0:01:31 Tap into HubSpot’s collection of AI tools,
0:01:34 Breeze, to pinpoint leads, capture attention,
0:01:37 and access all your data in one place.
0:01:39 Keep your marketers cool
0:01:41 and your campaign results hotter than ever.
0:01:44 Visit hubspot.com/marketers to learn more.
0:01:47 (upbeat music)
0:01:48 – And I think a lot of people,
0:01:50 they only know like Sam Altman.
0:01:52 They don’t know that Logan on Twitter
0:01:54 has been like the main voice of OpenAI.
0:01:56 He’s been talking to all the AI influencers,
0:01:57 all the engineers.
0:01:59 Like anytime any big thing would come out,
0:02:01 he was the human voice
0:02:04 that you would hear on social media from OpenAI.
0:02:06 And now he’s left.
0:02:08 OpenAI seems to be doing great,
0:02:09 but there’s been a lot of drama,
0:02:10 and he’s left to join Google.
0:02:12 So it does seem like a huge win for them,
0:02:15 because if he can be
0:02:17 like a human voice for Google
0:02:19 and explain to people how you can use their products,
0:02:20 I think it’s such a big win for them.
0:02:22 – Yeah, there’s a lot of interesting things happening
0:02:25 right now, both at Google and at OpenAI.
0:02:29 In fact, Google is about to do their annual Google I/O event.
0:02:30 And last year that’s where they made
0:02:33 a whole bunch of huge AI announcements.
0:02:36 We’re expecting really big AI announcements again this year.
0:02:38 I’m gonna be there with Logan.
0:02:40 So I will probably be doing a little bit of reporting
0:02:43 from this Google I/O event and keeping you looped in.
0:02:45 We’ll likely even do a follow-up episode
0:02:47 about what happened at Google I/O
0:02:48 and all of those announcements.
0:02:51 There’s also some rumors flying around right now
0:02:55 that OpenAI is about to launch an AI-powered search engine
0:02:56 to go head to head with Google.
0:03:00 We don’t really totally know all the details about that,
0:03:01 but it’s unfolding right now.
0:03:03 It’s supposed to be happening the week
0:03:05 that this episode is dropping.
0:03:07 So if there’s any big news around that,
0:03:09 we will be telling you all about that
0:03:10 in an upcoming episode as well.
0:03:12 But today in this episode,
0:03:15 we had a really fascinating conversation with Logan.
0:03:18 He had some really good insights about OpenAI,
0:03:21 the culture at OpenAI, why he decided to leave,
0:03:22 why he chose to go to Google.
0:03:26 And of course, he has some amazing insights for you
0:03:27 if you have a business
0:03:29 or you want to use this stuff in your personal life.
0:03:31 He has some great tips there
0:03:33 as far as how you can actually integrate what he teaches
0:03:36 and what we’re talking about on today’s episode.
0:03:37 It’s an amazing episode.
0:03:38 You’re really going to enjoy it.
0:03:41 So let’s go ahead and jump on in with Logan Kilpatrick.
0:03:44 Thanks so much for joining us on the show today.
0:03:45 – Yeah, likewise.
0:03:46 I’m super, I’m super excited to be here.
0:03:49 – So I’m curious how when it comes to OpenAI,
0:03:51 I promise the whole conversation won’t be about OpenAI,
0:03:54 but I’m curious as head of developer relations
0:03:55 in the early days, you were probably working
0:03:57 with Jasper, Copy.ai, those kinds of companies.
0:04:00 – Yeah, and I think the most interesting thing
0:04:01 to me about being at OpenAI
0:04:05 was just the breadth of the work in the early days
0:04:08 because of how quickly everything was moving
0:04:10 and because of how everything was always on fire
0:04:13 all the time, you could really just jump in.
0:04:14 And if you were someone who loved fighting fires,
0:04:18 which I loved, you could jump in and get your hands dirty
0:04:21 like pretty much anywhere in the company.
0:04:23 And that was something that I appreciated so much.
0:04:26 And I think like the natural tendency as a company grows,
0:04:27 as things become more formalized,
0:04:30 every additional hundred people who joined the company
0:04:32 was just like less and less of an opportunity to do that.
0:04:35 And yeah, it was interesting to see like
0:04:37 when I joined OpenAI, it really, really felt
0:04:38 like a small startup.
0:04:40 And I think when I left OpenAI,
0:04:44 I didn’t feel like a small startup anymore by any means.
0:04:46 – Yeah, no, it’s crazy to think that, you know,
0:04:48 a million active users as a small startup.
0:04:52 But yeah, I mean, that was kind of the vibe
0:04:54 in the early days, it seems like that’s really cool.
0:04:55 – Well, I also think at the time,
0:04:57 ChatGPT was just like a demo.
0:04:59 Like I don’t really think like people had,
0:05:01 were like just starting to,
0:05:02 but like at that time, the million,
0:05:05 I don’t think that was like DAUs, if I remember correctly,
0:05:07 it was just like a million people had tried ChatGPT
0:05:08 or something like that.
0:05:11 So I’m guessing like the attrition rate was super,
0:05:14 super high and most people weren’t actually converted
0:05:16 like weekly users or something like that.
0:05:19 – Yeah, so basically the growth of ChatGPT
0:05:22 when it first came out wasn’t really expected, right?
0:05:26 OpenAI didn’t anticipate that it would have that,
0:05:29 you know, quick of an onboarding of so many people.
0:05:32 – So at the time, the reason that ChatGPT was created,
0:05:34 and there’s a really great podcast interview
0:05:37 with my former manager, Frazier,
0:05:40 who led product for both ChatGPT and the API.
0:05:42 And he talked about how, and again,
0:05:44 this was before I joined OpenAI,
0:05:48 but GPT-4 actually finished training in the summer of 2022.
0:05:50 So the team knew like what was coming
0:05:53 and really the early explorations
0:05:55 that Frazier and the team were doing
0:05:57 and a bunch of other folks was thinking about
0:05:59 what is the right form for this technology
0:06:02 to actually be useful to end users.
0:06:04 So this whole narrative that like the team
0:06:06 just kind of threw together something random
0:06:08 and then like published it and then it all went,
0:06:10 you know, perfectly well is actually not true.
0:06:12 Like, and folks should go
0:06:14 and listen to Frazier talk about this,
0:06:16 but it was a very intentional process
0:06:19 of like having a whole team of people
0:06:21 who were like constantly iterating for multiple,
0:06:23 like on the order of multiple months
0:06:25 to like ultimately come to the form factor
0:06:27 that was what ChatGPT was,
0:06:29 which ended up being what we released to the world.
0:06:32 So there was more nuance to the story,
0:06:33 but I think people like to hear the story of like,
0:06:35 oh, it was just totally random.
0:06:36 And we threw this together.
0:06:37 – Yeah, I was gonna ask you like,
0:06:39 where did that narrative come from?
0:06:40 That’s why I’d heard as well.
0:06:41 It’s like, oh, they just do it out there
0:06:42 and like, oh, it blew up
0:06:44 and they didn’t expect it was gonna happen.
0:06:46 I was like, that doesn’t sound right to me, but you know.
0:06:49 – I don’t think they people had the perspective
0:06:50 on how quickly it would grow,
0:06:54 but really the intent was to see whether or not
0:06:56 this is something that would resonate with consumers.
0:06:58 Like it was intended as a product release
0:07:00 in a certain sense and like intended to see
0:07:02 whether or not this like basic chat interface
0:07:04 would be something that’s useful to people
0:07:06 that we could ultimately use when GPT-4 came out
0:07:09 to sort of be the thing to, you know,
0:07:11 be the catalyst for people using that product.
0:07:14 I think that’s like, I don’t know whether someone
0:07:16 at OpenAI started perpetuating that narrative
0:07:18 or whether it’s just like a media narrative
0:07:19 that took off, I’m not sure.
0:07:21 – I don’t know if it was like 10 years ago or so,
0:07:22 but there was like in Silicon Valley,
0:07:25 a whole wave of like chatbots that were released,
0:07:27 which obviously were way more primitive
0:07:28 than what’s out now.
0:07:29 But everyone was all convinced,
0:07:31 oh, this is gonna be the next thing is these chatbots
0:07:32 and it never worked.
0:07:34 And so when ChatGPT came out,
0:07:35 I think there was a big question like,
0:07:36 will people actually use this?
0:07:38 You know, ’cause people in Silicon Valley
0:07:40 thought they had already seen that before
0:07:41 and then ChatGPT came out
0:07:42 and like people were just blown away.
0:07:44 And I was as well.
0:07:45 – Yeah, same here.
0:07:46 I was an early Jasper customer.
0:07:49 So I had sort of experimented with
0:07:52 and seen the power of this technology early
0:07:53 from using Jasper.
0:07:56 And honestly, like GPT-4 is really what started
0:07:57 to make it much more useful.
0:08:00 Like I still think the highest leverage,
0:08:03 like the largest marginal value you can get from ChatGPT
0:08:05 is if you’re an engineer and you use it for coding.
0:08:07 Like most of the other things are like useful and nice,
0:08:10 but it’s like from a raw economic output perspective,
0:08:12 coding is the most useful thing
0:08:14 to use this technology for today.
0:08:16 – Yeah, I told Matt previously
0:08:18 when I finally like started using it in code,
0:08:20 I’m like, oh my God, this is such a game changer.
0:08:22 It’s like it helped me like change some code
0:08:24 from like JavaScript to C++
0:08:25 and some other things I was playing around with.
0:08:27 I was like, this is nuts that it can just do that.
0:08:31 – It’s crazy to me how that there’s still engineers
0:08:33 out there who haven’t made that jump
0:08:34 and like haven’t had that aha moment.
0:08:36 I’m like that literally makes no sense.
0:08:37 Like, you know,
0:08:40 I’m sure there’s folks at all companies.
0:08:42 So this is not like a representative data point,
0:08:43 but like even folks I worked with at OpenAI,
0:08:46 some of them like weren’t using AI every day
0:08:47 as a software engineer.
0:08:48 And it was always crazy to see that,
0:08:50 especially being so close to the technology.
0:08:52 – Yeah, well, I mean, the two top like coding,
0:08:55 you know, influencers like Jonathan Blow and the Primeagen,
0:08:58 they’re both like saying that like AI is like,
0:08:59 I’m not sure if they’re calling it a fad,
0:09:01 but they’re like, oh, it produces shit code
0:09:02 and like all this kind of stuff.
0:09:03 And it’s like, well, you know,
0:09:05 – It’s better at coding than I am.
0:09:05 – Yeah, exactly.
0:09:07 It’s better than coding than most people are.
0:09:09 It would bring the average level of code up, you know,
0:09:12 not everyone’s like the most senior developer
0:09:14 who’s been doing it for like 30 years.
0:09:16 And obviously this stuff’s gonna just keep getting better
0:09:17 and better.
0:09:19 – So I’m curious, one last OpenAI question,
0:09:21 and you don’t have to answer it if you don’t want,
0:09:23 but was there any sort of catalyst
0:09:25 that led you to leave OpenAI?
0:09:27 I mean, from the outward perspective on Twitter,
0:09:29 it looked like everything was cool, amicable.
0:09:30 You just kind of wanted to move on to something else,
0:09:33 but I’m curious if there was any story there.
0:09:36 – I think broadly, like my broad perspective is like,
0:09:38 the company just changed so dramatically
0:09:42 from what it was when I joined.
0:09:43 Like I had worked at Apple
0:09:45 and had intentionally joined a small company
0:09:49 because I wanted a lot of the things that come with working
0:09:51 at a small company, like being able to move really quickly,
0:09:54 being able to have high agency to go and solve problems,
0:09:57 having the green fields, all those things.
0:10:01 And I think just like very naturally over time OpenAI
0:10:03 became like less of those things for me personally.
0:10:05 And I think it was also like, you know,
0:10:06 I don’t remember if this was on camera
0:10:08 before we started chatting on camera,
0:10:10 but the comment about, you know,
0:10:14 being a human voice at OpenAI,
0:10:16 like it was a really challenging position for me to be in.
0:10:18 And I think like if you look around,
0:10:19 like there was not a lot of other people
0:10:21 who were doing that type of work.
0:10:24 And I think that just had its own whole host of challenges.
0:10:28 And I think just overall,
0:10:30 as I started to have more conversations with people,
0:10:32 just became incredibly excited about like
0:10:35 where everybody else in the ecosystem was.
0:10:38 And yeah, I think there’s so much interesting stuff happening.
0:10:40 And I think like OpenAI for a long time
0:10:43 has dominated the narrative of being the benefactor
0:10:44 of this technology.
0:10:46 And also the people who are like giving
0:10:47 the most value to the world.
0:10:49 And I think there’s gonna be more companies
0:10:52 that are gonna be able to like successfully do that
0:10:53 in the next six to 12 months,
0:10:55 which I find just as a consumer
0:10:57 and as somebody who like loves the technology,
0:10:59 I think that’s such so incredible.
0:11:04 And I’m excited to hopefully get to help that
0:11:05 from a different side of things.
0:11:09 – I feel like OpenAI is really gonna miss having you there.
0:11:11 ‘Cause like there is no one else right now who does that.
0:11:14 Like, I mean, like before on Twitter,
0:11:15 you were the only one I would look to
0:11:16 like when things would change.
0:11:17 So like, I’m curious.
0:11:18 So like, so you said that you left
0:11:20 because of it no longer being a startup
0:11:23 and you wanting there to be other competitors out there
0:11:24 who can compete with OpenAI.
0:11:26 But like, why Google?
0:11:29 Why not like doing an open source AI project
0:11:30 or something like that?
0:11:32 – I feel like there’s still such an opportunity
0:11:33 in the large language model space.
0:11:36 Like as I was exploring, it was like, you know,
0:11:38 could go to an application layer company.
0:11:40 There’s a bunch of incredible companies
0:11:41 doing interesting stuff at that layer of things,
0:11:44 but it still feels like there’s a lot of opportunity.
0:11:48 I also think, like, you know, to be candid about Google,
0:11:50 they’ve had a challenging narrative
0:11:53 as far as like how developers feel about the platform,
0:11:54 like what they’ve been doing with AI.
0:11:58 Like there’s just this incredible moment
0:12:01 and opportunity at Google for someone who loves
0:12:03 building products for developers to really come in
0:12:05 and help support that ecosystem.
0:12:08 There’s also so many smart people at Google,
0:12:11 like they have such an incredible roadmap.
0:12:13 And again, I don’t know all the details
0:12:16 ’cause I haven’t actually, at the time of this recording,
0:12:17 haven’t actually started that role.
0:12:21 But I think they, at least from what I’ve seen externally,
0:12:22 I think they’re pushing in the right direction.
0:12:24 Like the one million context window,
0:12:27 like those models being natively multimodal,
0:12:29 like all that stuff gives me a lot of confidence
0:12:32 in, hopefully, what the roadmap looks like for other things.
0:12:35 And it’s also just like such a core piece of their business.
0:12:36 Like there’s a lot of people and like,
0:12:39 I think I love what the Meta folks are doing
0:12:41 and I love that they’re putting out the Llama models.
0:12:44 But in many ways, at least from like my outside perspective,
0:12:47 it’s not clear that it’s like the core driving force
0:12:48 for their business.
0:12:50 And I feel like in the case of Google and others,
0:12:52 like it is a core driving force for their business
0:12:53 at least now.
0:12:54 And I think for Meta, it’ll probably evolve to be that
0:12:57 over time as they build AI into their products and services.
0:13:00 But today, it’s like, yeah, like do I need a chat bot
0:13:02 for Instagram to be a viable product for me?
0:13:03 Like not really.
0:13:05 Like they can essentially keep that product
0:13:06 and keep going without AI.
0:13:08 And I think Google is in a very different position
0:13:10 with respect to search and some of their other platforms.
0:13:13 – Yeah, let’s talk about open source versus closed source
0:13:14 for a minute because, you know,
0:13:17 there’s this big sort of storyline unfolding, right?
0:13:20 You’ve got Elon Musk versus Sam Altman,
0:13:23 the sort of public battle of open source versus closed source
0:13:24 going on.
0:13:27 But I’m curious to hear from your perspective,
0:13:29 where do you see the value of open source?
0:13:31 I know you’ve been sort of an outspoken
0:13:34 proponent of open source, you know, on your Twitter account.
0:13:36 Where do you see that value of open source?
0:13:38 What excites you about open source?
0:13:41 – One, I think it’s like fundamentally at the end of the day
0:13:43 and people like to consider it in a different perspective.
0:13:45 But from my perspective,
0:13:47 it’s fundamentally like a business decision.
0:13:51 Like do the pros of open sourcing models
0:13:53 and all the, you know, potential infrastructure
0:13:57 around those models end up outweighing the cost of doing that.
0:13:58 And there are very real costs.
0:14:01 Like I don’t think people, like everyone just assumes
0:14:03 that open source is like, you know,
0:14:05 a positive in every direction.
0:14:06 And it’s like certainly not.
0:14:08 If you’ve ever been an open source maintainer,
0:14:11 like it is not fun in many cases
0:14:13 to maintain open source projects.
0:14:15 Like there’s just people are,
0:14:17 you’re essentially giving something away for free
0:14:20 and everyone is asking you to do more things for free.
0:14:22 And you’re not getting any of the value
0:14:23 that’s being accrued.
0:14:27 And I think this is like the really difficult tension
0:14:30 for companies that are making this decision.
0:14:32 And I think like I love the folks at Mistral
0:14:34 and I think they’re doing really important work,
0:14:36 but it’s a challenging position for them as an example
0:14:39 where, you know, they open source a model
0:14:41 and then, you know, you can now do inference on that model
0:14:44 on any of the many, many different platforms
0:14:45 that offer inference.
0:14:47 And, you know, that led them to,
0:14:48 with their most recent model,
0:14:50 not making it fully open source yet.
0:14:54 I think there’s like some nuance about how open that,
0:14:56 the latest model that they did is.
0:14:59 And I think like more companies are going to struggle with this
0:15:00 because I do think at the end of the day,
0:15:02 especially for the model layer,
0:15:06 it’s hard to have a business that does this.
0:15:09 And I actually think this is why companies like Meta,
0:15:10 it makes a ton of sense for them
0:15:14 because like they are not a developer platform company.
0:15:17 Them taking the model and putting it out to the world
0:15:18 actually doesn’t really matter.
0:15:20 Like it’s not negative for their business
0:15:23 because their business is serving ads
0:15:25 and selling products to the four billion people
0:15:26 who use their platform.
0:15:30 And like that’s a very privileged position for them to be in.
0:15:31 And I think there’s a lot of startups
0:15:33 who don’t have that distribution
0:15:35 and are still trying to do open source models.
0:15:39 And you just run into these like very real realities
0:15:41 of running a business that make it really hard
0:15:42 to open source those models.
0:15:45 I think there’s also like the whole philosophical debate
0:15:48 about whether the technology should be open source
0:15:50 because it’s super powerful.
0:15:53 I think that like that’s a very fair argument.
0:15:55 I think there’s also a bunch of very fair arguments
0:15:57 on the safety side around not open sourcing
0:15:58 some of these models.
0:16:00 (upbeat music)
0:16:01 – We’ll be right back.
0:16:04 But first I wanna tell you about another great podcast
0:16:05 you’re gonna wanna listen to.
0:16:07 It’s called Science of Scaling,
0:16:08 hosted by Mark Roberge.
0:16:11 And it’s brought to you by the HubSpot Podcast Network,
0:16:14 the audio destination for business professionals.
0:16:16 Each week host Mark Roberge,
0:16:19 founding chief revenue officer at HubSpot,
0:16:21 senior lecturer at Harvard Business School
0:16:23 and co-founder of Stage Two Capital,
0:16:26 sits down with the most successful sales leaders in tech
0:16:29 to learn the secrets, strategies, and tactics
0:16:31 to scaling your company’s growth.
0:16:34 He recently did a great episode called How Do You Solve
0:16:37 for Siloed Marketing and Sales?
0:16:39 And I personally learned a lot from it.
0:16:41 You’re gonna wanna check out the podcast,
0:16:42 listen to Science of Scaling
0:16:44 wherever you get your podcasts.
0:16:47 (upbeat music)
0:16:49 – I do wonder how regulation is gonna hit open source
0:16:51 ’cause it does feel like everything is going
0:16:53 to more and more regulation.
0:16:56 Europe’s heavily starting to regulate AI.
0:16:58 In America, there’s more and more people wanting
0:16:59 to regulate AI.
0:17:01 I’m kind of more on the EAC side of things
0:17:03 of we need to go as fast as possible.
0:17:05 But I do understand the concerns.
0:17:06 And so I do wonder.
0:17:09 It feels like, okay, Elon Musk is open sourcing all his AI,
0:17:11 but at some point when that gets very powerful,
0:17:12 I have a feeling the government’s gonna be like,
0:17:16 yeah, we can’t just have anyone having that.
0:17:19 So I’m curious how that stuff’s gonna play out long term
0:17:20 ’cause I think open source is very important
0:17:23 ’cause I love OpenAI, I love Google.
0:17:27 I don’t want OpenAI and Google being the future of AI,
0:17:29 which is the future of humanity, basically.
0:17:31 – Well, it’s funny ’cause I think Logan posted something
0:17:32 on Twitter the other day about,
0:17:34 can somebody bring me a hard drive
0:17:36 with the weights for Grok on it?
0:17:39 ‘Cause I can’t download 368 gigabytes.
0:17:41 So I mean, the Grok open source, I think,
0:17:43 still has some roadblocks for general consumers
0:17:45 to just start using on their own computer.
0:17:46 – Yeah, yeah, yeah, yeah.
0:17:47 – This is part of the nuance
0:17:49 that I think a lot of people miss is like,
0:17:53 there’s this massive spectrum of what it means to be open.
0:17:55 And I think to the OpenAI folks’ credit,
0:17:58 and I forget which blog post
0:18:00 or where they talked about this,
0:18:04 but I do think having your technology cheaply available
0:18:07 through an API and making that broadly accessible
0:18:09 to the world for developers to go
0:18:10 and build products and platforms,
0:18:13 to me that is certainly an element of openness.
0:18:16 And I think putting the weights of a model available
0:18:18 to the world is also an element of openness.
0:18:20 But I think just because you make your weights available
0:18:22 does not mean that you’re actually running
0:18:23 an open source project.
0:18:28 And at NumFOCUS, when we evaluate open source projects
0:18:30 to see whether or not they can be a part of NumFOCUS,
0:18:32 there’s this huge list of things
0:18:34 that you have to go through from like,
0:18:36 who are the people who are making the decisions
0:18:38 about where the money is spent?
0:18:41 What are the governance policies, the code of conduct?
0:18:44 All that stuff, which no one is looking at.
0:18:46 They’re like, oh, just because the model is open,
0:18:48 it doesn’t matter that it’s a for-profit corporation
0:18:50 or someone who’s got a lot of money,
0:18:52 who’s driving and making all the decisions.
0:18:54 And I do think it’ll be interesting to see
0:18:58 how the narrative evolves over time.
0:18:59 I actually think in many ways,
0:19:02 open source models are much less open
0:19:04 than traditional software is.
0:19:07 If you look at any of the popular open source projects,
0:19:09 many of them have distributed governance.
0:19:12 It’s clear how they make decisions, all those things.
0:19:14 And that’s very much not the case
0:19:15 for some of these open models,
0:19:17 which is super interesting.
0:19:18 – You mentioned Meta earlier.
0:19:20 And it’s something that I guess
0:19:21 I haven’t been able to wrap my head around
0:19:26 is why a company like Meta would open source their models.
0:19:29 From what I understand, to train one of these models,
0:19:31 it could cost millions of dollars to train
0:19:35 with all the compute power required to train the models.
0:19:37 What do you think the motives for Meta
0:19:39 to release models like this publicly?
0:19:40 Why would they do that?
0:19:44 – They are not selling a developer product
0:19:46 and they don’t have a developer platform.
0:19:48 They have, I think there is some Facebook app platform
0:19:49 or something like that.
0:19:52 But I think that’s like, I’ll slightly aside.
0:19:54 They don’t, they’re not like a cloud provider.
0:19:56 They’re not selling the model to end users
0:19:58 for them to be able to, or developers for them
0:20:00 to be able to build that into their technology.
0:20:03 So really it’s like, open sourcing the model.
0:20:06 The only way that that hurts them potentially
0:20:08 is somebody takes that model
0:20:11 and then goes and makes an Instagram competitor
0:20:13 or a Facebook competitor or a WhatsApp competitor.
0:20:15 And I think if you look at their business,
0:20:17 like they have deeply, they’re deeply entrenched
0:20:19 in the ecosystem, they have distribution,
0:20:21 they have all the money, they have all the moat.
0:20:23 So like, they actually don’t really need to worry
0:20:23 about that as much.
0:20:25 And it’s more of an existential risk for them
0:20:30 if they were to not take AI and infuse it into their platforms.
0:20:31 ’Cause then all of a sudden somebody makes
0:20:33 WhatsApp with AI and Instagram with AI
0:20:36 and Facebook with AI and then, you know, they get disrupted.
0:20:38 So it’s much easier for them to justify that cost
0:20:41 of just like essentially business as usual.
0:20:44 This is the next technology frontier, take AI,
0:20:47 put it into those platforms and then now, you know,
0:20:49 potentially even, you know, they can sell services
0:20:51 to people with AI on Facebook and Instagram
0:20:52 and WhatsApp and things like that.
0:20:54 – Yeah, it almost feels like there’s an element
0:20:57 of this like meta redemption arc happening, right?
0:20:59 Where a lot of people were soured by meta
0:21:01 with the Cambridge Analytica and the data leaks
0:21:02 and all that kind of stuff.
0:21:06 And now meta is kind of saying, no, we’re good guys here.
0:21:08 We’re open sourcing our stuff.
0:21:09 – I’m sure that’s part of it.
0:21:11 Like, and I think that makes sense for them.
0:21:14 Like there’s certainly like, they’re definitely winning
0:21:15 people over by open sourcing models.
0:21:18 I think it’s like, it’s been a viable strategy.
0:21:20 – And they’ve been doing that for a while, right?
0:21:22 Like that’s been their playbook for recruiting talent
0:21:24 with like GraphQL, React.
0:21:25 – PyTorch.
0:21:26 – Yeah, yeah.
0:21:27 And that all kind of started like when like the image
0:21:29 of Facebook in Silicon Valley was kind of going down
0:21:31 at least in like the tech press.
0:21:32 They were like, everyone was hating on Facebook.
0:21:34 And then they’re like, oh, we’re open sourcing all this stuff
0:21:36 and all these developers like, oh, we love Facebook.
0:21:38 And so it kind of created this like kind of divide there
0:21:40 where all of a sudden a lot of developers were loving them.
0:21:43 And I think this is a continuation of that playbook.
0:21:47 – Let’s talk about the potential risks of open source, right?
0:21:51 So there’s this kind of scary thought around open source
0:21:54 that somebody in their basement playing with open source tools
0:21:58 could cause massive destruction and things like that.
0:22:01 I guess let’s speak about and talk about some of the risks
0:22:04 and maybe some of the counteractions that can be taken
0:22:07 to mitigate some of these risks.
0:22:08 There was that story, I don’t know, a year ago
0:22:13 about chaos GPT that tried to take over the world
0:22:16 by tweeting to like seven people or whatever.
0:22:19 But I mean, I think that’s a real fear of a lot of people
0:22:20 is that somebody in their basement
0:22:22 could create some sort of AI agent
0:22:24 that creates real chaos, real destruction.
0:22:27 – Yeah, it’s such a tough position.
0:22:30 And this is why it’s always been tricky for me to like,
0:22:34 in many ways, like I align with open AI’s principles
0:22:36 about what the risks are with open sourcing models.
0:22:39 I think there’s also a way to potentially do more in the open
0:22:40 even with some of those risks.
0:22:43 I think very fundamentally,
0:22:47 I have yet to see any proof or any like scientific evidence.
0:22:49 Essentially, there’s all of this stuff that happens
0:22:51 during the model training process
0:22:54 to make it so that the models outputs
0:22:57 are not like a net bad thing for humanity.
0:22:59 And I think there’s, you know, wide bounds of this
0:23:03 and people can disagree about how much of that you should do.
0:23:06 But that is the intent of how many companies
0:23:08 train these models today.
0:23:10 The challenging part is that once you take the weights
0:23:12 of the model and you make them accessible,
0:23:14 it is easy to essentially fine tune away
0:23:17 a lot of the safeguards that have been put in.
0:23:19 And I think today that’s like less of a problem
0:23:21 ’cause you can just, you know, get the model
0:23:22 to say a bunch of bad things.
0:23:25 And I think like all the really, really bad capabilities
0:23:27 are potentially like less of a risk.
0:23:30 I think the real challenge is like you take a GPT-5
0:23:33 or a GPT-6 and if the same principle holds,
0:23:36 like you could really get a model that’s capable
0:23:38 of doing like a large amount of damage.
0:23:40 And because it’s open source,
0:23:42 anybody can just go and fine tune, you know,
0:23:45 you could make an open source training set
0:23:47 with a bunch of bad stuff in it
0:23:48 and publish it and share it with people
0:23:50 and then they can go and do this themselves.
0:23:52 And I think also with like how much more
0:23:56 compute-efficient models are going to become over time,
0:23:58 like it’ll just become easier and easier
0:23:59 for people to do this.
0:24:02 I think like the one option is,
0:24:04 there could be some like scientific breakthrough
0:24:08 that would make it so even if you fine-tune the model,
0:24:10 it, you know, doesn’t wanna do bad things,
0:24:12 that seems like a stretch like I, you know,
0:24:13 it’s outside my realm of understanding
0:24:14 how that would be possible.
0:24:16 But I suppose that it’s possible.
0:24:18 And if we could do that, then that would be awesome.
0:24:22 I think the other option is it’s likely
0:24:25 that there’ll be some amount of pressure
0:24:27 that’s put on people who are providing compute
0:24:31 to do some, you know, moderation
0:24:33 or some sort of like security checking
0:24:37 at the compute level, at like the token generation level.
0:24:38 I don’t know how feasible that is either
0:24:40 because like you could just spin up a GPU yourself
0:24:42 and do this on your own computer
0:24:44 so you don’t need someone else’s compute.
0:24:46 But like for example, why I have conviction
0:24:49 that OpenAI will successfully and safely deploy
0:24:52 this technology, at least from what I’ve seen is that like,
0:24:54 there is some level of monitoring taking place
0:24:56 on the platform so they can see, you know,
0:24:59 if someone is continually, you know,
0:25:01 asking how to make bombs or bio weapons
0:25:02 or something like that.
0:25:04 Like they can actually monitor what’s taking place
0:25:07 and go and proactively take action to keep people safe
0:25:08 and keep the platform safe.
0:25:11 And I think to me that’s not possible
0:25:12 in the context of open source.
0:25:14 And yeah, it’s a big open question,
0:25:16 especially if you have your own GPUs.
0:25:18 Like you can’t, it’s essentially impossible
0:25:20 to stop somebody from doing something like that
0:25:23 if you don’t have any control over the compute
0:25:24 from a cloud perspective or something like that.
0:25:26 So it’s a tricky situation.
0:25:28 And like, I think this goes back to like,
0:25:30 it’s just a myriad of like really, really tough trade-offs
0:25:33 which is why I don’t appreciate much of the narrative online
0:25:38 which like lacks the nuance of all of these things.
0:25:41 Like there’s just like a lot of very, very real trade-offs
0:25:42 to take into account.
0:25:44 Everyone’s like, oh no, like they glaze over
0:25:45 a lot of those things.
0:25:48 And I think it’ll also become more clear
0:25:50 as the models become, like, more capable.
0:25:52 Like today it’s like a little bit easy to be like,
0:25:53 oh, like what’s the worst that happens?
0:25:55 You get something that’s gonna write like mean jokes
0:25:58 or stupid tweets or whatever.
0:26:00 And then again, Twitter just has to take those tweets down.
0:26:03 But I think like, you know, you imagine a world
0:26:05 where the agentic systems really take off
0:26:08 and there’s great infrastructure to do that.
0:26:10 And all of a sudden at the push of a button,
0:26:13 like you have access to go spin up like millions of agents.
0:26:14 Like you could genuinely cause,
0:26:18 I could see genuine harm being caused in the world from that.
0:26:20 And thankfully I feel like we still have a little bit of time
0:26:22 to like try to figure some of that out.
0:26:24 – But yeah, there’s real challenges.
0:26:26 – Yeah, the other sort of scary thought
0:26:28 and this kind of bubbled up with the whole like
0:26:30 Q-star rumors that were going around, right?
0:26:32 Is the thought that maybe AI one day
0:26:34 will be able to break encryptions.
0:26:36 And if AI can break encryption,
0:26:38 then everybody’s bank accounts are at risk.
0:26:40 You know, all cryptocurrencies are at risk.
0:26:43 Like pretty much the internet as we know it is at risk
0:26:47 because it’s all sort of built on this cryptography.
0:26:50 Do you see a world where the AI models
0:26:52 are able to break the cryptography
0:26:54 that sort of runs the internet today?
0:26:56 – I don’t know if enough about cryptography
0:26:57 to really be informed.
0:26:59 My instinct has always been that it’s like much more likely
0:27:01 that somebody will use these models
0:27:04 to like accelerate quantum computing research.
0:27:06 And then that will be the catalyst for like,
0:27:08 I feel like it’ll be like a hardware innovation.
0:27:12 Like just, and my mental model for this is like,
0:27:14 we’ve already got AGI right now.
0:27:16 We’ve got artificial super intelligence right now.
0:27:19 There’s seven billion agents on earth
0:27:20 that are focused on this problem.
0:27:23 And like, nobody has cracked those encryptions yet.
0:27:25 So like, I don’t think it’s going to be an LLM layer
0:27:26 that ends up doing that.
0:27:28 I think it’s much more likely that there’s like some
0:27:32 physical hardware innovation that allows supercomputers
0:27:34 to really take off and break some of those things.
0:27:35 – Like I hear the concern of like, okay,
0:27:37 there’s going to be like these rogue agents and all that.
0:27:40 But I don’t know, like I was born in 1984
0:27:41 and like I read the book many, many times.
0:27:44 And that’s like, you know, my concern is like, okay,
0:27:46 sure, all those things could happen,
0:27:47 but also it seems very likely
0:27:49 that like when you centralize power
0:27:51 and you have like one group or two groups
0:27:53 that have all the power and then humanity starts
0:27:56 kind of like outsourcing our intelligence to the LLMs
0:27:57 ’cause they get so intelligent
0:27:59 that they’re more intelligent than us.
0:28:01 So we’re like, hey, GPT-7,
0:28:03 what do you think I should do in my life?
0:28:05 And then now you’ve got the kind of government sneakily
0:28:07 getting in there, or whoever, you know,
0:28:08 let’s say it’s not the U.S. government,
0:28:09 but you know, it could be, it could be other governments
0:28:11 who are, you know, supposedly more nefarious.
0:28:13 And they’re all of a sudden kind of telling people
0:28:14 how they should live their lives.
0:28:17 And like actually kind of micromanaging people
0:28:18 on an individual basis.
0:28:20 Like having the AI kind of tell you like,
0:28:23 how could I make Nathan do this thing I want him to do, right?
0:28:25 And all of a sudden the LLMs like give me different results,
0:28:27 you know, based on what they want me to do.
0:28:29 I have a lot of concern about that.
0:28:30 Like more than the rogue stuff.
0:28:33 Because I think with all like the rogue AI stuff,
0:28:34 you’re going to have like this battle of like,
0:28:36 okay, there’s bad people using AI in bad ways,
0:28:38 but in theory, the good guys also have the AI.
0:28:40 Maybe they even have like a slightly better model.
0:28:42 And so I think those things are going to not be as big
0:28:44 of a problem as people think.
0:28:45 – I think Bill Gates said something recently
0:28:48 to the effect of all of the problems
0:28:49 that people are worried about with AI.
0:28:51 AI also solves those problems.
0:28:52 – Yes, yeah.
0:28:53 – And I feel like that’s kind of like
0:28:55 where you’re going with that.
0:28:56 – I think that’s true to a certain extent,
0:29:01 but my mental model for this has long been that,
0:29:04 like, we could have this technology today,
0:29:08 but there’s so many humans on earth
0:29:12 in so many different like life and geographic
0:29:14 and financial positions that like it doesn’t even matter
0:29:16 if we have this technology available
0:29:19 to like quote unquote everyone,
0:29:21 like ChatGPT is available to quote unquote everyone,
0:29:23 but it’s really not; it’s only available to people
0:29:25 who have access to the internet and know about what AI is.
0:29:29 And like, I think, yeah, it still feels like one,
0:29:32 Nathan to your point, I’m 100% in favor.
0:29:34 I don’t think that the best outcome for this technology
0:29:36 just from like a technological development perspective
0:29:38 is that only one company controls this.
0:29:42 And I think just the trajectory of where the ecosystem
0:29:44 is headed, it feels like that’s not going to be the case.
0:29:47 Like it feels like pretty much everyone has realized
0:29:48 they have a vested interest.
0:29:52 Like every company with any amount of technical capacity
0:29:53 is like, let’s go train a model
0:29:55 and let’s go make this happen.
0:29:56 I think the question is like,
0:29:58 how far does that trend continue?
0:30:00 And, but it does feel like people have woken up
0:30:03 to this idea and I would imagine like,
0:30:04 I don’t think the open source layer of this
0:30:05 is going to go anywhere.
0:30:07 I feel like this idea that, you know,
0:30:10 good people on the internet are just like crusading
0:30:12 into like the bad places on the internet
0:30:14 and like stopping people from doing bad things,
0:30:16 at least in my worldview.
0:30:19 Like I’ve never, you know, seen that happen before.
0:30:21 It’s usually just like there’s bad people doing bad stuff
0:30:23 and like they all get together
0:30:25 and do more bad stuff together.
0:30:26 So it’ll be interesting to see if like AI
0:30:28 is potentially a fix for that.
0:30:30 – So I wanted to ask about the future.
0:30:32 I wanted to kind of get your perspective
0:30:34 on where all of this is headed.
0:30:36 I’ll read a tweet real quick that you put out.
0:30:37 You said, in the next 10 years,
0:30:39 we’re going to have super human AI,
0:30:40 full self driving everywhere in the world,
0:30:43 humans on Mars, internet everywhere on earth,
0:30:44 supersonic commercial jets,
0:30:46 and cures for major diseases.
0:30:49 So what do you think’s coming first of those?
0:30:51 And you know, I’m just kind of curious to hear
0:30:52 your sort of future scenario
0:30:55 and what do you think is kind of coming
0:30:56 within the next few years?
0:30:57 What do you think is still 10 years out?
0:31:00 Like how does this story unfold?
0:31:02 – All of this comes back to being
0:31:04 like a physical hardware challenge,
0:31:06 which would be really interesting to see how that plays out.
0:31:07 Like I imagine the main limitation
0:31:10 on a lot of the progress that takes place is like,
0:31:13 can we get enough GPUs or I’m guessing in 10 years,
0:31:15 it’ll be some new iteration, it won’t be GPUs,
0:31:18 but some new compute to power all of these things.
0:31:20 ‘Cause you can imagine like, even if you had,
0:31:23 and this is the challenge with a lot of the narratives
0:31:25 around like super intelligent AIs
0:31:27 that are just like doing stuff for you all the time,
0:31:31 like you basically have to have like an H100 cluster
0:31:34 in your bedroom like to power all the types of things
0:31:37 that like people think will be possible with AI.
0:31:39 So like it really does come down to like,
0:31:42 can we produce enough compute for there to be an H100,
0:31:45 or not even one H100,
0:31:49 but like a rack of H100s, for every human on earth.
0:31:53 And like that is like a deeply physical problem,
0:31:55 like very abstracted away from all the things
0:31:56 that AI will be able to do.
0:31:59 And AI will be able to solve parts of those problems,
0:32:01 but like AI cannot solve the problem
0:32:04 of like physically moving sand and rock and all this stuff.
0:32:06 So like there’s a lot of like very interesting
0:32:09 like traditional human problems
0:32:11 that I think need to be solved to get to that point.
0:32:13 My guess is like the boom arrow
0:32:15 is already gonna do hypersonic.
0:32:16 So that won’t be the thing.
0:32:19 Like that’s for sure, pretty much a guaranteed in my mind.
0:32:21 Someone’s intelligently responded
0:32:23 about how the windows for Mars,
0:32:25 it seemed unlikely that we would get there
0:32:27 in the next 10 years, which is really sad to me.
0:32:28 I’m like, dude, another 10 years,
0:32:30 how can they not get to Mars in 10 years?
0:32:31 I feel like that should be super soon.
0:32:34 But apparently there’s only a window
0:32:36 in six years or four years or something like that.
0:32:39 And I’m like, after that it’s gonna be too long before the next window.
0:32:41 So we probably won’t get to Mars
0:32:43 if that person is correct in the next 10 years, which sucks.
0:32:46 And it’s not clear to me that AI is going to make that happen.
0:32:51 But I really do think like the abundance of intelligence
0:32:53 is going to be so exciting,
0:32:57 but also like have like very real challenges for society.
0:32:59 I think the thing that gives me the most hope about this
0:33:03 is like how quickly humans seem to adapt to new changes.
0:33:05 Like the fact that we have ChatGPT
0:33:08 and all of a sudden like we’re pissed that it’s, you know,
0:33:11 telling us to do work and are upset about like
0:33:13 it not just doing everything for us already
0:33:15 after only a year and a half or whatever,
0:33:16 however long it’s been.
0:33:17 I think it’s like a great example
0:33:19 of like how our expectations continue to go up.
0:33:22 And I think my hope is that the technology
0:33:25 like continues on like that linear curve
0:33:27 and doesn’t end up like on some crazy exponential.
0:33:29 ’Cause I think that’s just like where there’s the highest chance
0:33:33 for things to go wrong and people to be, you know,
0:33:34 disrupted in a negative way.
0:33:39 But if we can stay on some curve that’s not an actual exponential,
0:33:41 then I’m hopeful that like society and the world
0:33:42 will be able to adapt to that.
0:33:45 But we’re also close to this technology
0:33:50 that like if you rounded the percentage of people on earth
0:33:53 who are like actively using AI every day,
0:33:55 like it probably rounds down to zero.
0:33:58 Like it’s probably like one or 0% of the world.
0:34:00 And like that’s just like speaks to the level
0:34:03 of how far we still have to go with this technology
0:34:06 to make sure that we get like the rest of the world
0:34:08 on board, using this technology, bought into it,
0:34:10 actually contributing to the discourse.
0:34:12 And like that is a really, really difficult problem.
0:34:13 And it honestly sounds like it’s a problem
0:34:15 that’s probably gonna take 10 years.
0:34:17 – And that’s one reason we started this podcast, you know,
0:34:19 is like there’s so many people who have no idea
0:34:21 what’s going on with AI, you know,
0:34:23 I moved from San Francisco out to Kyoto over a year ago.
0:34:25 And like, I was surprised.
0:34:26 Like I’m thinking like in my mind like,
0:34:28 oh, Japan, super futuristic, which you know,
0:34:29 I’ve been here a lot.
0:34:31 So I know it’s not always like how people imagine it,
0:34:33 but I would go out to like local cafes and stuff
0:34:34 and hang out with like locals.
0:34:36 And I would show them like mid journey
0:34:38 or show them chat GBT.
0:34:39 They were like, what is that?
0:34:41 They would just be like have no idea
0:34:42 what the hell was going on.
0:34:45 I basically just like pulled magic out of my pocket
0:34:48 and they just were so completely mind blown.
0:34:49 I was like, oh my God.
0:34:50 So like the average person around the world
0:34:53 probably has no idea what’s going on.
0:34:55 They may have heard like one or two like scary stories
0:34:59 on the news and that’s probably their entire context for AI.
0:35:02 – This needs to be like a government level project.
0:35:06 Like actually, part of when I was exploring
0:35:07 what’s out there, I was looking at,
0:35:10 is the government spending money to try to like educate
0:35:11 the masses on this technology?
0:35:12 And like not just the U.S. government,
0:35:15 but like global governments because I feel like it’s just,
0:35:17 like, you know, I think your show is going to do
0:35:20 incredibly well, but it is likely to reach people
0:35:22 who are like much more interested in AI.
0:35:25 So I think it’s like a really, really hard human problem.
0:35:27 And like I think it’s going to take like,
0:35:30 I would love to see some like international level
0:35:31 collaboration to be like,
0:35:34 we need to educate humans about how to use AI,
0:35:35 but it just feels like, yeah,
0:35:37 you can’t get international governments
0:35:38 to agree on anything.
0:35:39 So who knows if that will happen,
0:35:40 but somebody should try.
0:35:43 Like I feel like it would be super useful.
0:35:44 – I think a lot of leaders around the world
0:35:45 barely understand the technology.
0:35:46 I mean, I think that’s one problem
0:35:48 of having like really old people in government.
0:35:50 You know, I hate to get like political,
0:35:52 like, but on both sides right now of politics,
0:35:54 like it doesn’t feel like the greatest spot to be in,
0:35:57 like when the AI revolution is starting, to have like leaders
0:36:00 who just really can’t even fathom what’s going on with the tech.
0:36:01 – I agree.
0:36:04 My instinct is that like that will be a major catalyst
0:36:05 for like the next generation of leaders
0:36:07 to try and run for office.
0:36:10 Like I actually know some folks who I worked with
0:36:13 through open source.
0:36:15 And they were at Schmidt Futures,
0:36:17 Eric Schmidt’s organization.
0:36:20 And you know, the guy, to his credit,
0:36:22 is running for the House of Representatives in Georgia,
0:36:23 which is incredible.
0:36:24 You should actually have him on the show.
0:36:26 I’ll happily connect you.
0:36:29 But like specifically through this platform of like,
0:36:31 the world is changing and like the current leaders,
0:36:33 like who are all great and have a bunch
0:36:34 of awesome accomplishments,
0:36:36 like don’t understand how this technology works.
0:36:39 And like it’s really difficult to adapt to that change
0:36:41 if you don’t understand how the technology works
0:36:43 and you don’t have your hands on it every day.
0:36:46 And I think, yeah, it’s a critically important problem
0:36:47 for us to get right.
0:36:48 – This is a HubSpot podcast.
0:36:51 And, you know, HubSpot talks to entrepreneurs
0:36:53 and business owners and things like that.
0:36:55 And, you know, in your role at OpenAI,
0:36:57 you dealt with a lot of, you know,
0:36:59 SaaS founders and business owners.
0:37:02 What advice do you have for business owners
0:37:05 looking to work with AI as a piece of their business?
0:37:09 – The biggest challenge with people who aren’t like
0:37:12 inherently like giddy about AI and technology
0:37:16 is just getting past like the blank page problem.
0:37:18 And I think this is like part of the challenge
0:37:20 with ChatGPT fundamentally as a tool, is you show up
0:37:23 and it’s basically blank.
0:37:25 And you basically already need to know
0:37:27 how this technology helps you.
0:37:31 I think GPTs are like a really great step in that direction
0:37:35 because they like are a very tangible small use case
0:37:37 that like I don’t have an AI problem.
0:37:39 I have like a finance problem
0:37:42 and I need help with my books or whatever it is.
0:37:45 I think all of those like looking for those
0:37:48 like very specific tangible use cases
0:37:49 and trying to automate stuff around them.
0:37:54 Like I’ve always been a, I’ll give my really quick shout out
0:37:55 to the people at Zapier.
0:37:58 Like I think Zapier is going to be an incredible catalyst
0:38:02 for people who are just getting used to this technology
0:38:04 to like truly automate things.
0:38:08 And there’s so few platforms that are targeting
0:38:10 that demographic of people.
0:38:12 And I actually think HubSpot is one of them.
0:38:14 I’ve been a huge fan of HubSpot and Darmash
0:38:16 and the whole team doing a bunch of incredible work
0:38:18 around AI and I think like more and more companies
0:38:21 need to do that and I think more and more people
0:38:23 need to go and find those use cases.
0:38:25 And it’s just really hard and I empathize with that a lot.
0:38:27 Like I feel this challenge for myself.
0:38:28 I’m like, I’m always busy.
0:38:30 Like how can I use AI to help me?
0:38:31 And it’s like not always obvious
0:38:34 like what the use cases are.
0:38:37 I think the real tangible advice is like
0:38:38 go and play with the technology.
0:38:41 Like you have to like get your hands dirty with it.
0:38:42 You have to play with it.
0:38:43 You have to run into the edges.
0:38:46 You have to like explore the tools a little bit
0:38:47 and like that kind of sucks.
0:38:50 And I think like platforms like the one that you all have
0:38:52 can like hopefully be something to like help
0:38:54 get some of the right resources to those people.
0:38:56 But yeah, you just have to play with the stuff
0:38:59 and find out what adds value and like hopefully find something
0:39:01 that adds enough incremental value
0:39:03 that it’s like worth you spending your time
0:39:04 continuing around it.
0:39:06 And I think like ChatGPT and a bunch of these other tools
0:39:08 like really will add enough value
0:39:10 if you spend the time to play around with them
0:39:12 and get your hands dirty.
0:39:15 – Yeah, I actually onboarded my dad onto ChatGPT,
0:39:18 ’cause he’s actually a business owner himself.
0:39:19 He does window coverings.
0:39:21 So nothing tech related,
0:39:23 but I showed him how to use ChatGPT
0:39:25 to help him respond to his emails.
0:39:29 And now he uses ChatGPT to reply to all of his emails.
0:39:30 And he’s constantly finding like new ways
0:39:32 to use ChatGPT in his business,
0:39:34 but that email was that catalyst.
0:39:36 So just like something really simple like that
0:39:38 like help me write my emails better
0:39:41 is a really good way to get into it, I think.
0:39:42 I need to figure that out with my mom.
0:39:44 I show it to her and she was like,
0:39:45 “Oh, that’s so impressive, that’s so amazing.”
0:39:47 And then she was like, “Okay, now what?”
0:39:50 – It’s that now what that is like the opportunity.
0:39:55 I think like no one has really nailed that like now what piece.
0:39:57 And I think if you can like help people with that now what
0:39:59 like that is a huge opportunity
0:40:03 that I think, yeah, there needs to be more thought put into.
0:40:04 – Very cool.
0:40:06 Well, this has been an absolutely amazing conversation.
0:40:07 We’ve had a blast.
0:40:09 Hopefully we can have you back on in the future.
0:40:10 Once you’ve settled into that role
0:40:12 at Google a little bit and maybe talk about
0:40:13 what’s going on there.
0:40:14 But I really appreciate the time.
0:40:17 Is there anywhere you prefer people to follow you?
0:40:18 Is Twitter the best place?
0:40:19 Do you have a website?
0:40:21 Anywhere people should go after listening to this?
0:40:23 – Yeah, I’m posting on Twitter.
0:40:25 I’m posting on, I try to post on LinkedIn
0:40:28 ’cause I feel like there’s a underserved AI market on LinkedIn.
0:40:30 Like everyone’s all the cool stuff’s happening on Twitter.
0:40:32 I’m like, we got to help all the normies on LinkedIn.
0:40:35 So I spend time on LinkedIn as well.
0:40:36 And yeah, I have a website
0:40:39 but there’s nothing as useful as on Twitter and LinkedIn.
0:40:41 It’s @OfficialLoganK on Twitter.
0:40:43 And then on LinkedIn, it’s just Logan Kilpatrick.
0:40:44 – Awesome.
0:40:45 Well, thank you so much for spending the time with us today.
0:40:47 This has been such a fun conversation.
0:40:50 I can’t wait to get it out to the world.
0:40:52 And yeah, thanks for joining us today.
0:40:53 – Thank you for having me
0:40:54 and congrats on the launch of the show.
0:40:55 This is awesome.
0:40:57 (upbeat music)
Episode 5: Why did Logan Kilpatrick leave OpenAI for Google amidst the AI surge? Join hosts Matt Wolfe (https://twitter.com/mreflow) and Nathan Lands (https://twitter.com/NathanLands) as they delve into this significant career move with Logan Kilpatrick (https://twitter.com/officialLoganK), the former head of developer relations at OpenAI. Kilpatrick’s new role at Google places him at the forefront of AI product leadership, following substantial contributions to AI tools like ChatGPT.
This episode unravels the layers behind Logan Kilpatrick’s pivotal shift from OpenAI to Google. As a key figure in the development of groundbreaking AI technology, Kilpatrick reveals his insights on the evolution of AI, addressing the challenges and excitement that encouraged his transition. The discussion offers a peek into the potential future of AI, the ethics of open-source models, and the continuous need for innovation in technology sectors.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
- (00:00) Logan Kilpatrick shares insights on OpenAI, Google.
- (05:09) ChatGPT creation process was deliberate and intentional.
- (09:13) Shift from small to large company challenging.
- (11:10) Excited about opportunities in language model space.
- (16:42) Nuance of openness in technology and projects.
- (18:36) They’re not selling a developer platform.
- (22:22) Open source GPT models raise safety concerns.
- (26:30) Concerns about centralized power and AI influence.
- (39:56) Hardware challenges for AI.
- (31:42) Exciting challenges of increasing artificial intelligence adoption.
- (37:29) “Play with technology to find value.”
- (38:07) Introduced dad to ChatGPT for business.
—
Mentions:
- Google I/O 2024: https://io.google/2024/
- OpenAI: https://www.openai.com/
- ChatGPT: https://chat.openai.com/
- Google AI: https://ai.google/
- Get HubSpot’s Free AI-Powered Sales Hub: enhance support, retention, and revenue all in one place https://clickhubspot.com/gvx
—
Check Out Matt’s Stuff:
• Future Tools – https://futuretools.beehiiv.com/
• Blog – https://www.mattwolfe.com/
• YouTube- https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
- Newsletter: https://news.lore.com/
- Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano