AI Entrepreneur Matthew Berman On The Power of LLMs – “It blows my mind.”

AI transcript
0:00:06 In the long run, who wins? Oh man, let's just say I'm a big believer in open source. I hope you're right
0:00:11 Yeah, they're taking the scorched-earth mentality, the scorched-earth strategy. And by the way
0:00:14 I'm a hyper-competitive person, so I love it
0:00:21 Hey, welcome to the next wave podcast. My name is Matt Wolf. I’m here with my co-host Nathan Lanz and today
0:00:29 We’re talking to a serial entrepreneur and AI expert Matthew Berman. He’s got a very popular AI YouTube channel
0:00:33 But let’s just go ahead and get right into it. I’m curious about your story a little bit
0:00:38 I you know, we see all of your YouTube videos. You seem to be super tapped into the AI world
0:00:42 But how did you get into AI in the first place?
0:00:47 I saw ChatGPT along with the rest of the world, and I was completely enamored with it
0:00:51 So I decided to start a YouTube channel just to document my
0:00:55 learning process and hopefully share my learnings with other people, and
0:01:03 Luckily my third video went pretty viral. It was about Leonardo.ai, and I thought, oh well, this is easy, and
0:01:08 Yeah, the channel just went from there, and midway through last year I went full-time on it
0:01:18 When all your marketing team does is put out fires, they burn out fast. Sifting through leads, creating content for infinite channels
0:01:22 Endlessly searching for disparate performance KPIs. It all takes a toll
0:01:26 But with HubSpot you can stop team burnout in its tracks
0:01:32 Plus your team can achieve their best results without breaking a sweat with HubSpot's collection of AI tools
0:01:38 Breeze. You can pinpoint the best leads possible, capture prospects' attention with click-worthy content, and
0:01:44 Access all your company's data in one place. No sifting through tabs necessary
0:01:46 It's all waiting for your team in HubSpot
0:01:53 Keep your marketers cool and make your campaign results hotter than ever. Visit hubspot.com/marketers to learn more
0:02:01 When it comes to YouTube and creating a lot of this AI content, like, what's your source of
0:02:04 information for all of this? It's one of the questions
0:02:07 I know I get asked a lot, it's like, how do you keep your finger on the pulse of all this stuff?
0:02:11 And I know for me, just keeping my finger on the pulse is literally my full-time job now
0:02:18 So it seems like you've sort of found this niche for yourself. Like, I mentioned you in my videos
0:02:23 And I know a lot of the other AI creators mention you in their videos now as, like, the guy to go watch
0:02:30 Whenever a new large language model comes out, right? Like, Phi-3 just came out, right? And I was like, Phi-3's out
0:02:33 I could talk about it, but Matthew Berman's gonna do a better job than me
0:02:36 So go check out his channel, because he'll probably make a video about it
0:02:40 So, like, I love that you're doing that. How did you sort of fall into that?
0:02:46 Like, what is it about the large language models and sort of comparing them and figuring them out that made you want to go
0:02:54 down that rabbit hole? I had this real love for the local model, being able to download a model, have it on my computer, run it, and
0:02:57 I know this isn't exactly true
0:03:03 But essentially have the entirety of world knowledge in, like, just a handful of gigabytes on your computer
0:03:09 Still, just saying it out loud blows my mind, and so I wanted to benchmark
0:03:16 the models that I was playing with. I'm curious, like, right now, what's the best local model you could try right now, like, for text?
0:03:20 Yeah, I'm gonna mention two companies, and I'm sure everybody watching this video has heard of them
0:03:24 Obviously Meta, Meta AI, with the Llama 3 models. Those are
0:03:26 probably the best, and
0:03:32 maybe only slightly better than the Mistral company's models. So Mistral AI has Mistral
0:03:34 They have Mixtral, which is a mixture-of-experts model
0:03:43 If I am going to recommend a local model, obviously it depends on how much RAM you have, how much VRAM you have
0:03:49 But generally you're gonna choose either one of the Mistral models or one of the Llama models. Now, Microsoft
0:03:55 just recently released the Phi model, P-H-I, Phi-3, and those are pretty capable
0:03:59 They tend to be smaller, and they tend to need more fine-tuning
0:04:02 for specific tasks, so they don't have as broad knowledge
0:04:07 But they're really performant, and they are still very high quality
0:04:13 So I think between those three families of models, probably my favorite is gonna be the Llama 3 models
0:04:18 But all three of them are fantastic. Yeah. Yeah, no, I've used Llama 3 quite a bit
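As Matthew notes, the right local pick depends mostly on how much RAM/VRAM you have. As a rough sketch of that decision (the gigabyte thresholds below are illustrative assumptions, not official requirements; actual needs vary with quantization, context length, and runtime):

```python
# Illustrative heuristic for picking a local model family by available VRAM.
# The GB thresholds are rough assumptions, not official requirements.
def suggest_local_model(vram_gb: float) -> str:
    if vram_gb < 6:
        return "Phi-3 mini (small, may need fine-tuning for specific tasks)"
    elif vram_gb < 12:
        return "Llama 3 8B or Mistral 7B (quantized)"
    elif vram_gb < 48:
        return "Mixtral 8x7B (mixture-of-experts, quantized)"
    else:
        return "Llama 3 70B"

print(suggest_local_model(8))   # a mid-range consumer GPU
print(suggest_local_model(80))  # a data-center card
```

The point is only that the three families discussed here cover the whole hardware range, from laptop-class Phi-3 up to the large Llama 3 variants.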
0:04:27 Especially when you combine it with Groq, not Grok with a K, but Groq with a Q. When you combine it with that version, with the
0:04:31 What are they called? Is it the LPU, their language processing unit or something?
0:04:37 But when you use that, it's just, like, insane, the speed that you get back from the models on there
0:04:41 The Groq company, G-R-O-Q, and it's kind of nuts
0:04:47 They, you know, run Llama 3, like, 8B, and it's like 800 tokens per second
0:04:54 And you would think, like, okay, wow, you achieved something incredible, now, like, take a break. But no, no
0:04:56 Now they're like, okay, that wasn't enough
0:05:00 We're just gonna continue to work on increasing speeds, and so, I can't remember the exact number
0:05:05 But they have some absolutely insane input token speeds. I think it's like
0:05:10 2,000 tokens per second. Don't quote me on that. But yeah, that company is
0:05:14 kind of nuts. They are all about the speed, and they're doing really well
0:05:18 So would you say they're, like, a competitor to Nvidia?
0:05:23 Are they making chips to compete with Nvidia, or do they sort of work in tandem with Nvidia chips?
0:05:31 So they are definitely a competitor to Nvidia. However, Nvidia doesn't actually offer inference as a service
0:05:37 They sell you the chips, right? And then, like, these big data center companies can buy the chips and offer inference
0:05:41 Groq, they used to sell chips and
0:05:45 offer inference, and they actually acquired this company, I'm forgetting the name of it now
0:05:50 But they essentially acquired a company that's, like, the front end of the inference service
0:05:55 And so, like I mentioned, they used to sell chips, and then at a certain point, maybe a month ago
0:05:59 They decided to stop selling chips and just offer inference
0:06:07 So they are building out massive data centers with their own chip technology and just strictly offering an endpoint, an API endpoint
0:06:13 Or a cloud service like ChatGPT, except it's lightning fast and you can use different open-source models
0:06:18 So are they trying to be, like, cheaper, faster, but lower quality, but in some use cases that's okay?
0:06:22 Is that the kind of... yeah, you're probably right, Nathan
0:06:27 It's like, if you're talking about, like, that last 5% of quality, GPT-4o
0:06:35 Like, it's gonna win, but, you know, open source is catching up quickly when it comes to large language models
0:06:41 Have you found that like this model is good for X and this model is good for Y and this model is good for Z
0:06:45 Right like have you found certain models to be better for specific use cases?
0:06:53 Yeah, and actually that's really why I like open source, because these companies like Meta will put out this raw model, like a Llama 3, and
0:07:02 Then Eric Hartford will make the Dolphin flavor of it, and all of these versions are very good at a particular thing
0:07:05 Whether it's being uncensored, or whether it's role-playing
0:07:11 Math, coding. In fact, Mistral just released Codestral, I believe it's called. It was just today
0:07:13 So I haven't had a chance to look at it
0:07:23 But yeah, so, like, that is also why I love open source, because you can have all of these fine-tuned models specific to different use cases, where in that particular use case
0:07:30 Nine times out of ten, they'll be better than a GPT-4. More broadly speaking, right?
0:07:35 So GPT-4, like, across the board, GPT-4o is just a better model
0:07:40 But if you look at each individual use case and you find the best open-source model for that use case
0:07:48 Typically, you can find one that is just as good, and yes, much cheaper, and you can get it to be much faster
0:07:51 It's interesting, because you've got OpenAI, you've got Anthropic
0:07:58 They're both sort of closed models, right? You've got to basically use them through their website, you can't install them locally
0:08:01 We don't really know what they're trained on
0:08:06 It's kind of all closed off, and then you've got the open models like Llama, Mistral, Mixtral 8x7B
0:08:09 Google has Gemini, that's closed-source, and Gemma, that's open-source, right?
0:08:15 And then Mistral also has a mix of both closed and open-source stuff as well. Exactly. In the long run, who wins?
0:08:19 Do you think... I know that's a very loaded question
0:08:22 But, like, I'm just curious, like, what is your sort of initial gut thought?
0:08:29 You know, when you have a person, or I should say a company, Mark Zuckerberg with Meta AI, dumping
0:08:32 hundreds of millions of dollars into buying these
0:08:34 server farms, chip farms
0:08:39 attracting the best talent in the world, and then just giving it away for free. Crazy, right?
0:08:45 They're taking the scorched-earth mentality, the scorched-earth strategy. And by the way
0:08:49 I'm a hyper-competitive person, so I love it. Like, they were behind, right?
0:08:52 They weren't anywhere close to OpenAI. So they're using the scorched-earth
0:08:59 strategy. I think likely what is going to happen is, for a while, closed source
0:09:03 specifically Claude, ChatGPT, will probably be
0:09:11 three to six months ahead of open source, but it's like the four-minute mile. Like, once you've seen
0:09:14 somebody else do it, you know it's possible
0:09:20 And so once you see, for example, a GPT-4o, where they have the multimodal model, like, voice
0:09:26 It sounds super real. All of a sudden, the intonations in your voice can be an input to the model
0:09:32 Other open-source builders look at it and they say, oh yeah, okay, well, if we didn't already know about that, like, yeah
0:09:38 now let's go do that. And so I think there's going to be this gap, but it's going to shrink over time, and
0:09:41 Here's my other thought about who wins in the long run
0:09:48 If you're OpenAI, you have ChatGPT and you're building, and the models themselves are becoming commoditized quickly
0:09:56 And so you have ChatGPT, and the value is with all of the developer tools that you build around ChatGPT
0:10:02 However, all of those developer tools that you build around it are only applicable to ChatGPT
0:10:10 So if a developer or business wanted to use a different model, they couldn't. They are completely locked into the OpenAI platform
0:10:16 Whereas with Meta, or if somebody builds a suite of developer tools on top of open-source models
0:10:19 You could swap out the open-source models as
0:10:26 much as you like. You can find that perfect fine-tuned model for you that is efficient, high quality, low cost, and
0:10:34 I think that's a really powerful strategy, because you as a buyer of inference, as a buyer of the model outputs
0:10:37 I'm not locked into a platform
0:10:40 And so that's why, like, if I were to choose as a business
0:10:45 I would probably do the initial experimentation of whatever I'm building
0:10:50 using one of the closed-source models, and then as soon as I found something that worked and my code is pretty sound
0:10:57 I would try to convert it over to open source as quickly as possible, because platform lock-in is real. Yeah, I mean
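The swap-out strategy described here works in practice because many inference hosts expose OpenAI-style chat-completion endpoints. As a minimal sketch (the URLs and model names below are illustrative placeholders, not verified values), the request body stays identical and only the base URL and model name change:

```python
import json

# Minimal sketch of a provider-agnostic chat request. Many hosts accept
# OpenAI-style payloads, so swapping backends is just a config change.
# The URLs and model names below are illustrative placeholders.
PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o"},
    "local":  {"base_url": "http://localhost:11434/v1", "model": "llama3"},
}

def build_chat_request(provider: str, prompt: str) -> tuple[str, str]:
    """Return (url, json_body) for an OpenAI-style chat completion call."""
    cfg = PROVIDERS[provider]
    body = json.dumps({
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
    })
    return f"{cfg['base_url']}/chat/completions", body

# Same application code, different backend: only the config entry differs.
url, body = build_chat_request("local", "Hello")
print(url)
```

This is why prototyping on a closed model and then migrating to open source is plausible: if your tooling targets the wire format rather than one vendor's SDK, the model becomes a swappable component.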
0:11:01 I kind of agree with you, and I hope you're right. Like, yeah, I want open source to win
0:11:07 But I'm pretty skeptical. I mean, a lot of the stuff you're saying kind of assumes that GPT-5 is gonna be only slightly better
0:11:13 And also that OpenAI is not gonna get to some form of self-improvement before open source, right?
0:11:20 If they do, that changes everything, right? And the rumors from people I know who know Sam
0:11:24 You know, a lot of things sound like GPT-5 is very, very good
0:11:27 and then people are going to be shocked quite soon, and
0:11:33 So I think that everyone's comparing to, you know, GPT-4, and I think that's, like, really old
0:11:38 I think OpenAI is probably starting GPT-6 right now and GPT-5 is basically done
0:11:41 And it's gonna be way better than anything that currently exists
0:11:44 So I think there's some validity to what you're saying
0:11:51 There was an interview that Yann LeCun, the chief AI scientist at Meta AI, gave on the Lex Fridman podcast, and
0:11:59 The way that he talked about it was, kind of, this path to AGI is, it's not like an on/off switch
0:12:06 It's not, like, suddenly gonna happen. There's not these huge step functions of improvement. It's more gradual
0:12:09 It's more subtle than that. But he's always been trailing so far
0:12:11 So, just because he works at Meta
0:12:15 He has not innovated much at all. Like, he's always been behind and catching up
0:12:19 He hasn't actually been the one who's created new things. You know what? You're sounding like Elon Musk right now
0:12:25 But it's true, it's true. That is the first-principles way of looking at it
0:12:30 It's not real unless it's published. Okay. Yeah, which is crazy. Yeah, so yeah, I agree. That's crazy
0:12:34 So it would be hard for me to believe that OpenAI has some
0:12:36 mind-blowing
0:12:37 technology
0:12:43 innovation that we just could not even fathom to this point, and they would release it all at once
0:12:50 It's hard for me to imagine that scenario. Like, the most ahead I would guess they are is
0:12:52 15%
0:12:57 maybe, and then that gap gets closed within six months
0:13:01 So I'm less bullish on the OpenAI long-term play
0:13:06 especially because models are becoming commoditized, and as they become commoditized
0:13:11 They are going to find it more and more difficult to attract the best talent
0:13:15 to get more funding, to get the subscribers
0:13:20 Right? Because you can go anywhere, and so it's just a race to the bottom on pricing, and so that will actually
0:13:26 help defeat the moat that they have in terms of, kind of, just model
0:13:27 quality
0:13:29 I hope you're right. My gut is that you're very wrong
0:13:37 We'll be right back, but first I want to tell you about another great podcast you're going to want to listen to
0:13:43 It's called Science of Scaling, hosted by Mark Roberge, and it's brought to you by the HubSpot Podcast Network
0:13:48 The audio destination for business professionals. Each week, host Mark Roberge
0:13:55 founding chief revenue officer at HubSpot, senior lecturer at Harvard Business School, and co-founder of Stage 2 Capital
0:14:03 sits down with the most successful sales leaders in tech to learn the secrets, strategies, and tactics to scaling your company's growth
0:14:09 He recently did a great episode called How Do You Solve for Siloed Marketing and Sales?
0:14:12 And I personally learned a lot from it. You're going to want to check out the podcast
0:14:16 Listen to Science of Scaling wherever you get your podcasts
0:14:25 I feel like Sam Altman, too, has been, like, really sort of trying to set these expectations, because whenever you hear him talk
0:14:28 He's constantly talking about, like, the world doesn't like massive changes
0:14:35 They want this stuff to kind of happen gradually, and he keeps on saying that kind of wording every time he's interviewed
0:14:39 So, you know, I kind of lean more on the side of, like
0:14:43 I don't know if GPT-5 is going to be as big of a leap as everybody
0:14:47 You know, might think it is, because of Sam's subtle little hints
0:14:53 Like, to me, it sounds like he's trying to manage that expectation by saying things like, the world wants incremental steps
0:14:55 They don't want this quantum leap all at once
0:15:00 I think it's because GPT-5 is going to be amazing. He's trying to calm people down
0:15:04 Is he saying that because they have some incredible thing and they
0:15:07 want to drip it out over time and they don't want to reveal their secrets?
0:15:13 Or are they saying that because they don't, and they want to manage expectations? It could be either. Yeah, it could go either way
0:15:18 You're totally right. You know, I have this sort of theory that OpenAI
0:15:23 And you guys could definitely debate me and try to find some holes in my logic here
0:15:27 But I have this feeling that OpenAI is actually sort of
0:15:30 in not a very good place right now
0:15:33 Because if you look at OpenAI, right, they've got two main things
0:15:35 they've got their API that
0:15:41 other companies can go and build AI-related platforms on top of, and then they've got their consumer-facing product
0:15:47 Which is ChatGPT. Well, their consumer-facing product in ChatGPT is
0:15:52 That's becoming more and more commoditized, right? Like, it's just built into
0:15:55 Instagram and WhatsApp and Google Search
0:15:58 I mean, not very great in Google Search yet
0:16:03 But it's built into Google Search, and it's just, like, very, very commoditized. Anybody who wants to talk to an AI
0:16:05 There's a hundred different places they can do it now
0:16:10 So the need to go and pay 20 bucks a month to do it at ChatGPT
0:16:16 The value of that's getting smaller and smaller. And you look at the API side, right? ChatGPT, or
0:16:24 GPT-3.5 or GPT-3, that was the only game in town for a long, long, long time if you wanted to use an AI
0:16:25 API
0:16:31 Right? But now we've got Claude from Anthropic, we've got Gemini, we've got Llama, we've got
0:16:37 Mistral, we've got all of these other options for APIs, and a lot of them cheaper than what OpenAI offers
0:16:42 So their two main business models have both become kind of commoditized
0:16:47 Right? And then, you know, you put all of the safety stuff and all of the weird drama and lawsuits and stuff
0:16:49 You stack that on top
0:16:52 To me, it paints a picture that OpenAI has
0:16:59 probably got to do something. They either got to have something really big with GPT-5, or
0:17:00 You know
0:17:06 Siri uses GPT now, and when the new iPhone launches, it's got
0:17:11 It's got GPT as the Siri model, and that can reinvigorate OpenAI
0:17:15 But right now, I kind of feel like they're in trouble. But I'm curious your thoughts
0:17:18 Like, are there holes in that logic?
0:17:23 So, okay, a lot to unpack. I think for the most part you're 100% right
0:17:26 I think from a consumer perspective, OpenAI
0:17:29 actually has
0:17:34 pretty big dominance, right? They have a pretty big moat, just because ChatGPT
0:17:40 is the verb now. It is AI for most people, right? It is AI. Nobody even knows of
0:17:45 Anthropic. But your point of, you know, Google Search is going to have it
0:17:50 Facebook is going to have it, Instagram is going to have it, but it's not going to be ChatGPT
0:17:55 It's going to be Llama, it's going to be Gemini, and it's going to be more native to the existing
0:18:02 interaction, whatever that is. And so, I remember Sam Altman... do you guys remember when
0:18:06 GPT apps kind of came out for a little bit, where you could, like, call
0:18:09 like, Priceline? Oh, the plugins
0:18:14 Yeah, plugins. Yeah, that's what they were called. So just a couple months after launch
0:18:17 He said something super interesting that you just reminded me of. He said
0:18:24 We kind of realized people don't want to have their apps in ChatGPT
0:18:26 They want to have ChatGPT in their apps
0:18:34 And, like, that stuck with me, and it's very akin to what you were just talking about, Matt, where
0:18:37 like, if Google Search has AI
0:18:39 Facebook has AI
0:18:41 Telegram
0:18:43 Instagram, all the 'grams, they have AI
0:18:50 Well, like, you're not going directly to ChatGPT. So then that leads us to the Apple partnership, potentially, right?
0:18:53 Yeah. First of all, that blows my mind. What is Apple doing?
0:18:57 Trillions of dollars, and, like, they couldn't do this?
0:19:03 You know, there's really, like... a handful of engineers in a basement can pump out a decent model. So
0:19:08 It's, like, very disappointing as somebody who's been a long-time Apple fanboy
0:19:12 Yeah, but that would, like, go back to what I was saying, is I think OpenAI is very ahead
0:19:18 And when you actually see behind closed doors... yeah, Apple could throw together something like GPT-3.5
0:19:23 But GPT-5 is so far ahead that, yes, Apple would just say, we're bowing out. We're partnering with you
0:19:28 We're gonna make the best hardware, we're gonna continue, but you have won the game. And so we're going to, uh
0:19:33 make an alliance with you. And so I think that's what's happening with Apple, in my opinion
0:19:39 It wouldn't actually be the first time that Apple did something like this, right? They bowed out of the search game, and
0:19:41 Google
0:19:43 pays Apple
0:19:47 billions of dollars a year to be their search engine
0:19:53 So, like, would it be a similar setup? You know, I don't know. That is interesting. Um
0:20:00 Can you see OpenAI paying Apple? I think actually Apple's gonna pay them, if I had to guess. Um, yeah
0:20:05 It's interesting, because the Apple real estate is important, obviously. I assume with the Google thing
0:20:08 Uh, Apple had more leverage, because they could come in and say
0:20:12 Hey, we actually do have the talent to build something like Google
0:20:15 Maybe it's not gonna be as good, it's gonna be 95%
0:20:21 Uh, and so with that argument, then they could get it to where, okay, yeah, okay, Google, you know, Google, you pay us
0:20:26 But if OpenAI is very far ahead, then Apple doesn't have that leverage to come say that to them
0:20:30 Like, hey, yeah, we can just catch up overnight. If they're not able to say that
0:20:35 Then they have no leverage, and then I think that would result in OpenAI getting paid
0:20:38 I believe... well, I think this is just a stopover for Apple
0:20:40 I think, um, you know, you look at what Apple did with Intel, right?
0:20:43 They put Intel chips in all of their computers for the longest time
0:20:49 But as soon as their M series of chips came along, all right, bye-bye Intel. I think it's the same kind of thing
0:20:54 I think ChatGPT is their stopover to whatever they're building. I think that's it. I think that's a good analogy
0:20:57 Yeah, they'd be crazy not to be trying
0:21:01 They'd be able to... yeah, I think it'd be crazy long-term to not be trying. Yeah. Yeah, for sure
0:21:06 I mean, look, when I first saw GPT-4o and the voice interactions, I was like, okay
0:21:12 Well, that's Siri, right? That is the promise of what Siri should have been a long time ago
0:21:18 Right? And they couldn't accomplish it, and GPT-4o, which is, as you said, Nathan, not even GPT-5
0:21:20 They were able to accomplish something
0:21:29 really impressive. I guess, by the way, there were also rumors that it was Google's Gemini that was going to be powering Siri
0:21:33 So I don't know. No wonder OpenAI is hiring for internal risk
0:21:37 Like, a spy, right? Didn't you see that job posting?
0:21:45 They were hiring for, like, an internal risk assessor, or basically, like, prevent the leaks. That's really the job description: prevent the leaks
0:21:51 Then again, you know, Microsoft is OpenAI, right? And Apple and Microsoft have a long
0:21:54 contentious history. Well, I would not say they are OpenAI
0:21:59 I mean, because, you know, there's the whole clause that OpenAI can get out of it when they have AGI, and that's
0:22:05 You know, you could possibly argue they already have AGI, depending on what level, and, you know
0:22:06 It's depending on the definition
0:22:11 So I don't think Microsoft has a complete hold over OpenAI. They have strong influence
0:22:15 But I don't think a complete hold. Yeah. So I was just looking at it
0:22:18 I was just watching an interview of Elon Musk. I think it's something
0:22:22 called VivaTech, something like that. He called it Microsoft's OpenAI
0:22:27 He, like, said it multiple times. I could tell, like, oh, he's poking for sure. He's talking
0:22:32 I mean, that's the same thing with, like, Yann, right? Like, you're just following orders, boy
0:22:35 like, it's like
0:22:41 I thought it was so funny. It definitely gave me a smile when I heard him say that. But I'm on a different
0:22:44 Uh, I have a different position to you, Nathan
0:22:51 I think Satya Nadella is playing 4D chess, and everything, everybody, Elon Musk
0:22:56 Meta AI, they're all his pawns, because he invests in OpenAI
0:23:05 He takes all their tech, builds it internally, builds it into every level of Windows, but doesn't call it OpenAI
0:23:11 Also partners with Meta on the open source. I think he also did an investment
0:23:13 I might be wrong on this, with Anthropic. Like
0:23:16 He basically put his chips on
0:23:24 every potential option. Well, he basically bought Inflection AI, too. Like, Inflection AI just got consumed by Microsoft
0:23:29 I think Satya Nadella is just... he's blowing my mind right now. Just 4D
0:23:31 CEO chess
0:23:34 Yeah, that's... it's kind of like the classic Microsoft playbook, isn't it? What was it? It's like
0:23:39 extend, exterminate... what's the other part of that? There's something missing. Oh, embrace. I remember that
0:23:44 But that was, like, always Microsoft's playbook, right? It was, like, to get things and then, like, make them a bit bad
0:23:48 And then, like, you basically end up, like, killing off the competition. They missed mobile
0:23:49 They were strong
0:23:55 They were a little bit late, but they got very strong with cloud, and now they're early and strong on AI. So
0:23:59 Very bullish on Microsoft. Not investment advice
0:24:03 Yeah, I mean, he's doing way better than the CEO of Google, that's for sure
0:24:09 Yeah, yeah, that's right. Yeah. Yeah, I mean, if you do any searches right now about Google AI
0:24:15 All you're gonna find is their blunder with their, like, image generation model that couldn't get the races correct
0:24:19 And how Google is teaching people that they should be eating rocks
0:24:24 Like, Google is, um... yeah, they're in a tough spot right now, I'd say. Yeah
0:24:29 That makes me... but that goes back to, like, the thing of adding AI into everything that you were talking about earlier, Matthew
0:24:33 Like, I'm not so sure about that. Like, yeah, you add AI to Instagram and all these things
0:24:38 I think we're gonna be seeing more new experiences, versus just tacking AI on to old things, personally
0:24:43 Uh, because, like, if you look at Google, they've basically just tacked on AI because they're in panic mode
0:24:47 And they're in panic mode for a few reasons I think people are not really thinking about. Like
0:24:50 Yes, sure, like, AI is going to eat their lunch or whatever
0:24:53 But also, there's a flood of AI content
0:24:58 on the internet now, and they're having to deal with that, and they don't have a clear answer to how to deal with that
0:25:04 Um, and the algorithm leak that came out a few days ago shows that, like, they don't currently have any way
0:25:09 to deal with any of this. They're kind of, like, falling back on, like, okay, who has the highest authority?
0:25:12 Well, now it's, uh, it's a Reddit site. Well, okay, now people are just shitposting on Reddit
0:25:19 So now, how do you answer that? You know, the highest-authority sites have people shitposting and stuff. Um, and so
0:25:22 I'm not so sure that, uh
0:25:27 You know, just tacking AI onto things is gonna be the play. Yeah, I don't know where Google even goes from here
0:25:30 I agree with you, Nathan. I think that'll level the playing field
0:25:35 Uh, it'll give people a lot more options to be, maybe, first exposed to AI
0:25:43 But I agree. I don't know what it looks like, but new experiences with AI, it seems like the inevitable future
0:25:48 GPT-4o keeps coming to mind. Like, that experience of being able to actually just
0:25:54 have a real conversation with AI that can understand my tone, my emotion
0:25:57 also reflect back its own tone, emotion
0:26:01 That's really powerful. The voice interface, which really hasn't been tackled
0:26:05 seems like the most obvious
0:26:07 user interface
0:26:11 Yeah, and that's where I do hope open source, like, stays up, you know, stays close to OpenAI
0:26:16 So, like, the startups can actually be building those new experiences. It's not just, like, oh, it's all OpenAI
0:26:20 Yeah, agreed. Another thing that I've followed you a lot for is
0:26:27 um, your sort of coverage of AI agents, right? You've talked about AI agents a lot in videos
0:26:32 And it feels like most of the stuff that I've played around with isn't quite there yet
0:26:38 I mean, what are your thoughts on AI agents? Have you played with anything that is, like, exciting you in that world?
0:26:43 Is there anything that's, like, bubbling up that you're like, all right, if you want to see an AI agent at work
0:26:48 go mess with this? Yeah, so I'm very, very bullish on AI agents
0:26:51 There's really, like, two main products
0:26:55 AutoGen by Microsoft, which is more of a research project
0:26:58 and then CrewAI
0:27:00 Disclosure: I'm an investor in CrewAI
0:27:01 so
0:27:05 Very bullish on agents, and there's a few reasons, and I'll also answer your question
0:27:10 I'm bullish on agents because when you just do a single prompt to AI
0:27:16 You're just not going to get the type of results as if you, first of all, had a more complex prompt
0:27:19 But also allowed it to
0:27:25 iterate with itself, to reflect on its own output, to work with other different models
0:27:33 in coordination, to give tools to them. So at the end of the day, agents to me are really two things
0:27:35 It's the ability to put together
0:27:39 multiple large language models to work together, which
0:27:43 has been proven through different research papers, like Reflexion and
0:27:47 Tree of Thoughts, to output better results
0:27:55 And then it's also kind of all of the infrastructure around the large language model and the workflows that you need to bring it to a production-level environment
0:27:58 Like I mentioned, tools
0:27:59 benchmarking
0:28:04 logging. All of this stuff is kind of all in these agent frameworks now
0:28:07 And so if you're building production-level AI
0:28:11 It kind of goes hand in hand to use one of these. That's how I see it
0:28:15 But I also agree, like, it's still very early days
0:28:21 Um, the agents often don't behave exactly like you need them to, and that's the
0:28:28 nondeterministic factor at work there. I think when you're talking about use cases that work really well
0:28:30 Automating things that are very well defined
0:28:34 So in the work environment: research, analysis
0:28:39 You know, crafting content. All of these things are use cases that agents do really, really well
0:28:42 Beyond that, we're still trying to figure it out, and I think
0:28:45 The improvement is going to come
0:28:53 because of two things: model improvements and then framework improvements. And then as you combine those two things and they both get better
0:28:59 They'll kind of build off of each other and get exponentially better over time, and we'll be able to automate
0:29:02 more and more real-world tasks with agents
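The "reflect on its own output" loop Matthew describes can be sketched in a few lines. This is a hedged illustration, not CrewAI's or AutoGen's actual API: `call_llm` is a hypothetical stub standing in for a real model call, and the critique rule is deliberately trivial.

```python
# Minimal sketch of a reflection-style agent loop: generate, critique,
# revise. call_llm is a hypothetical stub standing in for a real model;
# frameworks like CrewAI or AutoGen wrap this pattern with tools,
# benchmarking, and logging.
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would query a chat model here.
    if prompt.startswith("Critique:"):
        return "OK" if "(revised)" in prompt else "add more detail"
    if prompt.startswith("Revise:"):
        return prompt.split("Revise:", 1)[1].strip() + " (revised)"
    return "first draft"

def reflect_and_revise(task: str, max_rounds: int = 3) -> str:
    draft = call_llm(f"Draft: {task}")
    for _ in range(max_rounds):
        critique = call_llm(f"Critique: {draft}")
        if critique == "OK":
            break  # the critic is satisfied; stop iterating
        draft = call_llm(f"Revise: {draft}")
    return draft
```

The nondeterminism he mentions lives entirely inside `call_llm`: the loop structure is deterministic, but each model call may behave differently run to run, which is why bounded rounds and a stop condition matter.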
0:29:07 Do you think we'll ever actually see a large action model?
0:29:10 Oh, man
0:29:15 That's, uh... yeah, okay. So, look, I was a fan
0:29:20 of the Rabbit device. Like, I got it. Like, that's what you're really asking about, right?
0:29:23 Yeah, I want to get into the Rabbit thing. Yeah
0:29:29 So, okay. So, large action model. Um, there's actually a few examples of it in the field right now
0:29:35 There's two projects. There was one research paper which allowed the large language model to control
0:29:36 a
0:29:40 Windows, Mac, Linux environment through kind of a special
0:29:49 version of those environments, and it worked really well. Open Interpreter, that was the other, right? And now they have the 01
0:29:52 I think it's just called the 01, which is, like, a little device
0:29:56 But it essentially allows you to control your computer, and that's really what a large action model is, right?
0:30:03 Can the large language model write a script to execute things on a computer dynamically?
0:30:07 I think we're going to have that as, like, a middle ground.
0:30:08 But
0:30:15 that is, like, the stopgap to a place where large language models can just execute code
0:30:18 directly. Like, you are just speaking your command,
0:30:21 uh, they interpret the command, and then they just
0:30:27 write code for the end device, whatever that is. So let's say you have a smart fridge.
0:30:32 You say, "Tell me what's in my fridge." It writes a script to go execute on that fridge.
0:30:37 Um, now, I'm sure a lot of people who are wary of security are shuddering right now.
0:30:44 But, um, that I do see as the future, and I made a video about how, you know, developers probably won't be needed in 10 years.
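Stripped to its essence, the smart-fridge idea is: the model turns a spoken command into a small script, and a runtime executes that script against the device's interface. A minimal sketch follows; `FridgeAPI` and the hard-coded "generated" script are hypothetical stand-ins for what an LLM would actually produce, and a real system would sandbox the execution step.

```python
class FridgeAPI:
    """Hypothetical smart-fridge interface."""
    def list_contents(self):
        return ["milk", "eggs", "butter"]

def generate_script(command: str) -> str:
    # A real large action model would call an LLM here; we hard-code the
    # script a model might plausibly emit for this one command.
    if "what's in my fridge" in command.lower():
        return "result = fridge.list_contents()"
    return "result = None"

def run_action(command: str):
    script = generate_script(command)
    scope = {"fridge": FridgeAPI()}
    exec(script, scope)  # the step security-minded folks shudder at: sandbox this
    return scope["result"]

print(run_action("Tell me what's in my fridge"))
```

The design point is that the generated code only ever touches the device through a narrow, well-defined interface, which is the same argument made below about Microsoft exposing the OS to the AI directly.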
0:30:48 Yeah, yeah. We actually recorded a podcast episode about that concept as well, but we never released it.
0:30:55 I would be interested in watching that. Um, so yeah, I guess, like, right now large action models don't work very well,
0:30:56 especially the ones that
0:30:59 overlay a grid on top of an operating system, because
0:31:05 it's just hard for the large language models to predict an x,y coordinate on top of an image.
0:31:09 Um, but I've seen some decent examples of it, and especially as,
0:31:18 and this is why, again, I'm so bullish on Microsoft, as they expose more of their operating system to the AI directly,
0:31:21 and let the AI control the operating system directly through a well-defined,
0:31:24 uh, interface between them,
0:31:29 the kind of idea of AI controlling your computer becomes more real.
0:31:35 Yeah, yeah. When it comes to the Rabbit, I loved the concept of it, right?
0:31:38 Like, I like that large action model concept of, like, "Hey, go do this thing."
0:31:44 You train it once at your computer, and then forever beyond that you can press a button and get it to do that same thing again.
0:31:50 Right. But obviously, as we've kind of seen so far, the Rabbit sort of under-delivered on some of its promises.
0:31:52 But yeah, we could just kind of leave it there.
0:31:54 Yeah, yeah. I think this is, uh, a
0:32:00 degree of over-promise, under-deliver. A very strong degree of that. I, you know,
0:32:08 you can watch Coffeezilla's video about whether it's a scam or not. I don't believe so, but, um, it certainly under-delivered.
0:32:13 Yeah, yeah. I honestly don't think they built it with the intention to scam people,
0:32:17 but I do think maybe they got in over their heads or something like that. So yeah. Well, you know,
0:32:24 so the whole, um, you know, our mutual friend Bilawal Sidhu, uh, he runs the TED AI podcast.
0:32:28 He just had Helen Toner on, who was one of the board members of OpenAI.
0:32:34 What a get. One of the big things that she said in that interview was, she broke down a lot of the things
0:32:41 that Sam Altman lied about, right? Um, you know, one of the things she mentioned was they found out about ChatGPT on Twitter.
0:32:44 Yeah, I think the most damning one was that
0:32:50 OpenAI's board didn't know that Sam Altman owned the OpenAI fund. Yes.
0:32:57 And in parallel, he went in front of the Senate and said, "I have no financial incentive in OpenAI."
0:33:03 Now, I guess, like, if you want to break it down to technicalities,
0:33:09 maybe that's true. But even then, that's just plain old false to me, right?
0:33:11 Yeah, it's crazy.
0:33:16 I mean, to me, at first I thought, like, this is crazy, they learned about it from Twitter.
0:33:20 But then, like, the more comments I got and the more I thought about it, the more I'm like,
0:33:23 that's actually not that big of a deal, right? Because,
0:33:28 like, at the time the API was being used in a lot of tools.
0:33:33 It was out there in the wild, and so basically OpenAI made their own sort of,
0:33:38 um, their own sort of tools using their own API to show off what it was capable of.
0:33:41 It was a research preview when they first put it out, right? So, like,
0:33:44 if you're thinking you're just putting out a research preview,
0:33:50 are you really running that by the non-profit board that's over here and not involved on a day-to-day basis?
0:33:52 Probably not, right?
0:33:54 It's like, the more time I've had to think about that, the more I'm kind of like,
0:33:56 yeah, that's probably not that big of a deal.
0:34:01 But the thing that you're talking about, where he's flat-out saying "I have no financial interest in this,"
0:34:07 but then he does have financial interest in the OpenAI startup program, like, that to me feels very shady.
0:34:12 The whole thing is weird, though. He doesn't own equity in the company, and that's also the shocking thing to me.
0:34:14 Like, why is the board not fixing that?
0:34:19 Like, that's the thing they should be fixing. Like, how can you have a CEO who doesn't have upside? And,
0:34:25 like, I respect Bilawal, obviously, he's our friend. But, like, Helen Toner, like, what's her background?
0:34:29 I mean, she's basically, like, a political writer who has done some puff pieces for China, right?
0:34:32 So, like, what does she bring to the board?
0:34:35 Like, how did she get on the board in the first place? That's my big question.
0:34:37 And so I'm not saying we should say that she's lying,
0:34:41 but I also don't like Sam Altman saying the opposite of what she's saying. So,
0:34:46 you know, as a fellow builder, and also I have friends who know Sam and say he's a great person,
0:34:47 he's a very honest person,
0:34:52 I wouldn't say that we should just assume that Sam's lying and Helen's telling the truth. Like, she could be lying.
0:35:00 I can't remember exactly, but I think she has some background in AI ethics and things of that nature.
0:35:06 But let's put that aside for a second. Yeah, right. Yeah, she was ousted, right? She was
0:35:10 kicked to the curb, out of OpenAI, kicked off the board after
0:35:14 essentially trying a mutiny. So yeah, so she's going to be biased.
0:35:21 It's easily provable whether or not he owned or owns the OpenAI fund. So that's, like, one issue.
0:35:24 The other issue is whether he
0:35:31 released ChatGPT without telling them. I think, Matt, I'm in your corner on this. Like, I didn't really see that as a big deal.
0:35:33 He even said early on, he was like,
0:35:37 "I didn't think it was going to be as big as it was. Like, I thought we were going to put out this little
0:35:41 experiment, get some feedback, and then roll out a bigger
0:35:46 project." And so I don't think he put it out intending to be, like, ha-ha, going behind the
0:35:50 board's back on this. I think he just didn't think it was a big deal, and he just put it out.
0:35:53 There's also the report of, like,
0:35:56 emotional abuse.
0:35:58 And I'm like, what does that mean?
0:36:06 Yeah, like, I'm going to say, what does that mean? Give me the receipts. I need to see some documentation or something.
0:36:11 Because what some person might consider emotional abuse might just be a strong disagreement on something,
0:36:16 or you didn't get what you wanted, or, you know, like, it's hard for me. I'm not saying it's not true,
0:36:18 but it could be true. Like, how could you say it is true?
0:36:25 Um, the only one that is just, like, cold, hard fact is, yeah, he did own OpenAI's fund.
0:36:29 I've tried thinking, like, it depends on what "he owns the fund" means. What does that mean?
0:36:32 Like, does he actually, like, is he someone who's signing for the fund?
0:36:37 Like, there's nuance there that I think should be unwrapped a little bit. Like,
0:36:42 he could just be the person who signs for it, or does he actually own the entire thing? There are different levels of this. And,
0:36:43 I mean, I think it's crazy.
0:36:48 Like I said, like, I think he should own the fund, or own part of it, or something. Like, there should be some financial upside for him.
0:36:49 If he said that to the Senate,
0:36:54 yeah, that's a major screw-up. And I think a lot of, like, my critiques right now are about him,
0:37:00 like, his judgment would be the thing, right? Like, why did he have Helen on the board?
0:37:07 Yeah, things like this. Why does he not have any ownership in the company? I don't think that's the smartest way to go about things.
0:37:11 Um, well, I think, you know, he's been very, very rich for a very, very long time.
0:37:17 And so I think maybe when they started OpenAI, first of all, it was just, like, a research lab,
0:37:22 so I don't think they ever planned to necessarily commercialize it. He was already ultra-rich.
0:37:27 Maybe he just didn't have that long-term view that, oh, I need to be
0:37:31 financially incentivized, or aligned with this company.
0:37:34 Um, well, he seems to be of the mind that, like, long-term it doesn't matter.
0:37:39 Like, if you reach AGI, does money matter? Like, that's why he did, like, you know, universal basic income.
0:37:45 He did those experiments, which leads you to believe that, yeah, he believes that once you reach AGI, money's no longer a real thing,
0:37:50 so does it matter? And by the way, Nathan, um, I've heard the same thing from both friends
0:37:56 who I have who know Sam Altman, and then also just kind of what everyone's saying on Twitter, their stories.
0:37:59 They all say, like, yeah, great guy, was there for me,
0:38:02 trustworthy. Like, so, like,
0:38:08 the only thing that I really have to point at is the OpenAI fund thing, and, you know,
0:38:10 I don't know exactly what happened there.
0:38:16 Also, I just pulled up an article from Axios from a little while ago, about a month ago,
0:38:21 that says he's no longer the owner of, or in control of, that fund in association with the company. So,
0:38:24 obviously, uh, if that is true,
0:38:32 he, and OpenAI more generally, post the board exodus, realized that that was not a good idea.
0:38:35 So I'm wondering, when he did the Senate hearing,
0:38:37 was he still an owner at that time?
0:38:38 Yeah.
0:38:45 Yeah, he was, for sure. I mean, and, like, we can go back and parse the exact words he used, and I'm sure people will.
0:38:52 Like, if I'm the Senate, I'm thinking, like, hey, we need to give him a call and bring him back here to answer some questions. Because, yeah,
0:38:56 like, it was almost a joke. I forget who asked, but he's like, "Well, you don't have any
0:39:00 incentive in the company. How is that possible?" He's like, well,
0:39:05 I forget his exact words, but it's like, uh, "Well, I have money," and, "No, I don't." So
0:39:08 what he was trying to convey was,
0:39:11 well, of course, I don't have any monetary incentive in the company,
0:39:17 so thus I can make the best decisions for the company, because I don't have some financial
0:39:19 incentive.
0:39:22 Which is actually kind of the opposite of the way the world works, and actually not the truth.
0:39:24 Yeah
0:39:29 So I'm curious, has anything that's happened recently, right, like, there's been a lot of reports, right?
0:39:34 There's the Scarlett Johansson thing, which I think is probably more sort of,
0:39:39 I don't want to say coincidental, but I mean, there's nothing wrong with going,
0:39:44 "I like the sound of Scarlett Johansson's voice. Let me go hire someone that's got a kind of similar voice."
0:39:50 There's nothing illegal about that. There's nothing I even feel is immoral about that, right? Um, like,
0:39:56 do you think any of this, like, crazy news that's been coming out, Sam putting himself on the new safety board, things like that,
0:40:00 has any of it changed your opinion or perspective on Sam or OpenAI?
0:40:07 I think so. I watched, um, the All-In Podcast a week ago. David Sacks had a lot of good points,
0:40:10 uh, on this, and he said, like, you know,
0:40:15 one coincidence is okay, but once you have these coincidences stacking up
0:40:24 sequentially, all of a sudden maybe they're not coincidences, and maybe, you know, OpenAI isn't as well run as we all thought.
0:40:29 I think the, like, new OpenAI safety committee is, like, the most, uh,
0:40:35 like, just, like, surface-level, just a PR stunt, completely.
0:40:39 It's essentially Sam Altman, who runs the board, doing that as a subgroup
0:40:44 of the board, having this safety committee when, just last week,
0:40:48 Ilya and Jan, the two top safety guys at OpenAI, left.
0:40:52 It's like, okay, so you're basically creating your own
0:40:57 committee to oversee your own board and your own company. Okay, I'm sure that's
0:41:03 not biased whatsoever. Yeah, and don't forget, too, the government is creating their own,
0:41:07 uh, sort of AI safety committee, and guess who's on that board?
0:41:09 Sam Altman, Satya Nadella,
0:41:12 Sundar Pichai.
0:41:15 Right. Yeah, regulatory capture is a real thing.
0:41:19 Look, I like OpenAI. I like that they
0:41:25 brought all of this incredible innovation and really opened the world's eyes to what's possible with artificial intelligence.
0:41:31 Um, but, like, yeah, um, let's just say I'm a big believer in open source. Yeah.
0:41:33 I'm a big believer in open source,
0:41:37 but I think a lot of stuff you're seeing at OpenAI is what you would expect to see
0:41:42 from a company that is approaching the most important thing that humanity could ever build,
0:41:46 right, AGI. Like, you would expect that there would be major,
0:41:51 uh, the emotions would be very high. You would expect that major mistakes would be made.
0:41:56 We're still human. You would expect that there would be, like, culture clashes, people having very different opinions
0:42:00 of what you do with this new fire that we're inventing, right?
0:42:05 Um, and so I personally think that's what's going on. Like, he's under immense pressure.
0:42:11 He's definitely made some judgment mistakes. Like, even tweeting out "her". Like, the whole thing about the voice and all that is mostly
0:42:15 bullshit, but him tweeting out "her" was a mistake, right? Like, that was a mistake.
0:42:19 And so I think that's what we're seeing here. And even all the drama, people leaving, stuff
0:42:24 I see in the Helen statement, I see that as culture clash. I see that as,
0:42:28 uh, the people who were more concerned about what AGI could do,
0:42:33 they are now out, right? And the people who are more on, you know, "we want to build AGI,
0:42:37 we think that, net, it's good for humanity," they're the ones now
0:42:41 in charge of the company. And I personally think that's a good thing, personally.
0:42:45 Yeah, I mean, you know, it's hard to speculate what's going on inside.
0:42:50 I think, Nathan, you could be very right about that, and actually likely are right. It's just, like, a
0:42:53 tumultuous time in the company. Like, hyper-growth.
0:42:56 Emotions are high. They've got to be.
0:43:03 Yeah, yeah. And Matt, by the way, um, you mentioned, you know, using a Scarlett Johansson sound-alike, right?
0:43:05 A sound-alike of the voice.
0:43:07 I'll reference David Sacks again, in the last All-In Podcast.
0:43:13 He said, he's like, actually, this happens all the time. If you're a director and you can't afford hiring Scarlett Johansson,
0:43:19 you essentially say, "Get me a Scarlett Johansson type," and it is somebody who kind of fits that
0:43:25 role of Scarlett Johansson, you know, female, blonde, kind of, you know,
0:43:30 has the same history of movies, in the same vein. Like, it happens all the time.
0:43:31 So I don't think they did anything wrong with that.
0:43:36 But I also, like, the fact that they tweeted "her", and yeah, that's the misstep right there, right?
0:43:43 It's the, yeah, they tweeted it. He tweeted "her", and then the voice sounded similar to Scarlett. People put those together and went,
0:43:47 "Oh, he was, you know, trying to clone Scarlett's voice."
0:43:50 I think that was the misstep, not actually hiring a voice actor that sounds close to her.
0:43:54 Tweeting the word "her" is the misstep, right?
0:43:59 So, Matthew, I'd love to, you know, hear what you think about, uh, the news from xAI, that, you know,
0:44:05 Elon Musk raised six billion dollars on an eighteen billion pre. A few people said, "I mean, that means the company's worth 18 billion."
0:44:09 No, that means it's worth 24 billion. That's how pre- and post-money works.
0:44:15 So at a 24 billion valuation at this early stage, it's like, I think that's the largest fundraise ever,
0:44:19 like, at an early stage. What do you think he's going to do with that money?
0:44:21 Do you think it's just all about Grok, or is
0:44:25 something, you know, bigger in the works? Or what do you think, first of all?
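For anyone following the valuation math in this exchange: post-money is simply pre-money plus the new capital, and the new investors' stake is their capital divided by the post-money number.

```python
# Pre- vs. post-money with the xAI round discussed here.
pre_money = 18_000_000_000   # implied pre-money valuation ($18B)
raised = 6_000_000_000       # new capital in the round ($6B)

post_money = pre_money + raised           # $24B, the headline valuation
new_investor_stake = raised / post_money  # fraction of the company sold

print(post_money)           # 24000000000
print(new_investor_stake)   # 0.25
```

So a "$6B raise on $18B pre" means the round sold 25% of a company now valued at $24B, which is the correction being made here.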
0:44:29 Him raising that much money is all about his name, rightfully so.
0:44:35 Yeah, it's not like this is some random dude coming out of the woodwork building an AI company.
0:44:37 This is Elon Musk, who has proven he is
0:44:42 one of, if not the, best entrepreneurs of all time. Okay. So, that aside,
0:44:48 um, I think he already tipped his hand, showed his cards. Uh, he said he's going to be buying a ton of
0:44:53 GPUs, right? He's going to be investing heavily into,
0:44:59 uh, Nvidia cards, building out, and hopefully I'm saying this right, he said the biggest
0:45:03 GPU cluster ever, a hundred thousand H100s, I believe.
0:45:08 Wow. So, um, I think that is the right play. So, a few things.
0:45:14 Um, that is the way to attract talent, right? Because the GPUs are really the bottleneck.
0:45:20 So if you have the GPUs, you can attract the best talent, who can hopefully build the best models.
0:45:26 I'm hoping, and I am all for, more competition in the space. Closed source, open source, I don't care.
0:45:32 Competition is good for the end user, the consumer. So I'm very, very bullish on
0:45:37 using all of that money for buying the compute. I really like that strategy.
0:45:40 And they have a dataset
0:45:42 that
0:45:48 is incredible, that really nobody else has, right? OpenAI has been announcing all of these partnerships,
0:45:50 so they're trying to get all this data,
0:45:55 but xAI has Twitter's data. That's crazy.
0:45:59 And possibly add Tesla, and then Neuralink, and then SpaceX.
0:46:04 And so that's the other question, Nathan. How does
0:46:11 xAI relate to Tesla? And that's actually something that worries me as a Tesla investor. Because, yeah, me too.
0:46:15 Elon is kind of holding the company hostage right now. If they don't
0:46:17 approve
0:46:23 his comp package, which is essentially that he wants to own 25% of the company, he's going to go do AI somewhere else, which,
0:46:26 that is rightfully so. Okay. So,
0:46:30 so him holding the company hostage is not right. Like, well, yeah,
0:46:35 I would say that's not right, but the fact that they're trying to hold back his comp package,
0:46:38 so he basically did a deal where, if you look at the video clips from, I don't know,
0:46:43 how long ago was it when the comp package, was it already, like, 10 years ago? Was it five years?
0:46:44 It's been a long time.
0:46:48 But anyways, when the comp package was introduced,
0:46:53 like, there are video clips of people saying that the goals in the comp package were so ridiculous. Like, what the hell is he doing?
0:46:59 He's crazy. There's no way that anyone would ever reach these goals, right? And if he doesn't reach them, he's not going to get paid.
0:47:04 He's insane. And, uh, so of course, if you reach those goals, he should be compensated for reaching those goals.
0:47:09 It's insane that, over a decade now, he's built this company into this huge behemoth in the industry,
0:47:14 and now some investors are trying to say, "Oh, uh, we don't like your politics or whatever,
0:47:17 so you shouldn't get paid for what you did the last 10 years." Well, they did.
0:47:22 That's crazy. It is the comp, right? That's crazy. I mean, it's absurd. It's absurd. Um,
0:47:27 he is, yeah, I think you told that story perfectly.
0:47:32 It's easy with 20/20 hindsight to look back and say, like, wow, you made so much money, an absurd,
0:47:35 crazy amount of money, which he did.
0:47:37 But his initial comp package, like, Tesla was
0:47:44 nothing when he joined. Yeah, it was like an off-the-shelf electric motor put in,
0:47:49 oh, what are those little cars? I think the Lotus, right? Yeah, that's all it was.
0:47:53 Like, he's the one who turned it into one of the most valuable companies in the world.
0:47:59 He is the one who changed the car industry forever. So, holding the company hostage hurts me as a shareholder, though,
0:48:03 so I don't like that. But I understand, and I want him, like, go ahead.
0:48:07 If you want 25%, fine. Like, if you want to make every decision, fine.
0:48:14 But then if he takes AI and he goes and builds it somewhere else, at xAI, without kind of integrating it into Tesla,
0:48:21 then all of a sudden, what, where's Autopilot going to be? And, like, and then, yeah, um, Tesla's just a car company.
0:48:25 All of a sudden, their valuation is going to plummet, because it's not
0:48:30 this vision anymore. It's just a car company, and there are a lot of car companies out there.
0:48:33 So I don't know. It's, uh, yeah, what do you think?
0:48:36 Yeah, you know, there's a reason that they say, like, Silicon Valley pirates, right?
0:48:39 Like, my friend Ollie Murty, who had this company called Peanut Labs back in the day,
0:48:44 they used to do this big pirate cruise every year, and tons of Silicon Valley elites would come out to it.
0:48:48 We'd all dress up like pirates, right, every year. Uh, there's a reason that that kind of tradition exists,
0:48:53 right? It's, like, people who start big companies, they often break rules, right?
0:48:57 And they often do things that, from the outside, people would be kind of shocked about, the stuff that actually goes on.
0:49:04 That, uh, you know, and there is kind of pirate behavior, of, like, going off to get the treasure and doing whatever the hell it takes to get it,
0:49:10 uh, and breaking rules along the way. And so I think, from that perspective, like, yeah, he's being screwed over at Tesla.
0:49:13 So I'm not shocked at all that, like, yeah, there are some rules that say he can't hold them hostage,
0:49:17 but he's trying to do that. That's not shocking at all. I mean, you can look at
0:49:24 other founder-led companies versus companies that are no longer founder-led but maybe previously were. Let's look at a few examples. Meta AI:
0:49:32 very much founder-led. Zuckerberg is in charge. He still has those super-voting shares, as far as I remember, and so he's able to
0:49:36 invest a ton of money into,
0:49:38 uh, VR,
0:49:39 AR,
0:49:42 and then, on a dime, when the market doesn't like it,
0:49:47 kind of turn the ship around, do a 180, and now they are, like, booming with AI.
0:49:52 And so he's able to do that. Now, you look at Google, where Sergey
0:49:57 and Larry are no longer kind of as day-to-day as they once were,
0:50:01 and Google, you can see, like, they were slow to get AI.
0:50:06 They literally made the paper, the Transformer, or "Attention Is All You Need," which was the defining paper that
0:50:10 all of the current, uh, wave of AI is built on.
0:50:13 They couldn't productize it, couldn't commercialize it.
0:50:19 And even when they did, they stumbled. They had the woke AI, and it's just, like, it's so slow.
0:50:22 I think they're finally starting to get their act together, which is good.
0:50:28 But, like, it's very clear when you have a founder-led company, and somebody who can make decisions quickly, somebody who can,
0:50:29 you know,
0:50:34 break rules, not, you know, not break laws, but break the rules, break the mold.
0:50:36 As a, as a, uh,
0:50:42 Tesla shareholder, give Elon his money. Like, just let him make the decisions, and, uh, if he's wrong,
0:50:46 then hopefully he doesn't get paid as well. Like, that, I think, is how it should work.
0:50:51 Like, he's proven he can do it, so let him try it again. I mean, that's, uh,
0:50:57 I think we covered so much ground on this episode and talked about so many things, but this has been awesome.
0:51:01 We should definitely, uh, do a round two at some point, if you're open to it.
0:51:04 I would love to. You know, thank you so much for hanging out with us.
0:51:09 Before we wrap, though, like, where should people go? I know you've got your YouTube channel,
0:51:13 you're doing stuff over on X. Where's the best place to go, uh, check out what you're up to?
0:51:16 Yeah, uh, definitely check out my YouTube channel,
0:51:22 Matthew Berman. Just search it in the search bar. And then MatthewBerman.com if you want to check out my newsletter.
0:51:27 Awesome. Well, thanks again for coming on and, uh, just sort of nerding out about AI with us today.
0:51:30 Yes, sir. Anytime, guys. Seriously, this was fun.
0:51:32 [Music]

Episode 10: Are closed-source or open-source AI models the future of artificial intelligence? Nathan Lands (https://x.com/NathanLands) and Matt Wolfe (https://x.com/mreflow) delve into this question with guest Matthew Berman (https://x.com/MatthewBerman), a serial entrepreneur and founder of a popular AI YouTube channel.

In this episode, we explore the pros and cons of closed models from giants like OpenAI and Google versus open models like Llama and Mixtral. Matthew Berman shares his insights on the evolving AI landscape, the potential of future models like GPT-5, and the impact of integrating AI into major platforms such as Google and Facebook. We also speculate about OpenAI’s partnership with Apple and the wider implications for the tech industry. Additionally, the discussion dives into the job market for AI specialists, Silicon Valley’s pirate culture, and the challenges of hyper-growth within companies pushing the envelope on AI innovations.

Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

Show Notes:

  • (00:00) Tracking AI advancements.
  • (05:58) Love for open source: diverse, specific, cost-effective.
  • (09:23) Swap open source model for flexibility and quality.
  • (11:41) Doubts about OpenAI’s long-term future, models commoditized.
  • (15:03) Competition and challenges ahead for OpenAI.
  • (19:06) Apple’s leverage with Google, OpenAI’s potential payment.
  • (20:14) GPT 4’s accomplishment, versus rumors of Google’s Gemini.
  • (23:26) Concern about integrating AI into existing platforms.
  • (28:25) Large action model can control computers dynamically.
  • (34:31) Sam Altman downplayed release intended for feedback; denied malintent.
  • (37:39) Senate considering recall after contradictory financial statements.
  • (42:56) Elon Musk raised $6 billion, largest fundraise.
  • (45:32) Questioning validity of Elon’s compensation package.
  • (47:35) Silicon Valley pirates break rules for success.

Mentions:

Free Resources:

Check Out Matt’s Stuff:

• Future Tools – https://futuretools.beehiiv.com/

• Blog – https://www.mattwolfe.com/

• YouTube- https://www.youtube.com/@mreflow

Check Out Nathan’s Stuff:

The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

Leave a Comment