AI transcript
0:00:14 Our mission is to really transition the world from links to answers and build the ultimate knowledge app.
0:00:18 Hey, welcome to the Next Wave podcast.
0:00:19 My name is Matt Wolfe.
0:00:23 I’m here with my co-host Nathan Lanz, and we are your chief AI officers.
0:00:29 It is our goal with this podcast to keep you in the loop with all the latest AI news, the coolest AI tools.
0:00:36 And set you up for success in this Next Wave that we’re entering into with the world of AI and technology.
0:00:38 Today, we’ve got an excellent show for you.
0:00:44 We’ve got the founder and CEO of Perplexity on the show, Aravind Srinivas.
0:00:53 We had a fascinating conversation with Aravind, and he told us his story, how he went from growing up in Chennai to moving out to California and going to university at Berkeley.
0:00:55 He’s worked at Google DeepMind.
0:00:56 He’s worked at OpenAI.
0:01:09 He’s got a pretty impressive resume, and now Perplexity is one of those companies that all the venture capitalists in Silicon Valley are chasing after and just throwing a ton of money at because they love the idea.
0:01:12 And we want to dissect that and break that down in this episode.
0:01:14 It’s probably the best way to do research with AI.
0:01:16 Actually, I’ve started using it to do research for the episodes.
0:01:25 Yeah, when you look at the current AI landscape, right, you’ve got these large language models like GPT-4 and Anthropic’s Claude and Gemini.
0:01:32 And you’ve got all of these large language models that are sort of put in the form of a chatbot, like we see with ChatGPT and Claude, right?
0:01:41 Perplexity took a little bit different of an angle on it, and they wanted to make sure that, A, they were searching the web, and B, they were citing all of their sources.
0:01:51 And they were really sort of the first AI chat query platform that started to cite the sources and share where they actually found the information.
0:02:01 They have a really, really great Chrome extension where you install the Chrome extension and it will essentially search anything on the site or domain that you’re on.
0:02:03 I found that to be super, super helpful.
0:02:05 So I love this approach Perplexity took.
0:02:15 And Nathan, you and I were talking right before we hit record about how they’re totally agnostic to the actual underlying large language model.
0:02:21 Yeah, which means they’re also like a huge supporter of open source, which obviously I’m a big fan of because I don’t like the idea of just having one
0:02:24 company or two companies that rule the future of AI.
0:02:30 So it was great to hear from Aravind, like his thoughts on open source and how Perplexity is kind of supporting open source by being agnostic.
0:02:32 Yeah, I mean, it’s a cool place to be, right?
0:02:39 Because you can go use Perplexity and you don’t have to worry about, all right, what is the best model out there right now?
0:02:44 I mean, people like Nathan and I were constantly going, all right, Claude is marginally better than ChatGPT.
0:02:46 So let’s use that now instead, right?
0:02:48 We’re keeping our finger on the pulse of that kind of stuff.
0:02:56 But if you use something like Perplexity, well, it’s always just going to use the most beneficial model for what you’re trying to achieve.
0:02:56 Yeah.
0:03:03 When all your marketing team does is put out fires, they burn out.
0:03:07 But with HubSpot, they can achieve their best results without the stress.
0:03:16 Tap into HubSpot’s collection of AI tools, Breeze, to pinpoint leads, capture attention and access all your data in one place.
0:03:19 Keep your marketers cool and your campaign results hotter than ever.
0:03:23 Visit HubSpot.com/marketers to learn more.
0:03:33 So on this episode, Aravind is going to break down his entire story, about what Perplexity was before it became what it is now.
0:03:36 And when he started, it was completely different.
0:03:42 So he’s going to break down that whole story arc for you of how it started and how it got to where it is today.
0:03:44 We talk about the current state of AI.
0:03:48 We talk about all of these devices that AI is getting rolled out into.
0:03:56 You’re going to learn about the past, present and future of AI and how Perplexity is firmly placing themselves in the center of all of it.
0:03:59 So let’s go ahead and jump in with Aravind Srinivas.
0:04:03 Aravind Srinivas, thank you for joining us today on the next wave.
0:04:05 Thank you for having me, Nathan and Matt.
0:04:07 You know, I’ve been a big fan of Perplexity since the beginning.
0:04:11 I think when I first saw you tweeting about it, I tried it out and was blown away.
0:04:20 So I think it’d be useful to know how did you get Perplexity started from starting out in India to now having one of the hottest startups in Silicon Valley?
0:04:21 Like, what was that journey?
0:04:29 I mean, by the way, I think it’s better to tell the true story than something that’s retrofitted to make it look like a much better story for PR.
0:04:31 Yeah, OK. Yeah, yeah, yeah. The true story is great.
0:04:40 Look, I never intended to start any company, but then there was this movie I watched, really deeply impacted me, called The Pirates of Silicon Valley.
0:04:42 I don’t know if you guys have seen that movie.
0:04:43 Yeah, I have, yeah.
0:04:49 It’s one of the most authentic portrayals of Steve Jobs and Bill Gates and Microsoft and Apple.
0:04:50 Yeah.
0:04:53 And I was like, OK, I really need to be at Silicon Valley.
0:04:54 It’s fantastic.
0:04:59 And then did not have enough money to go do a master’s myself.
0:05:01 So I thought, OK, someone else has to pay for you.
0:05:03 So why don’t I try for a PhD?
0:05:10 And I figured the best way to do a PhD is to get started in some kind of research and establish like a track record.
0:05:14 So I went to a professor at IIT and said, hey, like, can you help me do some research?
0:05:19 And he was like, yeah, you know, there’s this paper on AI playing Atari games.
0:05:23 Like, there’s this company called DeepMind that trained an AI to play Atari games.
0:05:26 Why don’t you try to reimplement that whole paper?
0:05:31 So like, he got me excited about all these ideas like transfer learning and hierarchical learning and things like that.
0:05:38 And like, I wrote a few papers with him, and that got me admission at UC Berkeley for an AI PhD.
0:05:45 And there I did a little more work, and like OpenAI noticed my work, particularly this guy called John Schulman.
0:05:49 He’s basically, like, the research inventor of ChatGPT.
0:05:54 At that time, he was doing research more in RL, and he invited me to do an internship.
0:05:57 And until that point, I was kind of like on a high.
0:06:01 I was thinking I was really doing well writing papers, like coming all the way from India here.
0:06:06 And when I entered OpenAI, I was like, damn, like a smack in the face.
0:06:11 It’s still humbling. The people here are like stalwarts, like superstars.
0:06:14 All of them are really amazing, like talented people.
0:06:17 But it was not a very stable organization at the time.
0:06:20 It’s probably never been stable, for what it’s worth.
0:06:25 So then I got to work on all these unsupervised and generative models,
0:06:27 and got an internship at DeepMind.
0:06:32 And that’s where I think I got the entrepreneurial ambitions because I always wanted to start a company like that.
0:06:38 Where I knew I would not be successful starting the next, like, Instagram or TikTok,
0:06:42 even if like luck was on my side because I don’t have the skill set of like
0:06:44 hacking the dopamine of people.
0:06:48 So my skill set is more, OK, thinking about like some problem more deeply
0:06:51 and trying to see what we can do with some research, but quickly ship it to product.
0:06:54 That was the sweet spot I was trying to get at.
0:06:56 And Google is a great example of that.
0:06:58 So that was very motivational.
0:07:00 That doesn’t mean I wanted to start a search startup.
0:07:03 It was just like motivational to try to start a company in that fold.
0:07:06 And, you know, one thing led to another.
0:07:10 I watched, you know, this TV show Silicon Valley, right?
0:07:11 You wouldn’t believe it.
0:07:15 I actually thought it was meant to be a comedy.
0:07:19 But people told me it’s too real.
0:07:21 I lived in Silicon Valley for 13 years.
0:07:23 It’s definitely real.
0:07:25 I was in Berkeley, right?
0:07:28 So I was not very well connected to Silicon Valley.
0:07:32 So I thought this show was just meant for laughs.
0:07:34 But people told me, dude, don’t laugh at this.
0:07:39 I cry watching it because it’s too real and reminds me of my own life.
0:07:42 And then I was like, OK, I like compression, generative models.
0:07:44 All of that was like amazing.
0:07:47 I tried to convince people to work with me.
0:07:49 Nobody wanted to start a company.
0:07:53 Something that you realize as a founder is like every time you go to your friends
0:07:57 and say, let’s start a company either over drinks or coffee, doesn’t matter.
0:07:59 All of them would say, hell, yeah, let’s do it.
0:08:01 And then you just forget about it.
0:08:03 Yeah, people don’t really tell you how hard doing a startup is, I think.
0:08:06 It’s real when you just say, yeah, I’ve started it.
0:08:09 This is the company.
0:08:10 Are you willing to join?
0:08:13 Whether you join or not, I’m going to do it.
0:08:15 And that’s when people are like, wait, is this real?
0:08:16 Is this serious?
0:08:19 And then they’re like spooked and interested, right?
0:08:22 So the reason you’re not having cofounders is people don’t think you’re serious enough.
0:08:27 Anyway, so all that, like one thing led to another, and I pitched the stupidest idea
0:08:31 to one of my first investors, saying, hey, like, we need to disrupt search.
0:08:36 So it’s hard to disrupt Google through the text form factor.
0:08:42 So how about we disrupt Google through the vision form factor, through the vision pixels?
0:08:46 So imagine we all wore glasses and we all saw this, and then we could just ask
0:08:48 questions about whatever we see.
0:08:50 And he was like, OK, all that sounds cool.
0:08:53 But look, you’re like literally one person right now.
0:08:57 And like, you’re not going to be able to execute on this yourself.
0:09:02 Start focusing on more narrow things, get a team and then try to build up towards this.
0:09:06 So that was a very good advice given to me by this great investor named Elad Gill.
0:09:07 Oh, Elad.
0:09:10 And then, like, Nat Friedman and Elad decided to fund me.
0:09:12 They were like, OK, look, you’re from OpenAI.
0:09:16 You have all this, like you’ve done work in deep mind research.
0:09:17 You understand these things.
0:09:19 But again, like, you don’t have any idea.
0:09:21 You don’t have any product.
0:09:25 So we’re going to give you like one or two million to play around and tinker.
0:09:26 We’ll see what happens.
0:09:30 And then I take that money and like, we start focusing more on like searching over databases,
0:09:36 like searching over your own spreadsheets, searching over CSVs, asking questions about like data sets.
0:09:37 And that was fun.
0:09:38 Like as a data nerd, I really loved it.
0:09:43 And like, my co-founders Dennis and Johnny joined to try it out.
0:09:45 They were all excited to experiment too.
0:09:49 And then like, we went to enterprises and said, hey, dude, it’s like we have this thing.
0:09:51 We used to show demos.
0:09:55 What if you gave us your data and we powered search over that?
0:09:58 You just upgrade the functionality for your users.
0:10:03 Like, we went to websites like PitchBook and Crunchbase, and all of them would listen to us,
0:10:08 watch our demos and be like, our engineering teams can do this, man.
0:10:09 So thank you.
0:10:14 And it felt really depressing every week, where we would keep doing demos and nobody wanted it.
0:10:20 And then one day I just realized nobody cares about like a three-person startup.
0:10:22 They think they can do it themselves.
0:10:24 They don’t value you and it’s fair.
0:10:24 It’s fair.
0:10:27 Like you’ve not earned their value yet.
0:10:32 I’m sure this will be useful for bigger companies, but they are never going to talk to us.
0:10:37 If these smaller companies don’t talk to us, like let’s earn the attention of the bigger guys
0:10:41 by doing search over public data sets that are really big.
0:10:45 Only then they’ll get convinced that we can handle large databases.
0:10:50 And so we started scraping Twitter, because, I mean, obviously we all like Twitter.
0:10:50 We are all using it.
0:10:52 Yeah, right.
0:10:53 Yeah, X as it’s called today.
0:10:57 And Jack Dorsey had Twitter API.
0:11:03 Elon also had it, but Elon basically charges so high that it’s impossible to use it now.
0:11:05 But Jack Dorsey had the Twitter API.
0:11:10 And if you had an academic access account, you could just scrape a lot of tweets every day.
0:11:14 So we would just create these academic access accounts.
0:11:16 For non-commercial use.
0:11:19 And we keep scraping social graphs and tweets.
0:11:20 And then we will power search over that.
0:11:26 We would power search over, like, oh, how many followers does Nathan have that Matt is also following?
0:11:32 Or like, what are the tweets of Matt’s that Nathan has liked in the last 10 days?
0:11:38 Or like, which of Nathan’s tweets has Elon Musk replied to?
0:11:39 There is stuff like that, right?
0:11:40 Right.
0:11:53 And then you can sort of, like, it’s fun, these sorts of social searches, like tweets about AI, or tweets about 3D diffusion models that Matt has tweeted about.
0:11:57 You can do a lot of these searches that current Twitter just really sucks at, right?
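The social-graph queries described here can be sketched with a toy in-memory model. The data, account names, and field names below are purely illustrative assumptions, not Perplexity's actual schema or scale:

```python
# Hypothetical scraped social graph: who follows whom, who liked what,
# and the tweets themselves. All names and IDs are made up for illustration.
follows = {
    "nathan": {"alice", "bob", "carol"},
    "matt": {"bob", "carol", "dave"},
}
likes = {"nathan": {"tweet_1", "tweet_3"}}
tweets = {
    "tweet_1": {"author": "matt", "text": "3D diffusion models are wild"},
    "tweet_2": {"author": "matt", "text": "New podcast episode out"},
    "tweet_3": {"author": "matt", "text": "AI agents update"},
}

def followed_by_both(a, b):
    """Accounts that both `a` and `b` follow (set intersection)."""
    return follows[a] & follows[b]

def liked_tweets_of(liker, author):
    """Tweets written by `author` that `liker` has liked."""
    return sorted(
        t for t in likes.get(liker, set())
        if tweets[t]["author"] == author
    )
```

At real scale these would be joins over a graph store rather than Python sets, but the queries ("followers Matt also follows", "Matt's tweets Nathan liked") reduce to exactly this kind of intersection and filter.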
0:12:04 And once we built this demo, we showed it to a few people, like Elad and Jeff, and they were all blown away by that.
0:12:07 Damn, like this is a completely new experience.
0:12:14 Like, okay, large language models can help you build new search experiences that was never possible before.
0:12:16 And they invested in us.
0:12:31 And then we used their investment as credibility to attract some engineers. At least two engineers joined us after that, saying, hey, okay, like, look, you guys may not be well known, but it looks like you got funding from some top people.
0:12:38 So you won’t be like randos, you know. At least I can trust you guys enough to work with you, and the demos are actually really impressive.
0:12:39 So let’s work together.
0:12:45 And then we went to these bigger companies and said, hey, like, look, these things are working.
0:12:46 Do you want to work with us?
0:12:51 Now they would at least like take our meetings more seriously and say, okay, you know what, these are all our problems.
0:12:52 Like, what do you want to do?
0:12:54 So that was the stage we were in.
0:13:00 And then one fine day we were like, hey, like this part of talking to companies and like trying to sell solutions to them is not even fun.
0:13:14 So why don’t we just like search over the whole web, like make the LLM just look at the links, take the relevant parts of the links and then let the LLM do all the reasoning in terms of whether it has to return a table or a paragraph or citations or whatever.
0:13:16 And then we built a little more general solution.
0:13:23 Like, I think Paul Graham talks a lot about this, like how often when you realize a simpler way to do something, like it becomes a big unlock for you.
0:13:29 And then one weekend, like we prototype this idea of just taking the links and summarizing them with citations.
0:13:32 And then it was working reasonably well.
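The weekend prototype as described, taking the links for a query, extracting the relevant parts, and letting the LLM answer with citations, boils down to a short retrieval loop. This is a minimal sketch of that idea; `web_search`, `fetch`, and `call_llm` are hypothetical stand-ins (passed in as functions) for a real search API, HTTP client, and model endpoint, not Perplexity's actual implementation:

```python
def build_prompt(query, snippets):
    # Number each snippet so the model can cite it inline as [1], [2], ...
    sources = "\n".join(f"[{i}] {s}" for i, s in enumerate(snippets, start=1))
    return (
        "Answer the question using only the sources below, "
        "citing them inline as [1], [2], ...\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}\nAnswer:"
    )

def answer(query, web_search, fetch, call_llm, top_k=3):
    links = web_search(query)[:top_k]          # 1. search the web for links
    snippets = [fetch(url) for url in links]   # 2. pull relevant text from each link
    prompt = build_prompt(query, snippets)     # 3. ground the model in those sources
    return call_llm(prompt), links             # 4. answer plus the links that back it
```

The key design point from the interview is step 4: the model, not a template, decides whether the grounded answer comes back as a table, a paragraph, or a list, while the returned links become the visible citations.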
0:13:38 And three days before ChatGPT got released, OpenAI put out this text-davinci-003 model.
0:13:39 Yeah.
0:13:46 And that model just made these summarizations so much better that we were like, damn, this is truly a big deal.
0:13:47 It’s an inflection point in AI.
0:13:51 Everyone’s like, this is a technology we never even knew we wanted.
0:13:54 But now that we have it, we just like don’t want to go back.
0:14:01 And there was one thing, which is they had a knowledge cutoff, and they don’t have citations.
0:14:03 They don’t have, like, grounding in facts.
0:14:09 So there was a space for somebody else to come and put a fact grounded citation powered answer bot.
0:14:11 And we already had it.
0:14:13 It’s not even like we had to build it in like three days.
0:14:14 We already had it.
0:14:17 We just had to put together a web front end, and then we got it ready quickly.
0:14:19 We sent it to a few investors.
0:14:22 I remember my first feedback was from this guy
0:14:24 I really respect, Daniel Gross.
0:14:27 And he said, Aravind, this is cool.
0:14:30 But you should not have it as, like, a hit-enter button for your query.
0:14:35 It’s a submit button, because it’s so slow it takes like 10 seconds to get an answer.
0:14:37 It’s almost like I’m submitting a job.
0:14:43 So you should call it a submit button and have a queue of queries or something.
0:14:47 From there onwards to now being asked, how is the service so fast?
0:14:48 That is the progress, right?
0:14:52 We have made it not just because of our own engineering team, which is amazing,
0:14:59 but also the fact that chips are getting better, faster, cheaper, and models are getting better, faster, cheaper.
0:15:06 And we made a bet when we launched: there were similar other services in our space that were
0:15:08 also launching a mix of search and LLMs.
0:15:13 But we were the only ones who had the conviction that it should just be answers.
0:15:15 The links are only in the sources.
0:15:18 Others are like, I still want to have the 10 blue links.
0:15:21 I want to have a sidebar with a chatbot.
0:15:24 I want to have a summary panel at the top.
0:15:26 I don’t want to like change it too dramatically.
0:15:31 And we were like, dude, if you don’t change it dramatically, no one’s going to really realize
0:15:32 you’re different from Google.
0:15:35 They’re just going to think you’re Google with some add ons.
0:15:36 Like, that’s not exciting.
0:15:41 You have to be truly differentiated so that, okay, even if you’re worse than Google, even
0:15:45 if you’re slower, even if you suck at navigational queries so that people go back
0:15:50 to Google, they’ll at least register in their minds that you are better than Google
0:15:54 on certain things, which is like actually asking a question, deeper research.
0:15:56 And they’ll come to you for answer engines.
0:15:57 They’re not going to come to you for search engines.
0:15:59 They’re not going to come to you for product comparisons.
0:16:03 They’re not going to come to you for like ordering San Pellegrino, right?
0:16:06 They’re going to come to you for asking whether it should be San Pellegrino or LaCroix.
0:16:07 What should I get?
0:16:10 It’s going to register in their mind why you’re different and better.
0:16:12 So that is the position we took.
0:16:16 We had conviction that, like, even if we got answers wrong, even if people made fun
0:16:21 of us for, you know, hallucinations, over time all these problems would get much better.
0:16:26 And that ended up being one of the best decisions we made, to be called an answer
0:16:29 engine instead of, like, a search engine with LLMs on top.
0:16:32 And so we have been proven right.
0:16:37 Our thesis was correct that this is the right format to interact with information on the web.
0:16:40 And that ended up being true for Perplexity.
0:16:43 Our traffic has been growing exponentially since we started.
0:16:50 So then we said, OK, look, we were initially on this treasure hunt, trying to figure
0:16:52 out some product that would resonate with users.
0:16:55 This is the product that it’s growing in terms of traction.
0:16:58 Also, let’s commit ourselves to building a company.
0:17:01 Let’s not be a seed round $2 million project.
0:17:04 Let’s try to build a company around it and a business around it.
0:17:09 And so we went and raised venture funding rounds and use that money to like keep growing
0:17:12 even more. And that’s our current plan.
0:17:17 Our mission is to like really transition the world from links to answers and build
0:17:18 the ultimate knowledge app.
0:17:21 Like if people go to Perplexity, they should just feel smarter every day.
0:17:23 That’s the vibes we want people to feel.
0:17:29 We don’t want the vibes of dancing girls on TikTok or like celebrities posting stuff on
0:17:32 Instagram. We just want the vibes of feeling smarter.
0:17:38 And I think asking questions is a great way to feel smarter, discovering new threads,
0:17:41 your friend sharing interesting queries with each other.
0:17:44 These are all utility values we’re trying to add to people’s lives.
0:17:48 We’ll be right back.
0:17:51 But first, I want to tell you about another great podcast you’re going to want to listen to.
0:17:55 It’s called Science of Scaling, hosted by Mark Roberge.
0:18:00 And it’s brought to you by the HubSpot Podcast Network, the audio destination for
0:18:01 business professionals.
0:18:06 Each week, host Mark Roberge, founding chief revenue officer at HubSpot, senior
0:18:11 lecturer at Harvard Business School and co-founder of Stage 2 Capital, sits down
0:18:15 with the most successful sales leaders in tech to learn the secrets, strategies and
0:18:18 tactics to scaling your company’s growth.
0:18:23 He recently did a great episode called How Do You Solve for a Siloed Marketing and
0:18:25 Sales, and I personally learned a lot from it.
0:18:27 You’re going to want to check out the podcast.
0:18:31 Listen to Science of Scaling wherever you get your podcasts.
0:18:36 I’m curious on your thoughts on this.
0:18:40 So obviously there’s a lot of people that are worried about like content creation, right?
0:18:45 If I’m creating, if I’m writing blog posts or creating podcasts or making YouTube videos
0:18:49 and, you know, doing my best to like SEO them or whatever so that people will find them.
0:18:55 And these chatbots in the future will just sort of answer the question without me
0:18:57 actually needing to navigate to the site and read the article.
0:19:02 Does it sort of disincentivize content creators to keep on creating content,
0:19:05 If people aren’t like clicking over to their website anymore.
0:19:08 I’m just curious your thoughts on that whole argument around it.
0:19:13 Our model of citation and attribution is, I would say, kind of the right model.
0:19:18 Now you can ask, like, what about these future AI models that are just training on,
0:19:22 as they like to joke, all publicly available data?
0:19:27 Well, we don’t do that ourselves.
0:19:30 Like we’re not in the business of creating these large foundation models.
0:19:35 So we’re not like taking the models and like benefiting from the data you create.
0:19:39 There’s one thing I think Nat Friedman has said about this that I kind of like.
0:19:44 It should be okay to train on someone’s data as long as you’re not, like, literally
0:19:46 verbatim reproducing it.
0:19:47 It’s kind of similar.
0:19:53 Like, for example, when I watch any of you guys’ broadcasts or YouTube videos,
0:19:55 is it fair to say I’m training on it?
0:19:58 Because I’m kind of consuming your data, right?
0:20:04 But if I were literally taking that and ripping it off, creating value out
0:20:09 of it without giving you any kind of attribution, without saying, okay, according
0:20:13 to Matt or according to Nathan, if I’m just literally
0:20:16 reproducing your thing word by word, that seems problematic.
0:20:22 And that is basically the whole core point that The New York Times is making against OpenAI.
0:20:27 And I think there’s something, you know, reasonable there, but like, they kind
0:20:30 of over-engineered the prompts to show those cases.
0:20:34 But the deeper, deeper point being made is that like, there is a potential to just
0:20:35 regurgitate content here.
0:20:38 So what happens?
0:20:40 Like, should the person be given credit?
0:20:45 And I think the current paradigm of people fighting for licensing deals and
0:20:49 trying to make money out of the AI companies also doesn’t seem like
0:20:49 the right solution.
0:20:51 It seems like a temporary solution.
0:20:56 The longer-term solution is, like, whatever value is created per query should
0:21:01 be shared by the person surfacing the answer and the sites and the sources that
0:21:05 got cited, which is more of the Spotify model, which works.
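As a back-of-the-envelope sketch of that per-query split: the 50/50 ratio and the even division among cited sources below are assumptions for illustration only, not anything Perplexity or anyone else has announced.

```python
def split_query_revenue(revenue, cited_sources, publisher_share=0.5):
    """Split one query's revenue between the answer engine and its cited sites.

    `publisher_share` is the fraction pooled for publishers, and the pool is
    divided evenly among the cited sources; both choices are illustrative.
    """
    if not cited_sources:
        # Nothing was cited, so the engine keeps the whole amount.
        return revenue, {}
    pool = revenue * publisher_share
    engine_cut = revenue - pool
    per_source = pool / len(cited_sources)
    return engine_cut, {src: per_source for src in cited_sources}
```

A real scheme would presumably weight sources by how much of the answer they contributed, much as streaming payouts weight by listens, but the core idea is the same: payout is a function of being cited in an answer, not of a page click.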
0:21:10 So this is the sort of thing I kind of feel all AI companies should subscribe to:
0:21:13 not being overly greedy, because if people don’t continue to create good
0:21:17 content on the web through their blogs or tweets, or like journalists writing their
0:21:22 good essays, or YouTube creators making good videos, then there’s really no value
0:21:24 in your bot either.
0:21:28 Your bot is only as useful because it’s surfacing good content from the web and
0:21:32 getting into the hands of people who are asking questions relevant to that.
0:21:34 And if people stop creating good content, your bot is also not going to be
0:21:35 that useful, right?
0:21:37 You know, we need a two-way relationship.
0:21:42 And so instead of trying to be greedy, like, and trying to create a company that’s
0:21:47 eating all the profits like Google did in the previous era, if you’re like less
0:21:51 greedy and like more long-term focused like Spotify, I think you can create a
0:21:55 much better model here. And that’s something we are aspiring to do.
0:21:58 Yeah, Google just kept getting more and more greedy over time too, right?
0:22:01 Like adding more and more advertising links at the top.
0:22:04 Where now when you do a Google search, you’re seeing like five or six or seven
0:22:08 or eight ad results, or they’re instantly answering the question, or
0:22:12 they’re sending you to one of their properties as the first result.
0:22:17 Yeah, a lot of people mistakenly think that Google pays everyone some money
0:22:21 for being able to use their content in the 10 blue links UI.
0:22:23 The reality is not that.
0:22:24 They don’t pay anybody anything.
0:22:27 I’m curious on your thoughts about the, you know, the whole open source,
0:22:30 closed source debate. Obviously, that’s a very hot topic right now.
0:22:32 Elon Musk is calling out Sam Altman.
0:22:34 There’s that whole battle going on.
0:22:37 Do you think you think the future of like the large language models,
0:22:41 do you think it’s going to be more open source, closed source, a combo of both?
0:22:43 Like, what are your thoughts on how this is all going to play out?
0:22:46 I think it’s combo of both.
0:22:50 Open source will always lag the best closed source model.
0:22:54 And there’s probably only one company in the world that has the money
0:22:57 and the incentives to keep open sourcing models, which is Meta.
0:23:01 Everybody hates Zuckerberg, but that’s the only guy who’s truly committed to open source.
0:23:07 The rest of the people are all kind of, like, proxy open source or whatever.
0:23:11 No need to make fun of them, because everyone’s trying to do the best they can, right?
0:23:17 Like, nobody else has a cash cow like Zuck to be able to spend so much money
0:23:21 and open source it all and give away the benefits, because unlike Google,
0:23:23 he doesn’t even have a cloud business.
0:23:26 He doesn’t want to have either. He’s like, I don’t care.
0:23:29 I just want to like make more ad revenue.
0:23:33 And so he has the incentive to just give it out and own the ecosystem
0:23:39 and ensure that he profits from the developers who are building on top,
0:23:41 so that their engineering can benefit Meta.
0:23:45 And nobody else is truly committed.
0:23:51 So then we should ask, like, when can he beat GPT-4?
0:23:52 That’s the right question to ask.
0:23:57 Maybe it’s this year, maybe, you know, I hear they’re trying their best.
0:24:04 But given that he’s purchased 600,000 H100s, it’s inevitable that he beats them, right?
0:24:05 Like, it’s just a matter of time.
0:24:09 Now, then you can say, okay, by the time he beats them, would Sam have a better model?
0:24:11 Definitely.
0:24:16 Like they’ve already had a year for GPT-4 and they’ve been upgrading GPT-4
0:24:19 through the course of the year, but they had a year to build an even better model.
0:24:25 So most likely that would be a version of closed source either as OpenAI or Anthropic
0:24:29 or Gemini and that would be better than the best llama at that point.
0:24:32 But that doesn’t mean open source is getting destroyed.
0:24:37 Like most people just want to use APIs and you need somebody else to serve these models.
0:24:41 But you don’t have to overly depend on one provider.
0:24:42 I think that’s the future we want.
0:24:44 You don’t want to overly depend on one provider.
0:24:50 And you want the ability to take these models and customize them for what you want to build yourself.
0:24:56 And if there is a lot of friction in being able to train and deploy your own models,
0:25:00 because literally you have to get a GPU cluster, you have to train things,
0:25:01 you have to deploy, you have to do evals.
0:25:06 Like, people think, oh yeah, I’ll just take this model and create, like, fake news bots
0:25:08 in the world and destroy the world or something.
0:25:10 That’s not how the internet works, actually.
0:25:13 It’s hard to create bots by the way.
0:25:15 Like there are so many layers of security you need to bypass.
0:25:20 And like there are so many solutions to like fighting the bots problem and fake information
0:25:25 problem compared to like say banning the use of open source models.
0:25:30 Because the more you block people from having access to powerful technology,
0:25:33 the more motivated they’ll be to get access to it.
0:25:38 You know the news of how this Chinese engineer was leaking all the details, right?
0:25:44 From Google, and having somebody else badge in for him at the office.
0:25:48 So this is what is going to happen if you go too much on the other extreme.
0:25:52 You know, a lot of the concerns that people have about the open versus closed
0:25:55 also has to do with, you know, some of the bias elements, right?
0:25:59 Like the people are worried that if Microsoft or Google or one of these
0:26:04 companies is in control, well now it’s a big corporation who controls the narrative
0:26:07 that is coming out of these bots, right?
0:26:12 Where open source maybe you could steer it and sort of have your own sort of biases,
0:26:15 preferences, whatever inside of the model.
0:26:16 100%.
0:26:18 I think it’s scary to have, like, one company that then
0:26:21 in the future determines what human history was.
0:26:25 And they’re telling you the answer, and it’s not exactly the truth.
0:26:29 It’s like some modified version of the truth that fits some agenda that they have.
0:26:31 Closed source is going to continue to be way ahead, like Aravind said,
0:26:35 but I’m glad the open source is there because we definitely need alternatives.
0:26:37 So it’s not just one company ruling everything.
0:26:38 Yeah.
0:26:38 So what do you think?
0:26:41 You mentioned Zuckerberg real quick.
0:26:42 I’m just curious on your thoughts on this.
0:26:46 You mentioned that he’s incentivized to open source it.
0:26:49 What is the incentive for Meta to be open sourcing it?
0:26:53 We don’t have to think about anyone as altruistic or like a good or bad person.
0:26:55 Just purely capitalistically.
0:27:00 It’s in his incentive that other engineers build on top of Llama rather than GPTs.
0:27:05 So that like the engineering people do in the open source ecosystem,
0:27:08 Meta can learn from that and like use it in their products.
0:27:13 Like if you can see how other people take Llama and make it faster,
0:27:18 learn how to fine-tune it, get it deployed on edge devices,
0:27:24 learn how to personalize these LLMs with very limited parameter-efficient fine-tuning.
0:27:29 All these are like algorithmic benefits that Meta can just look at what people are doing
0:27:32 in the open and put it in their products.
0:27:35 Instead of saying, oh, I’ll hire all the best engineers in my company
0:27:38 and then rely only on their own brains to do these things.
0:27:42 Because you want the whole ecosystem to benefit faster, right?
0:27:46 And you also benefit from the ecosystem benefiting.
0:27:49 And you have the cash cow, you have the user base to like,
0:27:51 you know, go and deploy all this at scale.
0:27:55 He actually benefits a lot by putting it out and like letting other people build on top.
0:27:59 Now, there’s the other argument that I believe he’s making
0:28:04 honestly, though people can be skeptical of his true intentions in saying this:
0:28:08 if you really care about safety, you’d rather want as many eyeballs on it.
0:28:12 You can’t be the person who comes and says, we need to make all this safe.
0:28:14 This could go really wrong and dangerous.
0:28:18 So you’d better trust us, these four or five people in the world,
0:28:20 with all these billions of dollars in funding,
0:28:25 tightly tied to Microsoft or, you know, Google or Amazon.
0:28:27 And, you know, we’ll decide what is good for you.
0:28:31 But you’d rather have as many people have access to these things, right?
0:28:35 If it is truly dangerous, you’d rather have as many people be aware
0:28:39 and educated and have access and be able to form opinions about it, right?
0:28:42 Because that way, even if somebody is misusing it,
0:28:44 you at least know how people can misuse things.
0:28:45 Right.
0:28:49 And that way, you’ll be able to build guardrails against it.
0:28:51 Instead of just saying, trust us and we know what we’re doing.
0:28:55 Well, this has been an amazing conversation.
0:28:57 Everybody needs to check out perplexity.ai.
0:29:00 There is a free version that you can use of it.
0:29:02 There’s also a premium version.
0:29:03 I’m on the premium version.
0:29:05 I also have a Rabbit R1 coming.
0:29:08 So I’m excited to play around with that with perplexity on board.
0:29:13 Is there anywhere that you want people to follow you, maybe on Twitter, YouTube,
0:29:14 something like that?
0:29:16 Where do you want to send people after listening to this episode?
0:29:18 Perplexity underscore AI.
0:29:19 That’s our Twitter handle.
0:29:25 And mine is AravSrinivas, A-R-A-V-S-R-I-N-I-V-A-S.
0:29:26 Very cool.
0:29:29 Well, thank you so much for spending the time with us today
0:29:32 and answering all of our questions and hanging out with us.
0:29:34 And yeah, it’s been a great conversation.
0:29:34 Thank you.
0:29:35 Thank you, Aravind.
0:29:38 [MUSIC PLAYING]
Is our search functionality changing? How will AI change how we find information? Who will usher in the next wave of the online search experience? The Next Wave answers those questions and more as Matt Wolfe (https://twitter.com/mreflow) and Nathan Lands (https://twitter.com/NathanLands) talk with Aravind Srinivas (https://twitter.com/AravSrinivas), CEO of Perplexity AI. They discuss Perplexity’s beginnings, Aravind’s journey from Google DeepMind and OpenAI to Perplexity, and how the company is changing how people use search, differentiating itself from Google. Plus, open source vs. closed source, AI’s implications for creators, and more!
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://link.chtbl.com/4FZET15d
—
Show Notes:
(00:00) Aravind’s beginnings from India to Berkeley and Silicon Valley
(05:28) Entering OpenAI humbled Aravind and motivated him to pursue entrepreneurship.
(07:34) Pitching a bold idea for disrupting Google search.
(11:10) New search experiences utilizing large language models.
(15:23) Deep research brings clients seeking answers, not products.
(16:50) Transitioning the world’s search results from links to answers
(19:17) Value sharing model needed for AI companies.
(22:11) Incentive for profits, expecting technological advancements.
(25:31) Encouraging open source for mutual benefit.
—
Mentions:
- www.Perplexity.ai
- the Pirates of Silicon Valley (Movie)
- Google DeepMind
- Silicon Valley (TV Series)
—
Check Out Matt’s Stuff:
• Future Tools – https://futuretools.beehiiv.com/
• Blog – https://www.mattwolfe.com/
• YouTube- https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
- Newsletter: https://news.lore.com/
- Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano