The 4-Step Blueprint To Building a Successful AI Startup w/Vijoy Pandey

AI transcript
0:00:03 – Humans are amazing tool developers.
0:00:06 From the moment we created fire to the first spear,
0:00:08 this is the latest, greatest,
0:00:10 shiniest tool that we’ve developed.
0:00:12 – What you’re describing is actually one of the areas
0:00:14 that I’m A, most excited about,
0:00:15 but also seems to be the thing
0:00:17 that most people are scared of.
0:00:20 – We are actually witnessing a step function change
0:00:23 when it comes to human creativity and productivity.
0:00:25 And so because of that,
0:00:26 there’s going to be a fundamental change
0:00:30 in how businesses operate and how society functions.
0:00:33 (upbeat music)
0:00:37 – When all your marketing team does is put out fires,
0:00:39 they burn out fast.
0:00:40 Sifting through leads,
0:00:42 creating content for infinite channels,
0:00:45 endlessly searching for disparate performance KPIs,
0:00:46 it all takes a toll.
0:00:47 But with HubSpot,
0:00:50 you can stop team burnout in its tracks.
0:00:52 Plus your team can achieve their best results
0:00:54 without breaking a sweat.
0:00:56 With HubSpot’s collection of AI tools,
0:00:59 you can pinpoint the best leads possible.
0:01:02 Capture prospects’ attention with click-worthy content
0:01:05 and access all your company’s data in one place.
0:01:07 No sifting through tabs necessary.
0:01:10 It’s all waiting for your team in HubSpot.
0:01:11 Keep your marketers cool
0:01:14 and make your campaign results hotter than ever.
0:01:16 Visit hubspot.com/marketers to learn more.
0:01:20 (upbeat music)
0:01:21 – Hey, welcome to the Next Wave Podcast.
0:01:22 I’m Matt Wolfe.
0:01:24 I’m here with Nathan Lands.
0:01:25 And in this episode,
0:01:28 we’re gonna break down a four-step process
0:01:31 to start a business in the world of AI.
0:01:34 We’re also gonna discuss how new startups in AI
0:01:37 can compete with the big boys like Google and Microsoft.
0:01:40 Today’s guest is named Vijoy Pandey
0:01:44 and he is the senior vice president of Outshift by Cisco.
0:01:46 And I think you’re really gonna enjoy this conversation.
0:01:47 So let’s dive right in.
0:01:50 Hey, Vijoy, thanks so much for joining us.
0:01:52 It’s great to have another conversation with you.
0:01:54 How are you doing today?
0:01:54 – I’m doing good.
0:01:55 – Awesome.
0:01:57 Well, let’s just dive right in.
0:02:01 Let’s talk a bit about how the business landscape changes
0:02:03 over the next five to 10 years
0:02:06 with this new AI era that we’re coming into, right?
0:02:09 There’s a lot of huge technology
0:02:11 that’s just exploded over the last two years.
0:02:13 And now pretty much every business seems
0:02:16 to be integrating AI in some way.
0:02:20 How do we see this changing the business landscape?
0:02:22 – Yeah, so that’s actually a great question.
0:02:25 I think every time I look at that transition
0:02:28 that’s happened in the past, I would say five years,
0:02:31 but especially in the last two years,
0:02:33 we are actually witnessing a step function change
0:02:36 when it comes to human creativity and productivity,
0:02:38 especially when we’re looking at AI
0:02:40 and generative AI in particular.
0:02:42 And so because of that,
0:02:43 there’s going to be a fundamental change
0:02:46 in how businesses operate
0:02:49 and how society functions.
0:02:50 And because of these reasons,
0:02:53 we are in for a really interesting ride.
0:02:57 And you might think like, okay, so we’ve heard this before.
0:02:59 So what’s different this time?
0:03:03 I mean, I think first and foremost, generative AI,
0:03:06 I mean, AI has been generating content for a while.
0:03:11 I mean, I think the first quote unquote generative AI system
0:03:15 that at least I know of was this computer program
0:03:19 called ELIZA, which was a conversational Q&A program.
0:03:22 But it was based on expert systems.
0:03:24 So it was based on if-then-else statements.
0:03:28 It was pretty hard-coded in the way it approached problems.
0:03:32 But it did generate answers and it was a chat bot.
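(To make the expert-system, if-then-else style concrete: below is a minimal ELIZA-like sketch in Python. The rules and phrasings are invented for illustration; the real ELIZA used a richer keyword-ranking and pronoun-reflection scheme.)

```python
import re

# Toy ELIZA-style responder: hard-coded pattern -> canned reply templates.
# The rules below are invented for this example, not ELIZA's actual script.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "What would it mean to you to get {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."  # fallback when no rule fires

print(respond("I feel stuck on this problem"))
# -> Why do you feel stuck on this problem?
```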
0:03:34 I mean, so things have existed for a while,
0:03:37 but this time with neural networks,
0:03:38 with neural networks with context,
0:03:41 transformers and everything that’s been happening,
0:03:43 few things are taking shape.
0:03:47 One, the creation is actually getting super smart.
0:03:49 So we’re looking at creation
0:03:51 when it comes to not just text,
0:03:53 but audio, video and multimodal.
0:03:57 So you can switch between text and audio, audio and images,
0:04:01 and that switching between all modes of communication
0:04:02 is a big deal.
0:04:03 So that’s a big deal.
0:04:08 The second big deal, which I think is even bigger,
0:04:10 and this to me is the most exciting bit,
0:04:15 is these frontier models are actually beginning to reason.
0:04:18 And so what that means is they are trying
0:04:21 to build semantic relationships
0:04:24 between the elements of the grammar.
0:04:26 So let’s take a simple example.
0:04:29 In English, they’re trying to build semantic relationships
0:04:32 like you and I would in terms of what’s a verb,
0:04:34 what’s a noun, what’s a preposition,
0:04:36 how do I combine these things
0:04:38 to form an intelligent statement?
0:04:39 And so that semantic relationship,
0:04:42 whether it’s English or Japanese, it doesn’t matter.
0:04:45 They’re building those semantic relationships.
0:04:47 But that’s not what’s super exciting.
0:04:52 What’s super exciting is those same semantic relationships
0:04:55 are being built across mathematics.
0:04:58 They’re being built across how proteins come together.
0:05:00 So there’s a grammar and language for proteins.
0:05:03 There’s a grammar and language in math.
0:05:06 There’s a grammar and language in how molecules combine
0:05:08 to build new materials.
0:05:10 And so those are the places
0:05:12 where I think things get really interesting.
0:05:15 So once you have these semantic relationships,
0:05:19 you can actually start reasoning about does this make sense?
0:05:20 Does that make sense?
0:05:22 Can I take a large ambiguous problem
0:05:24 and break it down into smaller steps
0:05:25 that I can then solve?
0:05:29 So to me, that is the next big step
0:05:32 that is being enabled through these frontier models.
0:05:35 And then the third bit is the way we interact
0:05:37 with these models is also changing.
0:05:42 I mean, I talked about ELIZA, and ELIZA was text in, text out.
0:05:46 When we started with ChatGPT, it was text in, text out.
0:05:47 And so it was great as an assistant.
0:05:50 You ask a question, you get a response.
0:05:50 It could be a summary.
0:05:53 It could be some code snippet.
0:05:57 But in the end, it’s still a text response.
0:05:58 Sure, you changed it now to multimodal,
0:06:01 but it’s a content response.
0:06:05 What’s happening now is we are moving towards agents.
0:06:07 And agents are going to be autonomous.
0:06:09 They’re going to be always on.
0:06:13 They’ll be always listening to inputs from the environment.
0:06:15 So you don’t have to push things to it.
0:06:18 It’s always pulling information.
0:06:19 And then once it pulls information,
0:06:21 it actually takes action.
0:06:24 Instead of giving you some content to absorb
0:06:26 and then take a decision or action,
0:06:29 the agent will take action on its own.
0:06:31 But that’s not the end of it.
0:06:34 What we actually figured out is,
0:06:36 and this is something fascinating,
0:07:40 the work that Andrew Ng and these other folks have been doing,
0:06:44 is think of these agents as being no different
0:06:47 from you or I, from humans, right?
0:06:52 You will not come to Vijoy and ask Vijoy a question around,
0:06:57 “Hey, Vijoy, help me plan my next trip to Italy.”
0:07:00 And then two minutes later, you come to Vijoy and say,
0:07:03 “Guess what, I’m having this chest pain.
0:07:05 Can you tell me what that could be?”
0:07:06 And then you tell Vijoy,
0:07:10 “I’m looking for stocks to buy, which stocks should I buy?”
0:07:12 You will not do that.
0:07:14 I mean, it’s like you go to subject matter experts
0:07:17 and you actually figure out what the subject matter experts
0:07:21 have to say, and you trust those subject matter experts.
0:07:22 But ChatGPT behaves in this,
0:07:25 what’s called one-shot or zero-shot approach,
0:07:28 where you say, “Give me this,”
0:07:29 and ChatGPT just picks it up.
0:07:32 And I’m just picking on GPT, but it’s the same with
0:07:34 Anthropic, Gemini, I mean, you take your pick, right?
0:07:38 So what we’re looking at now in the agent work flows is,
0:07:43 can we build these really thin, small, model-based,
0:07:47 really accurate subject matter expert agents
0:07:51 that can come together, collaborate,
0:07:55 constantly learn, and solve a higher-order problem?
0:07:58 And so what Andrew Ng and people like that have shown is,
0:08:01 take a simple thing like developing code.
0:08:05 Instead of asking GPT or Gemini to spit out code,
0:08:08 which is one-shot, zero-shot,
0:08:11 you actually say, “Okay, I have one agent.
0:08:14 Maybe it’s GPT-based, which generates code.”
0:08:16 I have another agent that is sitting on the side,
0:08:19 which is, again, even small and accurate,
0:08:22 which is actually going to test for correctness.
0:08:26 Then I have another agent who’s going to sit and test
0:08:30 for security and scale. And if you have these four or five agents
0:08:33 come together and work on a coding problem
0:08:34 or a software development problem,
0:08:38 the output that you get sometimes is 10x better
0:08:42 than what you get from a single or one-shot approach.
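(For readers who want to see the shape of this, here is a minimal sketch of the generate-and-critique loop described above. The call_llm function is a hypothetical stand-in for whatever model API you use, and the two reviewer roles are illustrative; this is not Andrew Ng's actual implementation.)

```python
from typing import Callable

def call_llm(system_prompt: str, user_prompt: str) -> str:
    # Hypothetical stand-in: wire this to your model provider of choice.
    raise NotImplementedError

def agentic_codegen(task: str, call: Callable[[str, str], str] = call_llm, max_rounds: int = 3) -> str:
    """One generator agent writes code; reviewer agents critique; loop until all say OK."""
    code = call("You are a coding agent. Return only code.", task)
    reviewers = {
        "correctness": "You review code for correctness. List defects, or reply OK if none.",
        "security": "You review code for security issues. List issues, or reply OK if none.",
    }
    for _ in range(max_rounds):
        feedback = {name: call(prompt, f"Task:\n{task}\n\nCode:\n{code}")
                    for name, prompt in reviewers.items()}
        if all(f.strip().upper().startswith("OK") for f in feedback.values()):
            break  # every reviewer is satisfied
        notes = "\n".join(f"{name}: {text}" for name, text in feedback.items())
        code = call("You are a coding agent. Revise the code to address the feedback. Return only code.",
                    f"Task:\n{task}\n\nCurrent code:\n{code}\n\nReviewer feedback:\n{notes}")
    return code
```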
0:08:46 So these three things of creation, reasoning,
0:08:50 and agentic ways of interacting with these systems,
0:08:51 I think these are game changers.
0:08:54 I think they are going to change everything that we do.
0:08:56 They’re going to change the way we approach
0:09:00 not just software work, services work,
0:09:01 but even physical work.
0:09:05 And there’s a PwC study that actually says
0:09:08 that AI, especially based on all of these things,
0:09:12 is going to add 15 trillion plus of value
0:09:14 to the economy by 2030.
0:09:16 That’s trillion with a T.
0:09:20 It’s like a huge amount of value because
0:09:25 of this wide applicability across agentic workflows
0:09:28 and embedded forms in robotics as well.
0:09:32 Because it’s the same thing, just embedded in a robotic form.
0:09:34 Yeah, I think people hear that about agents
0:09:36 and it sounds like sci-fi to them.
0:09:37 They’re like, oh, that’s cool.
0:09:40 That’s coming in 10 or 20 years.
0:09:41 I think a lot of business leaders
0:09:44 don’t realize this is probably like one to three years
0:09:47 where you have very good agents that actually work
0:09:49 and can go off and do work for your company.
0:09:51 And so probably at Cisco, I think it’s great
0:09:53 that you guys have been doing Outshift
0:09:55 because I think more companies should be doing that.
0:09:58 We’re thinking about, OK, even if I’m a big company,
0:10:00 medium-sized company, how can I be innovative
0:10:02 and keep trying new things?
0:10:04 Because like in the age of AI, disruption
0:10:05 is going to happen so fast.
0:10:07 So what lessons have you guys learned at Cisco
0:10:10 and is there anything like maybe our audience could learn from
0:10:13 about how to be more nimble even as a big company?
0:10:14 Yeah, I mean, that’s a great question.
0:10:18 I think the big thing here is the velocity
0:10:21 and the nimble aspect of doing business
0:10:24 and being able to experiment and learn from it
0:10:27 and then iterating on it is actually the key attribute here.
0:10:30 And that’s why Outshift exists.
0:10:32 I mean, that’s the reason for us to exist.
0:10:36 And that’s a great advantage that all these small startups
0:10:39 out there have and you have in the industry
0:10:43 because you have the ability and you don’t have the baggage
0:10:50 to support a customer base that is mired in brownfield pain.
0:10:53 Now, again, the thing I would note here
0:10:57 is the one place that startups can come in and disrupt
0:11:00 is to disrupt that brownfield pain.
0:11:02 So I’m not saying that you should not go after that.
0:11:06 You should absolutely go after brownfield pain
0:11:10 because that’s the place where something like AI could come in
0:11:13 and disrupt the industry pretty massively.
0:11:15 There’s a complexity that businesses are trying
0:11:16 to just grapple with.
0:10:18 So one of the things that we are dealing with
0:10:22 is this widening gap that we see at Outshift
0:11:24 between all of these frontier models,
0:11:27 all of these foundation models, big or small,
0:11:29 and the capabilities that they’re providing,
0:11:32 everything that we talked about, creation, reasoning,
0:11:35 assistance to agents, and all of those frameworks,
0:11:38 all of that is happening at breakneck speed.
0:11:42 And our customers, enterprises, including ourselves,
0:11:45 when we think about us, Cisco as a customer,
0:11:49 we are struggling to consume these in real-world use cases.
0:11:51 And so what are the reasons?
0:11:52 Four big reasons.
0:11:55 Number one, I may have an idea.
0:12:01 I mean, all of us use AI for consumer-oriented tasks.
0:12:02 And we are getting pretty good at that.
0:12:06 I mean, my kid has been using this for his homework.
0:12:10 For God knows how long, since the day ChatGPT got released.
0:12:13 So we’re using it every day.
0:12:16 But businesses might have some ideas,
0:12:17 but they don’t know where to start.
0:12:21 So step number one is, can we build something
0:12:24 that enable businesses to just experiment?
0:12:27 So if they have an idea, I mean, we talked to HR,
0:12:30 we talked to finance, sales, legal,
0:12:34 like all of these teams within these large enterprises,
0:12:37 they have so many ideas.
0:12:41 At one point, we tabulated like 150 ideas, use cases,
0:12:44 that these folks want to come in and experiment with.
0:12:46 But there is no easy way.
0:12:48 So is there an easy way that somebody
0:12:51 can provide for these teams to come together
0:12:55 and experiment with their ideas very quickly at low cost?
0:12:57 So that’s number one.
0:13:01 Number two, now that you figured out, OK, this use case,
0:13:04 this idea sort of makes sense, then
0:13:05 you need to customize it.
0:13:11 Because, again, to our earlier conversation, GPT or Gemini,
0:13:15 they don’t have context around an enterprise’s data
0:13:19 sources and knowledge bases, internal websites,
0:13:21 Snowflake instances.
0:13:22 There’s a whole bunch of data sources
0:13:26 that an enterprise does business on that should not
0:13:29 be accessible to these public models.
0:13:33 So if you need to customize it for your use case,
0:13:36 you need to bring in these sensitive data sources
0:13:38 and knowledge bases and customize these models
0:13:40 with those data sources.
0:13:41 It’s not just that.
0:13:44 You need to figure out what policies make sense.
0:13:46 Because the last thing you want is,
0:13:50 if, Nathan, you personally don’t have access
0:13:53 to a particular document, but because I
0:13:57 customized my internal assistant using that document,
0:14:01 suddenly, Nathan has access to all of the answers
0:14:04 that the assistant is providing based on that document.
0:14:07 That’s a big problem.
0:14:11 So carrying that source of truth,
0:14:17 carrying that identity across data access, knowledge base
0:14:20 access, as well as assistant and AI access
0:14:21 is the other big one.
0:14:23 So that’s customization.
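(The access-control problem described here, where an assistant must never answer from documents the asking user cannot see, usually comes down to filtering on identity before anything reaches the model. A minimal sketch, with invented document names and a toy keyword scorer standing in for a real vector store:)

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set = field(default_factory=set)  # ACL carried with the document

def retrieve_for_user(query: str, user_groups: set, corpus: list) -> list:
    """Return only passages the requesting user is entitled to see.

    Filtering happens before retrieval results ever reach the model, so the
    assistant cannot leak answers derived from restricted documents."""
    visible = [d for d in corpus if d.allowed_groups & user_groups]
    # Toy relevance score: count query words appearing in the document text.
    score = lambda d: sum(word in d.text.lower() for word in query.lower().split())
    return sorted(visible, key=score, reverse=True)[:3]

corpus = [
    Document("comp-plan", "2025 compensation bands and salary review notes ...", {"hr"}),
    Document("eng-handbook", "How we review and ship code at the company ...", {"engineering", "hr"}),
]
print([d.doc_id for d in retrieve_for_user("code review process", {"engineering"}, corpus)])
# -> ['eng-handbook']  (the HR-only document is never even considered)
```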
0:14:27 We’ll be right back.
0:14:29 But first, I want to tell you about another great podcast
0:14:30 you’re going to want to listen to.
0:14:34 It’s called Science of Scaling, hosted by Mark Roberge.
0:14:37 And it’s brought to you by the HubSpot Podcast Network,
0:14:40 the audio destination for business professionals.
0:14:43 Each week, host Mark Roberge, founding chief revenue
0:14:46 officer at HubSpot, senior lecturer at Harvard Business
0:14:49 School, and co-founder of Stage 2 Capital,
0:14:51 sits down with the most successful sales leaders
0:14:55 in tech to learn the secrets, strategies, and tactics
0:14:57 to scaling your company’s growth.
0:14:59 He recently did a great episode called
0:15:02 How Do You Solve for Siloed Marketing and Sales?
0:15:04 And I personally learned a lot from it.
0:15:06 You’re going to want to check out the podcast.
0:15:09 Listen to Science of Scaling wherever you get your podcasts.
0:15:15 The third bit– now you’ve customized it.
0:15:16 You’ve seen things are working.
0:15:18 Maybe it makes sense.
0:15:20 Now let me go forward and make sure
0:15:24 that I’m getting value out of this use case.
0:15:29 So ROI analysis is the other big, big problem.
0:15:31 So if you’re in the observability space,
0:15:35 if you’re in what we’re calling the prompt routing space,
0:15:38 like, does this model make sense for these use cases?
0:15:41 Or should you be looking at something else?
0:15:44 Because Mistral might be good for something.
0:15:47 GPT-4o might be good for something; the version before 4o might be better for something else.
0:15:52 Maybe a Llama 3-, 4-, or 5-billion-parameter model
0:15:53 might be good for something.
0:15:55 Or maybe something that is distilled
0:15:57 might be good for something else.
0:16:00 So how do you pick and choose which models make sense?
0:16:04 How do you pick and choose which data sources are actually
0:16:07 being effective in your use case?
0:16:09 How do you figure out that, hey, I’m
0:16:13 paying X amount of dollars to all of these foundation model
0:16:17 providers, but my business process at the end of it
0:16:20 is really at the same place?
0:16:25 So have I actually benefited from spending all this money,
0:16:29 from bringing AI into the equation for these use cases?
0:16:32 That’s a big question right now for all of these enterprises.
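(One way to frame the model-selection part of this is sketched below: route each request to the cheapest model whose measured quality for that task category clears a bar. The model names, per-token costs, and quality scores are placeholders, not benchmark numbers or recommendations.)

```python
# Toy prompt router. All names and numbers below are made up for illustration.
MODELS = [
    {"name": "small-distilled", "cost_per_1k_tokens": 0.0002, "quality": {"summarize": 0.80, "code": 0.55}},
    {"name": "mid-open-weights", "cost_per_1k_tokens": 0.0020, "quality": {"summarize": 0.88, "code": 0.75}},
    {"name": "frontier", "cost_per_1k_tokens": 0.0200, "quality": {"summarize": 0.95, "code": 0.92}},
]

def route(task_category: str, min_quality: float) -> str:
    """Pick the cheapest model whose measured quality clears the bar for this task."""
    candidates = [m for m in MODELS if m["quality"].get(task_category, 0.0) >= min_quality]
    if not candidates:
        raise ValueError(f"no model meets quality {min_quality} for {task_category}")
    return min(candidates, key=lambda m: m["cost_per_1k_tokens"])["name"]

print(route("summarize", 0.85))  # -> mid-open-weights
print(route("code", 0.90))       # -> frontier
```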
0:16:34 So is there a before and after when
0:16:38 it comes to the business workflow, before AI, after AI?
0:16:40 So that, I think, is pretty critical,
0:16:44 because we are now entering a phase where
0:16:47 there’s a justification needed.
0:16:51 The hype cycle is calming down a little bit,
0:16:52 and you need to justify–
0:16:54 Well, let’s see what happens with GPT-5, right?
0:16:55 [LAUGHTER]
0:16:57 Let’s see what happens there, yeah, exactly.
0:16:59 But I think it’s a good thing, because I
0:17:01 think you’re getting to the point where you’re now
0:17:04 getting into real-world use cases, especially
0:17:05 in the B2B context.
0:17:10 And now, finally, the fourth step is: all of this is great.
0:17:11 Now you’ve deployed.
0:17:12 Now you want to scale it.
0:17:15 You want to make sure that there’s security behind it.
0:17:18 You want to make sure that there’s data loss prevention behind it.
0:17:21 You want to make sure that it’s trusted and safe.
0:17:22 So it’s not hallucinating.
0:17:26 It’s bias-free, it’s ethical, and so on and so forth.
0:17:28 So those are the four steps–
0:17:32 easy start, customization, ROI analysis,
0:17:34 and trust, safety, and security.
0:17:37 These are the places that enterprises are struggling with.
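(On the trust-and-safety step, one common building block is a grounding check: flag answers that are not supported by the retrieved sources. The token-overlap heuristic below is only a sketch of the idea; production systems typically use entailment models or citation checks instead.)

```python
import re

def grounded_enough(answer: str, source_passages: list, min_overlap: float = 0.6) -> bool:
    """Crude hallucination guardrail: most content words in the answer
    must also appear somewhere in the retrieved source passages."""
    tokenize = lambda text: set(re.findall(r"[a-z0-9]+", text.lower()))
    answer_words = {w for w in tokenize(answer) if len(w) > 3}
    if not answer_words:
        return True
    source_words = tokenize(" ".join(source_passages))
    supported = sum(1 for w in answer_words if w in source_words)
    return supported / len(answer_words) >= min_overlap

sources = ["The refund window is 30 days from the date of purchase."]
print(grounded_enough("The refund window for a purchase is 30 days.", sources))             # True
print(grounded_enough("Refunds are available within 90 days, no receipt needed.", sources))  # False
```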
0:17:40 So if you want to start up, innovate in this space.
0:17:43 And there is so much to innovate here
0:17:45 that I, myself, can probably farm out
0:17:48 like hundreds of companies here to go ahead
0:17:50 and tackle all these problems.
0:17:52 But you still have to deal with HR and legal.
0:17:54 I was hoping you were going to say just replace HR and legal
0:17:56 with AI, and then you can do this.
0:17:57 That was my hope.
0:18:00 Well, we are a long way away from that, Nathan.
0:18:04 And that actually brings up a great, great point, actually.
0:18:06 There is a pretty big debate around,
0:18:09 even if we go through these agentic workflows where agents
0:18:12 are now coming in and taking autonomous action,
0:18:16 people get worried, well, is my job at risk?
0:18:18 And so one of the things that I would say
0:18:24 is humans are amazing tool developers.
0:18:27 From the moment we created fire to the first spear,
0:18:28 whatever, right?
0:18:31 We’ve been amazing tool developers.
0:18:34 This is the latest, greatest, shiniest tool
0:18:36 that we’ve developed.
0:18:39 And we will have to make it better.
0:18:45 We will have to make it better so that we can actually
0:18:51 figure out a way for these tools to do the menial tasks.
0:18:53 Whereas we elevate ourselves to solve
0:18:57 the more ambiguous, the harder, the ill-defined problems,
0:19:01 because we need to go after those higher-order problems.
0:19:03 So even something like HR, I mean,
0:19:08 it’s the parts of that process that, in fact, our HR teams
0:19:11 come to us and say, things like reservation summarization,
0:19:14 things like skill set matching, these are things that nobody
0:19:16 wants to spend time on.
0:19:19 So yes, automate those tasks, so.
0:19:21 But how do you advertise a role?
0:19:23 How do you attract a candidate?
0:19:26 How– I mean, these are human-touch processes.
0:19:28 I mean, humans are not going to go away
0:19:30 from these kinds of processes.
0:19:32 It’s just that you’re making these humans better
0:19:34 at what they do.
0:19:37 Yeah, I think what you’re describing
0:19:40 is actually going back to the sort of agents thing,
0:19:44 I think that’s one of the areas that I’m A, most excited about.
0:19:46 But also, that seems to be the thing
0:19:48 that most people are scared of is like, all right,
0:19:52 if we get these AI agents running and doing all these jobs,
0:19:54 like, now what does that leave humans to do?
0:19:58 And I know a lot of humans are good decision-makers
0:20:00 and can sort of do the higher-level thinking
0:20:05 and then get the agents to run it, but not everybody.
0:20:08 Define a lot.
0:20:11 So fast-forwarding five, 10 years,
0:20:13 like, where does that leave humans?
0:20:15 Because I actually think we might even get to a point
0:20:18 with a lot of these AI systems and agents
0:20:20 that they’ll move up that chain
0:20:23 and do more and more of that higher-level thinking.
0:20:27 So this is obviously a very philosophical, theoretical question,
0:20:30 but where does that leave humans five to 10 years from now?
0:20:32 Yeah, I mean, that’s actually a great question.
0:20:35 Again, I mean, if I had my crystal ball,
0:20:37 I’m going to polish that and give you an answer.
0:20:41 But I think probably 99% of the time, I’ll be wrong as well.
0:20:44 So I’ll do my best and try to answer this question.
0:20:46 But the way I think about this is,
0:20:49 so first and foremost, it’s a tool.
0:20:52 And humans have created the tool.
0:20:55 Humans will continue to refine the tool.
0:20:59 Our biases, our insecurities,
0:21:02 all of the things that we stand for
0:21:06 will actually seep into the tool as well.
0:21:10 So first and foremost, as people, as humanity,
0:21:15 we should be pretty careful and pretty cautious
0:21:18 about how we build the future around AI
0:21:22 and just be deliberate in figuring out
0:21:24 whether there’s bias, figuring out
0:21:26 whether there’s hallucinations,
0:21:29 making sure that security problems are taken care of,
0:21:33 making sure that privacy and data ownership
0:21:34 are actually taken care of.
0:21:37 So I think there’s a whole element there
0:21:38 that we need to pay attention to.
0:21:41 But the way to think about this is,
0:21:43 and my favorite analogy here is,
0:21:48 we should be behaving as if there is a printing press
0:21:53 being invented two blocks down my house.
0:21:54 So two blocks down my house,
0:21:57 there is a printing press that is being invented.
0:22:01 I cannot be sitting here sharpening my quill
0:22:04 because that is the wrong thing to do, right?
0:22:05 I know that there is a printing press.
0:22:06 I should embrace it.
0:22:08 It’s the shiniest toy.
0:22:10 It is going to change the world.
0:22:11 Do not sharpen your quill.
0:22:15 Go out there, figure out how that printing press operates,
0:22:19 and then maybe start writing a novel, right?
0:22:23 And let the printing press do the job of printing it out
0:22:26 instead of physically writing that novel out.
0:22:30 So I think that’s the analogy I would throw out there
0:22:35 where embrace it, learn it, use it, improve it,
0:22:37 and get your fundamentals right, right?
0:22:41 So in all of this journey, again,
0:22:43 like I said, there’s going to be another AI winter.
0:22:45 I mean, all this hype aside,
0:22:47 we are going to hit an AI winter
0:22:51 because there’s some fundamental problems like memory.
0:22:53 Reasoning is not solved yet.
0:22:55 There’s a whole issue around,
0:22:57 can we make these models unlearn?
0:22:58 So there are these fundamental problems
0:22:59 that need to be solved.
0:23:01 Sustainability is a big one.
0:23:03 I mean, we are burning down the planet at this point
0:23:04 to make all these models.
0:23:05 So how can we make it–
0:23:08 – I would argue, though,
0:23:09 that AI is what’s going to solve that.
0:23:11 I would argue that we can’t save ourselves out of that.
0:23:12 So anyways.
0:23:13 – Agreed.
0:23:16 I mean, every problem that I just described here, Nathan,
0:23:20 there is an AI for that solution,
0:23:23 and then we’ll be using AI to build new solutions as well.
0:23:25 So there is that duality that exists
0:23:26 in everything that we do.
0:23:28 I mean, that boat has sailed.
0:23:30 So I would say that if you’re sitting here thinking
0:23:33 that AI is not going to change your world,
0:23:35 that’s the wrong place to be.
0:23:37 The printing press is being developed.
0:23:38 So that is going to happen.
0:23:41 And we’re going to leverage AI to solve
0:23:44 sustainability problems, health care problems,
0:23:48 environmental problems, education problems,
0:23:52 accessibility problems across the board.
0:23:53 But as far as individuals are concerned
0:23:57 and where this thing is headed, I would say be pragmatic.
0:23:59 Know that there are more problems to be solved,
0:24:01 and we will be solving them over time.
0:24:06 In fact, there is a famous quote by Thomas Kuhn.
0:24:08 I don’t remember the exact quote,
0:24:12 but paraphrasing it, what it says is,
0:24:13 there are these step function changes
0:24:16 that happen in scientific discovery.
0:24:21 And between those step functions, real work is done.
0:24:23 Because then you take that step function,
0:24:25 the output of that step function,
0:24:28 and you actually make it work in real world scenarios.
0:24:30 That’s where we are right now.
0:24:33 Secondly, between those step functions,
0:24:36 you’re actually doing the work of the next step function.
0:24:38 So just be pragmatic about it.
0:24:40 Figure out what needs to be fixed.
0:24:42 Go and fix it.
0:24:45 Because that’s where we humans thrive.
0:24:48 Now, when it comes to one of the negatives of this,
0:24:52 and I heard this from somebody, and I’m a photographer,
0:24:55 I love to take pictures.
0:24:57 And then you’re looking at models like Sora,
0:24:59 and you’re looking at what AI can do.
0:25:01 And one of the photographers that I admire,
0:25:04 he came back and said, you know,
0:25:10 I really want AI to solve for all the menial work
0:25:13 that I don’t want to do.
0:25:16 I don’t want AI to solve for the stuff that I enjoy doing.
0:25:18 And so there’s a little bit of a pet peeve
0:25:20 that I personally have as well.
0:25:22 Whereas they’re going and tackling things
0:25:25 like video generation and image generation,
0:25:27 which all of us enjoy.
0:25:29 But then even there, if you take a step back and think about it,
0:25:34 it’s like painting used to be a career for a lot of people.
0:25:36 And then photography came on the scene.
0:25:41 Yes, the demand for painters reduced quite a bit.
0:25:46 But the remaining painters that existed and exist to this day,
0:25:47 what did they do?
0:25:49 They upskinned.
0:25:54 And now a painting that you would buy
0:25:58 is way more expensive than a photograph that you would buy somewhere.
0:26:02 So that’s again, going back to humans are good at innovating,
0:26:05 at creating, at doing something new,
0:26:09 even in a shape or form that has been automated through AI.
0:26:11 So that I’m a firm believer.
0:26:13 So we’re talking about like what, five to 10 years?
0:26:17 So I personally think over the next two to three years,
0:26:20 work is going to get dramatically easier and more fun.
0:26:22 Like a lot of tedious things you normally do in work,
0:26:24 a lot of that’s going to get automated.
0:26:25 And so work is going to get better.
0:26:28 Like people are just going to enjoy working more, which is going to be great.
0:26:31 But I think long term, I think we are possibly heading to like a period
0:26:35 where like in 10 years where work is probably going to be optional.
0:26:36 I mean, I really believe that.
0:26:39 Like once you start combining AI with robotics,
0:26:42 you know, the cost of a lot of goods should come down.
0:26:45 And so we’re going to have to rethink a lot of things in society.
0:26:48 So like, yeah, maybe more people are going to use AI to learn how to play guitar.
0:26:51 And maybe, yeah, AI could do that better, but you enjoy playing guitar.
0:26:53 So you do that, right?
0:26:56 Yeah, Nathan, just, just to add to that, I mean, I think you’re absolutely right.
0:27:00 So that in my head, there are two axes of innovation.
0:27:04 So there’s innovation that actually automates things that we do.
0:27:09 And then there’s innovation that actually abstracts away things that we do.
0:27:12 So abstracts away the complexity of things that we do.
0:27:14 And these are two separate axes.
0:27:19 So the analogy that I have is, let’s take, let’s take the example of building a house.
0:27:23 You could build a house brick by brick, brick by brick, build a wall.
0:27:27 Once you’ve built a wall, then you build the house by combining these walls
0:27:29 and a roof and so on and so forth.
0:27:30 You can automate that process.
0:27:35 In fact, there are robots that actually go ahead and lay those bricks out for you.
0:27:36 They exist.
0:27:42 So you can still be building brick by brick, wall by wall and automate that process.
0:27:48 But then you can abstract the complexity and you actually can move the unit of work
0:27:50 to be a higher order function.
0:27:55 And you can do that by saying, I’m going to 3D print this entire house.
0:27:58 When you 3D print that entire house, it’s like, yes, you’ve abstracted out
0:28:03 the complexity of building bricks, laying out walls, connecting these walls,
0:28:05 putting a roof together, all of that is gone.
0:28:09 So now your unit of work is actually very, very different.
0:28:15 It’s abstracted out to a unit of work that is democratizing, in fact,
0:28:16 the way you would build houses.
0:28:20 So abstraction always democratizes work.
0:28:25 And you’re 100% right that with agentic workflows, starting with software
0:28:30 development and the tech space, then moving towards services because it’s
0:28:31 sort of knowledge work.
0:28:36 And agents are suited for that at least in the next two years, three years.
0:28:41 And then looking at embedded AI, which is taking these agents,
0:28:46 putting them in robotics, we are going to move towards a utopia that is just
0:28:51 great, Nathan, which is we will abstract work out where humans get all the time
0:28:56 they want to do what they do best, which is argue.
0:28:57 I’ve had three years.
0:28:59 No, hopefully that’s not it.
0:29:04 I was hoping you were going to go towards like, yeah, let’s build the new
0:29:07 Coliseum and let’s think of things like that, which then goes into the argument
0:29:09 of we should be accelerating more, because yes, this is all going to require
0:29:11 more energy.
0:29:13 And so if we try to have less energy, just none of this is going to work
0:29:17 out for humanity because people are going to demand more energy.
0:29:21 So we have to be building AI faster and then hoping that AI can help us solve
0:29:23 those problems versus trying to save our ways out of it.
0:29:25 Because like, yeah, when people are not working, what are they just going to sit
0:29:26 around in VR?
0:29:27 I hope not.
0:29:29 I hope they’re going to be off and like, I want to go build, I want to build a city
0:29:32 off on the moon using robots, right?
0:29:33 Like, I hope that’s what people are doing.
0:29:35 It’s like amazing stuff like that.
0:29:37 And I think that’s what we’re looking at in the next 10 to 20 years.
0:29:42 So yeah, actually, that comment about VR is actually pretty interesting.
0:29:46 So the way we think about it, if I might digress a little bit, but the way
0:29:50 we’re thinking about agentic workflows is like we just described.
0:29:52 I mean, there are going to be agentic workflows.
0:29:56 They’re going to solve software and tech problems first, because guess what?
0:30:00 Tech folks are building agents and agentic workflows.
0:30:03 They’re going to disrupt the thing that they’re most comfortable with first.
0:30:03 Right.
0:30:05 So that’s going to happen first.
0:30:08 Then we are going to go after the services industry.
0:30:10 And you see a lot of data points already.
0:30:13 The PwC study that I mentioned.
0:30:16 There’s a Sequoia video that talks about it as well.
0:30:19 People are talking about the $15 trillion plus services industry that’s going
0:30:23 to get disrupted with agentic workflows.
0:30:29 I think before we get to embedded agents and robotics, the third step
0:30:35 that’s going to happen is actually avatars, social networks and the metaphors.
0:30:41 Have you think about where Meta is going and where Zuck is going with all of this?
0:30:44 I mean, he pretty much said that in his last learnings call.
0:30:48 And when he released Lama 3, but we’ve been theorizing about this for a while,
0:30:54 where why would Meta make Lama 3?
0:30:57 So there was a whole bunch of reasons why you would want to make it free.
0:31:00 First of all, could disrupt the industry.
0:31:01 Sure.
0:31:03 Yeah, flip the table over and that’s yeah.
0:31:06 I mean, I’m just going to raise the bar here.
0:31:08 Let’s see if five other companies disappear.
0:31:11 I mean, that’s a great reason.
0:31:12 Great.
0:31:17 But really, I mean, they’ve got the data to train really good models.
0:31:21 And one of the things that we are realizing is there’s this whole motion
0:31:25 behind synthetics and using synthetic data to train models.
0:31:29 And we are realizing as an industry that there’s going to be model collapse
0:31:31 if we’re going to use synthetics.
0:31:35 So true human data is actually pretty valuable with surprise of surprises.
0:31:38 I mean, we knew that, but it’s actually being proven.
0:31:42 And guess who has a ton of data is the goos of the world and the metas of the world, right?
0:31:44 So he can leverage that.
0:31:45 He can really build good models.
0:31:53 But the third bit that he’s moving towards is almost a social network number three,
0:31:59 a version three, which is version one was you interacting with friends and family.
0:32:07 Version two of social networking was all of us interacting with influencers like Nathan and Matt here.
0:32:10 So all was interacting with influencers.
0:32:14 So these are not in our immediate friends and family space.
0:32:19 These are people who are known that you can interact with outside of our close circle.
0:32:25 Social network number three of version three is going to be all of us interacting with virtual avatars.
0:32:31 And the way you’re going to train these avatars is using these open core models
0:32:39 that are going to get trained on how you do things and then behave like a virtual Nathan or a virtual Matt.
0:32:46 When it comes to, let’s say gaming, when it comes to finance, when it comes to creativity.
0:32:48 Like a husband playing guitar.
0:32:54 Sorry, that world is coming whether you like it or not, Nathan.
0:33:00 I mean, I know we want to spend people to Mars, but people also will be sitting in.
0:33:01 No, no, I’m a gamer.
0:33:03 I started my career in gaming.
0:33:05 I was a top player on EverQuest back in the day.
0:33:07 So I, yeah, I definitely get it.
0:33:08 I hope that we don’t go there though.
0:33:12 Because like for me, the addiction was not healthy when I was young, right?
0:33:15 So I’m like, I kind of, I hope to steer things away from that.
0:33:18 Like, yeah, sure, hopefully some part of society does the whole VR thing.
0:33:20 But yeah, let’s not stop there.
0:33:23 Let’s think beyond VR and like go off and build amazing stuff again.
0:33:26 Yeah, but there’s a pretty important use case here.
0:33:28 And we’ve done that within Outshift.
0:33:34 What we did is we have a designer, his name is Mark Schiavelli.
0:33:35 He’s an awesome designer.
0:33:41 Now his skills are pretty, pretty much quite a bit in demand, right?
0:33:48 So everybody’s going to the designer because we have a philosophy of design something first,
0:33:52 then get customer feedback, then try and build something and then iterate.
0:33:55 So design, learn, build, iterate.
0:34:00 And so design is the first step in the process as everybody was going to Mark and his team.
0:34:04 And so what he did was he said, “Let me train a virtual Mark.”
0:34:12 And so he ended up training a virtual Mark and virtual Mark is like, I would say 70% as good as real Mark.
0:34:16 Right now, let’s go get better next year.
0:34:27 But what I’m trying to get to from here is one thing that this does is it actually democratizes skills.
0:34:36 So think about an expert in finance that today some of us can afford to pay for and get their advice.
0:34:49 But the larger strata of humanity is not able, economically, societal barriers, whatever, is not able to access those skills or those SMEs.
0:35:00 So I believe that there is the flip side of this, which is you can actually democratize those skills whereby a lot more of humanity can actually benefit from those skills.
0:35:03 And that is something that we need to tap into.
0:35:06 So yes, there is the aspect of VR and gaming and all of that.
0:35:17 But there is a pretty real use case here around democratizing knowledge even more and giving that access to parts of society that have never had access to that.
0:35:23 And that, I think, will drive up again, creativity and productivity even more.
0:35:26 Yeah, I think we’re already actually seeing Meta do this to some degree, right?
0:35:36 They just rolled out this week or last week a new feature where anybody on Instagram and Facebook could go and train their own mini AI version of themselves.
0:35:44 And so somebody can go and talk to the virtual Matt Wolf that has all of the data on me and sort of understands how I would respond.
0:35:46 Like they just rolled that feature out.
0:35:51 The next step just feels like, all right, now let’s embody it into an avatar, maybe in VR, right?
0:35:53 So it’s already happening.
0:35:55 We’re already seeing that play out.
0:36:02 And the more accurate they get and the more expert they get, going back again to our agent conversation.
0:36:16 If you train them to be really good experts in, let’s say, security, when it comes to application security, like go really narrow, go really deep, and be experts on that so that they hallucinate less.
0:36:18 They are providing accurate answers.
0:36:26 And after a couple of these training sessions and fine-tuning sessions, I get comfortable enough where they probably hallucinate less than I do.
0:36:28 And they make fewer mistakes than I do.
0:36:30 I’m like, yeah, go for it, right?
0:36:33 And we are rapidly approaching that world.
0:36:38 And once we’re in that world, maybe I’m thinking, maybe I should monetize that, right?
0:36:45 And maybe let others access that avatar and leverage my skill in a much broader, more scalable way.
0:36:50 So you’re on the beach in Hawaii, checking the Vijoy podcast, like, oh, he’s doing a great job.
0:36:55 He just sent me a report and asked me for feedback on a few key parts.
0:36:57 Not in a virtual environment.
0:36:59 Yeah, exactly.
0:37:00 That was my point, though.
0:37:04 Yeah, our next podcast interview with Vijoy is going to be with Vijoy’s avatar.
0:37:06 Because I mean, it’s the same thing anyway.
0:37:07 Our avatars as well.
0:37:08 We’ve already been replaced.
0:37:10 You just don’t know it.
0:37:14 But, you know, Andrej Karpathy, actually, you know, he left OpenAI, started a new startup.
0:37:17 And this is kind of the same idea of what he’s doing.
0:37:22 He’s trying to help educators educate at bigger scales than ever possible, right?
0:37:28 A single educator can go in, put all of their knowledge into a system,
0:37:32 and then anybody can go and access that educator now.
0:37:34 And it doesn’t matter what language you speak, right?
0:37:39 I could go in and if I only know Japanese, I could learn from that person in Japanese, right?
0:37:42 If I only know English, I could go and learn from that person in English.
0:37:49 So I feel like that ability to, you know, take a base of knowledge, put it into a system,
0:37:52 and then just let anybody access that at scale.
0:37:58 I mean, think about how that, like, can benefit, you know, countries, lesser developed countries
0:38:00 that don’t have access to some of this information.
0:38:04 To me, that’s like, that’s the real power of what we’re talking about here.
0:38:08 I think about from, like, the Silicon Valley perspective, like, people don’t know how to do startups.
0:38:11 They don’t understand anything about startups if you’re not in San Francisco,
0:38:16 or most people don’t, and there’s all this knowledge just, like, you know, within a few blocks, right?
0:38:19 And, but you could have, like, a Paul Graham bot, right?
0:38:23 Where it’s like, you just talk to the Paul Graham bot, and it’s like he’s, like, interviewing you.
0:38:26 Instead of wasting your time with someone who’s gonna be super busy, or you’re scared of talking to them
0:38:30 because you don’t know anything about startups, you can just talk to the bot and, like, practice on the bot
0:38:34 and see if you have, if your idea makes any sense at all and have the bot, just like,
0:38:36 you can just feed it all of Paul Graham’s essays.
0:38:37 He’s had some of the best essays ever, right?
0:38:41 You just feed it all the essays, then you’re chatting with the bot based on the essays,
0:38:44 and I just think of what that’s gonna do for so many different industries.
0:38:48 Like, yeah, people who before couldn’t do startups, now they’ll be able to try startups or whatever.
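(The “feed it all of Paul Graham’s essays and chat against them” idea is essentially retrieval-augmented generation. A minimal sketch, assuming a folder of plain-text essays and a hypothetical call_llm function in place of a real model client; the keyword scoring stands in for an embedding-based vector store.)

```python
from pathlib import Path

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: wire this to whatever model provider you use.
    raise NotImplementedError

def ask_essay_bot(question: str, essay_dir: str, top_k: int = 3) -> str:
    """Naive retrieval-augmented Q&A over a folder of .txt essays."""
    question_words = set(question.lower().split())
    essays = [(p.name, p.read_text(encoding="utf-8")) for p in Path(essay_dir).glob("*.txt")]
    # Rank essays by crude keyword overlap with the question.
    ranked = sorted(essays, key=lambda e: len(question_words & set(e[1].lower().split())), reverse=True)
    context = "\n\n".join(f"[{name}]\n{text[:2000]}" for name, text in ranked[:top_k])
    prompt = (
        "Answer the question using only the essay excerpts below, and cite the essay names.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```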
0:38:51 Yeah, I mean, that actually brings up two points.
0:38:56 So one is, like, we’ve talked about digital twins, and basically, now digital twins are actually going to be real.
0:39:00 I mean, again, that’s what we’re talking about, whether it’s in education, whether it’s in healthcare.
0:39:05 I mean, one of the things that we always talk about is things like drug discovery.
0:39:09 Yes, we talked about the fact that we’re building these frontier models to do
0:39:12 protein folding and figure out novel drugs.
0:39:16 You can try it on a population, which is what a frontier model would have
0:39:21 at a statistical level, but will it work for me?
0:39:27 It’s something that we can make these agents that are trained on my nervous system, on my
0:39:33 habits and the way I eat, the way I exercise or don’t exercise for that matter.
0:39:36 I mean, all of those things can be part of this bot.
0:39:42 And then you can try these drugs out without actually impacting the human themselves.
0:39:46 So it’s drug discovery, it’s material discovery, all sorts of digital twins.
0:39:53 The other thing I would say is, like, you were talking about these bots and how they can train
0:39:57 people and give them access to, like, Silicon Valley and startups and all that.
0:40:00 There’s an interesting anecdote that I want to bring up.
0:40:06 So when we shipped our first assistant within our product,
0:40:09 this product called Primaptica, it is actually a security product.
0:40:12 It secures cloud-native, cloud-first applications.
0:40:22 And so when we shipped this assistant, the whole goal here was, can we improve the day-to-day?
0:40:23 Can we make people more efficient?
0:40:28 Can we make SecOps and DevOps and SREs more efficient when they’re using this product?
0:40:30 Can we help them communicate?
0:40:34 Because a lot of the friction that exists is SREs don’t want to talk to devs.
0:40:39 And devs are like, oh, they are friction points and they’re not letting us move fast.
0:40:43 So there’s always this pain, and we want to just bridge that gap.
0:40:45 So that was the intent.
0:40:47 And that we did.
0:40:49 We did actually solve that intent.
0:40:50 But guess what?
0:40:53 When we started looking at the kinds of queries that were coming in,
0:40:58 the kinds of queries that were coming into that bot or the assistant were
0:41:03 — help me understand what a CVE means.
0:41:08 And if you’re a security expert, you would know what a CVE is.
0:41:10 But maybe you’re a newbie in security.
0:41:14 You don’t know what a CVE is, and you’ve joined this team,
0:41:19 and you’re a fresh out-of-school grad, and you don’t want to ask somebody and embarrass yourself.
0:41:27 You can ask stupid questions to a bot that you would not ask some expert, right?
0:41:30 And here you have the expertise of 10 people.
0:41:33 And you can ask really stupid questions.
0:41:34 I do it all the time already.
0:41:42 I’d like to ask really stupid questions and not lose face.
0:41:44 And that’s like a positive use case.
0:41:45 Yeah, no, totally.
0:41:50 I mean, I literally do that with AI already, because I’ve got ChatGPT on my phone.
0:41:54 The other day, my daughter was asking me, why is this root beer called root beer if it’s not a beer?
0:41:55 And I’m like, well, let’s ask.
0:41:59 It’s like, I use it for things like that right now.
0:42:02 Questions that I feel too dumb to ask somebody in real life,
0:42:04 but I have no problem asking a computer.
0:42:08 I asked AI about at 7-Eleven today with my wife.
0:42:11 She was like, because in Japan, 7-Eleven’s everywhere, right?
0:42:14 And but she didn’t know the history of like 7-Eleven, it being 7 days a week.
0:42:17 And I knew that part, like, oh, it’s 7 days a week, and she goes,
0:42:18 but what does 11 mean?
0:42:19 I’m like, I don’t know.
0:42:24 And it was 7 AM to 11 PM.
0:42:26 So I learned that today using AI, right?
0:42:28 But anyways, yeah.
0:42:33 But again, if you look at the serious aspect of this is, again, democratizing knowledge.
0:42:38 And so you’re providing knowledge to, it’s not just asking the stupid questions,
0:42:45 but it’s also enabling people, like you were saying earlier, in countries where there is no
0:42:51 such access for societal groups that have not had access.
0:42:55 In the past, to just suddenly have access.
0:43:00 I mean, the way Google democratized the web and knowledge, this is taking it 100x forward.
0:43:06 Okay. So the last topic I want to touch on, you’ve already kind of touched on it a little bit,
0:43:12 but I think one of the big fears with AI right now, especially among like smaller startup
0:43:18 founders, is that you’ve got the Microsofts, the Googles, you know, the Metas out there
0:43:20 that we’ve already seen this happen, right?
0:43:25 Somebody would go and develop something with like an OpenAI API.
0:43:29 And then two months later, ChatGPT just makes that a feature of their product, right?
0:43:33 Or Microsoft just goes and makes that a feature of their product.
0:43:41 So how do you see smaller startups actually competing and, you know,
0:43:43 staying in the game against the bigger incumbents?
0:43:45 Yeah, that’s a great question.
0:43:51 I think the way to think about this is, again, go back to history and learn from history.
0:43:57 So at some point, there was a similar sentiment and statement being made about
0:44:01 these big cloud providers. It’s like, why should I build a product,
0:44:03 well, I know, and I’m not going to name the cloud provider,
0:44:08 when that cloud provider is actually just going to consume it in their ecosystem?
0:44:15 But the thing is, all of these companies, large companies, they can only go and scale
0:44:18 in certain areas.
0:44:23 I mean, there is a thesis that all of these companies are going to put their weight behind.
0:44:25 So for example, let’s take open AI.
0:44:34 Open AI is going to go after the biggest, baddest, best foundation model, frontier model.
0:44:36 That’s why they’re called frontier models.
0:44:39 It’s like, it’s always going to be the best model out there.
0:44:45 And everyone else, if you’re in the model game, you will be compared against open AI.
0:44:49 I mean, there’s no question, at least for the next couple of years, right?
0:44:51 I don’t know how it will change in the future.
0:44:58 But OpenAI is not going to concentrate on things that don’t fit into that
0:45:01 sort of mold or in that swim lane.
0:45:07 Yes, they might build an app ecosystem and they might dabble in a few things.
0:45:11 Because they’re trying to figure out avenues of revenue.
0:45:15 They’re trying to figure out how you can use that foundation model to build use cases.
0:45:20 Because only then people will come and use that model and they can monetize.
0:45:23 So there’s an ecosystem that they’re going to build up.
0:45:25 But they’re not going to, they cannot.
0:45:33 I mean, no company on planet Earth can do everything excellent in an excellent way all the time.
0:45:34 Yet.
0:45:44 But that is like, it’s one of those things where what is the niche that you want to go after?
0:45:50 Go squarely after that niche, figure out use cases, figure out especially brownfield
0:45:53 pain points that all of these companies tend to avoid.
0:45:58 But customers are willing to pay for that brownfield pain to disappear.
0:46:01 So go after that brownfield pain point.
0:46:06 Take these new tools and disrupt that brownfield pain point.
0:46:07 That is the way to succeed.
0:46:10 So go and hunt for these pain points.
0:46:10 You probably already know them. This audience probably already knows that you are dealing
0:46:14 with them day in, day out.
0:46:20 You just need to take a breather, think through what the top 10 of those look like.
0:46:24 Pick one, go and solve it using these new tools.
0:46:30 That’s the way to enter the market because nobody is going to solve all of the pain points
0:46:32 all the time in a perfect manner.
0:46:35 I mean, I’m a firm believer of that. Yet.
0:46:41 Maybe by that point, I feel that everybody has made enough money.
0:46:45 Our audience has made enough money that they can all be on that Hawaiian beach.
0:46:46 Yeah.
0:46:47 Or they’ll be in VR.
0:46:53 Well, this has been an amazing discussion, Vijoy.
0:46:56 I thank you so much for joining us and talking about this stuff with us.
0:47:00 I think when we came into this call, we sort of anticipated going in one direction
0:47:02 and we went in a totally different direction.
0:47:07 And it was, I think, even more fascinating than where we were going to take it originally.
0:47:14 But if people want to go and learn more from you and hear more of what you have to say about
0:47:16 this stuff, is there somewhere online they can go check you out?
0:47:19 Are you on Twitter, YouTube, any place like that?
0:47:20 Yeah, so I think so.
0:47:23 They can check out our website, outshift.com.
0:47:27 You can follow us on LinkedIn.
0:47:28 You can follow outshift.com.
0:47:30 You can also follow me on LinkedIn.
0:47:36 And then we have a newsletter called The Shift, in which we actually send these nuggets of information
0:47:37 every so often.
0:47:39 We don’t pitch.
0:47:43 It’s all about nuggets of information, so you can subscribe to that as well.
0:47:44 Amazing.
0:47:46 Well, thank you once again for joining us today.
0:47:48 This has been such a fun conversation.
0:47:49 I really appreciate it.
0:47:49 Thank you.
0:47:50 It’s been a lot of fun.
0:47:52 Thanks for your time.
0:48:08 [Music]

Episode 18: How can embracing AI solve fundamental problems in areas like sustainability, healthcare, and education? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) dive deep into this topic with Vijoy Pandey (https://x.com/vijoy), who leads Cisco’s Outshift team.

In this episode, Vijoy Pandey reveals the 4-step blueprint to building a successful AI startup and emphasizes how AI’s integration in different sectors could revolutionize the way we live and work. Covering everything from the potential of AI to democratize knowledge, to the challenges startups face against tech giants, this conversation is packed with insights on adopting AI for larger societal benefits.

Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

Mentions:

Check Out Matt’s Stuff:

• Future Tools – https://futuretools.beehiiv.com/

• Blog – https://www.mattwolfe.com/

• YouTube- https://www.youtube.com/@mreflow

Check Out Nathan’s Stuff:

The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
