AI transcript
Everyone’s talking about vibe coding, but the reality is, for most things, vibe coding doesn’t work right now. And even the guy who coined the term, Andrej Karpathy, recently posted that he’s now trying to provide more context to models, because he’s realized that’s what you have to do to get good results back. Welcome to The Next Wave podcast. I’m your host, Nathan Lands, and today I’m going to show you the secret weapon that all the top AI coders are using. You know, everyone’s talking about vibe coding — vibe code this, vibe code that — but what they’re not telling you is that you can’t vibe code most of anything that’s actually important right now. For any important AI coding, you want to give the model the proper context to know what it’s doing, versus just throwing everything at it, which is what Cursor and Windsurf and a lot of these other tools everyone’s talking about do. Today I’ve got the founder of Repo Prompt, Eric Provencher, on here, and he’s going to show you how you can use Repo Prompt to take your AI coding to the next level. So let’s just jump right in.

Cutting your sales cycle in half sounds pretty impossible, but that’s exactly what Sandler Training did with HubSpot. They used Breeze, HubSpot’s AI tools, to tailor every customer interaction without losing their personal touch, and the results were pretty incredible. Click-through rates jumped 25%, and get this: qualified leads quadrupled. Who doesn’t want that? People spent three times longer on their landing pages. It’s incredible. Go to hubspot.com to see how Breeze can help your business grow.
Hey, we’ll be back to the pod in just a minute, but first I want to tell you about something very exciting happening at HubSpot. It’s no secret in business that the faster you can pivot, the more successful you’ll be, and with how fast AI is changing everything we do, you need tools that actually deliver for you in record time. Enter HubSpot’s Spring Spotlight, where we just dropped hundreds of updates that are completely changing the game. We’re talking Breeze agents that use AI to do in minutes what used to take days, workspaces that bring everything you need into one view, and Marketing Hub features that use AI to find your perfect audience. What used to take weeks now happens in seconds, and that changes everything. This isn’t just about moving fast; it’s about moving fast in the right direction. Visit hubspot.com/spotlight and transform how your business grows, starting today.
Thanks for coming on. Yeah, it’s nice to finally put a face to it — we’ve chatted for a while. It’s cool that you’ve been using Repo Prompt for a few months now. Yeah, I’ve been telling people about Repo Prompt for probably the last six months or so. It’s felt like my AI coding secret weapon. Everybody’s talking about Cursor, and now Windsurf, and I do find Cursor useful, but I was like, why is no one talking about Repo Prompt? Because for me, every time I’d get into a complicated project — as soon as the project got a little bit complicated — the code from Cursor would just stop working for me. It would just not know what was going on; you could tell it wasn’t managing the context properly. And then when o1 pro came out, that was when I really noticed Repo Prompt and started using it a lot. Yeah, you had to go to o1 pro to really get the best out of AI for coding at that point. Absolutely — and for working with o1 pro, Repo Prompt was by far the best. It was kind of shocking to me that only a few people on X were talking about it. Yeah, most people don’t know about it. I mean, it’s the only tool that I use to work with AI, and for a long time it was just Sonnet. I felt like I was able to get a lot more out of Sonnet than with other tools, just because the full context window was there, and I wasn’t bleeding through the nose with API costs. Using the web chat let me get to a place where I had a tool that could do not just putting context out, but taking the changes back in and applying them. Yeah, I like to think I’m the number one user, but actually, looking at the stats sometimes, I don’t think that’s even true anymore.
Yeah, I mean, I really wanted to bring you on after I saw that tweet from Andrej Karpathy the other day. So Andrej Karpathy — he used to be at Tesla AI, and now he’s one of the best educators on how LLMs work and things like that. He had this tweet saying he was noticing himself adopting a certain rhythm in AI-assisted coding — for code he actually and professionally cares about, in contrast to vibe code. You know, he coined the term vibe code, which everyone’s been using, and then he basically goes on to talk about stuffing everything relevant into context, all of this. I was like, he literally doesn’t know about Repo Prompt. How does the top AI educator in the world, a top expert on all of this, have no idea about Repo Prompt? I was like, okay, I need to get Eric on the podcast and we’ll try to help with that. Yeah, I appreciate that. I mean, looking at that tweet, you see exactly the flow that got me started. When you start getting serious about coding with AI, you start thinking: how do I get the information to the AI model? And the UX on all these other tools is just not cutting it. You need a tool that lets you quickly select and search for your files, find things. And yeah, I recently added the context builder — I don’t know if you’ve tried that out. Maybe you could explain it, try to simplify it, and I think we should then just jump into a
demo and we can kind of just go from there. Sure thing, sure thing. So the first thing you’re going to do when you open up Repo Prompt is pick a folder. I can either open a folder manually or just go to the last ones used. Generally, when you’re working with a code base like this one, in Flutter — if you’re not familiar with Flutter, it’s a way of building multi-platform apps — it has a lot of different build targets and things that aren’t relevant, so you can see it’s got Linux, macOS, web, and all that stuff. But when you’re working in a repo like this, you want to think through: what are the files that matter? If you’re using a coding agent, like with Cursor or whatever, the first thing it does when you ask a question is, okay, let me go find what the user’s trying to do, let me search for files and pick those out. But if you know what you’re doing with your code base, you tend to know: okay, I’m working on this button toolbar — great, so I’ll clear the selection out, and I’m just working on these views here. Great, so I’ve selected those, and that’s it. Then I can see the token use for those files; it’s pretty small, so I’m able to just get to work, type my prompt, and paste it in here: “Help me update all the docs pages.” So if I do that — I’ll just use Gemini Flash quickly to show what that looks like — the context builder, the way that works, is that it will actually search for files using an LLM, based on the prompt you’ve typed out. A big part of using Repo Prompt is that you have to know what it is you’re trying to select here. Right, right. And, you know, I noticed a lot of users were just putting everything in. They would just select all, and that’d be it. Yeah, I mean, that’s the easy thing to do — you’re like, okay, well, there’s the code base, perfect. But there are plenty of tools that can just zip up your code base; that’s easy. The power of Repo Prompt is that you can be selective; you don’t have to select everything. So I can just hit replace here, and then — okay, what did that do? It actually found all these files that are related to my query and put them in order of priority, of importance, based on the LLM’s judgment. Of course, if you use Gemini Flash, you’re not going to get the best results compared to a bigger model like Gemini 2.5 Pro, but it’ll pick those out. It uses something called code maps to help with that, and you can see the actual token cost of the file-selection query is just 6k tokens for working with this code base. If you’ve spent some time programming in the past — I know a lot of folks aren’t super familiar with all the technicals; they’re vibe coding. Yeah, exactly, exactly. So Repo Prompt has this code map feature, and what it does, basically, as you add files, is index them and extract what’s called a map. If you’ve used C++ before, there’s a header file and a .cpp file, and the header is basically you explaining to the compiler all the definitions in a file — your functions, your variables, and all that stuff. So it’s a high-level extracted index — like an index of your code base. Exactly, yeah. The context builder uses that data to help you find the relevant files based on your query, so it has a kind of peek inside the files without having all of the details, and it can surface the relevant information for you to use in a prompt.
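To make the code map idea concrete, here is a small sketch of extracting a header-style index from a source file using Python’s `ast` module. This is just an illustration of the concept — Repo Prompt’s actual implementation works across many languages and is certainly more involved.

```python
import ast

def code_map(source: str) -> list[str]:
    """Extract a header-like index: class and function signatures only."""
    tree = ast.parse(source)
    entries = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            entries.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            entries.append(f"class {node.name}")
    return entries

# A model can reason about this compact index without seeing any bodies.
print(code_map("class Button:\n    def on_click(self, event):\n        pass"))
```

Feeding an index like this to an LLM, instead of whole files, is what lets a file-selection query cost a few thousand tokens rather than the full code base.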
One thing I love about Repo Prompt: when I first started using it, I had been using a custom script I’d created to take my code base and put the relevant context in one place — which, a lot of the time, was just all of it. I was literally putting everything into a single file and copy-pasting that into ChatGPT. I think I tweeted about this, and someone told me, oh, you’ve got to try Repo Prompt. That’s when I tried it, and the fact that I could see how much context I was sharing with the model was amazing. And that seems super relevant, too, because, at least from the benchmarks I’ve seen — everyone’s talking about how much context you can put into their LLM — think of the benchmarks for Llama 4: as soon as you went over something like 128k context, nowhere near the advertised 10 million, the quality just dropped like a rock. Well, until Gemini 2.5 came out, with pretty much all the models you really wanted to stay below 32k tokens in general. I find that beyond that, you’re just losing a lot of intelligence. So there’s this concept of effective context — the effective context window: at what point does the intelligence stop being as good for that model? For a lot of smaller models and local models it’s much lower — you probably want to stay around 8k tokens — but for bigger models, 32k is a good number. It’s only now, with Gemini, that you can really use the full package, the full context window. But yeah — so you’re using this context, you’ve picked out your files, as much as you want, say 100k tokens. What do you do with that? So say you have a question, like, “Help me change how links are handled in my docs.” I have that question here, and I’m just going to paste it to o3, and you’ll see what o3 gets out of this. It’s getting the file tree, the directory structure of the project; it’s getting the high-level code maps of the files I haven’t selected — when it’s set to “complete,” everything I haven’t selected gets shipped in as code maps — and then the full files that I did select. With that context it’s able to go ahead and do the work. This is a great way to get this information into o3 and get the most out of the model — and o3 is an expensive model, so if you’re trying to use it a lot, this is a great way to get more value out of it, move fast, and get good responses. I think the average person — people who are just using ChatGPT, or even people coding with Cursor — doesn’t realize you can do that: that you can literally copy and paste all of that context in, and the LLM gets it and understands what to do. Yes.
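That copy-paste flow — a directory tree plus the full bodies of hand-picked files, assembled into one big prompt — is easy to approximate yourself. Here is a rough sketch; the tag names and the ~4-characters-per-token estimate are my own assumptions for illustration, not Repo Prompt’s actual format.

```python
from pathlib import Path

def build_context(root: str, selected: list[str], question: str) -> str:
    """Assemble one prompt: directory tree, selected file bodies, question."""
    root_path = Path(root)
    tree = "\n".join(
        str(p.relative_to(root_path))
        for p in sorted(root_path.rglob("*")) if p.is_file()
    )
    parts = [f"<file_tree>\n{tree}\n</file_tree>"]
    for rel in selected:
        body = (root_path / rel).read_text()
        parts.append(f'<file path="{rel}">\n{body}\n</file>')
    parts.append(question)
    prompt = "\n\n".join(parts)
    # Rough heuristic: ~4 characters per token, just to gauge prompt size.
    print(f"~{len(prompt) // 4} tokens")
    return prompt
```

The useful part is the visibility: printing the token estimate before you paste is exactly the feedback loop that keeps you inside a model’s effective context window.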
In contrast to ChatGPT, Claude is very good at following instructions — it’s the best model at following instructions, I find. And I think this is another thing that Repo Prompt does quite well: it’s got tools to send information into the LLM, but it’s also got tools for the way back. So now it’s going to go ahead and write an XML plan; it’s going to create this theme selector, and it’s going to add these files and change files for me. What’s cool is that I can use Claude with my subscription and have it modify all these files. It’s creating all these files, and it can search and replace parts of files too, so I don’t have to re-update and re-upload the whole thing and have it output the complete code. A lot of models struggle with that — people notice, oh, this model is really lazy, it’s not giving me the whole code — but this circumvents that issue, because it gives the AI an escape hatch to just do what it needs to do. Right. You know, sometimes when I’m coding like this, I’ll iterate. So I pasted this question to o3, and often what I’ll do is read through the answer, change my prompt, and paste it again into a new chat to see where the result differs. Basically, I look at the output and go: okay, I actually don’t care about this copy-link button — then I’ll specifically put a mention in my prompt to say, let’s focus on this part of the question, and reorient it. That’s the nice thing: I can hit copy as many times as I want. If you’re paying for a Pro subscription, there’s no cost to trying things; there’s no cost to hitting paste again. You just try again, paste again, let the model think again. And I think that’s a really important way of working with these models: experiment, try things, and see how changing the context — which files you have selected, your prompt — changes the result. I use these stored prompts that come built into the app.
So there’s the architect and the engineer, and these help focus the model; they give it roles. If I’m working on something complicated, the architect prompt will focus the model on just the design and have it not think about the code itself, whereas the engineer is just the code: don’t worry about the design, just give me the code — and just the parts that change. Maybe you should explain — when you say “engineer prompt,” it’s literally just text that gets added to what you paste into the LLM, saying: you’re an expert engineer, and this is what I expect from you; I expect you to give me XML; that’s your job; do it. And that’s literally how the LLMs work — they go, okay, I’ll do it. Absolutely, yeah. Giving them roles is crucial: telling them who they are, what their job description is, what I look for — giving them a performance review evaluation, all that stuff. I find the more detailed you are with your prompts, the more you can help; they kind of color the responses in an interesting way.
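Mechanically, a stored role prompt is just a preamble prepended to whatever you type. A toy sketch — these two preambles are paraphrases for illustration, not the app’s actual built-in text:

```python
# Hypothetical role preambles in the spirit of the built-in stored prompts.
ROLES = {
    "architect": "You are a software architect. Focus only on the design; do not write code.",
    "engineer": "You are an expert engineer. Output only the code that changes, as XML edit blocks.",
}

def with_role(role: str, user_prompt: str) -> str:
    """Prefix the user's prompt with the chosen role's instructions."""
    return f"{ROLES[role]}\n\n{user_prompt}"

print(with_role("engineer", "Update the docs links."))
```

Because the preamble comes first, it frames everything that follows — which is why the same question pasted under the architect versus the engineer role produces such differently shaped answers.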
So, just adding the engineer prompt, you see it spent more time thinking about it, and here, this time, it said: okay, this is the file — the Tailwind config — here’s the change, and this is the change I’m going to make, in a code block. For the longest time, before I had any of these XML features, I was just using Repo Prompt to get these outputs and then copying them back into my code base manually, reviewing them as I went. Right — really the antithesis of vibe coding, where everything’s automated. Yeah. So I showed you a lot of stuff: pasting back, seeing this XML, and then putting it back in. What’s really nice with Repo Prompt’s chat flow is that all of that is automated. If you want to vibe code — not think about anything — while being cost-effective too, you can do that kind of work here. Basically, the way this works: I had GPT-4.1 as my main model, this is all the context I gave it, and then my pro edit mode will actually ask a second model to apply the edits. So I have Gemini Flash, which will rewrite the file for me and do that work, so I don’t have to manually incorporate the changes. If I were doing it myself — okay, this is the Tailwind file — I’d have to open it up and go introduce the edit. Having it just happen in the chat, having different models do that work, makes a big difference. Working with Repo Prompt, it’s really about building your context; that’s the biggest thing, just picking what you want. You want to front-load that work. In contrast, with agents, you’re going to have them run off, do a lot of work, call a bunch of tools — you saw o3 thought for 15 seconds, thought through some tools to call, it didn’t really make sense, and it just kept going and ended up doing this. If you’ve used Cursor a lot, you’ll see, with o3, it’ll call tools to read this file, read that file, read this file. But if you just give it the files up front and send it off to work with your prompt, you get a response right away, and you can ask: okay, does this make sense to me? Am I able to use this? — instead of letting it churn for an hour. Yeah, it’s a little more work, at least right now, but I think you get much better results. Yeah — just front-loading that context, being able to think it through and iterate on it. That’s the whole philosophy: make this easy. The context builder helps you find that context; eventually I’m going to add MCP support so you can query documentation and find things related to your query as well. Just spend time, as an engineer, sitting with: what do I want the LLM to know, and what do I want it to do — and then make that flow as quick and painless as possible. That’s kind of everything. I think, going forward, as you get serious about coding with AI, that’s what the human’s job is in this loop — the engineer’s job is figuring out the context. I think that’s the new software engineering job.
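The XML round trip described above — the model emits a plan of file writes and search/replace edits, and the tool parses and applies it — might look something like this sketch. The tag names (`plan`, `create`, `edit`, `search`, `replace`) are invented for illustration; Repo Prompt’s real schema may differ.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def apply_plan(plan_xml: str) -> None:
    """Apply an XML edit plan: <create> writes a file, <edit> does search/replace."""
    plan = ET.fromstring(plan_xml)
    for op in plan:
        path = Path(op.attrib["path"])
        if op.tag == "create":
            path.write_text(op.findtext("content"))
        elif op.tag == "edit":
            text = path.read_text()
            # Targeted replacement: the model never re-emits the whole file.
            path.write_text(text.replace(op.findtext("search"), op.findtext("replace")))

plan = """<plan>
  <create path="theme.css"><content>body { color: black; }</content></create>
  <edit path="theme.css"><search>black</search><replace>navy</replace></edit>
</plan>"""
apply_plan(plan)
print(Path("theme.css").read_text())  # body { color: navy; }
```

This is why the format sidesteps model laziness: emitting a small search/replace pair is much cheaper, and much less error-prone, than regenerating an entire file.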
Hey, we’ll be right back to the show, but first I’m going to tell you about another podcast I know you’re going to love. It’s called Marketing Against the Grain. It’s hosted by Kipp Bodner and Kieran Flanagan, and it’s brought to you by the HubSpot Podcast Network, the audio destination for business professionals. If you want to know what’s happening now in marketing — especially how to use AI in marketing — this is the podcast for you. Kipp and Kieran share their marketing expertise unfiltered, in the details, the truth, like nobody else will tell it to you. They recently had a great episode called “Using ChatGPT o3 to Plan Our 2025 Marketing Campaign.” It was full of actual insights, as well as things I had not thought of about how to apply AI to marketing. I highly suggest you check it out. Listen to Marketing Against the Grain wherever you get your podcasts.
Like I said before, I was so surprised that more people haven’t talked about this, because for me, right now, Cursor is good for something very simple — okay, change some buttons, change some links, change whatever — but for anything complicated, with Repo Prompt I get way, way better results. So I’m curious: have you ever thought about this being used for things outside of coding? Do you think it would be useful for anything outside of coding? Yeah — I’ve had academics reach out to me telling me they’re using it for their work; there are folks in different fields, for sure. I think some of the UX probably has to improve a little, but in general, if you’re working with plain text files, Repo Prompt can serve those use cases. It’s set up to read any kind of file and then apply edits to any kind of file, too; I don’t differentiate — if I can read it, I’ll apply edits for you. And a whole bunch of the work is just gathering context and iterating on it — even, say, in doing legal work. I do think a flow that’s still missing from the app is that collaborative nature. There’s still work to be done to make this a more collaborative tool, one that syncs a little better with different things. For now, developers use Git, and that’s their collaboration bedrock, but lawyers need other things. Yeah — that’s something I think about too: Repo Prompt is super useful, but you have to be a bit more advanced than the average vibe coder or the average person using an LLM. And, no offense, you can kind of tell one person has built this — it’s amazing, but you can tell. Yeah, no, it’s all good. I’m kind of curious:
why did you not go the VC route? Where’s Repo Prompt at right now, and what’s your plan for it? You know, I’ve had a lot of folks bring that up to me, thinking through, well, why not VC or whatever. It’s not something the door’s closed on forever; it’s just that right now it’s work I’m able to build myself, and I’m able to listen to my users and pay attention to what they need. And it’s just not super clear to me where this all goes. This is an app that’s super useful, it’s helping me, and I’m able to build it — but is it something that necessarily makes sense to have a hundred million dollars invested into it, to grow a huge team to build? Maybe; I don’t know. I want to take things as they go. Right now I’m able to monetize it a bit, it’s got some passionate users, and it’s working well this way. But again, it’s all new to me; I’ve not gone through the whole VC story myself. I’ve had friends who shied me away from it, but I try to listen to the folks around me too. Yeah, there are pluses and minuses to VC. You’ll hear on Twitter people saying, oh, VC is horrible, or, oh, it’s amazing — there’s good and bad to all of it. And I feel like everything with AI right now is, who knows what’s going to happen? In a year everything could be different; in five years, who the hell knows? Right now, because AI is such a big wave — that’s why we call the show The Next Wave — it’s such a large wave of transformation that you’re going to see the largest investments ever, I think, in history, as well as the largest acquisitions ever, and I think those have yet to come. We’re in the early part of this transition. I think the best two routes for you, in my opinion, would be either to go really big and take the VC route, or to go more like, hey, who knows what’s going to happen with this, I just want to get my name out there and leverage it for something else in the future — and open source it. That’s my thought on what I’d do strategically: either go really big, or open source it, make it free, put it out there, and get some reputation benefit from it. There is a free tier; it’s not open source. You
know, the thing about open source is actually something I’ve thought about a lot, and the big issue with it right now, especially as people are building AI tools, is that it’s never been easier to fork a project and go off and build it as a competitor. Look at Cline — Cline’s a big tool that came around, actually, at about the same time I started working on Repo Prompt. If you’re not familiar, Cline is an AI agent that sits in VS Code, and it’s pretty cool, but the thing that’s not so cool is that it eats your tokens for lunch. That thing will churn through your wallet faster than any other tool that exists, just because it goes off, reads files, and stuffs the context as big as possible. A lot of people really enjoy using it because it gets good results for certain things, but that cost is very high. The point I was getting at, though, is that Cline was forked a few months ago by another team of developers, and the fork is called Roo Code. If you look at OpenRouter and some of its stats, Roo is actually surpassing Cline — the fork is now overtaking the original. That’s the kind of space we’re in: different teams will take your code, take it in their own direction, and all of a sudden they’ve overtaken you, and you lose track of where things are going. It’s a crazy space. It’s never been easier to open pull requests with AI; you don’t even need to understand the code — you’re like, oh, I have this open source project, I’m just going to fork it, add my features, and go. So it’s a tricky thing. But having a free version, trying to ship, and growing a community of passionate users who can talk back to you — that’s the route I’ve taken right now, and it’s been working so far. I was in beta for a long time.
Yeah, it’s still new; I’m figuring out where to go next with it. And it’s Mac-only right now, is that correct? Yeah, that’s true, it’s Mac-only. Part of that is that I started off trying to think about how to build this well, and I immediately ran into issues trying to build for different platforms. I spent a bunch of time debugging just to get SVG icon rendering to work — all these little things that are rabbit holes, where you’re so abstracted from the base of what’s happening that you spend your time solving build issues. So I decided: I’m just going to build native, run with it, and get better performance doing so. If you open an IDE like VS Code on a huge repo, what actually happens is that it loads the file tree and lazy-loads everything else — not everything needs to load, because in an IDE, as a coder, you traditionally only have a couple of files open at a time, maybe a dozen. You’re not going to be processing 50,000 files at once. But an AI model can. If you give it to Gemini, Gemini will want all those files; it will want as much as you can give it, because it can read all of it. So you need a tool that’s built differently, organized around the performance of mass data processing. It’s a whole different way of working, and that’s why it’s native: I want that performance. Processing all these files, there’s all this concurrency happening — editing files in parallel, processing them — and that’s very hard to do well if you’re just using JavaScript or TypeScript. When I use Repo Prompt, it seems like you’ve done a really great job building it; it works really well. And it’s all just you, right now? Yeah, it’s just me. I’ve been working on it a lot. That’s crazy. Yeah, it’s come a long way; I’ve iterated a lot on it. But that’s the power of dogfooding, too. For folks listening: dogfooding is when you use your own product to iterate on it and build with it. You make it a habit to be the number one user of your own product, so you see all the stuff that sucks about it. For the longest time, it really sucked, and that struggle — the pain of using it, and forcing yourself to feel that pain — is what makes it good. That’s how you’re able to feel the things users of the app will feel, and that’s when you end up with something that’s great in the end. So where do you
think Repo Prompt is going long term — where maybe “long term” now means one year? Where’s Repo Prompt going next year? That’s long term. It’s hard to say, honestly. It’s weird: in December, OpenAI announces o3, and they’re like, oh, it beats all the ARC-AGI tests, and you’re like, well, is this AGI? What is this? And then it shifts, and it’s like, okay — it’s a better model, it lies to you, it’s not the messiah. Right. So it’s hard to say where we go. I have ideas on where the future is one year from now; I think I’ll have to adapt this product and keep iterating on it to stay relevant, so it’s going to keep changing. But I think the flow I’m pushing toward — that context building — stays relevant a while longer, and what improves is the layers of automation around that work. So long term, that’s still the direction I want to go: integrating MCP, embracing that universality of all these different tools. For folks listening who aren’t sure what MCP is — another acronym; we’ve got lots in AI — the idea is this: traditionally, if you use Claude or OpenAI, they have tools, and those tools might be things like “search the web,” or “read the files on your machine,” or “look up documentation.” MCP is a protocol that creates an abstraction layer so that any client app can implement the protocol and users can bring their own tools. So a user can come in and say, oh, I want to use — and there’s this new one that’s really cool, called Context7 — basically they’ve built a server that fetches the latest documentation for whatever programming language you’re using and pulls it in as context. You can say: okay, great, fetch the latest Angular docs, or whatever docs you care about, and bring that in. That kind of context retrieval work is super important. Or Stripe has one too, where all the docs for their tool are set up, and you just plug in the Stripe MCP — and all of a sudden, if you’re trying to vibe code your way through integrating Stripe, that’s super easy; the work is handled. You can plug your API keys into it so it can even talk to the back end for you; that whole workflow is automated. So it’s all about giving folks using these models tools to automate connecting to different services, in this universe of all the services that exist in the world. Yeah, I kind of think of it — I mean, it’s different from XML, but for me it’s almost like: XML is the information language that AI can understand, and MCP is the same kind of thing for any service or tool you want to use — a way for the AI to know how to work with those things.
mention xml because that’s actually one of the things that i do a lot with rebomb is parsing xml
and i think one strength there that i have that like a lot of other tools are kind of ignoring
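To make the MCP idea from a moment ago concrete: under the hood, MCP is JSON-RPC — a client asks a server what tools it has, then calls them by name. Here's a minimal, illustrative sketch of that request/response shape in Python. The `fetch_docs` tool and its arguments are made up for illustration, and this hand-rolls the dispatch rather than using the official MCP SDK:

```python
import json

# Hypothetical tool registry, standing in for what an MCP server would expose.
TOOLS = {
    "fetch_docs": {
        "description": "Fetch the latest documentation for a framework",
        "handler": lambda args: f"(docs for {args['framework']}...)",
    },
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC request the way an MCP server would."""
    method, params = request["method"], request.get("params", {})
    if method == "tools/list":  # client asks what tools exist
        result = {"tools": [
            {"name": n, "description": t["description"]} for n, t in TOOLS.items()
        ]}
    elif method == "tools/call":  # client invokes one tool by name
        tool = TOOLS[params["name"]]
        result = {"content": [{"type": "text",
                               "text": tool["handler"](params["arguments"])}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# A client round-trip: discover the tools, then call one.
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "fetch_docs",
                          "arguments": {"framework": "angular"}}})
print(json.dumps(call, indent=2))
```

The abstraction layer is the whole point: the client never hard-codes what a server can do — it discovers tools at runtime, so any server (Context7, Stripe, your own) plugs into any client that speaks the protocol.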
So traditionally, when you're working with these language models as a developer — and you can see this if you use ChatGPT — you'd say, "hey, search the web," it's going to use the search tool, and you'll see it say "calling tool: search" and it'll go through. But what happens when it does that is that it calls the tool, stops, waits for the result, and then continues. It's a bit like the model is being rebooted as a new session with that new context, because basically every tool call is a new query. You're giving back the old information, but you're not necessarily talking to that same instance — it's a different AI instance answering your question from the new checkpoint. So that's a weird thing. As you're making all of these tool calls — if you use Cursor, it'll make like 100 tool calls — by the end of it you've gone through 25 different instances of these models, and then you get a result at the end and you're like, well, what actually happened? There's some data loss, weird stuff — we don't know how this works. Yeah, it seems like that could create reliability issues, right? Because the LLMs sometimes give you amazing results and other times it's like, "oh, what is this?" So every time you do a new tool call, it sounds like you're almost recreating the chance of it going wrong. Exactly, yeah — you're accumulating these issues, and you don't even know where that inference is happening. There could be different servers actually processing all these different tool calls, and sometimes one server has a chip issue in its memory that causes weird behavior — Claude is really dumb today — but on another one it's a lot smarter, because that memory chip is working fine. You don't know, right? So that kind of thing. So,
just to close the loop on what I'm doing: the way I've gone about this is that when I call tools, you have your XML, and the AI just answers in one instance — it gives you the whole thing, and it can call a bunch of tools in there. It can say, "hey, I want to call this and this, do this and this," and then I parse that, bulk-call the tools, get the results, and then we go another instance with the results, and you can go back and forth like that. So you don't have to wait on each single one — you're bulk-sending them out and getting that data back. It's a lot more efficient: you're able to process, say, 25 queries, get 23 of 25 back, bring them all in, and work from there and see how it goes.
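The batching just described can be sketched roughly like this — the XML tag names, the tool set, and the threading approach here are all assumptions for illustration, not Repo Prompt's actual format:

```python
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tools the model is allowed to request.
def read_file(path):   return f"<contents of {path}>"
def search_code(term): return f"<matches for {term}>"
TOOLS = {"read_file": read_file, "search_code": search_code}

# The model's single XML answer requests several tool calls at once,
# instead of one call per round-trip.
model_output = """
<response>
  <tool_call name="read_file"><arg>src/app.py</arg></tool_call>
  <tool_call name="search_code"><arg>login</arg></tool_call>
  <tool_call name="read_file"><arg>src/db.py</arg></tool_call>
</response>
"""

def run_batch(xml_text: str) -> list[str]:
    """Parse every <tool_call> element and execute them all concurrently."""
    calls = [(c.get("name"), c.find("arg").text)
             for c in ET.fromstring(xml_text).iter("tool_call")]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(TOOLS[name], arg) for name, arg in calls]
        return [f.result() for f in futures]

results = run_batch(model_output)
# All three results come back in one pass, ready to feed into the
# next model instance as a single follow-up message.
```

One round-trip per batch, instead of one per tool call, means far fewer model re-instantiations — which is exactly the reliability concern raised earlier about sequential tool calling.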
And so that kind of thinking — I think there's a lot to play with in terms of how you're getting this data back and forth from the LLMs, because at the end of the day it's all text. Text and images, maybe some video in some cases — but for your coding, text is really the thing you're working with, and you can do a lot with text, manipulating it and playing with it to get good output. So, what do you think? I've heard YC and others — I think Garry Tan said, and I can't remember if it was 80%, but I think he said something like 80% of the code for the startups going through YC right now is AI-generated. That number could be wrong. Do you think, three years from now, we still have normal engineers who don't use AI at all? Is that a real thing — do you still have the holdouts? Well, first of all, I think quoting a percentage like that of how much code is AI-generated is a bit misleading — it's hyped, too. Yeah. I could take every line of code, basically type it in pseudocode to the AI model, have it paste it back as a fleshed-out JavaScript function, and say 100% of my code is written by AI.
It really depends on what your workflow and your pipeline look like. I do think the job of an engineer has fundamentally changed — it's already done, it's already completely different. You can't work the same way. But it depends on what point in the stack you're working at. I work with some folks who do really low-level graphics work, and I talked to someone about how they can't really use AI models, because the models just hallucinate everything — they're just not trained on anything these folks work on, so they're useless to them. But look at someone who's a web developer: like 98% of the training code is web code and web-framework code, so yeah, 100% of that work can be done by AI — it's really easy. So it really depends on where you are in the stack, what kind of tools you're working with, and how well the AI models can help you there. But I think, as we move forward, more and more you're going to want AI models thinking through hard problems for you, because it just happens much faster as they get better at math, at solving connectivity and architecture problems. Architecture is something that o3 and o1 pro — and hopefully o3 pro — just excel at. They're very good at finding good ways of organizing your code and helping you plan how you connect things together, and I think that's a big part of software engineering in general: organizing your code. The actual process of writing it — that's not the fun part, or even the interesting part. It's that organizing part, and I think the human job in this is to iterate on those plans, iterate on those ideas, because that's the kernel of what an AI will generate code from. Yeah, so I think that's where the work is. When I'm working on Repo Prompt, I don't write a ton of code by hand — most of it is done by AI — but I spend a lot of time thinking about architecture, thinking about problems, and debugging. I won't just hit the button and say "solve my problems, fix my bug" — that's just not helpful. But if I read through the code, I'll be like, okay, I think there's a data race going on over here — this part connecting to this part, there's some concurrency issue. I'll add some logs. Okay, great, now I've got some idea of what's going on. Then you can feed that data into the AI model and have it think through a resolution, and often, once you've done those troubleshooting steps, the AI model can solve your problem. But you have to sit down, think through how things are connected, and understand what is actually happening. So I think
that's where the work changes. That's a great engineering answer. Yeah — I was looking for the thing that goes viral on X, right? "All engineers will be gone next year," that kind of thing. Listen, the job has fully changed, I think, from today on. If you're not using these tools, you're not learning how they work, and I think that's an issue. I don't think the traditional engineer who spends his whole career just typing up code exists anymore. But what does exist is someone who understands code, who can read it, who knows what questions to ask. If you're prompting about the code and you understand the connections, that's where you're going to get the best results — and that's why a tool like Repo Prompt is so helpful: you're able to do that and feed the right context in. If you're just saying "make the button blue" or "move it over here," that works to some extent — as long as your instructions are simple enough and you know what you want, you can get there. But at a certain point you fall off, and that's when it stops working. Maybe that fall-off point gets further and further out as the models improve, but I don't think that in the next 10 years we get to a point where it stops existing. One thing that we didn't talk
about that I was curious about: what do you do at Unity? So what I do there — I've been doing XR research and XR engineering. I work on a toolkit called the XR Interaction Toolkit, and basically it's a framework for developers to build interactions in XR. If you're putting on an Oculus Quest, or a HoloLens, or an Apple Vision Pro, you want to interact with objects in your scene, in your world — say, in AR you walk up and want to pick up a virtual cube. How do you process that interaction of grabbing the cube, picking it up, and looking at it? I've done a lot of research on that kind of interaction and input. I've written specs that have been adopted across the industry for hand interaction — just tracking your hands: how do you grasp something, what should happen if you want to poke a button that's not physically there, what does that look like? That kind of stuff — that's what I do there. That's amazing — that's really complicated engineering work. How are you doing that, plus Repo Prompt, and then you have a baby — how are you doing all this? I mean, I don't have a lot of free time, obviously. Yeah. But I'm passionate about what I do at work, too, and then Repo Prompt is my other baby. A big part of it is just that when folks come to me with something that's bugging them about the app, I get an itch and I have to fix it for them. So I just keep chipping away at it — but I try to get some sleep in so I don't cut into that too much. One thing I was thinking about, too: I have a son, 11 years old; you've
got a baby. Yeah — this is actually one reason I even helped start this podcast: I'm constantly thinking about where AI is going and wanting to stay ahead, and also thinking about what it means for me and my family — quite honestly, on a selfish level. People used to ask me, when my son was born — he was born in San Francisco, around tons of founders and VCs, all those kinds of people; the birthday parties were all people from YC and the like — they'd ask me, "what do you think your son should do in the future? What will his job be?" This was like 11 years ago, and I was talking about drones: "he probably needs to be a drone-defense engineer, building anti-drone systems or something." That was my common line at parties. But now with AI — because at that point we did not know AI would advance as fast as it has. No, it's just happened so fast, right? Back then it was all just stuff out of a book: "oh yeah, sure, they're talking at Stanford and they've got some cool demos, but nothing's working." Now it's working. So, with your child, have you thought about that yet? Oh, of course. What do you think they should learn? I have no idea. Yeah — I have no idea. It's everyone, right? What do you even teach your children? Is it important to learn to code? We teach them logic, morals — probably all of this and more. Yeah — stay flexible and super fluid, I think. But it is funny, on that topic: I look at engineers coming out and learning to code with AI around, and I think they're at a disadvantage. It's unfortunate that if you're starting to code today and you have AI to lean on, you just don't have that struggle — you don't have the pain I had to go through when I started to code, the pain that engineers who've been in the field for so long went through: struggling, not getting the dopamine hit of a fixed problem right away, having to study it and understand how it works. That just doesn't exist anymore, because the AI solves it for you. And I think that's true in code, but it's going to be more and more true in every field. So I think there's going to be a need for people to have the restraint to put these tools aside and struggle a little bit. There's a ton of value in using them to learn and grow, but there's also that restraint you need to form — to have the struggle, because that's where the learning is. And it's really tricky, and I don't know how you solve that, because it's too easy not to struggle now, which is a big problem. Yeah, I've heard Jonathan Blow — if you know of him —
of course, yeah — the game designer. He talks about exactly what you're saying: sure, AI could get amazing at coding in the future, but it's also going to create issues where, just like you said, people are not going to learn to properly code. He was already complaining about the state of code before AI, and now with AI it's like, okay, now we're kind of screwed, I guess — we're going to have a situation where no one knows what's going on and you're entirely dependent on the AI for everything. Yeah, it's a crutch that's so easy to reach for — and then what do humans do? But that's the thing — maybe that's the middle part, where we're at this point where the AI is just not quite good enough to solve all the problems, and you still have problems to solve, and you still have people who need to work with the machines to figure out how to move forward. Maybe at some point in the future all of it is moot — I know some folks think that, and maybe it doesn't matter — but I think there's going to be some discomfort in the middle, where the machines are not quite good enough to solve every problem, we lean on them as if they are, and we're atrophying a lot of skills. We've heard — I haven't driven a Tesla with FSD, but I've heard folks say the same thing there: if they use it all the time, they actually get worse at driving without it. Right — more and more that's going to be a thing, where it's like we're almost living in one of those sci-fi novels, right? Everything being super, super safe. I live in Japan — you used to live in San Francisco — and everything's super safe in Japan. That's one reason I like it, but you do lose some freedom in that. Yeah — but do I really want my son
driving, you know, if I really think about it and there's an alternative? Not necessarily. I agree — I have that same debate with my wife. I was saying, "I don't think our daughter is ever going to have a driver's license," and she's like, "I don't think so either." We'll see. But there is the safety part, for sure, and I think that's really interesting — and hopefully that is the case, that AI just makes it safer out there. Yeah, right. So, Eric, it's been awesome. Maybe we should tell people where they can find you and where they can find Repo Prompt. Yeah — so I'm pvncher on X (that's "puncher" with a v: p-v-n-c-h-e-r), and that's my handle on most socials, so you can reach out there — my DMs are open if you have questions. And Repo Prompt is at repoprompt.com, so you can head over there and find the app — it's free to download — and there's a nice Discord community too, if you want to hop over there, send me some messages, and tell me what you think. Please do. Yeah, thanks for having me on, Nathan — it's been great chatting with you. Yeah, I appreciate it — it's been great. Had a lot of fun. Cheers. Likewise — take care. All right.

Episode 57: Can simply “Vibe coding” with AI really replace the need for deep code context when building real applications? Nathan Lands (https://x.com/NathanLands) is joined by Eric Provencher (https://x.com/pvncher), founder of Repo Prompt and an XR engineer at Unity, to reveal the secret AI prompt tool quietly powering Silicon Valley’s top engineers.

This episode dives deep into why the current trend of “Vibe coding” with tools like Cursor often falls apart for complex tasks — and how Repo Prompt closes the gap by letting you build effective, highly targeted context for AI coding. Eric breaks down the philosophy behind contextual prompting, gives a live demo, and shares how Repo Prompt’s unique features like the context builder and codemaps give power-users real control over LLMs like Gemini and Claude. Beyond coding, they discuss implications for the future of engineering, learning, and the evolution of dev tools in the age of AI.

Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

Show Notes:

  • (00:00) Vibe Coding Myths Unveiled

  • (03:15) Repo Navigation for Flutter Devs

  • (06:37) Gemini 2.5 Extends Model Context

  • (11:18) Automating File Rewrites with AI

  • (15:33) The Next AI Wave

  • (20:58) MCP: User-Customizable Tool Integration

  • (23:53) Efficient AI Tool Integration

  • (28:32) XR Interaction Toolkit Developer

  • (31:01) AI’s Impact on Coding Learning

Mentions:

Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw

Check Out Matt’s Stuff:

• Future Tools – https://futuretools.beehiiv.com/

• Blog – https://www.mattwolfe.com/

• YouTube- https://www.youtube.com/@mreflow

Check Out Nathan’s Stuff:

The Next Wave is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
