AI transcript
0:00:11 work right now and even the guy who coined the term andrej karpathy he recently posted that he’s
0:00:15 now trying to provide more context to models because he’s realized that’s what you have to
0:00:20 do to get good results back welcome to the next wave podcast i’m your host nathan lands and today
0:00:25 i’m going to show you the secret weapon that all the top ai coders are using you know everyone’s
0:00:29 talking about vibe coding this vibe code that but what they’re not telling you is that you can’t
0:00:34 vibe code most of anything that’s actually important right now for any important ai coding you want to
0:00:38 give it the proper context to know what it’s doing versus just throwing everything at it which is what
0:00:42 cursor and windsurf and a lot of these other tools that everyone’s talking about do today i’ve got the
0:00:46 founder of repo prompt eric provencher on here and he’s going to show you how you can use repo prompt
0:00:54 to take your ai coding to the next level so let’s just jump right in cutting your sales cycle in half
0:00:59 sounds pretty impossible but that’s exactly what sandler training did with hubspot they used
0:01:04 breeze hubspot’s ai tools to tailor every customer interaction without losing their personal touch
0:01:13 and the results were pretty incredible click-through rates jumped 25% and get this qualified leads
0:01:19 quadrupled who doesn’t want that people spent three times longer on their landing pages it’s incredible
0:01:24 go to hubspot.com to see how breeze can help your business grow
0:01:34 hey we’ll be back to the pod in just a minute but first i want to tell you about something very
0:01:39 exciting happening at hubspot it’s no secret in business that the faster you can pivot the more
0:01:45 successful you’ll be and with how fast ai is changing everything we do you need tools that actually
0:01:50 deliver for you in record time enter hubspot spring spotlight where we just dropped hundreds of updates
0:01:56 that are completely changing the game we’re talking breeze agents that use ai to do in minutes what used
0:02:03 to take days workspaces that bring everything you need into one view and marketing hub features that use
0:02:09 ai to find your perfect audience what used to take weeks now happens in seconds and that changes
0:02:15 everything this isn’t just about moving fast it’s about moving fast in the right direction
0:02:22 visit hubspot.com forward slash spotlight and transform how your business grows starting today
0:02:28 thanks for coming on yeah yeah it’s nice uh you know finally put a face to it you know we tried it for a
0:02:33 while and uh it’s cool you’ve been using uh repo prompt for a few months now yeah yeah i’ve been telling
0:02:37 people about repo prompt for like the last you know probably six months or so kind of felt like it’s been
0:02:41 almost like my like ai coding secret weapon you know it’s like yeah everybody talking about cursor
0:02:47 and now windsurf and i do find cursor useful but i was like why is no one talking about repo prompt
0:02:51 because like for me every time i’d get into like a complicated project as soon as the project got a
0:02:55 little bit complicated the code from cursor would just stop working for me like it would just not know what
0:03:00 was going on you could tell it wasn’t like managing the context properly and then when o1 pro came
0:03:04 out that was when i really noticed repo prompt and started using it a lot yeah you had to go to o1 pro to
0:03:08 really get the best out of ai for coding at that point absolutely wouldn’t even work with the o1 pro
0:03:12 and so repo prompt was by far the best and it was just kind of shocking me like only like a few people
0:03:16 on x are talking about this yeah most people don’t know about it yeah i mean like it’s the only tool
0:03:21 that i use to work with ai and you know for a long time it was just sonnet and i would feel like i was
0:03:25 able to get a lot more out of sonnet than other tools just because you know the full context window
0:03:30 was there and you know i wasn’t paying through the nose with api costs using the web chat and it
0:03:34 just let me get to a place where i had a tool that was able to do like not just like
0:03:38 putting context out but like taking the changes back in and applying them yeah i like to think i’m the
0:03:42 number one user but i actually look at the stats sometimes and i don’t think that’s even true anymore
0:03:46 yeah i mean i really wanted to bring you on after i saw that tweet from uh andrej karpathy the other day
0:03:52 so andrej karpathy he used to be at tesla ai now he’s like one of the best educators about how llms work
0:03:56 and things like that he had his tweet saying noticing myself adopting a certain rhythm in ai
0:04:03 assisted coding i.e. code i actually and professionally care about in contrast to vibe code you know he coined
0:04:06 the term vibe code which everyone’s been using and then he basically goes on to talk about like
0:04:11 stuffing everything relevant into context all this i was like he literally he doesn’t know about repo
0:04:17 prompt i’m like how did this like top ai educator in the world top expert in everything totally has no idea
0:04:21 about repo prompt i was like okay so i need to get eric on the podcast and we try to help with that
0:04:26 yeah i appreciate that yeah i mean yeah looking at that that tweet you see exactly like that flow that
0:04:29 like got me started like when you start getting serious about coding with ai like you start thinking
0:04:33 like well how do i get the information to the ai model and like the ux on all these other tools is
0:04:38 just not cutting it you need a tool to just be able to quickly select search for your files like find
0:04:41 things and yeah you know i recently added the context builder i don’t know if you’ve tried that out but
0:04:45 maybe you know if you could explain like try to simplify it yeah and i think we should then just jump into a
0:04:49 demo and we can kind of just go from there sure thing sure thing yeah i mean the first thing you’re
0:04:53 going to do when you’re going to open up repo prompt is pick a folder so i can either open a folder
0:04:57 manually or just go to the last ones used but generally when you’re working with some code base
0:05:00 like this and flutter like this has a lot of like different build targets and things that are not like
0:05:03 relevant to working with flutter so if you’re not familiar with flutter it’s a framework for
0:05:07 building multi-platform apps and so you can see it’s got like linux mac os web and all that stuff
0:05:12 but yeah like when you’re working in a repo like this you want to think through like what are the files that
0:05:16 are going in and if you’re using a coding agent like with cursor or whatever the first
0:05:20 thing they’re going to do when you ask a question is okay well let me go find what the user’s trying
0:05:25 to do let me search for files and pick those out and if you know what you’re doing with your code base
0:05:29 you tend to know like okay well i’m working on this button toolbar great so i’ll just like
0:05:33 clear the selection out and i’m just working on these these views here great so i’ve selected those
0:05:38 and that’s it so then i can see you know token use for those files it’s pretty small so i’m able to
0:05:44 just get to work type my prompt and paste that in here help me update all the docs pages so if i do
0:05:50 that and then i just do gemini flash quickly to show what that looks like so the context builder the way
0:05:56 that works is it will actually search for files using an llm based on the prompt that you’ve typed
0:06:00 out you know a big part of using repo prompt is that you have to know you know what it is that you’re
0:06:04 trying to select here right right and you know what i noticed a lot of users they were just putting
0:06:07 everything in they would just say like okay just select all and and that’d be it and you’d be
0:06:10 like okay we’ll get the first yeah i mean that’s the easy thing to do you’re like okay well there’s
0:06:14 the code base perfect but you know there’s plenty of tools that can just zip up your code base and
0:06:17 that’s easy but like the power of repo prompt is you can be selective you don’t have to select
0:06:22 everything so i can just hit replace here and then okay well what did that do okay well that actually
0:06:26 found all these files here that are related to my query put them in order of priority of like
0:06:30 importance based on what the llm’s judgment is and of course if you use gemini flash you’re not
0:06:34 going to get the best results compared to like using you know like a bigger model like gemini
0:06:39 2.5 pro but it’ll pick those out it’ll use something called code maps to help with that and you can see
0:06:45 the actual token cost of the file selection query is just 6k tokens working with a code base if you’ve spent some
0:06:48 time you know programming in the past i know a lot of folks they’re not super familiar with all the
0:06:54 technicals there but like vibe coding yeah exactly exactly um so repo prompt has this code map feature
0:06:59 and what this will do is it will basically as you add files it’ll index them and extract what’s called
0:07:05 a code map and if you’ve used c++ before there’s like a header file and a cpp file and what
0:07:08 that is is basically you’re explaining to the compiler like what is all the definitions in this
0:07:12 file like you’ve got your functions you’ve got your variables and all that stuff and so it’s like a
0:07:17 high level extracted kind of an index like an index of your code base exactly yeah the context builder
0:07:21 uses that data to help you find what the relevant files are based on your query so it has like a kind
0:07:25 of peek inside the files without having all of the details and it’s able to kind of surface that
0:07:29 relevant information for you so that you can use that in a prompt one thing i love about
0:07:33 repo prompt so when i first started using it i had been like using just like a custom script i had
0:07:38 created to like take my code base and then like put you know the relevant you know context in
0:07:42 there which a lot of times i was just doing all of it i was literally putting all into a single file
0:07:47 and i’d copy and paste that into chatgpt yeah i think i tweeted about this and someone told me like
0:07:51 oh you got to try repo prompt that’s when i tried repo prompt the fact that i could like see how
0:07:56 much context i was sharing yeah with the model was amazing and it seems like that’s super relevant too
0:07:59 because you know at least from the benchmarks i’ve seen you know everyone’s talking about how much
0:08:04 context you can put into their llm you know think of the benchmarks for llama 4 as soon as you went
0:08:10 over like 128k context like nowhere near the 10 million yeah like the quality just like dropped like
0:08:15 like a rock well until gemini 2.5 came out pretty much all the models you would really want to stay
0:08:21 below 32k tokens in general i find like over that you’re just losing a lot of intelligence so there’s this
0:08:25 concept of effective context you know the effective context window like at what point does the
0:08:29 intelligence stop being like as relevant for that model and for a lot of smaller models and local
0:08:34 models it’s a lot lower and you probably want to stay around 8k tokens but like for bigger models 32k
0:08:38 is a good number it’s only now with gemini that you’re able to kind of use the full package the full
0:08:43 context window but yeah so you’re using this context you’ve picked out your files say you want
0:08:46 to use as many as you want 100k like what do you do with that so like you have a question like
0:08:54 um help me change how links are handled uh with my docs uh and so i have a question here i’m just
0:08:59 going to paste it to o3 and you’ll see like what is o3 getting out of this so it’s getting basically
0:09:04 this file tree so it’s getting a directory structure of this project it’s getting basically the high level
0:09:08 code maps of the files that i haven’t selected so basically when it’s set to complete everything that i
0:09:12 haven’t selected gets kind of shipped in and then you have the files that i did select and so then the
0:09:17 context is able to go ahead and is able to do that and so this is like a great way to kind of just get
0:09:21 this information into o3 get the most out of this model and o3 is an expensive model if you’re trying
0:09:26 to use it a lot like this is a great way to kind of get more value out of it move fast and get good
0:09:30 responses i think the average person like people who are just using chatgpt or even people who are
0:09:34 coding with cursor they don’t realize that you can do that that you can literally just copy and paste
0:09:40 all of that context in there and that the llm gets that and it understands what to do yes you know
0:09:44 in contrast to chatgpt claude is very good at following instructions like it’s the best model
0:09:48 at following instructions i find and i think this is another thing that repo prompt does quite well is
0:09:53 so it’s got like tools to kind of send information into the llm but it’s also got tools to go ahead
0:09:57 so it’s now it’s going to go ahead and write an xml plan and it’s going to create this theme selector
0:10:02 and it’s going to add these files and and change files for me and what’s cool with this is that i can
0:10:07 just go ahead and use claude with my subscription and then have it modify all these files so it’s
0:10:11 basically creating all these files and it can search and replace parts of files too so i don’t
0:10:15 have to re-update and re-upload the whole thing or have it output the complete code so a lot of models
0:10:19 struggle with you know people are noticing like oh this model is really lazy it’s not giving me the
0:10:23 whole code but like this kind of circumvents that issue because it lets the ai just kind of get an
0:10:26 escape hatch and just do what it needs to do here right you know sometimes when i’m coding like this
0:10:31 i’ll iterate like so i pasted this question right with o3 and often what i’ll do is i’ll read through
0:10:37 the answer and then i’ll change my prompt and then paste again into a new chat and try and like see
0:10:41 where the result is different because basically i look at like here’s the output okay i actually don’t
0:10:46 care maybe about this copy link button okay then i’ll specifically put a mention in my prompt to say
0:10:50 like let’s let’s kind of just focus on this part of the question and kind of reorient it and that’s the
0:10:54 nice thing with this is that i can just hit copy as many times as i want if you’re paying for a pro sub
0:10:59 like there’s no cost to trying things there’s no cost to hitting paste again and you know you just try again
0:11:03 you just paste again let the model think again and try things and i think that’s like a really important
0:11:07 way of working with these models is to experiment and try things and and see how does changing the
0:11:11 context what files you have selected your prompt i use these stored prompts that come built in the app
0:11:15 so there’s the architect and engineer and these kind of help focus the model they give them roles
0:11:21 so like if i’m working on something complicated the architect prompt will kind of focus the model
0:11:26 on just the design and have it kind of not think about the code itself whereas the engineer is just the
0:11:30 code like don’t worry about the design just just kind of give me the code uh but just the things
0:11:34 that change maybe you should explain like when you say engineer prompt it’s literally you’re just adding
0:11:39 stuff that you copy and paste into the llm saying like you’re an expert engineer and this is what i
0:11:44 expect from you i expect you to give me xml that’s your job do it and that’s literally how the llms
0:11:49 work like okay i’ll do it absolutely yeah giving them roles is crucial telling them who they are
0:11:53 what their job description is you know what do i look for like giving them a performance review
0:11:57 evaluation uh all that stuff like i i find like the more detailed you are with your prompts the
0:12:01 more you can help like they kind of color the responses in an interesting way so just adding
0:12:05 the engineer prompt you see like it spent more time thinking about it so here this time it kind of said
0:12:09 okay this is the tailwind file here’s the change and this is the change that i’m going to do in a code
0:12:14 block so you know for the longest time before i had any of these xml features i was just kind of
0:12:18 using repo prompt and like getting these outputs and then just copying them back into my code base
0:12:22 manually and kind of reviewing them right it was like really the antithesis of vibe coding where
0:12:27 everything’s kind of automated yeah so i showed you a lot of stuff like pasting back seeing this xml
0:12:32 and then kind of putting it back in what’s really nice with repo prompts like chat flow is that all of
0:12:36 that is automated so if you want to vibe code and kind of think about it like just not think about
0:12:40 anything while being kind of cost effective too you can do that kind of work here and basically the way
0:12:47 this works here is i had gpt 4.1 as my main model this is all the context i gave it and then my pro
0:12:53 edit mode what it’ll do is it’ll actually ask a second model to apply the edits so i have gemini flash
0:12:58 that will go ahead and rewrite the file for me and just kind of do that work so i don’t have to manually
0:13:03 kind of incorporate these so if i was looking at here like okay this is the tailwind file i’d have to
0:13:08 open that up and then go ahead and introduce it in but having it kind of just go in the chat having
0:13:12 different models kind of do that work you know it makes a big difference working on repo prompt it’s
0:13:16 really like there’s building your context that’s like the biggest thing just picking what you want
0:13:20 you want to front load that work and you know in contrast to using agents you’re going to have those
0:13:25 agents kind of run off do a lot of work call a bunch of tool calls you see like o3 kind of
0:13:29 thought for 15 seconds thought through some tools to call it didn’t really make sense it just kind of
0:13:33 kept going and and ended up doing this and if you’ve used cursor a lot you know you’ll see like often
0:13:37 using o3 it’ll call tools that will like read this file read that file read this file
0:13:42 but if you just give it the files up front and you just kind of send it off to work with your
0:13:45 prompt you right away you get a response and you’re like okay well does this make sense to me am i able
0:13:49 to use this instead of letting it kind of churn for an hour yeah it’s a little bit more work at least
0:13:54 right now but it’s yeah i think you get a lot better results so it’s yeah yeah just front loading that
0:13:58 context being able to think through and iterate on that and that’s the whole philosophy around it is
0:14:03 just like thinking through like making this easy the context builder helps you find that context
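the context builder flow he describes — hand a cheap model the code maps plus your query and let it rank files — might be sketched like this; the prompt shape and helper names here are illustrative, not repo prompt’s actual implementation:

```python
def build_selection_prompt(user_query, code_maps):
    """code_maps: {path: short signature summary}. Builds a prompt asking
    a small model to pick the files relevant to the user's query."""
    listing = "\n".join(f"{path}: {summary}" for path, summary in code_maps.items())
    return (
        "You are selecting files for a coding task.\n"
        f"Task: {user_query}\n"
        "Files (path: definitions):\n"
        f"{listing}\n"
        "Reply with one path per line, most relevant first."
    )

def parse_selection(reply, known_paths):
    """Keep only paths the model actually saw, in the order it ranked them,
    discarding anything hallucinated."""
    return [line.strip() for line in reply.splitlines() if line.strip() in known_paths]
```

the point of the second helper is exactly the failure mode mentioned with a weak model like flash: you never trust the ranked list blindly, you intersect it with the files you actually have.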
0:14:08 you know eventually i’m going to add mcp support so you can query documentation find things
0:14:14 related to your query as well and just spend time as an engineer thinking through what do i want the
0:14:18 llm to know and then what do i want it to do and then make that flow as quick and as painless as
0:14:22 possible and like that’s kind of everything and i think you know going forward and you know as you get
0:14:27 serious coding with ai like that’s what the human’s job is in this loop the engineer’s job is
0:14:30 figuring out the context i think that’s the new software engineering job
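the front-loaded context format described earlier — file tree, code maps for unselected files, full source for selected ones — could be assembled roughly like this; the xml-ish tags are illustrative, not repo prompt’s exact output:

```python
def assemble_context(file_tree, selected, code_maps):
    """selected: {path: full source}; code_maps: {path: signature summary}
    for the whole repo. Front-loads everything into one prompt string:
    full contents for chosen files, high-level maps for the rest."""
    parts = ["<file_tree>", file_tree, "</file_tree>"]
    for path, summary in code_maps.items():
        if path not in selected:  # unselected files ship only as maps
            parts += [f"<code_map path='{path}'>", summary, "</code_map>"]
    for path, source in selected.items():
        parts += [f"<file path='{path}'>", source, "</file>"]
    return "\n".join(parts)
```

copying a string like this into a web chat is the whole trick: the model sees the repo’s shape, plus full detail only where you decided it matters.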
0:14:36 hey we’ll be right back to the show but first i’m going to tell you about another podcast i know
0:14:41 you’re going to love it’s called marketing against the grain it’s hosted by kipp bodner and kieran
0:14:46 flanagan and it’s brought to you by the hubspot podcast network the audio destination for business
0:14:50 professionals if you want to know what’s happening now in marketing especially how to use ai in
0:14:56 marketing this is the podcast for you kipp and kieran share their marketing expertise unfiltered
0:15:01 in the details the truth like nobody else will tell it to you they recently had a great episode
0:15:09 called using chatgpt o3 to plan our 2025 marketing campaign it was full of like actual insights as well
0:15:16 as just things i had not thought of about how to apply ai to marketing i highly suggest you check it out
0:15:20 listen to marketing against the grain wherever you get your podcasts
0:15:26 like i said before i was so surprised a lot of people haven’t talked about this because like
0:15:31 for me like right now cursor is good for like something very simple like okay change some
0:15:37 buttons or change some links or change whatever you know but anything complicated repo prompt i got like
0:15:44 way way better results so i’m curious like you know have you ever thought about like this being used
0:15:48 for things outside of coding and do you think would be useful for anything outside of coding yeah i mean
0:15:52 i’ve gotten academics reach out to me telling me they’re using it for their work uh there’s folks
0:15:57 in different fields for sure i think some of the ux has to probably improve a little bit but in general
0:16:02 like you know if you’re working with plain text files um you know repo prompt can service those use
0:16:07 cases for sure it’s all set up to read any kind of file and then apply edits to any kind of file too
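the search-and-replace style of edit mentioned earlier — the model emits an old block and a new block instead of rewriting the whole file — can be sketched like this for any plain-text file; requiring a unique match is just one defensive design choice, not necessarily how the app does it:

```python
def apply_search_replace(text, search, replace):
    """Apply one model-proposed edit; fail loudly if the anchor text
    isn't found exactly once, since an ambiguous or missing match
    risks corrupting the document."""
    count = text.count(search)
    if count != 1:
        raise ValueError(f"search block matched {count} times, expected 1")
    return text.replace(search, replace)
```

because it only cares about exact text, the same mechanism works on code, prose, or a legal draft alike.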
0:16:12 like i don’t differentiate if i can read it then i’ll apply edits for you and i think a whole bunch of work
0:16:16 is around just like gathering context and kind of iterating on stuff like even you know in doing
0:16:21 legal work i do think you know a flow that is still missing from this app it’s just that like
0:16:25 kind of collaborative nature i think there’s still some work that needs to kind of be done to kind of
0:16:29 make this a more collaborative tool make this a tool that that kind of syncs a little bit better with
0:16:33 different things like for now like developers use git and like that’s that kind of collaboration
0:16:38 bedrock but i think like lawyers need other things yeah yeah that’s something i think too is like yeah
0:16:43 repo prompt’s super useful but you have to be a little bit more advanced than an average vibe coder
0:16:48 or the average person using an llm and uh yeah you know no offense you can kind of tell one person has
0:16:52 built this you know it’s amazing but you can tell yeah yeah yeah no it’s all good i’m kind of curious
0:16:56 like why did you not go the vc route where’s repo prompt at right now like where is it now and what’s
0:17:00 your plan for it you know i’ve had a lot of folks you know bring that up to me and they’re kind of
0:17:04 thinking through like you know why not vc or whatever and i think it’s not something that the
0:17:10 door’s closed on forever it’s just i think right now it’s it’s work i’m able to build and you know i’m
0:17:14 able to kind of listen to my users and pay attention to what they need and i think it’s just
0:17:20 not super clear to me like where this all goes you know like this is an app that is like super useful
0:17:25 and it’s like helping me and i’m able to build it but like is it something that necessarily makes sense
0:17:29 to like have like you know a hundred million dollars invested into it to grow a huge team to like
0:17:34 build maybe i don’t know but like you know i want to kind of take things as they go as well and
0:17:38 you know right now i’m able to monetize it a bit you know it’s got some passionate users
0:17:43 you know it’s working well this way but again like it’s all new you know to me like i’ve not
0:17:47 gone through this whole you know vc story myself i’ve had friends who kind of steer me away from it but
0:17:51 you know i i try to like listen to the folks around me too and see where yeah there’s pluses and minuses
0:17:56 to vc like you’ll hear on twitter and things like that like people who are like oh vc is horrible or oh
0:18:01 it’s amazing you know there’s good and bad to all of it yeah you know i feel like everything with ai
0:18:04 right now is like who knows what’s going to happen like yeah in a year everything could be
0:18:11 different in five years who the hell knows right yeah like right now because ai is such a big wave
0:18:15 that’s why we call the show the next wave right it’s such a large wave of transformation happening
0:18:21 that you are going to see the largest investments ever yeah i think in history yeah as well as the
0:18:25 largest acquisitions ever yeah and i think these have yet to come yeah we’re like in the early
0:18:30 part of this transition i think the best two routes for you in my opinion would be either to go
0:18:35 really big and go the vc route or to go more like hey who knows what’s going to happen with it i just
0:18:39 want to like get my name out there and i can leverage my name for something else in the future
0:18:43 and like open source it that’s my kind of thought on strategically what i would do it’s like either go
0:18:48 really big or open source it and make it free and just put it out there and say yeah you know and get
0:18:52 some reputation benefit from it there is a free tier it’s not open source yeah but there is a feature you
0:18:56 know the thing about open source actually is something i’ve thought about a lot and the big issue with it
0:19:01 right now especially as people are building ai tools is that like it’s never been easier to fork
0:19:07 a project and kind of go off and just build it as a competitor if you’ve looked at cline like cline’s
0:19:11 a big tool you know that came around actually started around a similar time as me working on repo prompt
0:19:16 and uh if you’re not familiar cline is an ai agent that sits in vs code and it’s pretty cool but
0:19:21 the thing that is not so cool about it is that it eats your tokens for lunch like that thing will
0:19:25 churn through your wallet like faster than any other tool that exists just because it goes off
0:19:30 and reads files stuffs the context as big as possible so a lot of people really enjoy using
0:19:33 it because it has good results for certain things but yeah that cost is very high but the thing that
0:19:37 i was trying to bring up with this is that like so cline was actually forked a few months ago by
0:19:41 another team of developers and the alternative is called roo code and if you look at openrouter
0:19:45 and some stats like roo code is actually surpassing cline and so you know that fork is now overtaking the
0:19:49 original and you know that’s the kind of space that we’re in where like different teams will kind of
0:19:52 take your code take it in their direction and then all of a sudden they’ve overtaken you and
0:19:57 you know you kind of lose track of you know where things are going there so like it’s a crazy space
0:20:01 it’s never been easier to open pull requests with ai you don’t need to understand the code you’re like
0:20:05 oh i have this open source project i’m just going to fork it and add my features and kind of go and
0:20:10 and it’s a tricky thing but like you know having a free version and kind of trying to ship and grow a
0:20:15 community of users who are passionate who can talk back to you and you know i mean that’s kind of
0:20:18 the route i’ve taken right now and it’s kind of been working so far i was in beta for a long time
0:20:22 yeah you know it’s still new figuring out where to go next with it and it’s mac only right now is that
0:20:26 correct yeah that’s true it’s mac only and i think a part of that is that i started off you know just
0:20:30 kind of trying to think about like you know how do i build this in a good way and the problem is like
0:20:35 i immediately ran into issues trying to build for different platforms and like i spent a bunch of time
0:20:40 debugging just getting svg icon rendering you know all these little things that are just like rabbit holes
0:20:44 and you’re like okay well you’re so abstracted from the base of like what’s happening and you spend a lot of time
0:20:49 just solving build issues that it’s like well i’m just gonna go ahead and do build native and just run with it
0:20:54 and have better performance doing so like you know if you open an ide like vs code you open up like a huge repo
0:21:00 what actually happens is that it’ll load the file tree and it will just kind of lazy load everything
0:21:04 like not everything needs to load because if you’re opening an ide you know as a coder traditionally
0:21:08 you only have a couple files open at a time maybe you have a dozen right you’re not going to be
0:21:13 processing 50 000 files at the same time but an ai model can you know if you give it to gemini like
0:21:17 gemini will want all those files it will want as much as you can give it because they can read all
0:21:23 of it and so you need a tool that is built different that is kind of organized in a way where it’s kind
0:21:29 of thinking first through that performance of mass data processing that you need to kind of do it’s a
0:21:33 whole different way of working that’s why it’s native because like i want that performance processing all
0:21:38 these files there’s all this concurrency happening where you’re like in parallel editing these files
0:21:42 like processing them and doing all this stuff like it’s very hard to do if you’re just you know using
0:21:45 javascript or typescript when i use repo prompt it seems like you’ve done a really great job of building
0:21:51 it it works really well it is all just you like right now yeah it is just me yeah i’ve been working
0:21:56 on it a lot yeah that’s crazy yeah it’s come a long way i iterated a lot on it you know but that’s
0:21:59 the power of dogfooding too like if you’re not familiar folks listening dogfooding is when you like
0:22:04 kind of use your own product to iterate on it and build with it and you kind of make it a habit of
0:22:08 making sure that you’re a number one user of your app you know your own product to make sure that you
0:22:13 see all the stuff that sucks about it and for the longest time like you know it really sucked and
0:22:18 just that struggle and that that pain of using it and forcing yourself to feel that pain like that’s
0:22:22 what makes it good that’s where you’re able to kind of feel those things that the users using the
0:22:26 app will feel and and that’s when you end up with something that is great in the end so where do you
0:22:31 think repo prompt is going like long term which maybe now maybe long term now means like one year
0:22:35 where’s repo prompt going next year that’s long term it’s hard to say honestly like it’s weird you know
0:22:40 like in december like openai announces o3 and they’re like oh it beats all the arc-agi tests
0:22:44 and you’re like well is this agi like what is this like and then it kind of shifts and it’s like okay i
0:22:50 mean like it’s a better model it lies to you it’s not like uh the messiah you know right so it’s hard to
0:22:55 say like i don’t know like where we go like i have ideas on like where the future is one year from now
0:22:59 i think i’ll have to adapt this product and keep iterating on it to kind of stay relevant so it’s
0:23:04 going to keep changing but like i think that the flow i’m kind of pushing towards of that context
0:23:09 building i think that remains relevant for a while longer and what improves is the layers of automation
0:23:14 around that work yeah so i think like long term i still think that is kind of the vibe that i want
0:23:19 to go towards though i think just like integrating mcp just embracing that like universality of all
0:23:23 of these different tools so for folks listening if they’re not sure what mcp is it’s another acronym
0:23:29 we got lots of in ai so the idea there is traditionally if you use like claude or openai they have tools
0:23:32 and those tools you know one of them could be like search the web or one of them could be like read the
0:23:37 files on your thing or look up documentation or these kinds of things and there’s this protocol mcp
0:23:43 that like creates like an abstraction layer so that any client app can implement this protocol
0:23:48 and then users can bring their own tools so if a user comes in and says like oh i want to use and
0:23:51 there’s this new one that’s really cool it’s called context7 where basically they’ve gone ahead and
0:23:56 built a server that fetches the latest documentation for whatever programming language you’re using and
0:23:59 it’ll kind of pull that in as context so you can say okay great fetch the latest angular docs or
0:24:03 whatever docs you care about and then you can bring that in so that kind of work where you’re like
0:24:08 doing that context retrieval that’s super important or like stripe has one too where basically all the
0:24:12 docs for their tool are set up and you know you just plug in the stripe mcp and then
0:24:15 all of a sudden if you’re trying to vibe code your way through integrating stripe like that’s
0:24:20 super easy that the work is kind of handled you can plug in your api keys onto it so it can even talk
0:24:24 to the back end for you that whole work is kind of automated so it’s all about having tools for
0:24:28 folks using these models to kind of automate connecting to different services in this like
0:24:32 universe of all these different you know services that exist in the world yeah i kind of think of it
0:24:36 like i mean it’s different than xml but for me xml is like
0:24:41 the information language that ai can understand and mcp is like the same thing for any service you want to
0:24:46 use or tool it’s how the ai knows how to work with those things yeah and funny enough you
0:24:50 mention xml because that’s actually one of the things that i do a lot with repo prompt is parsing xml
0:24:54 and i think one strength there that i have that like a lot of other tools are kind of ignoring
0:24:58 so traditionally when you’re working with these language models as a developer and you can see
0:25:03 this if you use chatgpt you’d be like hey like um search the web it’s going to use the search tool
0:25:08 and you’ll see it say you call tool search and it’ll go through but what happens when it’s doing that
0:25:15 is that basically it calls that tool it stops waits for the result and then continues i think it’s a bit like
0:25:20 the robot is kind of being rebooted as a new session with that new context because basically every tool
0:25:25 call is a new query so you’re giving back the old information but you’re not necessarily talking to
0:25:29 that same instance it’s like a different ai instance that is answering your question from the
0:25:33 new checkpoint so like that’s like a weird thing so you know as you’re making all of these tool calls if
0:25:37 you use cursor you know it’ll make like 100 tool calls but by the end of it you know you’ve gone
0:25:41 through 25 different instances of these models and then you get a result at the end and you’re like
0:25:45 well you know it’s like weird like what actually happened you know there’s some data loss like weird
0:25:48 stuff you know we don’t know how this is yeah it seems like that could create like reliability
0:25:52 issues right because like you know the llms like sometimes they give you amazing results and other
0:25:56 times yeah it’s like oh what is this and so every time you’re doing a new tool it sounds like you’re
0:26:00 almost recreating the chance of it going wrong in a way exactly yeah you’re you’re aggregating these
0:26:04 issues but you don’t even know where that info is coming from there could be different servers that are actually
0:26:08 processing all these different tool calls and yeah it’s weird sometimes you’ll have like oh that server
0:26:12 has some like chip issue on its memory and like that actually causes some weird issues where
0:26:16 claude is actually really dumb today um but on the other one it’s it’s a lot smarter because
0:26:20 their chip the memory chip is working fine you know you don’t know right so that kind of thing so
0:26:23 just to circle back to what i’m doing yeah the way that i’ve kind of gone about this is
0:26:29 the way i call tools is you have your xml and the ai will just answer in one instance and it’ll
0:26:32 just give you the whole thing and it can call a bunch of tools in there it can be like hey like i
0:26:37 want to call this this do this and this and then i just parse that and then bulk call the tools
0:26:41 and then get the results and then we go another instance with the results and you can kind of
0:26:45 go back and forth like that so like not have to wait on each single one you’re actually just
0:26:49 bulk sending them out getting that data it’s a lot more efficient you’re able to process say like 25
0:26:54 queries you know get 23 of 25 back we’ll bring them all in you know let’s work from there and see how it goes
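the bulk tool-call pattern eric describes here can be sketched roughly as follows — the model answers in one shot with an xml block naming several tool calls, the app parses them all, runs them in parallel, and feeds the batch of results back in a single follow-up turn. note this is an illustrative sketch, not Repo Prompt’s actual format: the tag names (`tool_calls`, `call`) and the tool registry are assumptions.

```python
# Sketch: parse every tool call out of one XML model answer, then
# dispatch them in bulk instead of pausing the model after each call.
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

# Hypothetical local tools the model is allowed to invoke.
TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",
    "search": lambda query: f"<results for {query}>",
}

def run_tool_calls(model_output: str) -> list[str]:
    """Parse each <call> element and run all requested tools in parallel."""
    root = ET.fromstring(model_output)
    calls = [(c.attrib["tool"], (c.text or "").strip()) for c in root.iter("call")]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(TOOLS[name], arg) for name, arg in calls]
        return [f.result() for f in futures]  # results come back in call order

# One model response can request several tools at once:
reply = """<tool_calls>
  <call tool="read_file">src/main.py</call>
  <call tool="search">concurrency bug</call>
</tool_calls>"""
results = run_tool_calls(reply)  # both calls resolved in one round trip
```

the payoff is exactly what he describes: instead of 25 sequential tool calls each spawning a fresh model instance, the batch of results goes back to the model in one new turn.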
0:26:57 and so that kind of thinking so i think there’s a lot to kind of play with in terms of you know how
0:27:01 you’re even getting this data back and forth from the llms because at the end of the day it’s all text
0:27:05 you know text and images maybe um some video in some cases but like really text just for your coding
0:27:09 like that’s that’s the thing that you’re working with and you can do a lot with text manipulating
0:27:14 it and playing with it to kind of get good output so what do you think i’ve heard you know yc and
0:27:18 others i think gary tan said that i can’t remember if it’s 80 but i think he said like 80 percent of the code
0:27:23 for the the startups going through yc right now is ai generated that number could be wrong do you think
0:27:28 in three years from now do we still have like normal engineers who don’t use ai at all is that a real
0:27:32 thing do you still have the holdouts well first of all like i think saying a percent like that of how
0:27:37 much of it is ai generated it’s a bit misleading and easy to hype yeah like i can go ahead and like
0:27:42 every line of code i could basically like type it in pseudocode to the ai model and like have it paste
0:27:47 it back in as like a fleshed out javascript function and say 100 percent of my code is written by ai
0:27:52 it really depends on how your workflow is what your pipeline looks like i do think fundamentally the job
0:27:56 of an engineer has changed it’s already done it’s already completely different like you can’t you can’t
0:28:00 work the same way but it depends on what point in the stack you’re working on like i i work with some folks
0:28:05 who do some like really low level you know graphics work and i talked to someone about like how they can’t
0:28:09 really use ai models because the ai models just hallucinates everything like it’s just not trained
0:28:14 on anything that they work on so it’s just useless for them but then if you look at someone who’s a
0:28:18 you know web developer well like 98 percent of the training code is web code and web
0:28:23 framework code and so it’s like okay well yeah 100 percent of that work can be done by ai it’s really easy
0:28:28 so it really depends on where you are in the stack what kind of tool you’re working with and you know
0:28:33 how well the ai models can help you in that but i think like as we move forward more and more you’re
0:28:37 going to want to have ai models thinking through hard problems for you because it just happens much
0:28:41 faster as they get better at math at solving like you know connectivity and architecture like architecture is
0:28:47 something that models like o3 and o1 pro and hopefully o3 pro just excel at they’re they’re very good
0:28:52 at finding good ways of like organizing your code and helping you plan how you connect things together
0:28:56 and i think that’s a big part of software engineering in general is just organizing your code because the
0:29:00 actual process of writing it like you know that’s not the fun part or even the interesting part it’s
0:29:05 that part of organizing and and i think a human job with this is to like iterate on those plans
0:29:09 iterate on these ideas because that’s like the kernel of what an ai will generate code with yeah so i
0:29:13 think that’s where the work is you know i used to open the editor like when i’m working on repo prompt
0:29:19 i don’t write a ton of code by hand like most of it is done by ai but like i spent a lot of time
0:29:24 thinking about architecture i spent a lot of time thinking about problems and debugging and thinking
0:29:28 through like i won’t just like hit the button and say like solve my problems fix my bug like that’s just
0:29:32 not helpful but like if i read through the code i’ll be like okay like i i think there’s like a
0:29:36 data race going on over here this part connecting to this part like there’s some concurrency issue
0:29:41 i’ll add some logs okay great i’ve got some idea of like what’s going on here perfect then you can
0:29:45 kind of feed that data into the ai model and have it kind of think through you know a resolution and often
0:29:50 once you’ve done those steps of troubleshooting the ai model can solve your problems but you have to sit
0:29:55 down and think through how things are connected and understand what is actually happening so i think
0:30:00 that’s kind of where that work changes that’s a great uh engineering answer yeah i’m looking for
0:30:04 the thing that goes viral on x right that you know like yeah yeah all engineers will be gone next year
0:30:10 this kind of thing you know listen the job is fully changed i think from today on like if you’re not
0:30:13 using these tools you’re not learning how they work like i think that’s like an issue because like i don’t
0:30:17 think you know a traditional engineer who spends his whole career just typing code up like that doesn’t
0:30:22 exist anymore but what does exist is someone who understands code and who can read it and who
0:30:26 understands you know what questions to ask and if you’re prompting like about the code if you
0:30:30 understand you know the connections that’s where you’re going to get the best results and that’s why
0:30:34 like a tool like repo prompt is so helpful because you’re able to do that and feed the right context in
0:30:39 but if you’re just saying like make the button blue or like move it over here i mean that works to some
0:30:43 extent you know if as long as your instructions are simple enough and you know what you want you can get
0:30:47 there but like at a certain point you fall off and you know that’s when it stops working and maybe that
0:30:51 point where you fall off gets further and further as the models improve but i don’t think that like in
0:30:56 the next 10 years we get to a point where that point stops existing uh one thing that we didn’t talk
0:31:00 about that i was kind of curious to talk about was like what do you do at unity so what i do there is
0:31:05 i’ve been doing uh kind of xr research and xr engineering and so i work on a toolkit called the xr
0:31:11 interaction toolkit and basically it’s a framework for developers to build interactions with xr so if
0:31:16 you’re if you’re putting on an oculus quest or a hololens or you know like uh an apple vision
0:31:20 pro you want to basically interact with objects in your scene in your world you know like in ar if
0:31:24 you’re walking up and you want to pick up a virtual cube like how do you process that interaction of
0:31:28 you grabbing the cube and picking it up and looking at it so that’s like i’ve done a lot of research on
0:31:32 that that interaction and input like i’ve written specs that are adopted by like the industry
0:31:37 in terms of hand interaction so like you know just tracking your hands how do you grasp something what
0:31:40 should you be doing if you want to poke a button that’s like not there like what does that look
0:31:44 like so that kind of stuff that’s that’s what i do there that’s amazing it’s like that’s a really
0:31:49 complicated engineering work how are you doing that doing a repo prompt and then you have a baby like
0:31:56 yeah how are you doing all this i mean i don’t have a lot of free time obviously i uh yeah yeah but
0:32:01 i’m passionate about what i do at work too and and then repo prompts you know this is my other baby
0:32:04 and i just think a big part of it is just you know when folks come to me and there’s like
0:32:09 something that’s like bugging them about the app you know i i just get like an itch and i have to
0:32:14 fix it for them yeah so like i just keep tinkering on it but i try to get some sleep in so i don’t
0:32:17 cut into that too much one thing i was thinking about too is like i have an 11 year old son you’ve
0:32:22 got a baby yeah this actually one reason i even like you know helped start this podcast was i’m
0:32:26 constantly thinking about where ai is going and wanting to stay ahead yeah and also think about what
0:32:31 does it mean for me and my family like quite honestly you know the selfish level and people used to
0:32:34 ask me like when my son was born because he was born in san francisco around tons of like
0:32:39 founders and vcs all the kind of people to be around like the birthday parties right it was all
0:32:43 people from like yc and people like that and it’d be asking me like you know what do you think your
0:32:48 son should do in the future what will his job be you know this is like 11 years ago and i was talking
0:32:52 about drones like he probably needs to be like a drone defense engineer like building anti-drone
0:32:57 systems or something it would be my common line that i would say at parties but now with ai like
0:33:02 because at that point we did not know ai would advance as fast as it has no it’s just happened
0:33:05 so fast right it was all just like some stuff out of a book it was like oh yeah sure they’re
0:33:10 talking at stanford and they got some cool demos but like nothing’s working yeah now it’s working so
0:33:14 like with your child have you thought about that yet like oh of course what do you think they should
0:33:20 learn i have no idea yeah i have no idea it’s everyone right like like what do you even teach your
0:33:25 children like is it is it important to learn to code do we teach them logic morals probably all of this
0:33:30 and more yeah stay flexible and super fluid i think so you know but it is funny on that topic i look at
0:33:36 engineers coming out and learning to code with ai around and i think they’re at a disadvantage you
0:33:40 know it’s unfortunate that like you know if you’re starting to code today and you have ai to lean on
0:33:44 you just don’t have that struggle you just don’t have the pain that like i had to go through when i
0:33:48 started to code you know engineers who’ve been in the field for so long had to struggle
0:33:53 and not get the dopamine hit of a fixed problem right away they had to study it and understand how it
0:33:57 works like that just doesn’t exist anymore because the ai just solves it for you and i think that’s
0:34:01 true in the code but it’s going to be more and more true in every field and so i think like there’s
0:34:06 going to be a need for people to have the restraint to kind of put aside these tools to struggle a little
0:34:12 bit i think there’s a ton of value in kind of using them to learn and grow but there’s also like that
0:34:17 restraint that you need to form to kind of have the struggle because that’s where the learning is and
0:34:21 it’s really tricky and i and i don’t know how you you solve that now because it’s it’s too easy not to
0:34:26 struggle now which which is a big problem yeah i’ve heard uh jonathan blow and if you know of him
0:34:31 of course yeah the game designer he talks about exactly what you’re saying that you know it’s in
0:34:35 the future like yeah sure ai could get amazing at coding in the future but it’s also going to create
0:34:39 issue where like just like you said people are not going to learn to properly code he was already
0:34:45 complaining before ai about how bad the code was and then now with ai it’s like okay now we’re kind of
0:34:48 screwed i guess because like we’re gonna have a situation where like no one knows what’s going on
0:34:54 and like yeah you’re entirely dependent on the ai for everything yeah it’s a crutch so easy to reach
0:34:59 for and what do humans do but that’s the thing i think maybe that’s the middle part you know where
0:35:02 where we’re at this point where it’s like the ai is just not quite good enough to kind of solve
0:35:06 all the problems and you still have problems to solve and you still have people that need to kind
0:35:10 of work with the machines to kind of figure out how to go maybe at some point in the future
0:35:14 all of it is moot i know some folks think that and maybe it doesn’t matter but i think you know
0:35:17 there’s going to be some discomfort in the middle where you know the machines are not quite good
0:35:22 enough to solve every problem we lean on them as if they are and then you know we’re kind of atrophying
0:35:26 a lot of skills that we have you know i haven’t driven in a tesla with fsd but i’ve heard
0:35:30 folks say the same thing there where like if they’re using it all the time they actually like suck at
0:35:34 driving without it and it’s like right you know like more and more that’s going to kind of be a thing
0:35:38 where it’s like we’re
0:35:42 almost living in like one of those sci-fi novels right like everything being super super safe you
0:35:46 know i live in japan you used to live in san francisco everything’s super safe in japan
0:35:51 and there’s one reason i like it but you do lose some freedom in that yeah but do i really want my son
0:36:01 you know driving now like if i really think about it if there’s an alternative um yeah not necessarily you
0:36:01 know i agree i mean i have that same debate with my wife you know was saying like i don’t think our
0:36:05 daughter is gonna ever have a driver’s license and she’s like i don’t think so you know like we’ll see but
0:36:09 i don’t know like there is the safety part for sure and i think that’s like really interesting and
0:36:15 and hopefully like that is the case that like ai just does make it safer out yeah right so eric it’s
0:36:19 been awesome and uh maybe we should you know tell people where they can find you and uh where they
0:36:27 can find repo prompt and yeah so i’m uh provencher with a v on x so it’s like pvncher on x uh and
0:36:30 most socials that’s my handle all over so you can reach out there my dms are open if you have
0:36:36 questions and repo prompt is at repoprompt.com so you can just head over there and uh find the app
0:36:40 free to download and uh nice discord community too if you want to hop over there and send me some
0:36:44 messages and tell me what you think like please do yeah thanks for having me on nathan it’s been
0:36:48 great chatting with you yeah appreciate it it’s been great yeah yeah yeah had a lot of fun cheers
0:36:50 likewise take care all right
Episode 57: Can simply “Vibe coding” with AI really replace the need for deep code context when building real applications? Nathan Lands (https://x.com/NathanLands) is joined by Eric Provencher (https://x.com/pvncher), founder of Repo Prompt and an XR engineer at Unity, to reveal the secret AI prompt tool quietly powering Silicon Valley’s top engineers.
This episode dives deep into why the current trend of “Vibe coding” with tools like Cursor often falls apart for complex tasks — and how Repo Prompt closes the gap by letting you build effective, highly targeted context for AI coding. Eric breaks down the philosophy behind contextual prompting, gives a live demo, and shares how Repo Prompt’s unique features like the context builder and codemaps give power-users real control over LLMs like Gemini and Claude. Beyond coding, they discuss implications for the future of engineering, learning, and the evolution of dev tools in the age of AI.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
- (00:00) Vibe Coding Myths Unveiled
- (03:15) Repo Navigation for Flutter Devs
- (06:37) Gemini 2.5 Extends Model Context
- (11:18) Automating File Rewrites with AI
- (15:33) The Next AI Wave
- (20:58) MCP: User-Customizable Tool Integration
- (23:53) Efficient AI Tool Integration
- (28:32) XR Interaction Toolkit Developer
- (31:01) AI’s Impact on Coding Learning
—
Mentions:
- Want Matt’s favorite Coding AI tools? Get em’ here: https://clickhubspot.com/tbv
- Eric Provencher: https://www.linkedin.com/in/provencher/
- Repo Prompt: https://repoprompt.com/
- Unity: https://unity.com/ai
- Cursor: https://www.cursor.com/en
- Gemini: https://gemini.google.com/
- Claude: https://claude.ai/
- Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
—
Check Out Matt’s Stuff:
• Future Tools – https://futuretools.beehiiv.com/
• Blog – https://www.mattwolfe.com/
• YouTube- https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
- Newsletter: https://news.lore.com/
- Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano