AI transcript
0:00:20 today we want to dive deep into what ChatGPT's o1 models are capable of. We believe
0:00:25 that a lot of people really misunderstand what they’re actually capable of. And most
0:00:31 people are probably using o1 wrong. And in this episode, I think we might actually blow
0:00:35 your mind with what they can do. I’m going to show you some simple workflows that I
0:00:41 use to actually create short form content from my long form content using o1. But then
0:00:46 Nathan, he’s going to break down his flow for actually building a game. And he’s going
0:00:50 to give you a little sneak peek at the game he's building and, you know, stick
0:00:54 with him when we get to it, because it does take a couple of minutes to understand it.
0:00:58 Once you have that aha moment, which you’re going to have that aha moment when he’s showing
0:01:03 it to you, you're going to have your mind blown by what o1 pro is capable of. And Nathan's
0:01:07 game is looking killer. And you’re going to see what I’m talking about if you’re watching
0:01:13 this one on video. So stick around for this one. We’re going to dive in and show you what
0:01:21 o1 from OpenAI is really capable of. Look, if you're curious about custom GPTs or are a
0:01:26 pro that's looking to up your game, listen up. I've actually built custom GPTs that
0:01:31 helped me do the research and planning for my YouTube videos, and building my own custom
0:01:36 GPTs has truly given me a huge advantage. Well, I want to do the same for you. HubSpot
0:01:41 has just dropped a full guide on how you can create your own custom GPT and they’ve taken
0:01:46 the guesswork out of it. We’ve included templates and a step-by-step guide to design and implement
0:01:51 custom models so you can focus on the best part actually building it. If you want it,
0:01:56 you can get it at the link in the description below. Now back to the show.
0:02:02 Yeah, I feel like it’s the first model where like there’s a major disconnect between like
0:02:07 how good the model is and how people perceive it to be. Like they think it’s just like a
0:02:10 slower version of a chatbot and they're like, why, you know, how is this an upgrade?
0:02:14 And so I think there’s a major disconnect and it feels like it’s also the first time
0:02:18 there’s a model where people are kind of the limit and your knowledge that you bring to
0:02:22 the table when you use it are kind of a limit, right? Because like, if you don’t know how
0:02:27 to properly use the tool, the results you’ll get back will be way bad and it takes several
0:02:31 minutes to get the response back and it’s a horrible experience. You’re like, I waited
0:02:35 for a few minutes and you give me some crap back. Why am I ever going to use that again?
0:02:38 And so apparently a lot of people have like tried that one time and just never use it
0:02:41 again. The people who like figure out how to actually properly use it are like kind of
0:02:45 blown away by what you can do with it. Yeah. Yeah. And in a few minutes here, we’ll actually
0:02:50 dig in a little bit and show off exactly how we’re using it. But you know, that’s, that’s
0:02:54 totally right. Like I use regular ChatGPT and I'll dump like a transcript from
0:03:00 a video in with the timestamps in it and tell it to find me like clips from that video.
0:03:04 And with regular ChatGPT, it'll find like moments from the transcript, but the timestamps
0:03:09 will be like way off. Like it can't seem to line up the timestamp with the
0:03:14 clip it found. When I do it with o1, o1 is like double checking, triple
0:03:17 checking. And you know, that’s kind of what it’s doing when it’s doing all of this processing
0:03:23 and it’s taking so much longer. It’s basically prompting behind the scenes, getting a response
0:03:28 and then double triple quadruple checking and sort of reevaluating its response over
0:03:32 and over and over again before it finally goes, okay, we think we got this right. Here’s
0:03:33 our response.
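To make that idea concrete, here is a minimal sketch of a generate-then-re-check loop, assuming a hypothetical callModel helper that sends a prompt to some chat model. It only illustrates the "draft, critique, revise" pattern being described; it is not how OpenAI actually implements o1.

```typescript
// Conceptual sketch only: NOT how OpenAI actually builds o1, just an
// illustration of the "draft, re-check, revise" loop described above.
// `callModel` is a hypothetical stand-in for any chat-completion call.
async function callModel(prompt: string): Promise<string> {
  // ...send the prompt to some model and return its text...
  return "";
}

async function answerWithSelfChecks(question: string, maxPasses = 3): Promise<string> {
  let draft = await callModel(`Answer the following:\n${question}`);
  for (let pass = 0; pass < maxPasses; pass++) {
    // Ask the model to critique its own draft before committing to it.
    const critique = await callModel(
      `Question:\n${question}\n\nDraft answer:\n${draft}\n\n` +
        `List any mistakes or gaps in the draft. Reply with just "OK" if it looks correct.`
    );
    if (critique.trim() === "OK") break; // "okay, we think we got this right"
    // Otherwise revise using the critique and loop to check again.
    draft = await callModel(
      `Question:\n${question}\n\nDraft:\n${draft}\n\nCritique:\n${critique}\n\n` +
        `Rewrite the answer, fixing the issues above.`
    );
  }
  return draft;
}
```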
0:03:37 Well, we think that's what's going on. I mean, we think, you know, like OpenAI is not
0:03:40 being entirely transparent about what's going on. They're kind of alluding to having
0:03:44 some kind of other secret sauce, but yeah, that’s probably the majority of what’s going
0:03:48 on. But, but yeah, man, I think that's the key, and we will show later, is giving
0:03:52 these models the proper context. With like Claude or any of the other
0:03:55 just regular chat models, you know, the LLMs, they don't have a reasoning model attached
0:03:58 to them. Like you can just kind of go back and forth. You can just ask it a simple
0:04:02 question and it instantly responds back and you just kind of go back and forth. But with like
0:04:06 o1 or o1 pro, you know, you can throw so much context at them. You can
0:04:11 copy and paste in, you know, like 10 pages, 100 pages, maybe even 1,000. Not sure
0:04:20 what the exact token limit is on o1 or o1 pro, I believe it's 125,000, but people
0:04:26 who've tested it have said to kind of stick to like 50 to 75K. Like there's some kind
0:04:29 of thing where it's almost like RAM back in the day, or memory, like, yeah, you can hold
0:04:33 that much, but don't like fill it up. Well, according to Perplexity here, I actually did
0:04:40 a quick search on it. According to Perplexity, the o1 model has a 200,000 token context window
0:04:46 with a maximum output token window of 100,000. So you could put up to, that's about 75,000
0:04:51 words, into a single prompt. And the results, at least with coding, that you
0:04:55 get from sharing it that much context versus just asking it a question, it's
0:04:59 like a night and day difference. And I found the same thing with writing, and
0:05:03 I've heard other people sharing examples too. They've tried o1 for writing and they're
0:05:06 like, oh, this kind of sucks, or it's maybe slightly better than Claude, or it's about
0:05:11 the same as Claude. But actually, with o1 pro, if you give it tons of examples
0:05:16 of like, here's good writing, here's a good newsletter, or here's my best newsletter issues
0:05:21 and some people who write newsletters I really respect and wish I could write like,
0:05:26 the stuff that it gives you back, even just for editing, is so good. And actually,
0:05:32 before o1 pro, I almost never used AI for my newsletter at all. And so I've been using
0:05:37 Wispr Flow, basically where I can just press a button and just talk to the computer,
0:05:42 and then it just uses AI to transcribe what I said. I think it was the one that Riley
0:05:44 was talking about on that episode we did with Riley. I think he might have brought that
0:05:48 up and said he was actually using that to code with. I believe so. I'm sure I've learned
0:05:52 so many things from this show like subconsciously like we’re like, oh yeah, I’m going to try
0:05:56 that out. I don’t know why I’m trying it out, but I am well, I have to subtly slip that
0:06:00 in so people go, oh, they did an episode with Riley Brown. I got to go listen to that one.
0:06:04 I've been using it that way. And lately for my newsletter, I will use Wispr Flow and
0:06:09 just talk to it. And you know, and I'll talk for like five to 10 minutes about whatever
0:06:13 I want my newsletter to be about for that day. And then I'll hand it off to o1 pro.
0:06:18 And I give o1 pro, you know, I'm copying in examples of my favorite, my best newsletter
0:06:23 issues, but also newsletters that I like and things I don't like. And it's doing an incredible
0:06:28 job at editing what I said to make it really presentable and professional.
0:06:31 And the previous models were nowhere near that caliber.
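As a rough sketch of the kind of single mega-prompt being described, assuming hypothetical file names for the example newsletters and the dictated notes:

```typescript
// Hypothetical sketch of assembling the newsletter prompt described above.
// The file names are made up for illustration; the dictated notes would come
// from a dictation tool like Wispr Flow.
import { readFileSync } from "fs";

const favoriteExamples = readFileSync("examples/newsletters-i-like.txt", "utf8");
const myBestIssues = readFileSync("examples/my-best-issues.txt", "utf8");
const dictatedNotes = readFileSync("today/dictated-notes.txt", "utf8");

const prompt = [
  "Here are newsletters I admire and want to learn from:",
  favoriteExamples,
  "Here are my own best past issues, so you can match my voice:",
  myBestIssues,
  "Below is a rough spoken-word draft of today's issue. Edit it into a presentable, professional newsletter in my voice:",
  dictatedNotes,
].join("\n\n");

// Paste the whole thing into one o1 pro message instead of drip-feeding it turn by turn.
console.log(prompt);
```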
0:06:35 Well, and I think, I think what you’re saying too is like getting to the root of like why
0:06:40 we mentioned that most people are probably using o1 wrong. Like the regular ChatGPT,
0:06:46 GPT-4o, it's designed to be conversational, right? It's designed for you to ask it a one
0:06:51 or two sentence question. It gives you a response. You give a follow up question, and it’s designed
0:06:56 to do that sort of back and forth, back and forth. And you get into this long conversation
0:07:00 and it sort of ideally remembers the context of the previous conversation.
0:07:05 o1 models, on the other hand, are not really designed to do that. o1 models are really
0:07:11 kind of designed for you to dump as much information as you can into that very first prompt, like
0:07:15 you mentioned, right? Dump in the newsletters you like, dump in your own newsletters, dump in
0:07:19 maybe the information that you want included in the newsletter that it's about to write
0:07:24 for you and give it all of that information, all of that context from the very first prompt,
0:07:29 hit submit, and then let it go to town doing its processing, right? Let it spend
0:07:34 10 minutes, 15 minutes, however long it takes processing all of that information. And then
0:07:38 it’s going to give you a nice detailed output with all of that information.
0:07:43 If you try to use it to like chat and be conversational with it, ask it a one sentence question
0:07:48 and then wait for a reply. Ask it a follow up question. You’re going to hate it, right?
0:07:50 Because it’s going to take forever with each question.
0:07:54 A lot of people have noticed too that it seems like it’s really great at like one-shotting
0:07:58 things, like in terms of you want to give that huge context right up front. And often
0:08:03 after that, you’re kind of done with the conversation. You kind of like when you want to use it again,
0:08:08 you open a new, you know, o1 pro chat, right? Like you can kind of continue, but I found
0:08:11 like, you know, the more and more you throw at it, eventually it kind of gets more
0:08:15 confused. And in that first prompt, if you just give it tons of context, it's able to reason about
0:08:19 all of it and give you a great, you know, response. And I think most people don’t realize
0:08:22 that they, like you said, they’re like, they’re waiting for several minutes and they’re like,
0:08:25 okay, that kind of sucks. Let me talk to it some more and then they wait more and just,
0:08:26 it never goes anywhere.
0:08:30 I think you pulled up some tweets about some of the interesting ways that, like,
0:08:35 scientists and things like that are using o1. So let's maybe talk about those
0:08:38 real quick first and then we’ll show them some of the ways that you and I have actually
0:08:40 been playing around with it.
0:08:43 You know, this is something that I’ve been noticing too, like online is there’s a huge
0:08:47 disconnect between like the people who are just trying to chat with o1 or o1 pro and the
0:08:52 people who are like trying to like solve really hard problems, you know, with it. For example,
0:08:56 here’s this doctor on X who’s been sharing really great stuff. You know, and he’s talking
0:08:59 about how, you know, he thinks people don’t realize how good this model is. Like they’ve
0:09:04 been using it to help create an incredibly complex composite biological
0:09:07 framework, which, you know, there’s a lot of technical stuff in here, but it sounds like
0:09:11 basically this is something that’s actually helping them to identify target drugs that
0:09:15 you could create and even give them good information about how to possibly create the drugs, how
0:09:20 to do, to run tests on them, things that before you would need a whole staff of people to
0:09:25 help you do. And he's saying that now, instead of having that staff, he's basically able
0:09:28 to do it himself. Which means, you know, if you gave this kind of technology to every
0:09:33 doctor, how fast we discover new drugs is going to go up dramatically. I think it's
0:09:38 not a surprise that you’re seeing it more in like complicated areas like, you know, engineering,
0:09:42 the medical field, things like that. You’re noticing that people in those fields are understanding
0:09:45 how good these models are because their needs are more than like the average person who’s
0:09:49 asking it, like, you know, I'm shopping for this or whatever, you know, right. And here's
0:09:54 another tweet from Deedy, who's a venture capitalist at Menlo Ventures, the well-known
0:10:01 Silicon Valley venture fund. And he's saying that, based on the data, already AI,
0:10:05 like o1-style reasoning models, are doing better than doctors on solving hard to
0:10:10 diagnose diseases. So as of right now, and I think the numbers were 80% versus 30%. What
0:10:16 do those numbers mean? For hard to diagnose diseases, when they were testing doctors,
0:10:21 the doctors got it right 30% of the time. Wow.
0:10:27 The AI got it right 80% of the time. And this is not the new models. This is the very first
0:10:32 preview of o1. And from a lot of stuff I've seen, o1 pro is probably in the ballpark of
0:10:37 three times smarter than that model. So probably when the new data comes out, it's going to
0:10:43 be like, okay, it's not 80%, it's 95% or 90%, and the doctors are 30%. It's a huge difference.
0:10:48 I mean, so this just shows this one reason we started the podcast, right? Like, I think
0:10:52 most people don’t realize like this is society change and stuff like this should be where
0:10:56 we restructure society where, you know, you still need doctors, but you have doctors who
0:11:02 are highly relying on AI to help diagnose diseases. Ideally you have the doctors that
0:11:06 sort of understand all of the different diseases and the ways to cure them and things like
0:11:11 that. But, you know, maybe they’re not always the best at actually diagnosing what the disease
0:11:16 is, but they’re the best at probably telling you how to like handle and, you know, work
0:11:19 with you on the treatment of it. So I think we’re going to get to a point and I know I’m
0:11:24 probably already going to start doing this where if I have like a checkup or a doctor’s
0:11:27 appointment or like something’s bothering me and I’m going to go to the doctor, I’m
0:11:33 going to put all of my symptoms into something like o1 and basically see if o1 can tell me
0:11:38 what it thinks is the problem first, but then go and use that information and bring it to
0:11:39 a doctor.
0:11:45 I’ve been hearing stories lately about people who basically get to like the root problem
0:11:50 of their various ailments by using o1 and then going to the doctor and the doctors essentially
0:11:55 confirming it for them and then helping them with a course of treatment. So it’s not like
0:12:00 eliminating doctors. It’s just sort of, all right, let’s, let’s get an opinion here and
0:12:04 let’s get a second opinion from a real doctor and then let’s sort of overlap the two to
0:12:08 figure out the best course of action. And I really think that’s probably going to be
0:12:11 the smartest way for people moving forward. And I think it’s going to get to a point where
0:12:16 like any doctors that refuse to also leverage AI to sort of help with some of the diagnosis
0:12:22 and stuff, it’s like, that’s borderline, like going to be unethical to not sort of get a
0:12:26 second opinion for it from AI or at least, or, you know, get a first opinion from AI and
0:12:29 then have a doctor confirm the opinion, right?
0:12:33 Right. I think the best thing we should do now is, you know, you talked about using it
0:12:37 for your newsletter, you’ve talked about using it for coding. I’ve talked about using it
0:12:44 for doing shorts for some of my videos. So I think we can jump into one of those. I actually
0:12:49 have ChatGPT o1 pro running right now in the background, cause I knew it was going to take
0:12:53 a while to like process the transcript. So what I’m going to do right now is I’ll go
0:13:00 ahead and jump in and we'll run this through ChatGPT o1, see what kind of clips it finds
0:13:05 for us. And then we can compare it with what o1 pro gave us and see if we can spot any
0:13:10 differences. Cause with this whole strategy of letting it find viral clips for us, I don't
0:13:14 necessarily think you're going to need the pro mode. I bet the regular o1 will probably
0:13:20 do it just as well. So I'm going to go ahead and share my screen here. So if you are listening
0:13:27 on audio, you can check out the YouTube version of this and actually see it in action. So
0:13:33 I've got ChatGPT o1 open and here's one of our recent YouTube episodes, AI predictions
0:13:39 that will completely change life in 2025. And if I go down to the bottom of the description
0:13:44 here on YouTube, there’s actually a button that says show transcript. So what I like
0:13:48 to do, and I do this on a lot of the live streams that I do on my YouTube channel is
0:13:53 once the live streams over, I go and click the show transcript button and it puts these
0:13:58 transcripts over on the right side of your YouTube window. So we’ve got the entire transcript
0:14:02 of this recent podcast episode that we did. And I’m going to go ahead and just select
0:14:08 it all, including the timestamps. You can see I’m selecting the actual times as well
0:14:13 as the transcript, because OpenAI's o1 is going to need those times as well to
0:14:16 know, you know, where to tell us to pull those clips from. So I'm going to go ahead
0:14:20 and copy this whole thing here. And then I'm going to jump into o1. And I'm just going
0:14:25 to paste this whole thing into o1. So you can see I've got the entire transcript loaded
0:14:29 in here right now. And if I just kind of add a couple of lines, let me get all the way
0:14:34 up to the top of our transcript here. And I’m going to add a couple of lines here. And
0:14:40 I’m going to say below is the transcript from a recent podcast episode. Please review it
0:14:50 and find clips that have the potential to go viral. Clips should be roughly 60 seconds
0:14:54 and make for a good short-form video, right? So I'm just giving it this like little
0:14:58 prompt up here and then I’m pasting in the entire transcript. And I’ll go ahead and submit
0:15:03 it. And you can see it's going to take a minute or so to like process this
0:15:08 whole thing. So I'll scroll down, you can see it's thinking right now. So it actually
0:15:14 responded in less than a minute. So this is regular o1. And it's actually responding.
0:15:20 But I forgot to give it a little extra context here. I forgot to tell it to tell me the actual
0:15:26 timestamps. So you can see right here, it's giving us the clips, "Is o3 actually AGI?" and
0:15:31 then it actually gives me a little transcript section. But what I would typically do is
0:15:35 let me actually start over real quick. I’m just going to go ahead and copy and paste the
0:15:38 entire original prompt. And I’m going to do it one more time. So I want to tell it to
0:15:43 actually give me the timestamps that just makes life easier. I kind of forgot to put
0:15:49 that in. So let’s go ahead and copy all of this. I’m going to create a new chat here,
0:15:55 paste the same thing in here. And then at the end of this prompt, say, give me the timestamps
0:15:59 for each clip. And then we’ll go ahead and run this one more time.
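Put together, the prompt looks roughly like this; the file name is hypothetical, standing in for the text copied from YouTube's Show transcript panel:

```typescript
// Sketch of the clip-finding prompt described above. The transcript file name is
// made up; in practice it's the text copied from YouTube's "Show transcript"
// panel, timestamps included.
import { readFileSync } from "fs";

const transcript = readFileSync("episode-transcript.txt", "utf8");

const prompt = `Below is the transcript from a recent podcast episode. Please review it
and find clips that have the potential to go viral. Clips should be roughly
60 seconds and make for a good short-form video. Give me the timestamps
for each clip.

${transcript}`;

// Optionally add a line asking it to rank the clips by viral potential
// (discussed a bit later in the episode), then paste everything into a
// single o1 message and submit once.
console.log(prompt);
```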
0:16:02 Yeah, you gotta give it the right context. You know, it’s kind of like a, I think of
0:16:06 like a in hit checkers, got the galaxy where it’s like, what’s the meaning of life? And
0:16:11 it comes back, you know, how many years was later 1000 years or whatever it was. And yeah,
0:16:18 it’s 42. Okay, you know, you got to give it the right context like tell it what you expect
0:16:19 to get back.
0:16:24 So 01 is actually quite a bit faster than 01 pro. That’s definitely something I’m noticing
0:16:29 just kind of comparing them side by side, because I actually ran 01 pro while you were
0:16:35 talking earlier, just to like, let it start going. This one takes, maybe you can see here,
0:16:40 it thought for 38 seconds. And now it's actually giving me the timestamps. So we got clip one
0:16:48 is from zero to one minute, "2025 will be wild for AI plus o3 IQ levels." And then you can
0:16:53 see it actually kind of gave us a little transcript of the section it's telling us to clip. Next
0:16:59 one is one minute and two seconds to two minutes and two seconds. This one actually
0:17:02 feels like it’s kind of going very linear where it’s taking like the first handful of
0:17:06 minutes and giving us those clips. So you could see clip three is from three minutes
0:17:10 to four minutes, "Basic AGI might be here already," and then it gives us a transcript. But you
0:17:16 can see it gave us a handful of clips here. And then the final one is from 27:59 to 28:59,
0:17:23 "The rise of AI email agents for everyone." So these are all potential short-form clips
0:17:29 that we can clip out and then use as YouTube Shorts. And this was using the basic o1. And
0:17:34 again, let’s take a look at the time. You can see it thought about it for 38 seconds.
0:17:38 So actually pretty quick, but also it’s kind of weird that the very first clip starts at
0:17:43 zero seconds and goes to one minute. Like that's our intro basically. I think there's
0:17:47 probably something there where we could give it even more context, like what's
0:17:51 a good clip, you know, and have it rank order them too. Don't just give us clips
0:17:56 in a sequential order, tell us what are the top five viral clips from this
0:17:57 episode.
0:18:02 That’s actually usually my follow up prompt to this is like give me an order of like which
0:18:07 one is most likely to least likely go viral. But now if we look over here, so I ran the
0:18:14 same thing through the o1 pro model. And this time I definitely did tell it to give me the
0:18:19 time stamps the first time around. But if I scroll down, let’s see if it tells me how
0:18:25 long it thought for. So this one thought for five minutes and 20 seconds. So, you know,
0:18:30 about 10 times as long as the last one, but you can see here the timestamps are a
0:18:41 little bit more dialed in. So 0:55 to 1:55 is "Is o3 already AGI?" From 3:22 to 4:22, "AGI,
0:18:48 agents, and societal shifts by 2025." 5:16 to 6:16, "AI video is about to get wild." It's actually
0:18:55 kind of suggesting a lot of the same exact clips that regular o1 gave us. It just seems
0:19:01 a little bit more accurate on its timestamps. "o3's IQ is near Einstein levels." So
0:19:06 then look, we see one much, much deeper in the podcast than what the regular o1 gave
0:19:13 us, which is from 45:55 to 46:55, "One-person startups and multimodal mastery." That's kind
0:19:19 of hard to say, but that's how I've been using it. And this was only, what was this, a
0:19:24 49-minute podcast episode? I've been plugging in transcripts from three
0:19:29 hour live streams and it’s been finding clips throughout the entire live stream. Like it’s
0:19:35 finding clips at two hours and 20 minutes in and clips at, you know, two hours and 48
0:19:40 minutes and 37 seconds and another clip at, you know, three minutes in and just kind of
0:19:45 all over the podcast, but you’re right. A probably better prompt to use in the future
0:19:52 would tell it right up front, find these time stamps and then give me a rank order of the
0:19:56 one most likely to go viral and just put that in the first prompt. I’ve actually been doing
0:20:00 it as a follow up, but, but I think you’ve got a good point. It’s probably going to actually
0:20:03 work better if you just include that in the original prompts.
0:20:06 I believe so. Cause it does, it definitely does with coding. So I assume that probably
0:20:11 applies to, to everything. It is interesting. Like as of a year ago, I definitely was on
0:20:15 the side of like, oh, don't learn about prompting, it's not important. Like these models are just
0:20:19 going to handle all that for you. And even OpenAI has kind of said that they think
0:20:24 that'll be the case eventually, but as of right now, we've kind of gotten to where
0:20:26 the prompt is even more important than it was before.
0:20:32 Well, I think it depends on the use case, right? Like I think for like 95% of your use
0:20:36 cases for AI, you don’t have to stress too much about the prompts, right? Like if you’re
0:20:41 using Perplexity to get some quick information or you're asking ChatGPT about like, you
0:20:45 know, one of the things I've used ChatGPT and Claude and things like that for are like,
0:20:49 you know, I’m on this medication. I’m thinking about, you know, taking this supplement. Are
0:20:54 there any like interactions between the two? I don’t really need to write up a complex
0:20:58 prompt. It’s going to get what I’m asking for, right? So for the most part, I would
0:21:03 say like 95% of the time, like prompt engineering or getting crazy with your prompts is not
0:21:09 that necessary. But the higher the level of complexity
0:21:14 of what you're asking for, the more it's necessary to be detailed in your prompt. I think
0:21:18 there is this thing I'm using called Repo Prompt, but basically what it does is, you
0:21:22 know, before when I was working on any kind of coding project, like if you want to use
0:21:27 o1 pro, the only way you can get the context for your project is to literally copy and
0:21:31 paste everything into it. And so what this does is like, you know, you can see here,
0:21:35 I’ve got all these different files here, tons of files, different directories, all the files
0:21:41 inside the directories, and it has, it lists them all here and tells you how much context
0:21:44 it's currently taking up, how many context tokens. So you kind of know; right now it's
0:21:50 51.9K. For anybody listening, on the screen there's like
0:21:56 a left side that’s got like a folder structure. And that’s like the folder structure of, is
0:21:59 that like your entire computer? Or is that like just this software?
0:22:02 That's a game I'm working on. But the interesting thing is, even though they're calling it Repo
0:22:06 Prompt, you totally could use this for other things. Like you could use this for writing
0:22:11 or for whatever use case, and just have files with text in them that you want to copy and
0:22:12 paste every time.
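For listeners, a small sketch of what a tool like this is doing conceptually: gathering files, labeling each with its path, and estimating how many context tokens the combined text will use. The folder name is made up, and the four-characters-per-token figure is a rough rule of thumb, not an exact count.

```typescript
// Rough sketch of what a "copy my whole project into one prompt" helper does:
// walk a folder, concatenate each file with a header saying where it lives,
// and estimate the context tokens the result will take up.
import { readdirSync, readFileSync, statSync } from "fs";
import { join } from "path";

function collectFiles(dir: string, out: string[] = []): string[] {
  for (const name of readdirSync(dir)) {
    const full = join(dir, name);
    if (statSync(full).isDirectory()) collectFiles(full, out);
    else out.push(full);
  }
  return out;
}

const files = collectFiles("./my-project"); // hypothetical project folder
const payload = files
  .map((file) => `===== FILE: ${file} =====\n${readFileSync(file, "utf8")}`)
  .join("\n\n");

const estimatedTokens = Math.round(payload.length / 4); // ~4 chars per token, rule of thumb
console.log(`~${(estimatedTokens / 1000).toFixed(1)}K context tokens`);
// Paste `payload` plus your instructions into a single o1 pro prompt.
```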
0:22:15 So if you were trying to like write a book or something, right, you can probably have
0:22:20 different folders for like each chapter of your book. And then it can like help understand
0:22:24 the context of previous chapters you’ve already written in the book as well, right?
0:22:28 Yeah, totally. Or you can have that, plus you can have a thing on like style. If you're
0:22:31 a writer, like what are the favorite things you've written before? Or what are the
0:22:35 things you like, right? You can provide all that context as well to get o1 pro to give
0:22:37 you a better response back.
0:22:41 Oh, okay. So let me, let me clarify something. So when you were mentioning earlier, when
0:22:45 you said that you use this for your newsletter and when you’re doing the newsletters, you
0:22:51 might have like a folder that’s like, here’s the style of newsletters that I really like.
0:22:56 And those might be text files within a folder called like newsletters I like or something,
0:23:00 right? And then you might have another folder that’s like my newsletter. And then it has
0:23:06 a whole bunch of entries in it of your newsletter. And so now when you go to prompt o1 pro, it
0:23:10 can actually look at all of that. And it’s in the sort of a nice clean structured way,
0:23:11 right?
0:23:15 Yeah, with notes, like kind of in the text itself, like saying, here’s your, here’s things
0:23:19 I like, here’s things I don’t like are usually more detailed than that, but I’m like kind
0:23:20 of simplifying it.
0:23:26 Oh, wait, so real quick. So Repo Prompt, it doesn't actually do the prompting
0:23:31 for you. You have to copy and paste something from Repo Prompt into ChatGPT. Yeah. Yeah. You're
0:23:33 just copying a massive amount of data.
0:23:38 Oh, okay. I thought it was like tapped into ChatGPT and like submitting prompts. Okay. Okay.
0:23:42 No, but let me show you though, why it’s so good for coding. Before when I would give
0:23:47 o1 pro something for code, what I was doing was I was creating a script that would look
0:23:52 at my entire code base. And I would run the script, just a simple node script, and it
0:23:56 would take all the code and then put it into a single text file, and then put like,
0:23:59 "here's where this file is," just so the model knew where everything was, so it
0:24:03 could help, you know, if it was pointing to a file or whatever. But with this, with XML,
0:24:07 it’s basically giving it to you in a format where then you can just apply it to your code
0:24:12 base and it’ll automatically change everything for you, which is incredible. Like it’s, it’s
0:24:16 so much faster coding with this now. Let me show you the other stuff that’s cooler too
0:24:20 that I think is not well explained when you use this tool. Like for example, here's
0:24:24 a thing where if you’re wanting it to architect something and not actually code it yet, you
0:24:30 can add this and then you can dive into what that text is. Like it’ll say, you are a senior
0:24:35 software architect, specializing in code design and implementation planning. Your role is to,
0:24:38 and then it tells you all the stuff that you’re expected to do.
0:24:41 It acts like a system prompt right there. That’s essentially a system prompt, right?
0:24:47 Yes. It acts dramatically differently if you do this versus telling it the engineer one,
0:24:52 where you're, you know, an engineer, your job is to execute on the plans and all
0:24:56 this kind of stuff. And the interesting thing that engineers are starting to discover is
0:25:03 it seems that it’d be even better if you do both and then say, you’re an architect and
0:25:07 then first do your architect work. And then when you’re done with your architect work,
0:25:08 go in and do your engineer work.
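One plausible way to phrase that combined two-role instruction (illustrative wording only, not Repo Prompt's actual saved prompts):

```typescript
// Illustrative only: one plausible phrasing of the combined
// "architect first, then engineer" instruction described above.
const rolePreamble = `You are a senior software architect. First, plan the feature:
map out which files change, how the new code interacts with the rest of the
codebase, and what could go wrong.

When the architecture work is done, switch roles: you are now the engineer.
Execute the plan you just wrote, producing complete code for every file you
touch. Do not be lazy or omit code.`;

// Prepend this to the repo payload and the feature request before sending to o1 pro.
console.log(rolePreamble);
```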
0:25:13 Okay. Yeah. Yeah. So this is, this is like borderline getting into like agentic stuff,
0:25:15 right? Because it’s almost like,
0:25:18 It's better than people realize. Like, when you tell it its
0:25:22 different roles and what order to do its roles in, it's so much better. Like what you get back
0:25:25 from this is so much better. Like if you tell it just to give you code, it’ll give you some
0:25:29 good code back. But if you tell it to take its time to think as an architect first and
0:25:33 to really, to map out the feature and how it’s going to interact with all the other
0:25:36 parts of your code base, which, you know, like I’m saying, this is for code, but this
0:25:39 could apply to so many different things. And then you can do other things too. Like for
0:25:43 example, I have just my own rules. I don't know if that stuff works or not, but people
0:25:48 say like, tell it not to be lazy, cause sometimes it'll try to be lazy and not give you
0:25:52 all the code. I’ve got, I’ve got my own little notes in here of that kind of stuff, like,
0:25:56 don’t be lazy and do this. And this is how the kind of responses I like, you know, kind
0:26:00 of like custom instructions kind of stuff. Right. I’m working on a feature for my game
0:26:06 to add better special effects to the orbs when they’re matched. And so when I click copy,
0:26:12 it's going to give me this giant mega prompt. And then I'm going to get a mega prompt
0:26:17 here. Let me see. And so, you know, I go into ChatGPT and you're going to see
0:26:22 this prompt is just nuts. I'm using o1 pro here. And so you can see here, okay,
0:26:26 you can see at the bottom, you know, and I’m not sure why it puts it in this order. I’m
0:26:30 not sure if this is actually better or not. I kind of think maybe it’s not, but it seems
0:26:35 to work. Okay. It puts literally the instructions at the very end and like puts user instructions,
0:26:39 you know, like, "I'm working..." and then it's just whatever I said: I'm working on a feature
0:26:44 for my game, blah, blah, right. But you can see, like if I start to scroll, you can look
0:26:50 at the bar over here, that as I’m scrolling, as I’m scrolling, yeah, it’s like the bars
0:26:54 barely even moving. It’s not even moving. I mean, we’re talking so much information
0:27:01 here. It is just, it is just wild. How much information I’m sharing here with it. I think
0:27:05 most people would not realize that you can do this and hand it so much information and
0:27:10 it understands it. It is just, it is mind blowing that you could pass. I mean, I don’t
0:27:13 know how many pages this is, but like, God, this is like probably over a thousand pages
0:27:23 of stuff. Yeah, that’s wild. And then you, and then you press it and, you know, if you’re
0:27:28 listening to the audio right now, like it just scrolled for like a minute to get to
0:27:34 the bottom of the prompt. Yeah, yeah, yeah. This is why Sam Altman like tweeted that they’re
0:27:41 losing money on o1 pro. It's people like me. Because I'm literally doing this when I'm
0:27:47 working. I’m doing this every five or 10 minutes. That’s wild. That’s crazy. And when I’m done,
0:27:51 I get the response back. It gives me the code. And then my typical workflow, like I said,
0:27:56 it'll give me XML back. And the XML, they have a thing in Repo
0:28:00 Prompt now where you basically can actually copy and paste that in. And then it helps merge
0:28:04 it into your code so you can review all the changes and press, yeah, I'm okay with this
0:28:11 bit, this bit I'm not sure about, or if you're lazy, just press accept all. And using XML,
0:28:15 it actually works all that out for you because it tags the files in the directories and it
0:28:20 knows exactly where they are and exactly what code was changed. And you press a button.
0:28:24 It's all done. Oh, wow. It's mind blowing. But you see here, it's like, okay, a green check
0:28:27 mark shows you the different files that are going to be, you know, like here's a turn
0:28:32 manager and here’s like the enemy UI and stuff like that. Right. So it shows you the, and
0:28:37 if you click merge changes, you would then see the actual code in the different ones.
0:28:41 And then line by line, you can accept or deny the code. And you, if you wanted to just accept
0:28:46 it all, you can. And sometimes I do that. Sometimes, for a major change, I just back up
0:28:51 my project, back it up, and then accept all. Yeah, that's like kind of the
0:28:56 best workflow for engineering right now using AI, in my opinion, is to use o1 pro for
0:29:00 any new feature, anything that’s like a new thing that doesn’t currently exist in your
0:29:05 product or your website or whatever. Use o1 pro because it's way better at figuring
0:29:09 out how to properly architect it and make sure it all performs well and everything else,
0:29:12 and thinking through every possible scenario where something could go wrong, it's
0:29:16 way better at that. And then once you’ve got the, once you’ve got the feature implemented,
0:29:20 small changes, you totally can just use cursor or something like that. Like changing the color
0:29:26 of a button, you’re like changing the name or anything super small, you know, obviously
0:29:29 you could do it yourself. But if you, if you want to use AI, you could use cursor or something
0:29:33 like that. Yeah. Yeah. So you wouldn’t use this just to make like small bug fixes or
0:29:38 small tweaks. You’d kind of more use it to build like the overall like bones of the product.
0:29:41 Right. Yeah. If I’m building a software application
0:29:44 and there’s a new feature, let’s kind of figure out what that feature looks like. It’s not
0:29:50 going to look good design wise. Like none of these are good at design yet. But in terms
0:29:55 of it actually working, like o1 pro often gets it right the first time. And if
0:29:59 it doesn’t, it’s usually very minor bugs. I suggest like anyone who has a company who
0:30:03 has engineers right now, like you’re like really missing out if you’re not like paying
0:30:08 for like o1 pro for your engineering team and having them use something like Repo Prompt.
0:30:12 Right. Right. Missing out. That’s super cool. Yeah. Now, is there something you can show
0:30:16 us of like what you’ve generated? Yeah. So this is the Godot editor, which is
0:30:21 like an open source game engine. The game is kind of Final Fantasy style. Right. But I mean, it
0:30:25 still looks really, really good. If anybody’s just like listening on audio, it’s a very
0:30:31 like colorful visual game that you’ve built here. And AI helped me make all of that. And
0:30:34 so like the background right now, there’s like, I don’t know, like a cathedral looking
0:30:39 thing, but it’s got like a wavy animation. Yeah. I’m sitting there thinking like, okay,
0:30:42 this is something where I really didn’t even think I was going to do it. I may, who knows,
0:30:45 I may not do it eventually, but it’s, I thought it’d be great for the show for me to be really
0:30:50 hands on with all the different parts of AI. Like, I'm using it for writing,
0:30:54 I'm using it for coding, I'm using it for the art, I'm even using it for video. I'm trying
0:30:57 to think about like, okay, in a year from now, what’s AI going to be really good at?
0:31:01 Yeah. Yeah. And so if this takes me a year to build as a hobby, it’ll only get better
0:31:06 from here. Cause like, oh, three will come out. The AI video models would get better.
0:31:09 The AI art is going to get better. And so that’s kind of what I’ve been doing is like
0:31:13 seeing what’s currently possible, but then trying to set up in a way where as those things
0:31:16 get better, I possibly could actually turn this into a real game. But yeah, I’ve just
0:31:20 been shocked by how good, I mean, like AI can do all of this, like o1 pro especially.
0:31:24 Like o1 was not able to do this, by the way. Like I've tried o1 for like hard coding stuff
0:31:29 and it just, it fails a lot more often than o1 pro. So that tells me that once we get
0:31:34 like o3 and once you get o3 plus, you know, apparently the next version of Midjourney,
0:31:37 they’re saying that it’s going to be way better at being consistent with characters and things
0:31:41 like that. Apparently that’s the big next thing coming. And then you get AI video better
0:31:45 and so you can have cool cut scenes and stuff like that. You’ll be able to make like amazing
0:31:46 experiences entirely with AI.
0:31:50 Well, yeah. And if we can get like consistent characters in video, I know there’s some tools
0:31:55 out there that claim they can do that too, but it’s still a little wonky, but I mean,
0:31:59 by the end of this year, we’ll be able to have consistent characters in images really,
0:32:03 really good, probably pull those characters into videos and have consistent characters
0:32:08 in videos. Yeah, the o3 model will be out at some point, which is going to like really, really
0:32:12 improve the code that you’re able to do. Right. It’s also going to really improve the writing
0:32:16 and the any sort of like storytelling elements in there. And it’s like, yeah, we’re, we’re
0:32:20 kind of running out of stuff for the humans to actually do when it comes to making these
0:32:24 games, but, but, but as a creator, you still get to be the, I mean, like I’m piloting all
0:32:29 of this. I mean, like for me, this is like, like so fun to like think that maybe it will
0:32:32 be the thing of the future where like it’s so easy to make these games.
0:32:36 It’s like in the past, you have the different coders that specialize at different things.
0:32:40 And then you have like the overarching project manager who's kind of telling them each
0:32:43 what to do. And then I always like, I don’t know, for whatever reason, I always use the
0:32:46 like symphony analogy of like, now you’re going to become the conductor where you’re
0:32:50 just sort of like telling all the instruments what to do, but you’re standing there conducting
0:32:53 them. Right. That’s where it’s going, you know, different people are going to be able
0:32:57 to use this to like make their dreams come true. Cause like when I was a kid, my dream
0:33:01 was to like work at Blizzard. Like I want to be like one of the top people at Blizzard.
0:33:04 And then the weirdest thing happened where I was one of the top players in the game Ever
0:33:10 Quest when I was a kid. The number one player, Rob Pardo, who used to run Blizzard,
0:33:13 at that time he was on EverQuest and he was running Legacy of Steel, the top guild on
0:33:18 EverQuest. I ended up like raising money for a startup. We raised several million for
0:33:23 a startup called GameStreamer. And the combination of having that gaming background and having
0:33:27 that startup at E3 had like a huge corner at E3. I ended up getting to like hang out
0:33:30 with Rob and get to know him very well. And a lot of top people in the game industry.
0:33:35 So I had this weird situation where I never really got to fulfill my dream. I was hanging
0:33:40 out with all those people as like good friends. I got to see that, you know, it wasn’t really
0:33:43 the life I wanted, like going to work for one of those companies. It was not. I wanted
0:33:46 to do my own things. But then I still never got to do what I wanted in terms of making
0:33:51 a game. And it’s so wild to me that now that AI is getting so good, that a lot of people
0:33:54 are probably going to have the same kind of like, you know, awakening that I’m having
0:33:58 or it’s like, I can do those things now. You know, it doesn’t matter if I’m 40. I can still
0:34:02 do it because AI is getting so good that I can, it doesn’t take as much time as it used
0:34:08 to. And that's going to get, it's going to get even better. Like when o3 comes out, you'll
0:34:13 probably be able to do any new feature without any bugs, you know, one shot. Yeah. One shot.
0:34:17 It’s already close to one shot in many things now. And it’ll only become more so. And so
0:34:21 that’s just, it’s exciting. Like for a lot of people who like creating things, like the
0:34:25 next 10 years is going to be like a revolution in terms of creating things, not only art,
0:34:29 but like, you know, even companies, like if you wanted to create a company now, definitely,
0:34:34 it’s going to be easier. Yeah. 100%. No, it’s, it’s super, super exciting. And you know,
0:34:38 I, I feel the same way. Like I, I’ve, one of my things when I was a kid was I always
0:34:43 wanted to be like a game designer, a game developer, like work in the gaming space.
0:34:49 And I feel like now we kind of have like a way to sort of live that childhood fantasy
0:34:54 a little bit, but without all the negatives. Right. Right. Right. Yeah. I mean, it’s like
0:34:57 me and you had even talked about it. And I was like, I’m just going to start playing
0:35:00 with stuff and just see what’s possible. And I was just after a week, I was like, wow,
0:35:05 it’s actually possible. Like it can, it can do the whole thing. And then just like learning
0:35:09 that you can do all of this and you can control most of it with your voice. It’s just been,
0:35:12 I’m hoping that people will listen to this podcast and kind of like think bigger about
0:35:14 what they could be accomplishing with these tools.
0:35:19 I couldn’t say it better myself. And so with that being said, I’m not going to try to say
0:35:23 it better myself. We’ll just go ahead and wrap this one up. It’s a really, really exciting
0:35:28 time right now. You can pretty much do build anything you can imagine. And it’s only getting
0:35:32 better and easier. And we’re going to keep on exploring and diving deeper into these
0:35:36 rabbit holes to figure out what we can build. And as we learn, we’re going to share with
0:35:41 you. So make sure if you’re not already subscribed to the shows, subscribe on YouTube, subscribe
0:35:46 wherever you listen to your podcast, you can find us at all of those places. And thank
0:35:49 you so much for tuning in. Hopefully we’ll see you in the next one. Thank you.
Episode 42: Are you truly unlocking the full potential of OpenAI’s o1 models? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) dive deep into the capabilities of ChatGPT o1 and o1 pro, offering insights to ensure you’re not overlooking these powerful tools.
In this episode, Matt showcases how to create short-form content from long-form transcripts, while Nathan discusses using o1 pro to build a game from scratch. With specific workflows, practical examples, and mind-blowing insights, you won’t want to miss how these advanced models can revolutionize your content creation and coding endeavors.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
- (00:00) Model Perception Disconnect
- (05:50) Understanding ChatGPT and o1 Models
- (07:58) AI Transforming Complex Problem Solving
- (10:19) AI-Assisted Medical Diagnoses
- (15:37) “o1 Outpaces o1 Pro”
- (18:35) Efficient Podcast Clip Identification
- (22:57) “Efficient Coding with XML Format”
- (24:34) Optimize Instructions for Better Output
- (28:07) AI Workflow Optimization: o1 Pro & Cursor
- (32:17) Unexpected Gaming Industry Connections
- (32:56) AI Empowering Creative Pursuits
—
Mentions:
- OpenAI: https://openai.com
- Whisper: https://openai.com/research/whisper
- Perplexity: https://www.perplexity.ai
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw
—
Check Out Matt’s Stuff:
• Future Tools – https://futuretools.beehiiv.com/
• Blog – https://www.mattwolfe.com/
• YouTube- https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
- Newsletter: https://news.lore.com/
- Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano