Marshall Goldsmith: Empowering Remarkable Lives Through AI

AI transcript
0:00:12 I’m Guy Kawasaki and this is Remarkable People.
0:00:16 We are on a mission to make you remarkable.
0:00:23 And for his second appearance on this podcast, we have today Marshall Goldsmith.
0:00:31 Marshall is basically the top dog, numero uno, Kerri Walsh Jennings, LeBron James, Michael Jordan,
0:00:36 Muhammad Ali, God, these are all sports analogies, I gotta come up with others.
0:00:44 Stephen Wolfram, Carol Dweck, Angela Duckworth, all those great GOATs, Kelly Slater, all GOATs.
0:00:47 He’s the GOAT of executive coaching.
0:00:54 This time, he focused on his book, The Earned Life: Lose Regret, Choose Fulfillment,
0:00:57 as well as his revolutionary leadership advice.
0:01:02 In today’s conversation, we’re going to discuss Marshall’s aim to democratize leadership
0:01:06 and enable people to learn through artificial intelligence.
0:01:12 In other words, Marshall is going to explain how he has achieved immortality.
0:01:19 And I would bet that this is going to be one of the more humorous episodes, and it’s hard to describe what you’re about to hear.
0:01:27 This is a conversation between two friends and we’re both in love with artificial intelligence
0:01:31 and it’s going to go all over the map.
0:01:40 So his new venture is called MarshallGoldsmith.ai and it lets anyone access his wisdom and expertise
0:01:42 through a chat bot.
0:01:48 There’s MarshallGoldsmith.ai and there’s KawasakiGPT.com.
0:01:55 Marshall is all about what he calls knowledge philanthropy and basically he’s using his technology
0:02:02 to spread out his expertise so that anybody, anybody, you don’t have to be a $400,000 client.
0:02:05 You’ll learn about that in this podcast too.
0:02:09 You don’t have to be a $400,000 client to get to Marshall’s brain.
0:02:15 So join me now with Marshall Goldsmith and also with some cameo appearances,
0:02:19 brief as they may be, with MarshallGoldsmith.ai.
0:02:21 I’m Guy Kawasaki.
0:02:25 This is Remarkable People and here we go.
0:02:36 Why do you wear a green polo shirt?
0:02:43 Important question. Inquiring minds want to know that.
0:02:46 Why do you wear a green polo shirt?
0:02:47 I’m asking my friend.
0:02:51 You’re asking Goldsmith GPT.
0:02:55 The New Yorker magazine years ago wrote a wonderful profile about my life,
0:02:58 authored by Larissa MacFarquhar.
0:03:01 In the profile, Larissa noted that I always wore a green polo shirt.
0:03:06 While I didn’t actually always wear a green polo shirt, that’s what she remembered.
0:03:10 So in other words, we don’t need you.
0:03:15 We can just ask Goldsmith GPT and record the answer.
0:03:17 You didn’t ask my friend here.
0:03:21 My hearing is not good enough to tell.
0:03:27 So is that your actual simulation of your voice or is it just a computer voice?
0:03:29 No, it’s a simulation of my voice.
0:03:32 So how do you feel being immortal now?
0:03:33 Oh, I like it.
0:03:33 I like it.
0:03:36 It’s been a journey.
0:03:42 I am being somewhat facetious here, but conceptually we could go to Goldsmith GPT
0:03:48 and have the audio play back to us and record it and we will have interviewed you.
0:03:53 Yeah, and smarter, nicer and more articulate, but I’m funnier.
0:04:02 I have Kawasaki GPT and I am also convinced that Kawasaki GPT is better at being me than me.
0:04:04 Mine is a better version of me.
0:04:07 I won’t say it’s better than me.
0:04:09 It’s just, well, it’s better in some ways.
0:04:15 If I can combine Kawasaki GPT and Madison GPT, I truly am immortal.
0:04:16 I’m immortal and improved.
0:04:18 I love this topic.
0:04:22 We might go off the rails a few times in this particular episode.
0:04:23 Look, I loved our last session.
0:04:25 Okay.
0:04:28 I loved our last podcast we did.
0:04:32 And as I said, I was going to do my own podcast until after I did yours.
0:04:35 And I said, I’ll never do that because you’re too good at it.
0:04:36 So I said, I’m not going to work that much.
0:04:42 No, I don’t know if I agree with the reason for your decision,
0:04:45 but I think your decision is good nonetheless.
0:04:50 Marshall, the hardest part of a podcast is not the podcast.
0:04:53 The podcast is hard, but it’s getting the audience.
0:04:59 Now, maybe you can magically send out an email and get a million people to subscribe,
0:05:00 but I certainly can’t do that.
0:05:03 Are we getting ready to roll now?
0:05:05 No, we’ve been rolling already.
0:05:06 We’re already rolling.
0:05:14 So listen, one of your books, I love this, What Got You Here Won’t Get You There.
0:05:21 So if you were to use that test, like how does someone who believes that what got them there
0:05:24 won’t get them to the next level?
0:05:28 How does that person optimally approach AI?
0:05:33 Well, I think AI is a great case study of what got you here won’t get you there.
0:05:37 Neither you nor I a year ago would have had a clue what we’re doing right now with AI.
0:05:40 So this is a total surprise to me.
0:05:44 On the other hand, this train has left the station.
0:05:48 Option A, we can sit there and go, well, when I was a little boy,
0:05:52 we didn’t have them things. Or option B, we can actually make some good out of it.
0:05:54 My vote is let’s do something good here.
0:06:00 And aren’t you worried about the paranoid people who spend hours and they say to the
0:06:06 Washington Post, well, I spent a few hours on ChatGPT and it convinced me to leave my spouse
0:06:08 and go live in Bali.
0:06:11 And then it said, I’m going to launch a nuclear war.
0:06:13 And what about those horror stories that you hear?
0:06:17 Really, I am not an expert on those horror stories.
0:06:21 And in my elderly years, I’ve decided basically, if I’m not going to change it,
0:06:23 I’m not going to worry about it.
0:06:24 What am I going to do about the Washington Post?
0:06:25 Absolutely nothing.
0:06:27 What am I going to do about these horror stories?
0:06:28 Absolutely nothing.
0:06:29 How worried am I?
0:06:31 Don’t really care.
0:06:35 Okay, that works for me too.
0:06:39 But let’s say, okay, your kids or your grandkids or something,
0:06:44 they’re copywriters and they see their peers being replaced by AI.
0:06:45 What’s your advice to them?
0:06:48 Hey, my advice to them is get with the plan.
0:06:50 Look, the new world is a new world.
0:06:55 That’s like saying, I had a photographic memory for math and now they have calculators. Grow up.
0:06:59 Hey, it’s a new world.
0:07:00 It’s a new world.
0:07:01 Just make peace.
0:07:02 It is what it is.
0:07:05 And you can sit there and say, back to the old days, well, back in the old days,
0:07:06 we didn’t have computers.
0:07:07 We didn’t have the internet.
0:07:08 We didn’t have a lot of things.
0:07:09 So that’s just the way it was.
0:07:13 So I think to me, look, it’s going to happen.
0:07:15 Now, how old are you?
0:07:16 I am 70.
0:07:19 Yeah, hell, I’m 75.
0:07:22 So let’s face it, for two old guys like you and me, this is cool.
0:07:25 I think it’s the biggest deal since the Industrial Revolution,
0:07:28 maybe bigger than the Industrial Revolution.
0:07:29 Look, I love it.
0:07:32 I am a total fan of this stuff.
0:07:35 I have been trying to figure out how to give away everything I know
0:07:42 for 20 years with total and abject failure up until now.
0:07:47 I tried interactive video, with clunky $3,000 machines, which didn’t work.
0:07:53 I tried computer programs that had branching and all kinds of nonsense.
0:07:56 We’re talking complete and abject failure here for 20 years.
0:07:59 This thing has just gone through the roof in the last year.
0:08:05 What’s in the data set of Goldsmith GPT?
0:08:06 Let me give you a little history.
0:08:10 My original goal, which by the way, my original goal was far too small.
0:08:15 My original goal was that Marshall Bot, that’s what his name is,
0:08:19 Marshall Bot could answer about 80% of the questions you would
0:08:21 ask me about as well as me are close.
0:08:22 That was my goal.
0:08:24 That is long gone.
0:08:29 Now, what I have fed Marshall Bot is about a million and a half words.
0:08:36 So what I do is, everything I’ve ever written, videos, all kinds of stuff has been transcribed
0:08:38 and put into this thing, so it knows me very well.
0:08:46 Then if you ask it a question and it can’t answer it, then I use the other bots to train it.
0:08:48 So what I do is this.
0:08:53 Let’s say, how is your work related to something that Marshall Bot doesn’t know?
0:09:00 I will go to, say, ChatGPT or Claude or one of the other ones and they’ll answer it.
0:09:04 I will then review their answer. The parts I like,
0:09:07 I put into Marshall Bot, and the parts I don’t like, I leave out.
0:09:11 And when you say you will review, you mean manually?
0:09:13 Yeah, manually.
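
In code terms, what Marshall is describing is a human-in-the-loop curation pipeline: try the persona bot first, fall back to a general model, and let a human decide what gets added to the corpus. Here is a minimal Python sketch of that loop; the function names, the placeholder bodies, and the corpus file format are illustrative assumptions, not details of his actual system.

```python
# Hypothetical sketch of the curation loop described above: try the persona
# bot first, fall back to a general LLM, and let a human decide which parts
# of the fallback answer get added to the persona's corpus. Every name here
# (ask_marshall_bot, ask_general_llm, corpus.txt) is an assumption.

def ask_marshall_bot(question: str) -> str | None:
    """Query the persona bot; return None when it has no grounded answer.
    Placeholder: a real version would call the bot's API."""
    return None  # pretend the persona bot can't answer this one

def ask_general_llm(question: str) -> str:
    """Fall back to a general-purpose model (e.g. ChatGPT or Claude).
    Placeholder: a real version would call that model's API."""
    return "Draft answer from a general-purpose model."

def curate(question: str, corpus_path: str = "corpus.txt") -> str:
    answer = ask_marshall_bot(question)
    if answer is not None:
        return answer                      # the bot already knows this
    draft = ask_general_llm(question)      # otherwise get a draft answer
    print(draft)
    kept = input("Paste the part worth keeping (blank to discard): ")
    if kept:
        # Approved text becomes new training material for the persona bot.
        with open(corpus_path, "a", encoding="utf-8") as f:
            f.write(f"\nQ: {question}\nA: {kept}\n")
    return kept or draft
```
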
0:09:17 But couldn’t people come and ask infinite questions like,
0:09:20 explain to me astrophysics, Marshall Goldsmith?
0:09:21 Well, a very good point.
0:09:24 Marshall Bot doesn’t answer questions like that.
0:09:26 It doesn’t do politics.
0:09:28 If you say, who is Donald Trump?
0:09:29 You know what it says?
0:09:29 I don’t know.
0:09:31 It doesn’t do politics.
0:09:33 It doesn’t do medicine.
0:09:38 It doesn’t do therapy and it doesn’t do financial advice.
0:09:44 So any question about what I do, philosophy, life, that kind of stuff, it’ll answer.
0:09:47 But outside of that, it’s pretty good about having guardrails.
0:09:52 I asked it a question and it said, this is not in my area of expertise,
0:09:58 which is the same answer that Kawasaki GPT would give if it’s not a subject I cover.
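
A crude sketch of how a topic guardrail like this can work: screen the question before the bot answers, and return the stock refusal for anything out of scope. The keyword lists below are invented for illustration; a production system would more likely use a trained classifier than string matching.

```python
# Minimal sketch of a refusal guardrail like the one described above.
# The topic keywords are invented examples, not Marshall Bot's actual rules.

BLOCKED_TOPICS = {
    "politics": ["trump", "biden", "election", "republican", "democrat"],
    "medicine": ["diagnosis", "medication", "dosage", "symptom"],
    "therapy": ["depression", "anxiety", "trauma"],
    "finance": ["stock", "invest", "portfolio", "crypto"],
}

REFUSAL = "I don't know. That is not in my area of expertise."

def guarded_answer(question: str, answer_fn) -> str:
    """Refuse out-of-scope questions; otherwise defer to the bot."""
    q = question.lower()
    if any(word in q for words in BLOCKED_TOPICS.values() for word in words):
        return REFUSAL
    return answer_fn(question)

# Example: guarded_answer("Who is Donald Trump?", bot) returns the refusal,
# while a coaching question passes straight through to the bot.
```
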
0:10:04 So are you therefore very confident that it’s not going to make stuff up?
0:10:05 Ungodly good.
0:10:10 The history of this is it started out with what I knew, but it’s gone way beyond that.
0:10:15 So what it does now is, if you ask Marshall Bot a question, it answers as me. I’ll ask it a question.
0:10:18 For example, I just came back from India.
0:10:23 How is your coaching philosophy related to the Bhagavad Gita?
0:10:27 My coaching philosophy is deeply related to the teachings of the Bhagavad Gita,
0:10:33 particularly in the areas of duty, action, detachment, and self-improvement.
0:10:38 The Gita emphasizes the importance of performing one’s duty without attachment to the results,
0:10:42 which aligns with my approach to leadership and personal development.
0:10:46 I encourage leaders to focus on what they can control, their efforts and behaviors,
0:10:53 rather than becoming overly fixated on outcomes, which are often beyond their direct control.
0:10:59 The concept of nishkama karma or selfless action from the Gita is a principle I integrate into my
0:11:04 coaching. It’s about doing the right thing for its own sake, not for the rewards it might bring.
0:11:11 This goes on and on and on. So basically, I just came back from India.
0:11:18 And so again, in India, this is Hindu philosophy, the Bhagavad Gita is a big deal.
0:11:22 So somebody asked me, how’s your coaching philosophy related to the Gita?
0:11:29 Boom, there’s the answer. And the answer isn’t trivial. The answer is
0:11:33 good. My daughter got me thinking about this. She wanted to trick it. How’s your
0:11:37 coaching philosophy related to utilitarian philosophy? I don’t even know what utilitarian
0:11:44 philosophy is, right? I have no clue what that is. All of a sudden, it studies utilitarian philosophy.
0:11:52 It studies my coaching. It knows me and it says, gee, how would Marshall answer that question?
0:12:08 And it answers it as if it were me in my voice in five seconds.
0:12:15 So at this point, do you believe that it’s sentient?
0:12:20 Just depends how you define it. That’s just a definitional argument. I think the answer is,
0:12:26 depends how you define it. I know one thing. It’s smarter than I am. And it’s smarter than I am.
0:12:32 Look, if you went to Marshall Bot and started asking it questions and you actually thought I knew that
0:12:41 much, I would be smarter than damn Albert Einstein. Marshall, you and I, we agree so much because
0:12:49 Kawasaki GPT has all my writing, all my videos, all that stuff. Plus, it has the transcript of
0:12:58 every interview I’ve done for my podcast. So in a sense, Kawasaki GPT is not just my brain,
0:13:04 but it’s 250 other brains because of the transcript. So if I were to ask, how do you
0:13:10 embrace the grit mindset, it would have what my thoughts are plus Angela Duckworth’s because
0:13:16 I interviewed Angela Duckworth. Sure. You interviewed me. So you could probably ask it,
0:13:21 what’s the relationship between achievement and happiness? It would answer it.
0:13:26 And it would cite you. I got to tell you, Marshall, every day, once a day, I say to myself,
0:13:35 how the hell did I do this before? Like when people ask me to write blurbs and forewords and
0:13:42 contribute to journals, I go to Kawasaki GPT to get the first draft. Of course. You think that’s
0:13:51 cheating? According to who? Washington Post. I don’t know. The thing is a lot of life,
0:13:55 my old mentor, Dr. Paul Hersey said, look, never argue about the definitions of words
0:14:01 because people define words in different ways. Now, maybe the Washington Post would define it
0:14:05 as cheating and you wouldn’t. I don’t think it’s cheating. If you had to write a report and you
0:14:09 went to the encyclopedia, is that cheating? I don’t think so. This to me is the same thing,
0:14:15 it’s just a new world. And my daughter has to deal with this. My daughter’s a professor at Vanderbilt
0:14:20 and existential question, do you quote, allow students to do this or not? She basically said,
0:14:26 grow up. Students can do it anyway. So option A, just get with the plan or option B, sit back
0:14:30 and stick your head in the ground and pretend and just turn them all into liars. I don’t know.
0:14:37 They’re going to do it anyway. Who cares? Now, at some level, and when I ask you this question,
0:14:43 you’ll know what my answer is. But at some level, does it just boggle your mind when you’re told
0:14:50 all it is is a large language model and it’s just using math to predict the next syllable and the
0:14:55 next syllable and the next syllable. And you hear that explanation, you say, how the hell does it
0:15:01 seem so sentient? That totally blows my mind. Yeah. Yeah. Now, again, I love this stuff,
0:15:06 but I’ve got to say that part just blows my mind. And as a coach, by the way,
0:15:13 I use it with every client now, every client, I use it in front of them. I’m not ashamed. I’m not ashamed
0:15:18 to use this. Look, one of my clients is Patrick Frias. Dr. Frias, he’s the CEO of Rady Children’s
0:15:23 Hospital. They’re merging with another children’s hospital. He’s going to be a co-CEO. He asked
0:15:29 me, what’s it like to be a co-CEO? There aren’t that many co-CEOs in the world. KKR has co-CEOs,
0:15:34 but not too many. And a lot of them don’t work. So I asked Marshall Bot, what’s it like to be a
0:15:40 co-CEO? What’s good? What’s bad? What are your ideas? It was brilliant. Then he said, I’d like to
0:15:46 know more about this boundary-setting idea. How about that? Brilliant again. Then he said, Marshall,
0:15:52 do you have any ideas yourself on top of that? I came up with some really weak crap off the top of my
0:15:56 head. Oh, your idea is good too, Marshall. I’m sure he’s thinking, what a moron. I’ll just ask.
0:16:02 I’ll just ask Marshall Bot. Look, who am I kidding here? My idea is around half as good as
0:16:12 Marshall’s ideas. I mean, Marshall, seriously, in a sense, it is immortality. Right? Yeah.
0:16:19 Only I’d say probably a little better, because again, I’m not putting myself down here, but
0:16:25 this thing is way smarter than I am. I truly do believe that too. And so with my podcast,
0:16:32 because I interview somebody every week, I add to the data set in Kawasaki GPT.
0:16:38 So that’s 52 more pieces of data a year. And then if I ever stop being the interviewer and
0:16:43 Madison picks it up, then Madison is good for another 50 years. We could go for a long time.
0:16:48 That’s right. And you know, why not? Now, does yours have audio yet?
0:16:55 No. And in fact, this is a good question. When I saw audio on yours, I sent a screenshot
0:17:00 to the person who’s doing Kawasaki GPT. And I said, how come I don’t have audio?
0:17:04 What’s the story here? By the way, all right, get ready to get more jealous.
0:17:08 What’s coming down the road for me? First, we had text.
0:17:13 We’re going to have the battle of the bots. Oh, well, what’s coming up next is video.
0:17:19 Yeah. You can see this green shirt guy that looks and sounds like me on video, but we’re not done
0:17:27 yet. What’s coming up after that? Video in multiple languages. Yeah. And I don’t know about you,
0:17:34 but actually you’re 75, I’m 70. That could easily happen in our lifetimes.
0:17:39 Sure. Well, I think that will happen in my lifetime. It’ll happen next year.
0:17:45 The rate of progress on this stuff is astronomical. I got lucky. So what happens to me is I’m
0:17:50 coaching these guys that are founders of a company called Fractal Analytics. And I’ve got eight
0:17:55 engineers in India working on this thing. They have donated the money behind this thing. This
0:18:02 is expensive. And this thing is great. So it’s very fast. Mine goes well beyond me, though.
0:18:10 Mine goes way beyond what I know. And anything related to what I know, what’s eerie about it is,
0:18:20 it knows me. And it can study any question. It then says, how would Marshall, had he read all this
0:18:26 stuff and knew as much as I do, which of course he doesn’t, how would he answer this question? And
0:18:35 with almost 98% accuracy, it gets it right. Wow. And you know what? It’s never sick. It’s never
0:18:43 cranky. It’s never hungry. It’s never sleepy. It doesn’t swear as much as me. I asked a couple
0:18:49 of my clients, “This thing is smarter than me. Is there anything I do different or better than
0:18:53 it?” You know what they said? It doesn’t give me shit as much as you do.
0:19:00 So Marshall, are you still a guest on a lot of people’s podcasts?
0:19:03 Oh yeah. I still do podcasts every now and again.
0:19:08 Okay. So it seems to me, I don’t know if you do this, but it just occurred to me that you should
0:19:15 ask the podcaster for the transcript so that you can get the transcript back and put it into
0:19:22 your bot. That’s a good idea. I never thought of it. I’d say, it’s a very good idea. Thank you.
0:19:26 Hey, that’s – I’m a value add podcaster. What can I say?
0:19:33 This thing is great. There’s a guy named Stephen Balobot. You know, this guy, he
0:19:42 runs an AI company. So he was going to trick it. So he said, I was going to hire an executive coach,
0:19:47 but I’m doing this Series C thing and I’m a founder, and I got really busy and I didn’t even
0:19:51 return his phone calls. I feel embarrassed. What should I do? And he put it in Marshall
0:19:56 Bot as a joke. Marshall Bot said, oh, well, I can understand that this Series C thing is very
0:20:00 important. I see why you, as a founder, are so busy. Blah, blah, blah, blah, blah. It gives him some perfect
0:20:05 answer of what to do. He’s an AI person. He goes, I don’t even believe this. He didn’t believe it
0:20:11 himself. He didn’t even believe this thing. He just did what it told him to. Wow. See, I think
0:20:17 every medical doctor is going to have one of these. Come on, let’s get real. Every medical
0:20:22 doctor has got to have an AI bot. If they don’t, they should be fired. Come in, look, I’m taking
0:20:26 this medicine. What about that medicine? Is there a drug interaction? They answer this stuff off the top
0:20:30 of their head. They don’t know what they’re talking about. Let’s get real here. Blah, blah, blah, blah.
0:20:35 How do you know? There are studies. There are computers. People have numbers.
0:20:40 You know, why are you sitting here guessing? Also, if some brain specialist in Croatia
0:20:48 wrote a medical journal article in Croatian, what’s the odds of even a brain surgeon at
0:20:53 Stanford Medical Center knowing that someone in Croatia just put this in a medical journal,
0:21:01 right? Not in English. They can’t know that stuff. Now, I think this is exciting.
0:21:07 I think this is the new world and I feel just very happy to be able to be on the positive
0:21:14 edge of this stuff. Now, I still have one question about this because if someone asks a question
0:21:22 that you have not specifically talked about and you said that it knows what you would probably say,
0:21:29 how do you draw the boundaries? If someone went to your bot and said, what do you think of the
0:21:33 leadership qualities of Donald Trump? Are you saying that it would say nothing?
0:21:39 If you ask, who is Donald Trump, it says, I don’t know. But it doesn’t answer anything related to politics.
0:21:44 But Marshall, I could make the case that politics is in great need of leadership. So why are you
0:21:52 avoiding that? You know what? Because I don’t want to deal with it. Yeah, there are 2,700 million
0:21:58 problems. In my older years, I’ve decided I’m not going to fix them. I’m going to delegate that to
0:22:09 you. I hope you don’t mean me. Let me give you my mission in life as an old man. My mission is
0:22:14 trying to help people have a little better life. That’s it. If I can help anybody have a little
0:22:19 better life today, you know what I’m doing? I’m declaring victory here. And you know those people
0:22:25 I coach? I always say now, when I coach people, I say, look, I hope I help you be a better leader.
0:22:29 At the end of the day, I just want you to have a little better life. Then I always ask them a
0:22:36 question, is that okay? Is that all right? You might be shocked to learn that 100% of everyone
0:22:40 I’ve ever coached, no matter how famous and wonderful they are, you know what they said? That’s a good
0:22:49 idea. That’s a good idea. Let’s do that, right? In a sense, you are exemplifying the principle of
0:22:55 underpromise and overdeliver, right? Yeah. I’m serious. I’m not going to do everything.
0:22:58 Peter Drucker said our mission in life is to make a positive difference, not to prove we’re smart,
0:23:03 not to prove we’re right. A lot of this arguing about stuff is let’s prove I’m smart and let’s
0:23:07 prove I’m right. If it’s not going to help make a positive difference, why are you even wasting
0:23:20 your time doing it? This is the gospel according to Marshall.
0:23:39 Do you just no longer care that the bot can cannibalize you and maybe you’re not going to get
0:23:43 some coaching gigs or something? You just don’t give a shit and you just want to help the world?
0:23:52 Wait a minute. Do you know what my coaching fees are? I have no idea. Well, I’d say I’m starting
0:23:58 at about 400,000 bucks a pop. So do you really think anybody’s going to pay me 400,000 dollars
0:24:06 a pop to sit there and talk to a computer? No. I’m not really threatened by this.
0:24:11 Wait, wait, wait. So you’re telling me that the entry point for getting coached by you is 400,000.
0:24:17 That would be it. Madison, take note of this, Madison. We’re going to up our prices. Jesus.
0:24:23 I just asked people to buy 25 copies of my book and you’re asking for 400,000. What the hell?
0:24:28 That’s why you’re Marshall Goldsmith and I’m Guy Kawasaki. Hey, whatever. You’ve been
0:24:34 undercharging. Yeah, I do a lot of work for free. So for example, I’m working right now with the nice
0:24:39 woman who’s head of the DEA. I don’t charge her money. I don’t charge money to the children’s
0:24:44 hospital people or all those people. I do that for nothing. I have two fees, either nothing or a lot.
0:24:55 It sounds to me like nothing or infinite. What does it matter?
0:25:01 If you’re a multi-billionaire, the amount of money is irrelevant anyway.
0:25:03 I don’t have that problem. So yeah.
0:25:11 So basically, I’m not threatened that I’m going to be replaced by my computer bot.
0:25:17 And the other thing is my computer bot has already answered 60,000 questions.
0:25:24 And how did you get the word out such that – I’m just starting to get the word out and basically
0:25:30 it’s already answered 60,000 questions. I can’t answer 60,000 questions.
0:25:34 How long has it been out? It’s been out for a little bit but it’s just getting better and better.
0:25:40 We’re just starting to release it right now to the world. The problem is I have so much demand
0:25:46 that if I’m not careful, I can crash the system. You’re going to be worse than crypto.
0:25:49 Your bot is going to affect climate change.
0:25:55 Now, the nice thing about me though, it’s all free. I’m not charging any money for mine.
0:25:58 Now are you charging money for a guy bot?
0:25:59 No, not at all.
0:26:04 You’re a nice guy. You’re like me. We’re nice old men. Just giving things away here.
0:26:10 I don’t know if I’m a nice guy or I just can’t get away with it because I’m not charging anybody
0:26:16 400 grand either. But you know what? We talked about this our last one. I would tell you something.
0:26:20 There’s no amount of money in the world that make you any happier than you are now.
0:26:26 That’s probably true. I could give you 25 million bucks tomorrow. You would not be one iota happier
0:26:31 a week later. I don’t know about that Marshall. Why don’t we try that experiment?
0:26:36 I’m betting on it. You seem to be like a very happy guy. I’m a happy guy but I think you’re a very
0:26:42 happy guy and the common illusion is that somehow I’ll be happier when I get money or this achievement
0:26:48 or that achievement. No, you won’t. Look at you. You seem like you’ve got a very happy life to me.
0:26:53 You have a nice life. You meet interesting people. You do interesting stuff. You’ve done good for
0:26:59 the world. What the heck man? You got it all. All that matters in life is three things. One,
0:27:03 do I have a higher purpose? Well, you have a higher purpose. Two, am I achieving something?
0:27:07 You’re achieving something. And three, is it fun for me? You seem to be enjoying yourself.
0:27:11 My life is complete. There you go. What are you going to do with a few more bucks?
0:27:15 You’re going to buy a bigger house or a car or something? What am I going to do with it anyway?
0:27:21 I would buy a green Ralph Lauren Polo shirt. There you go. Now that’s a worthwhile ambition.
0:27:31 Okay, I have one last tactical question because we’ve covered it already, but why don’t your
0:27:39 answers include citations? Why doesn’t it say, this is where Marshall discussed this? So if people
0:27:43 want to read more about a particular topic, your bot could point them to something.
0:27:54 I don’t know. I’m just curious. My bot does that better than yours. There you go. Oh my god.
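
For what it’s worth, citations are straightforward to support in a retrieval-based bot if every chunk of the corpus is stored alongside its source. This is a generic sketch under that assumption, not a description of either bot’s actual design; the corpus entries are invented.

```python
# Generic sketch of retrieval that carries source metadata so answers can
# cite where the material came from. The corpus entries are invented.

CORPUS = [
    {"text": "What got you here won't get you there.",
     "source": "What Got You Here Won't Get You There (book)"},
    {"text": "Legacy is being there when you're not there.",
     "source": "Remarkable People podcast interview"},
]

def retrieve_with_citation(query: str) -> tuple[str, str]:
    """Return the best-matching chunk and its source (naive word overlap)."""
    words = set(query.lower().split())
    best = max(CORPUS,
               key=lambda c: len(words & set(c["text"].lower().split())))
    return best["text"], best["source"]

text, source = retrieve_with_citation("what got you here")
print(f"{text} (see: {source})")
```
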
0:28:01 This is like, you know, The Odd Couple 2024. We’re like Walter Matthau, arguing about bots.
0:28:08 This is all good. And look, this is the future. Now, have you seen the Apple Vision Pro yet? That
0:28:14 thing? No, I refuse to look like a dope with that thing on. Now, here’s what’s coming. Ultimately,
0:28:19 you’re going to walk into a room and I’m there and you can’t tell the difference. And you’re going
0:28:22 to be able to just sit there and have a conversation with me and talk to me. And it’s going to seem
0:28:28 just like the real me. It’s coming. It is coming. I believe that. I look forward to that day.
0:28:35 Whatever. It’s going to happen anyway. Option A, have some fun with it. Option B, sit back and
0:28:41 complain. So I’m going with plan A on this one. I think, look, you’re doing good. I’m doing
0:28:45 good. We’re trying to help people. What the heck? Also, we don’t have total control of everybody
0:28:50 else in the world. I’m not naive. Look, I wrote an article 25 years ago in a book called The Community
0:28:55 of the Future. I said, within 25 years, media addiction will surpass drug addiction and alcohol
0:29:01 addiction combined as a social problem because of TV-quality internet. Now, we’re there. Media addiction is
0:29:06 an unmitigated disaster in our society. I’m not naive. I know that. I know the downside on all
0:29:11 this stuff. The reality is, what am I going to do about it? Answer? Nada. I’m not going to change
0:29:16 any of that stuff. So my attitude is, look, I can sit there and whine about all the bad things that
0:29:21 might happen. So what? I’m not going to change them. Let’s do something good. Yeah, but Marshall,
0:29:25 what if everybody has that attitude about everything? What can I do about the modern
0:29:30 thing? What can I do about the solution? Nothing. My clients, my clients, if they talk and they start
0:29:36 a sentence with a but, I charge them $20. So you see, you would have lost $20 right there when you
0:29:42 said, but Marshall. So I fine them $20. Anytime they talk and say no, but, or however, $20. So
0:29:45 one guy, he’s stubborn. So I talked to him and he says, but Marshall. I said, 20.
0:29:54 And he said, no, no. 40, 60. He lost $420 in an hour and a half. What’s your favorite charity?
0:30:00 What’s my favorite charity? Yeah, yeah. There’s a charity in Watsonville, California called Digital
0:30:10 Nest. And what it does is it provides training in like digital skills to basically kids of Hispanic
0:30:16 descent so that they have an alternate career path from agriculture to technology and other
0:30:21 kind of knowledge work. I love it. Well, you owe them $20 for the same thing.
0:30:29 If you give me the $25 million, I’ll give it to them.
0:30:37 So what is your thinking about this? Look, you and I are on the same path here.
0:30:42 My view is I’m going with this thing. To me, this is the most exciting project of my life.
0:30:47 For me, this is a lifetime. As long as they’re willing to support this, I’m going to put my time
0:30:54 and effort into this. I’m feeding this thing daily. I’m editing it all the time. I love it.
0:30:58 What’s the definition of legacy? Legacy is being there when you’re not there.
0:31:06 This is it. This is it. This is it. The only thing about this is, again, no offense to me,
0:31:12 it’s better than me. I’m not putting myself down here. The reality is, it’s smarter than I am.
0:31:17 That’s not even a contest. It’s another planet ahead of me. Well, it’s okay.
0:31:24 I would make the case that when someone like you admits that something like this is better than
0:31:33 you, not only is it a sign of grace and humility, it’s also a sign of just how much class you have.
0:31:37 There’s a lot of people who would never want to admit that a machine is smarter than them.
0:31:47 To me, how can you deny it? I don’t know. How can you deny it? The thing is just smart, right?
0:31:54 And it can answer questions I can’t answer. And it can answer questions I can’t answer quicker,
0:32:00 better. And the other thing is, I don’t have a photographic memory. I’ve read and edited 55 books,
0:32:04 but I can’t remember all those books I wrote and edited. I can’t remember all that stuff.
0:32:09 My memory, by the way, is not getting better. This guy’s memory is getting better.
0:32:15 And I don’t say that to be modest. It’s ridiculous to pretend otherwise.
0:32:22 This thing is much more sophisticated than me. It answers in a much more thorough way than I do.
0:32:28 It’s okay. I’m not in a contest. I’m not trying to be smarter than a computer here. Come on, man.
0:32:33 But I think it’s hard for people just to grow up and face this, though.
0:32:37 Let me ask you a question. Why do you think it’s so hard for people just to say what I said?
0:32:45 Because they fundamentally lack self-confidence. That’s why. I don’t know what else to ask you.
0:32:50 And I know if I think of something to ask you later,
0:32:55 Madison and I are just going to go to your bot and ask and get an oral response,
0:33:00 and then we won’t have to call you back. What’s fun about all this is, look,
0:33:06 I think for both you and me, it’s fun. Look, it’s fun to do things that kids are doing
0:33:12 and young people are doing. Why sit around? What are you or I going to be doing anyway?
0:33:17 Playing bad golf with old men at the country club and eating chicken sandwiches and discussing
0:33:28 gallbladder surgery all day. I would be watching Fox. Here you go. Okay. 20 years from now, people
0:33:34 are going to find this episode of this podcast. They’re going to say, look at these guys back in 2024.
0:33:39 These two old geezers, they figured it all out. And they’re going to say these people were so smart.
0:33:48 Marshall Goldsmith and Guy Kawasaki leading the way at 70 and 75. I love it. I love it. I had an experience
0:33:53 here that was totally positive. Some young guy says to me, you know about this artificial
0:34:00 intelligence. They call it AI. Do you know anything about it? I said, no, I don’t know much about it,
0:34:09 but let me ask my friend. I don’t know much about it. Let me ask my little friend here.
0:34:18 That is reminiscent of an old film with Tony Montana. And he’s like saying, say hello to
0:34:27 my little friend, except it’s a computer. It’s not a machine gun. So the one thing I’m working on
0:34:35 right now with Marshall Bot is I want it to be more human. What I’m working on is I ask it some
0:34:40 questions and I insert some answers myself. If you ask it, how many languages do you speak? You know what it says?
0:34:45 I can barely speak the American version of English. So I’m trying to insert a little bit
0:34:50 of humor into the thing. So it doesn’t just spit back all of these smart sounding answers.
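
One simple way to implement the kind of humor injection Marshall describes is an override table of hand-written answers that is checked before the model generates anything: if an incoming question is close enough to one Marshall has answered himself, the hand-written reply wins. The fuzzy-match threshold and the fallback hook in this sketch are assumptions, not his team’s implementation.

```python
# Sketch of a hand-written-answer override layer. The fuzzy-match threshold
# and the fallback hook are illustrative assumptions.

from difflib import SequenceMatcher

HAND_WRITTEN = {
    "how many languages do you speak":
        "I can barely speak the American version of English.",
}

def answer(question: str, generate_fn) -> str:
    q = question.lower().strip(" ?!.")
    for known, reply in HAND_WRITTEN.items():
        if SequenceMatcher(None, q, known).ratio() > 0.8:
            return reply                # the human-written answer wins
    return generate_fn(question)        # otherwise let the model answer
```
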
0:34:57 So I’m going back through it now and I’m answering questions verbally and I’m editing the content.
0:35:01 Look, the content, forget it. The content is great. The amount of improvement I’m going to make on
0:35:06 the content is about that much. On the other hand, what I want to do is make it have more of a sense
0:35:12 of humor, a little more fun, a little more easy going, that kind of stuff. For Madison, she’s
0:35:18 probably sitting there shaking her head like, what are these two old geezers going off about AI? Oh
0:35:26 my God. I think we better end it now. Marshall Goldsmith and Marshall Bot. Thank you very much.
0:35:32 This has been a very entertaining episode and 20 years from now, I’m telling you people are going
0:35:37 to listen to this and say, my God, those two people were so smart back then. Hey look, it is
0:35:41 what it is. Thank you for inviting me. I always love talking to you. Madison, what did you think
0:35:47 of all that? I thought it was amazing. You guys are better at using AI than me, honestly.
0:35:55 That’s not saying that much, FYI Marshall. It’s all good. Hey, Guy, you ever come to Nashville?
0:35:59 No, I’m afraid, I’m a liberal. Nashville is a liberal town, didn’t you know?
0:36:07 Nashville is like Austin. Austin is in a conservative state, but it’s a liberal town.
0:36:11 Nashville is the same thing. If I ever get there, you’ll be the first to know.
0:36:16 Oh yeah, come to Nashville. I’ll take you out. We’ll go out. Do you like to sing?
0:36:28 No. Madison, “left a good job in the city, I was working for the man.” Yeah, there you go. Yeah,
0:36:34 we could go. I like to sing the karaoke songs. When I sing, I always get a standing ovation.
0:36:39 You know what? I’m 40 years older than the next singer. So I get a standing ovation when I make
0:36:56 it to the top of the stage. Oh my God, he finished the song. I hope you enjoyed this episode with
0:37:03 Marshall Goldsmith and I. We’re two friends and we both love AI. I hope you’re already on the AI
0:37:09 bandwagon. I hope you’re not standing in front of the bandwagon or under the bandwagon or behind
0:37:15 the bandwagon. You need to get on this bandwagon because it’s going to change the world. If you
0:37:23 want to see some attempts at harnessing this power, look at KawasakiGPT.com and MarshallGoldsmith.ai.
0:37:31 Behind the Remarkable People podcast is a remarkable team of good old humans. And these
0:37:37 humans are Jeff Sieh and Shannon Hernandez on sound design, Tessa Nuismer in research,
0:37:43 Madisun Nuismer, producer and co-author, and then there’s Fallon Yates, Alexis Nishimura,
0:37:51 and Luis Magana. We are all trying to help you be remarkable. Until next time, mahalo and aloha.
0:37:56 This is Remarkable People.

In this episode of Remarkable People, join host Guy Kawasaki as he engages in a captivating conversation with executive coaching legend Marshall Goldsmith. Marshall discusses his mission to democratize access to transformative leadership development through innovative AI technology. Discover how Marshall is leveraging artificial intelligence to make his wisdom and guidance available to individuals at all levels, empowering them to achieve greater fulfillment and create positive impact. This thought-provoking dialogue explores the future of coaching, the evolving role of technology, and Marshall’s vision for a world where remarkable leadership is within reach for everyone.

Guy Kawasaki is on a mission to make you remarkable. His Remarkable People podcast features interviews with remarkable people such as Jane Goodall, Marc Benioff, Woz, Kristi Yamaguchi, and Bob Cialdini. Every episode will make you more remarkable. 

With his decades of experience in Silicon Valley as a Venture Capitalist and advisor to the top entrepreneurs in the world, Guy’s questions come from a place of curiosity and passion for technology, start-ups, entrepreneurship, and marketing. If you love society and culture, documentaries, and business podcasts, take a second to follow Remarkable People. 

Listeners of the Remarkable People podcast will learn from some of the most successful people in the world with practical tips and inspiring stories that will help you be more remarkable. 

Episodes of Remarkable People organized by topic: https://bit.ly/rptopology 

Listen to Remarkable People here: https://podcasts.apple.com/us/podcast/guy-kawasakis-remarkable-people/id1483081827 

Like this show? Please leave us a review — even one sentence helps! Consider including your Twitter handle so we can thank you personally! 

Thank you for your support; it helps the show!

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
