
  • #233 Outliers: Anna Wintour – Vogue

    AI transcript
    0:00:05 Anna Wintour once looked at photos from a $300,000 fashion shoot
    0:00:09 and killed the entire story without explanation.
    0:00:14 The photographer was Steven Meisel, now one of fashion’s legends.
    0:00:18 He was so furious, he refused to work with her for years.
    0:00:21 Today, he credits her with making him better.
    0:00:24 This is the Anna Wintour paradox.
    0:00:27 She’s fired assistants for poor clothing choices.
    0:00:31 She’s made editors stand during meetings because sitting wastes time.
    0:00:34 Once, when asked what job she wanted, she replied, yours.
    0:00:37 And the meeting ended abruptly.
    0:00:39 She got the job anyway.
    0:00:43 For 40 years, people have been predicting her downfall.
    0:00:47 She’s too harsh, too demanding, too unwilling to compromise.
    0:00:49 Meanwhile, she keeps getting promoted.
    0:00:53 At 75, she now runs every magazine at Condé Nast.
    0:00:56 Because Anna figured out something most leaders never learn.
    0:01:02 In a world awash in mediocrity, maintaining standards looks unreasonable.
    0:01:05 But standards are also the only moat that matters.
    0:01:08 And if you want to understand how a British girl who couldn’t type
    0:01:11 built the most bulletproof career in media,
    0:01:13 and what that means for your own ambitions,
    0:01:15 you need to hear this story.
    0:01:30 Welcome to the Knowledge Project.
    0:01:32 I’m your host, Shane Parrish.
    0:01:34 In a world where knowledge is power,
    0:01:38 this podcast is your toolkit for mastering the best of what other people have already figured out.
    0:01:45 Anna Wintour got fired for refusing to compromise her vision.
    0:01:47 The magazine that fired her?
    0:01:47 It’s dead.
    0:01:48 Anna?
    0:01:52 She runs every magazine at Condé Nast, including Vogue at age 75.
    0:02:00 This is the story of how a British girl who couldn’t type built the most powerful position in global media,
    0:02:05 then made it impossible for anyone else to take it away by continuously reinventing herself.
    0:02:08 Here’s what a lot of people get wrong about power.
    0:02:10 They think it’s about climbing ladders.
    0:02:13 Anna understood it’s about building the ladder itself.
    0:02:16 While her competitors fought for promotions, she built infrastructure.
    0:02:19 While they protected magazines, she created platforms.
    0:02:23 While they pleased bosses, she made bosses need her.
    0:02:24 The result?
    0:02:29 Four decades at the top of an industry that reinvents itself every five years.
    0:02:33 She survived the death of print, the digital revolution, the great financial crisis,
    0:02:38 the social media transformation, and a pandemic that killed most of her competitors.
    0:02:38 How?
    0:02:43 By mastering the principles that sound simple, but almost nobody executes.
    0:02:50 First, she figured out that being fired for your uncompromising standards is very different than being fired for your performance.
    0:02:51 One is failure.
    0:02:52 The other is intelligence.
    0:03:02 Second, she learned that in creative industries, speed beats perfection because perfection without deadlines is just procrastination with better excuses.
    0:03:08 Third, she discovered that real power comes from making yourself essential to multiple systems simultaneously.
    0:03:10 Even if one fails, you survive.
    0:03:19 This episode draws from Amy Odell’s definitive biography to reveal how Anna transformed from fashion assistant to cultural kingmaker.
    0:03:28 But more importantly, it extracts the repeatable lessons and strategies that she used, strategies that you can apply whether you’re building a career, a company, or an empire.
    0:03:31 Her greatest insight wasn’t about fashion.
    0:03:35 It was understanding how to get the best out of herself and others.
    0:03:38 It’s time to listen and learn.
    0:03:53 When Anna was two, her 10-year-old brother, Gerald, died in a cycling accident.
    0:03:56 Her mother installed window bars and never spoke of him again.
    0:03:58 The family moved forward.
    0:03:59 No pictures, no mentions.
    0:04:00 Just forward.
    0:04:02 This is how the Wintours operated.
    0:04:07 Her father, Charles, edited the Evening Standard with surgical precision.
    0:04:09 Staff writers froze when he passed.
    0:04:14 They called him Chilly Charlie, though they would work themselves to exhaustion for his approval.
    0:04:16 Anna never understood the nickname.
    0:04:19 It had nothing to do with the person he was, she’d insist.
    0:04:22 The same words would follow Anna her entire career.
    0:04:27 In a household that prized academic achievement, Anna chose a different education.
    0:04:31 Her siblings devoured political theory at Oxford and Cambridge.
    0:04:37 Anna devoured fashion magazines, eight newspapers every weekend, every fashion publication she could find.
    0:04:45 While her siblings prepared for careers in law and social causes, she studied hemlines and cultural shifts with scholarly intensity.
    0:04:51 I was so desperate to get out in the world and get on with things, she explained about leaving school at 16.
    0:04:55 Her family found her fascination with fashion incomprehensible.
    0:05:01 I’ve always been a joke in my family, Anna later admitted, they’d always thought I’m deeply unserious.
    0:05:07 The irony is that Anna’s unserious pursuit required more discipline than any degree.
    0:05:09 She wasn’t avoiding rigor, but applying it differently.
    0:05:14 Fashion was cultural anthropology, business strategy, and visual communication.
    0:05:15 Full stop.
    0:05:17 Every magazine was a textbook.
    0:05:19 Every trend was data.
    0:05:25 In the face of my brother’s and sister’s academic success, I felt I was rather a failure, she recalled.
    0:05:31 Anna, like so many of our outliers, and perhaps like you yourself, felt overlooked and underestimated.
    0:05:33 She would turn it into rocket fuel.
    0:05:38 While her siblings shaped policy and law, she would shape how power itself presented itself to the world.
    0:05:45 What her family couldn’t see, because it didn’t look like what they expected, was that Anna was a learning machine.
    0:05:50 This commitment to vacuuming up everything about fashion, it was building mastery.
    0:05:53 Anna’s father, Charles, believed in his daughter.
    0:05:57 When 16-year-old Anna needed to list her career objectives, he didn’t hesitate.
    0:06:01 Well, you write that you want to be the editor of Vogue, of course.
    0:06:05 Not work in fashion, not try magazines.
    0:06:06 Editor of Vogue.
    0:06:07 The apex.
    0:06:11 Named with the same certainty that you’d write your own address.
    0:06:13 Most people aim for realistic.
    0:06:17 The exceptional name their destination and work backwards.
    0:06:21 Anna had two advantages most pretend don’t matter.
    0:06:22 Connections and cash.
    0:06:30 Her grandparents’ trust fund paid out $120,000 in today’s money over six years, exactly what her siblings spent on university.
    0:06:32 Anna invested that tuition differently.
    0:06:35 While they bought credentials, she bought time.
    0:06:38 Time to take an unpaid internship.
    0:06:40 Time to say no to the wrong opportunities.
    0:06:42 Time to wait for the right ones.
    0:06:44 Her father made one phone call.
    0:06:47 The Evening Standard’s fashion editor took Anna to lunch.
    0:06:50 Barbara Griggs expected to mentor an eager teenager.
    0:06:53 Instead, she met someone who already knew exactly where she was going.
    0:06:56 All she wanted from me was some information.
    0:07:01 What she didn’t want at all was any guidance or tips on how to manage her career.
    0:07:02 That certainty at 16?
    0:07:05 Most people don’t even have that at 40.
    0:07:10 Willie Landels at Harpers & Queen hired Anna because of her father’s reputation.
    0:07:15 Anyone connected to such a respected newspaper was worth a shot, he figured.
    0:07:17 Here’s where privilege meets performance.
    0:07:19 Yes, her name opened the door.
    0:07:21 But what happened once she walked through?
    0:07:22 That was all Anna.
    0:07:25 The real advantage isn’t the door that opens.
    0:07:28 It’s knowing exactly what to do once you’re inside.
    0:07:30 Harpers & Queen operated on a skeleton crew.
    0:07:33 Three people running the fashion pages.
    0:07:35 No budget for coffee fetchers.
    0:07:36 Everyone did everything.
    0:07:39 I was thrown into my career, frankly, with ignorance.
    0:07:42 I knew nothing, Anna later admitted.
    0:07:43 Perfect.
    0:07:48 While her peers at bigger magazines were filling in expense reports and grabbing coffee,
    0:07:50 Anna was learning the entire business.
    0:07:53 I learned how to go into market and choose clothes.
    0:07:54 I learned how to choose talent.
    0:07:56 I learned how to collaborate.
    0:07:57 I learned how to do a layout.
    0:07:59 I learned how to write a caption.
    0:08:02 She wasn’t afforded the luxury of specializing.
    0:08:05 She had to learn every job in detail.
    0:08:07 Anna had three qualities that mattered.
    0:08:10 Taste, organization, and certainty.
    0:08:11 She never forgot a dress.
    0:08:13 Never lost jewelry.
    0:08:14 Never second-guessed decisions.
    0:08:16 People may not have always liked it,
    0:08:20 but they knew exactly what she thought and what was expected of them.
    0:08:23 Sounds a lot like Steve Jobs and Elon Musk.
    0:08:25 Her editor noticed something else.
    0:08:27 Anna could spot talent before it had a name.
    0:08:31 She’d book unknown photographers who’d become famous.
    0:08:33 She’d champion designers others ignored.
    0:08:35 This wasn’t luck.
    0:08:37 Remember those eight newspapers every weekend?
    0:08:39 Those thousands of magazine pages?
    0:08:40 She wasn’t just reading.
    0:08:43 She was building a mental repository.
    0:08:45 When Anna saw a new photographer’s work,
    0:08:49 her brain compared it against millions of images she’d studied.
    0:08:50 When she met a designer,
    0:08:54 she measured them against every trend she’d ever tracked.
    0:08:56 Pattern recognition can’t be taught.
    0:09:00 It can only be earned through obsessive accumulation of high-quality inputs.
    0:09:04 Most people want to trust their gut without feeding it the right things first.
    0:09:07 Then came the moment that defined her aesthetic forever.
    0:09:09 Christmas, 1971.
    0:09:16 Anna styled a shot mixing a $2,000 white fox coat with a $29 wicker chair.
    0:09:18 Diamonds with democracy.
    0:09:20 Luxury with accessibility.
    0:09:23 Everyone else segregated high and low.
    0:09:25 Anna smashed them together.
    0:09:30 The insight she had was that aspiration without accessibility is just snobbery.
    0:09:33 Accessibility without aspiration is just a commodity.
    0:09:36 The magic lives in the tension between the two.
    0:09:41 That high-low mix would become her signature and eventually fashion’s default language.
    0:09:45 But first she had to survive long enough to impose it on the world.
    0:09:48 Anna’s assistant, Claire Hastings,
    0:09:52 got a front row seat to what extreme standards actually look like in practice.
    0:09:54 Anna wasn’t warm.
    0:09:56 She didn’t explain much.
    0:09:57 But Hastings noticed something.
    0:10:01 Anna was obsessively invested in her success.
    0:10:05 Not through pep talks, but through her intolerance for mediocrity.
    0:10:08 Every borrowed item needed to be returned perfect.
    0:10:10 Down to the original tissue paper.
    0:10:11 A missing button.
    0:10:12 Unacceptable.
    0:10:13 A wrinkled collar.
    0:10:15 Career ending.
    0:10:17 This wasn’t about the clothes.
    0:10:22 It was about proving you could be trusted with the details before being trusted with the decisions.
    0:10:25 Outliers share unreasonable standards.
    0:10:28 Standards aren’t what you accept from others.
    0:10:30 They’re what you demand from yourself when no one’s watching.
    0:10:33 Then there was the lunch table incident.
    0:10:35 Eight people ordered wine and steaks.
    0:10:37 Anna, just a yogurt, please.
    0:10:39 The table froze.
    0:10:41 Everyone was suddenly questioning their orders.
    0:10:43 She wasn’t performing discipline.
    0:10:49 She’d internalized it so completely that normal behavior looked like an excuse by comparison.
    0:10:51 Her steak had to be perfectly rare.
    0:10:54 She’d send it back three times and then eat two bites.
    0:11:01 Not because she was being difficult, but because accepting good enough in small things trains you to accept it in big things.
    0:11:04 Steve Jobs sent back sushi at his own birthday party.
    0:11:06 Elon sleeps on the floor of the factory.
    0:11:09 Outliers don’t have work-life balance.
    0:11:11 They have standards that follow them everywhere.
    0:11:14 The talent scout in Anna was brutal.
    0:11:16 Photographers lined up for go-sees.
    0:11:16 Bad work?
    0:11:18 She’d look away mid-sentence.
    0:11:19 Thank you.
    0:11:20 No feedback.
    0:11:21 No false hope.
    0:11:23 Just cutthroat rejection.
    0:11:26 But when she spotted genius, total commitment.
    0:11:31 One photographer showed up and staff described him as some madman with boxes of shoes.
    0:11:36 Anna saw what others missed, gave him his first major endorsement.
    0:11:41 James Wedge, a hat maker trying photography, Anna booked him repeatedly until he had a career.
    0:11:48 By 1974, Anna was doing half the major shoots, maintaining standards that made everyone else look casual.
    0:11:53 When they fired her superior for someone with writing background, Anna expected the promotion.
    0:11:59 The lesson she was about to learn: having higher standards doesn’t guarantee recognition.
    0:12:01 It only guarantees you’ll deserve it.
    0:12:04 Are you crushing your bills?
    0:12:06 Defeating your monthly payments?
    0:12:09 Sounds like you’re at the top of your financial game.
    0:12:14 Rise to it with the BMO Eclipse Rise Visa Card.
    0:12:17 The credit card that rewards your good financial habits.
    0:12:22 Earn points for paying your credit card bill in full and on time every month.
    0:12:25 Level up from bill payer to reward slayer.
    0:12:26 Terms and conditions apply.
    0:12:31 Hit pause on whatever you’re listening to and hit play on your next adventure.
    0:12:35 Stay three nights this summer at Best Western and get $50 off a future stay.
    0:12:36 Life’s the trip.
    0:12:38 Make the most of it at Best Western.
    0:12:41 Visit bestwestern.com for complete terms and conditions.
    0:12:46 They gave Anna’s promotion to Min Hogg, a textiles expert who wrote features.
    0:12:49 Anna had been doing the actual job.
    0:12:51 Min had been writing about fabrics.
    0:12:52 Anna got a new title.
    0:12:54 Deputy fashion editor.
    0:12:55 Corporate translation.
    0:12:56 Please don’t quit.
    0:12:59 Here’s where character reveals itself.
    0:13:00 Anna didn’t complain.
    0:13:01 She didn’t confront.
    0:13:03 She didn’t leak to gossip columns.
    0:13:06 Instead, she let her work create unbearable contrast.
    0:13:11 Every shoot she produced made Min’s inadequacy more visible.
    0:13:16 Min would have realized pretty soon that Anna didn’t think much of her work, Hastings observed.
    0:13:20 When you maintain exceptional standards, you don’t need to attack mediocrity.
    0:13:22 It exposes itself.
    0:13:26 After months of this passive war, Anna pulled Hastings aside.
    0:13:28 It’s outrageous I haven’t been made fashion editor.
    0:13:29 I’m resigning.
    0:13:31 Are you going to stay?
    0:13:36 No, said Hastings, who quit in solidarity despite having no backup plan.
    0:13:38 Study that moment for a sec.
    0:13:40 Anna did not negotiate.
    0:13:41 She didn’t threaten.
    0:13:43 She didn’t give them time to counteroffer.
    0:13:44 She just left.
    0:13:49 While others might not have believed in her as much as she believed in herself, it didn’t matter.
    0:13:53 She was going to bet on herself and she was going to go all in.
    0:13:59 Five years of vacuuming up every detail of the fashion business from inside.
    0:14:00 Building relationships.
    0:14:01 Developing her aesthetic.
    0:14:03 Proving results.
    0:14:07 When the system failed to reward merit, she didn’t try to fix the system.
    0:14:09 She rejected it.
    0:14:14 With passport in hand, she aimed for New York, where talent mattered more than tenure, where
    0:14:18 hunger beat hierarchy, and where results spoke louder than words.
    0:14:23 The girl who couldn’t type was about to teach Manhattan how power really works.
    0:14:26 New York, 1975.
    0:14:28 25, no job.
    0:14:30 Just confidence in herself.
    0:14:36 I felt quite isolated growing up in England, with it being such a class-driven culture, Anna
    0:14:36 explained.
    0:14:41 Everyone in New York is from somewhere else, and that creates a very positive force.
    0:14:44 America promised meritocracy.
    0:14:46 Anna would test that promise.
    0:14:49 Harper’s hired her as junior editor.
    0:14:52 Day one, she broke every rule of the American fashion authority.
    0:14:54 The industry equation was simple.
    0:14:55 Drama equals competence.
    0:14:58 Polly Mellen at Vogue cried over photos she loved.
    0:15:02 Gloria Monker at Bazaar threw shoes at assistants.
    0:15:03 Emotion was currency.
    0:15:05 Anna stayed quiet.
    0:15:06 She watched.
    0:15:06 She processed.
    0:15:08 It was a smart move.
    0:15:12 A special projects editor observed the new hire during a photo shoot in Jamaica and wondered,
    0:15:14 have we hired the wrong person?
    0:15:19 Anna’s job was to command the set with authority, but she just stayed in the background.
    0:15:24 This confused the magazine’s leadership, but not the crew, who preferred working with someone
    0:15:27 who didn’t feel the need to interfere with every detail.
    0:15:29 For them, she was a breath of fresh air.
    0:15:34 Years later, she would say, I’m a big believer in hiring talented people and giving them the
    0:15:34 freedom to work.
    0:15:37 People work better when they have responsibility.
    0:15:41 There are two kinds of power, the kind that announces itself and the kind that doesn’t
    0:15:41 need to.
    0:15:44 And as we’ll see, Anna actually has a bit of both.
    0:15:48 Her real revolution wasn’t style, it was substance.
    0:15:50 Her assistant discovered something unprecedented.
    0:15:56 She’d really had no qualms about being completely focused to the point of being very abrupt, seemingly
    0:15:58 rude because she just didn’t have the time.
    0:16:03 She was on her path to what she needed to do, period, the end.
    0:16:06 No small talk, no office politics, just work.
    0:16:12 In an industry built on relationships and feelings, Anna introduced something radical, pure efficiency.
    0:16:15 She wasn’t cold, she was clear.
    0:16:18 Every interaction had a purpose or it didn’t happen.
    0:16:23 In addressing her tough reputation much later, she would say, if one comes across sometimes
    0:16:28 as being cold or brusque, it’s simply because I’m striving for the best.
    0:16:32 Most people organize their entire lives around being liked by nearly everyone.
    0:16:34 They don’t want to offend anyone.
    0:16:37 Outliers have the courage to be disliked.
    0:16:41 The British girl who couldn’t break through England’s class ceiling was about to crack
    0:16:45 America’s code, not by playing the game better, but by refusing to play at all.
    0:16:47 Anna was devoted to work.
    0:16:51 In an office, people kind of clown around and they take breaks and they gossip.
    0:16:53 And she never did any of that.
    0:16:55 She wasn’t in it for fun and games.
    0:16:56 She was in it to work.
    0:17:01 While colleagues treated jobs as social clubs with deadlines, Anna treated the office like
    0:17:07 a laboratory. Every day, impeccably dressed. Not vanity, but strategy. In fashion,
    0:17:09 your appearance is your argument.
    0:17:15 Her boss, Tony Mazzola, wanted traditional shoots: advertiser-friendly, text-heavy, safe.
    0:17:17 Anna wanted revolution.
    0:17:20 When they clashed, she didn’t argue, at least not directly.
    0:17:25 Instead, she would meet photographers in the lobby, select only the best shots, hide the rest.
    0:17:28 When Tony asked for alternatives, Anna would shrug.
    0:17:30 Sorry, there aren’t any more.
    0:17:33 Tony’s choice, accept her vision or pay for expensive reshoots.
    0:17:35 Budgets were tight.
    0:17:36 Anna knew this.
    0:17:37 Anna won.
    0:17:40 When Tony berated her, Anna stayed silent.
    0:17:41 A colleague noticed.
    0:17:43 She knew she could go on to other things.
    0:17:45 She knew damn well she wanted to run Vogue.
    0:17:50 Harper’s wasn’t the end of the line, but more importantly, she also didn’t treat it as a
    0:17:51 stepping stone either.
    0:17:52 She was present.
    0:17:53 She was all in.
    0:17:54 She gave it her all.
    0:17:58 While everyone else fought daily battles, Anna was mapping the entire war.
    0:18:00 They were playing for Friday.
    0:18:02 She was playing for Vogue.
    0:18:03 And then Paris happened.
    0:18:07 Anna returned with photos that broke every rule.
    0:18:13 When Tony later fired Anna for being too European, he was essentially firing her for having a point
    0:18:13 of view.
    0:18:18 Years later, after Anna conquered fashion, Tony would deny that it ever happened.
    0:18:21 History, as they say, is written by the winners.
    0:18:24 At the time, I didn’t know what he meant, Anna reflected.
    0:18:28 But in retrospect, I think he meant I was obstinate, that I wouldn’t take direction.
    0:18:36 Anna would reflect on this years later and say everyone should get sacked at least once in their
    0:18:39 career because perfection doesn’t exist.
    0:18:43 The lesson, getting fired for your standards is different than getting fired for your performance.
    0:18:44 One is failure.
    0:18:46 The other is reconnaissance.
    0:18:51 Despite the setback, she was confident in her vision and confident in herself.
    0:18:53 Anna took a job at Viva magazine.
    0:18:54 The owner?
    0:18:55 The Penthouse publisher.
    0:18:58 A porn king trying to make feminist fashion content.
    0:19:03 Stores hid Viva behind the counters next to the adult magazines.
    0:19:05 Anna didn’t care.
    0:19:09 I needed a job and Viva offered me an enormous amount of freedom.
    0:19:12 The editor, Alma Moore, saw Anna clearly.
    0:19:14 This woman knows what she wants, but she’s going to be difficult.
    0:19:16 She hired her anyway.
    0:19:21 Smart leaders know that difficult people often produce the best work in the right environment.
    0:19:26 At Viva, Anna spent hours studying French Elle and Italian Vogue.
    0:19:27 No one questioned her vision.
    0:19:30 No committees, no interference, just pure freedom.
    0:19:33 The receptionist observed she was always her own person,
    0:19:38 didn’t really listen to any structure because, whether she was or not, she was the boss.
    0:19:39 Here’s what everyone missed.
    0:19:43 Working at a disreputable place meant no one was watching.
    0:19:45 With no one watching, Anna could do anything.
    0:19:47 She could experiment.
    0:19:48 She could push the limits.
    0:19:49 She could play.
    0:19:53 She’d promise boutiques front page placement for lending clothes,
    0:19:56 promise advertisers their pieces would be shot.
    0:19:57 The clothes kept coming.
    0:19:58 The ads kept selling.
    0:20:02 While her peers fought for assistant positions at respectable magazines,
    0:20:06 Anna was running her own fashion laboratory at a publication funded by porn.
    0:20:09 Sometimes the worst address can have the best classroom.
    0:20:12 At Viva, Anna developed her signature aesthetic.
    0:20:16 Photographs that made you want to become the person wearing the clothes.
    0:20:21 Models in country settings with chunky sweaters and, inexplicably, bows and arrows.
    0:20:23 It shouldn’t have worked, but it did.
    0:20:25 She pushed farther than anyone dared.
    0:20:28 One spread featured S&M-inspired photography.
    0:20:30 It was way out there, her colleagues said.
    0:20:32 Nobody did anything like that.
    0:20:37 When you’re already at a controversial magazine, you can push farther than anyone thought possible.
    0:20:38 She was playing.
    0:20:40 Her process was militarily precise, though.
    0:20:42 Everything was planned in advance.
    0:20:43 Rapid fire fittings.
    0:20:45 The model put on an outfit.
    0:20:47 Anna says, okay, next.
    0:20:48 No deliberation.
    0:20:48 No committee.
    0:20:50 Just decisions.
    0:20:51 Then came the test.
    0:20:56 Her publisher wanted to save money by using Penthouse centerfolds as fashion models.
    0:20:57 Anna’s response?
    0:20:58 No.
    0:20:59 She walked away.
    0:21:02 Cheryl Rixon, one of those models, understood.
    0:21:04 Working with centerfolds didn’t serve her ambition.
    0:21:07 We all know she wanted to be fashion editor of Vogue.
    0:21:10 Standards aren’t standards if they’re negotiable.
    0:21:12 They’re absolute or they’re not standards.
    0:21:13 The miracle?
    0:21:15 People started paying attention.
    0:21:21 Alexander Liberman, who ran Condé Nast and controlled Vogue, mentioned to Viva’s editor,
    0:21:23 I love Viva.
    0:21:25 I noticed you have an Englishwoman on the masthead.
    0:21:31 For three years, Anna transformed Viva’s fashion pages into required reading at Vogue and Harper’s.
    0:21:37 The porn magazine nobody respected was teaching the fashion establishment how to shoot.
    0:21:39 Excellence is excellence.
    0:21:41 The platform is just context.
    0:21:46 November 17th, 1978, Viva announces it’s closing tomorrow.
    0:21:50 Anna starts sobbing, shocking colleagues who thought she didn’t care.
    0:21:51 She wasn’t crying for the magazine.
    0:21:56 She was mourning the loss of her laboratory, the first place she had had total control.
    0:22:00 For 18 months, Anna disappeared into what fashion people called the wilderness,
    0:22:05 jet-setting with her boyfriend from Paris to Jamaica to the south of France,
    0:22:07 her only real break from work since age 16.
    0:22:10 But Ambition doesn’t like vacations.
    0:22:15 When she returned to New York in 1980, Savvy, the magazine for executive women, called.
    0:22:17 Anna needed work.
    0:22:18 She took it.
    0:22:19 The problem was immediate.
    0:22:24 Savvy appealed to women who’d fought through the 70s to make partner at law firms,
    0:22:27 women who hid their femininity like a liability.
    0:22:32 But Anna had built her entire career making femininity a superpower.
    0:22:38 Editor Judith Daniels wanted real people instead of models, practical office clothes, reasonable prices.
    0:22:42 Anna nodded in meetings and then shot exactly what she wanted.
    0:22:47 Anna was very strong-minded and she just did whatever she wanted, the executive editor recalled.
    0:22:50 Daniels tried to fire her, but Anna had learned something.
    0:22:51 How to talk her way out of trouble.
    0:22:54 She bought some time to job hunt while getting paid.
    0:22:56 Then came humiliation.
    0:22:58 March 18th, 1981.
    0:23:04 Anna pitches Interview magazine, Andy Warhol’s glamorous publication, on an idea she spent three months developing.
    0:23:08 The editor looked at it for one second and said no.
    0:23:12 Anna cries right there in his office.
    0:23:14 And then she leaves for her next appointment.
    0:23:19 When you believe in yourself completely, rejection is data, not a verdict.
    0:23:25 That persistence paid off when Laurie Jones at New York Magazine called in early 1981.
    0:23:31 Jones was desperate to fill a fashion editor position that required someone to basically run a one-person fashion department
    0:23:38 attending shows, selecting clothes, booking photographers, managing shoots, do everything, deliver weekly.
    0:23:45 Anna shows up to the interview with storyboards, complete with Polaroids, layouts, fully realized ideas, not hopes, but plans.
    0:23:47 Anna, this is fabulous.
    0:23:50 I like every one of these story ideas, Jones said.
    0:23:52 She rushed to editor-in-chief Edward Kosner.
    0:23:54 Ed, this woman is amazing.
    0:23:56 We’re all going to be working for her someday.
    0:23:59 Kosner laughed and hired her.
    0:24:02 Most people prepare for interviews by thinking about answers.
    0:24:04 Outliers show up with solutions.
    0:24:06 It’s the same with cold emails today.
    0:24:09 Don’t tell someone how you would solve their problems.
    0:24:12 If you want to get noticed, just solve their problem.
    0:24:16 At New York Magazine, Anna finally had a budget to match ambition.
    0:24:18 Want to shoot a $20,000 sable coat?
    0:24:19 Approved.
    0:24:22 Need the best photographers to compete with the Times?
    0:24:22 Done.
    0:24:26 They recognized talent and they gave her room and an environment to execute.
    0:24:30 Her first story, Summer Dresses, on a tilted Manhattan rooftop,
    0:24:34 making the Empire State Building appear to dance behind the models.
    0:24:37 Every fashion editor in the city was shooting straight.
    0:24:39 Anna was tilting reality.
    0:24:41 But her standards remained brutal.
    0:24:45 When one of her assistants styled a shoot with photographer Steven Meisel,
    0:24:47 Anna killed it without explanation.
    0:24:50 Didn’t matter that it was her assistant’s big break.
    0:24:51 Didn’t matter that Meisel was talented.
    0:24:53 Standards were standards.
    0:24:56 And Anna’s standards, to nearly everyone, appeared unreasonable.
    0:25:01 And also, just like Steve Jobs, that made everyone work harder and be better
    0:25:04 and pulled out the best version of themselves.
    0:25:08 Meisel was so enraged, he would refuse to work with Anna for years.
    0:25:10 He’d become one of fashion’s greatest photographers.
    0:25:11 Anna didn’t care.
    0:25:13 She wasn’t there to collect friends.
    0:25:14 She was there to win.
    0:25:18 This reminds me so much of Michael Jordan, who said in the last dance,
    0:25:21 I pulled people along when they didn’t want to be pulled.
    0:25:24 I challenged people when they didn’t want to be challenged.
    0:25:30 And I earned that right because my teammates who came after me didn’t endure all the things
    0:25:30 that I endured.
    0:25:34 Once you joined the team, you lived at a certain standard that I played the game,
    0:25:36 and I wasn’t going to take anything less.
    0:25:40 Now, if that meant I had to go in there and get on you a bit, then I did that.
    0:25:44 You ask all my teammates, the one thing about Michael Jordan was he never asked me to do
    0:25:46 something that he didn’t do.
    0:25:50 When people see this, they’re going to say, well, he wasn’t really a nice guy.
    0:25:51 He may have been a tyrant.
    0:25:54 Well, that’s you, because you never won anything.
    0:25:58 I wanted to win, but I wanted them to win and be a part of that as well.
    0:26:00 Look, I don’t have to do this.
    0:26:02 I’m only doing it because it’s who I am.
    0:26:03 That’s how I played the game.
    0:26:05 That was my mentality.
    0:26:08 If you don’t want to play that way, don’t play that way.
    0:26:11 Jordan could have easily been talking about Anna.
    0:26:14 At New York Magazine, Anna wasn’t just editing fashion.
    0:26:15 She was playing at the highest standard.
    0:26:19 And if you wanted to be a part of it, you needed to bring your A-game every day.
    0:26:21 No exceptions.
    0:26:25 Polly Mellen at Vogue had been watching Anna since London.
    0:26:31 She thought Vogue was getting boring and arranged a meeting with editor-in-chief Grace Mirabella.
    0:26:34 Mirabella called Anna and asked what position she wanted at Vogue.
    0:26:35 Anna’s answer?
    0:26:36 Yours.
    0:26:39 The meeting ended immediately.
    0:26:45 Most people hide their ambitions and Anna just announced hers and let the world adjust.
    0:26:47 But the real power wasn’t Mirabella.
    0:26:51 It was Alexander Liberman, Condé Nast’s editorial director.
    0:26:56 Art director by day, welding massive steel sculptures on weekends, he collected talent like
    0:26:57 others collected art.
    0:27:03 In August of 1983, Anna publishes a story where 12 artists create paintings inspired by fashion.
    0:27:09 Lieberman sees it and recognizes a kindred spirit, someone who understood fashion as high art,
    0:27:10 not just commerce.
    0:27:13 He invites Anna to his Connecticut estate.
    0:27:17 She shows up in what he called a wonderful, simple, gray tunic.
    0:27:20 Not trying to impress, just being precisely herself.
    0:27:23 I was absolutely enchanted with her, Lieberman would say.
    0:27:24 His problem?
    0:27:27 Mirabella was successfully running Vogue.
    0:27:31 So his solution was to create a fake job, creative director.
    0:27:37 A made-up title that made Anna second on the masthead with deliberately vague responsibilities.
    0:27:38 Anna took it.
    0:27:42 She wasn’t a number two person, but she also understood chess.
    0:27:45 She would change tactics, but not her dream.
    0:27:48 For three years, she was officially Mirabella’s deputy.
    0:27:49 Actually, she was Liberman’s protégée.
    0:27:54 Learning the operation, building the relationships, waiting, preparing.
    0:27:59 Mirabella later wrote that Anna would sit in meetings, shaking her head, obviously disagreeing
    0:28:00 with everything I said.
    0:28:01 Anna wasn’t being insubordinate.
    0:28:03 She was being inevitable.
    0:28:09 In 1985, British Vogue editor Beatrix Miller stepped down after 21 years.
    0:28:11 Anna was offered the job.
    0:28:12 She’s pregnant.
    0:28:14 She hesitates a little bit, and then she takes it.
    0:28:17 She needs to prove that she can run something.
    0:28:22 Anna walks into British Vogue and detonates, fires most of the staff, demands shorter skirts,
    0:28:25 injects an energy that had been missing for decades.
    0:28:29 The British press nicknamed her Nuclear Wintour.
    0:28:30 She doesn’t care.
    0:28:35 Circulation climbs, profits soar, British designers get discovered.
    0:28:39 The first rule of transformation, you can’t renovate a house with people still living in
    0:28:39 it.
    0:28:41 Two years later, she’s proven her point.
    0:28:46 When House and Garden’s editorship opens in New York, Anna takes it, not because she wants
    0:28:50 to edit home decor, but because it’s her ticket back to America and closer to Vogue.
    0:28:53 At House and Garden, she renames it HG.
    0:28:58 Anna adds fashion shoots to a decorating magazine, replaces anonymous rich people’s homes with
    0:29:00 celebrity features.
    0:29:05 Readers revolt, advertisers flee, subscription cancellations require a dedicated phone line.
    0:29:09 She would do things that people had never done before, and that alienated some people, a
    0:29:11 features editor observed.
    0:29:14 But Anna wasn’t trying to save House and Garden.
    0:29:18 She was auditioning for Si Newhouse and Alexander Liberman.
    0:29:20 The magazine was her performance space.
    0:29:26 Years later, every decor magazine would copy what Anna tried at HG, voyeuristic glimpses into
    0:29:30 celebrity homes instead of furniture catalogs.
    0:29:32 She was right, just a little bit early.
    0:29:35 But by then, she’d have bigger things to transform.
    0:29:42 Just as the criticism at HG was starting to die down, Grace Mirabella’s 37 years at Vogue
    0:29:42 were ending.
    0:29:44 She just didn’t know it.
    0:29:47 Newhouse and Liberman had decided by summer of 1988.
    0:29:52 They kept Anna in endless planning meetings while she pretended everything was normal at HG.
    0:29:55 June 28th, 1988.
    0:29:57 Mirabella’s husband calls her.
    0:30:00 He just saw gossip colonist Liz Smith on television.
    0:30:04 Anna Wintour will replace his wife as editor-in-chief of Vogue.
    0:30:06 Mirabella goes to Liberman’s office.
    0:30:07 He’s waiting.
    0:30:11 Grace, he says, I’m afraid it’s true.
    0:30:14 37 years, dismissed via gossip column.
    0:30:17 Power transitions are never elegant.
    0:30:18 They’re either swift or sloppy.
    0:30:19 Never both.
    0:30:20 Anna was prepared.
    0:30:25 She spent three years studying Vogue’s operations from the inside, learning its weaknesses, building
    0:30:26 her network.
    0:30:33 While Mirabella finished her final two weeks, Anna summoned all 120 Vogue staff to her HG office.
    0:30:35 One by one, brief interviews.
    0:30:37 Three days later, 90 people remained.
    0:30:39 When you finally get power, use it immediately.
    0:30:41 Hesitation invites resistance.
    0:30:47 17 years after her father wrote editor of Vogue on that career form, after getting fired for
    0:30:54 being too European, after crying in Andy Warhol’s office, after transforming two other magazines,
    0:30:57 Anna had the job she’d wanted since she was 16.
    0:31:00 Now the real work could begin.
    0:31:07 Make memories, not medical bills, with travel insurance from BC’s health experts, Pacific
    0:31:08 Blue Cross.
    0:31:14 Their affordable travel plans offer protection from unexpected travel costs like medical emergencies,
    0:31:17 trip cancellation fees, baggage issues, and more.
    0:31:22 Plus, kids of all families are covered for free, and there’s no deductible in the event
    0:31:22 of a claim.
    0:31:25 Buy worry-free travel insurance in minutes.
    0:31:28 Visit pac.bluecross.ca.
    0:31:32 Spring is here, and you can now get almost anything you need delivered with Uber Eats.
    0:31:33 What do we mean by almost?
    0:31:37 You can’t get a well-groomed lawn delivered, but you can get chicken parmesan delivered.
    0:31:38 Sunshine?
    0:31:38 No.
    0:31:39 Some wine?
    0:31:39 Yes.
    0:31:42 Get almost, almost anything delivered with Uber Eats.
    0:31:42 Order now.
    0:31:43 Alcohol in select markets.
    0:31:44 See app for details.
    0:31:47 Anna didn’t come to Vogue to run it.
    0:31:48 She came to rebuild it.
    0:31:54 Her management style honed over the years, and now unleashed at Vogue was calculated to create
    0:31:58 a specific type of workspace that produced exceptional work.
    0:32:03 If you’ve seen the movie The Devil Wears Prada, you might be familiar with what comes next.
    0:32:07 She installed glass offices so she could see everything happening, fired people with startling
    0:32:13 frequency, and developed what became known as the look, a daily assessment of what every
    0:32:15 staff member’s outfit from shoes to hair.
    0:32:17 One assistant described it perfectly.
    0:32:21 She would stare at your shoes and work her way up.
    0:32:26 She was creating an environment where every detail mattered, because she understood that
    0:32:30 in fashion media, there is no separation between how you look and how you work.
    0:32:33 If you look sloppy, your work will eventually look sloppy.
    0:32:39 If your office operates with casual standards, your editorial standards will eventually become
    0:32:39 casual.
    0:32:40 She fired constantly.
    0:32:46 In the creative business, one person operating at 60% can bring an entire team down to their
    0:32:46 level.
    0:32:49 Excellence requires difficult choices.
    0:32:50 Some saw tyranny.
    0:32:52 Anna saw physics.
    0:32:56 She was creating environmental pressure that made mediocrity impossible to hide.
    0:33:01 When every detail of your appearance matters, every detail of your work starts mattering too.
    0:33:03 Most leaders try to change behavior.
    0:33:05 Anna changed the environment.
    0:33:06 The behavior followed.
    0:33:08 But the real revolution went deeper.
    0:33:10 Fashion magazines operated like art museums.
    0:33:13 Slow, contemplative, precious.
    0:33:19 Anna brought newspaper urgency to an industry that thought deadlines were just suggestions.
    0:33:21 She wasn’t just changing Vogue.
    0:33:23 She was changing what a fashion magazine could be.
    0:33:25 The devil wasn’t in the details.
    0:33:28 The devil was ignoring the details.
    0:33:30 Anna came from newspaper blood.
    0:33:33 Remember, her father, Charles, ran the Evening Standard.
    0:33:36 She understood what fashion editors didn’t.
    0:33:38 Speed creates quality under pressure.
    0:33:43 Mirabella had run Vogue like a museum, everything written down, committees for committees.
    0:33:47 Anna walked in and saw a bureaucracy where there should be velocity.
    0:33:49 First, she killed comfort.
    0:33:51 Out went beige walls and butter-colored chairs.
    0:33:55 In came white walls, glass offices, and metal seats.
    0:33:59 Comfort breeds complacency, and discomfort breeds decision.
    0:34:01 Her meeting revolution was pure newspaper.
    0:34:04 You walk in, you stand, you ask, you leave.
    0:34:06 The saying internally was, you get two minutes.
    0:34:08 The second is a courtesy.
    0:34:11 The chairs in her office were for decoration, not sitting.
    0:34:13 No sitting meant no settling.
    0:34:15 No chit-chat meant no waste.
    0:34:17 Every interaction became a transaction.
    0:34:19 The glass walls weren’t about surveillance.
    0:34:21 They were about accessibility.
    0:34:25 Anna could catch an editor’s eye and assign a task without leaving her desk.
    0:34:31 One editor realized Anna called on her constantly simply because her office was in the sightline.
    0:34:32 The lesson here is clear.
    0:34:34 Architecture is destiny.
    0:34:38 Design your environment to eliminate friction between thought and action.
    0:34:41 Then came Anna’s masterstroke, AWOK.
    0:34:44 Her initials plus OK became a verb.
    0:34:45 Is that AWOK’d yet?
    0:34:46 Nothing.
    0:34:50 Not a caption, not a photo, not a comma moved without her approval.
    0:34:52 This wasn’t micromanagement.
    0:34:54 It was standards transformation.
    0:34:58 Every AWOK taught editors what excellence looked like.
    0:35:01 The infamous clothing run-throughs that took hours under Mirabella?
    0:35:02 Anna did them in minutes.
    0:35:03 Yes.
    0:35:03 No.
    0:35:04 Yes.
    0:35:04 No.
    0:35:05 No.
    0:35:05 Yes.
    0:35:06 Goodbye.
    0:35:07 No explanations.
    0:35:08 No committees.
    0:35:09 Just decisions.
    0:35:13 When you explain every decision, people just learn to argue.
    0:35:15 When you just decide, they learn to anticipate.
    0:35:17 The fear of rejection made editors sharper.
    0:35:21 They learned to pre-filter, to think like Anna before presenting.
    0:35:22 She wasn’t reviewing work.
    0:35:24 She was programming their brains.
    0:35:26 Anna was anything but hands-off.
    0:35:31 She ran on founder mode, staying until midnight the first three months, personally reviewing
    0:35:32 every single layout.
    0:35:37 But unlike Mirabella, who hid in her office, Anna spent half her time with designers, telling
    0:35:38 them what to add to collections.
    0:35:41 Every hiring revealed her system.
    0:35:43 Anna personally screened everyone.
    0:35:45 One candidate was rejected for wearing matching pearls.
    0:35:46 Two matchy-matchy.
    0:35:49 Another almost didn’t get past HR for being overweight.
    0:35:54 They negotiated Anna, giving her at least two and a half minutes for this one, and that
    0:35:54 one got hired.
    0:35:57 She was building a machine where mediocrity had no place to hide.
    0:35:59 Glass walls meant no privacy.
    0:36:01 Speed meant no procrastination.
    0:36:03 Personal approval meant no excuses.
    0:36:05 A lot of people manage outputs.
    0:36:09 Anna managed inputs, control the environment, and excellence becomes inevitable.
    0:36:12 The British girl who couldn’t type had figured out something profound.
    0:36:17 In creative industries, velocity often beats perfection, because perfection without deadlines
    0:36:20 is just procrastination with a better wardrobe.
    0:36:24 What Anna did was change Vogue’s cover strategy.
    0:36:31 Anna puts a $10,000 Christian Lacroix jacket with $50 Guess jeans on her first Vogue cover.
    0:36:34 The printer literally called to check.
    0:36:36 Surely someone had made an error.
    0:36:38 No error, just strategy.
    0:36:43 To understand why this mattered, you need to understand fashion’s unwritten law.
    0:36:45 Luxury doesn’t mix with mass market.
    0:36:48 It’s like putting a Ferrari engine in a Toyota.
    0:36:53 It violates the hierarchy that lets luxury charge luxury prices.
    0:36:55 Anna broke that law on purpose.
    0:36:57 She understood something the industry didn’t.
    0:37:00 People don’t dress in just Prada or just Gap.
    0:37:01 They mix.
    0:37:05 She was documenting reality while everyone else was protecting mythology.
    0:37:07 This is how disruption often works.
    0:37:09 You don’t invent new behavior.
    0:37:12 You legitimize behavior that already exists.
    0:37:15 But the Madonna cover reveals her deeper insight.
    0:37:20 A businessman on a plane tells Anna he loves Vogue because it’s so elegant, so classic.
    0:37:22 Katherine Hepburn, Grace Kelly.
    0:37:23 It would never be Madonna.
    0:37:27 Most editors would take this as market research.
    0:37:29 Our readers want elegance, not controversy.
    0:37:31 Anna heard it differently.
    0:37:35 If everyone agrees Vogue would never do something, that’s exactly what would get attention.
    0:37:41 Anna would go on to say the fact that that very nice man that I sat next to on the plane thought
    0:37:45 that it would be completely wrong to put Madonna on the cover and completely out of keeping with
    0:37:50 the tradition of Vogue being this very classically correct publication pushed me to break the rules
    0:37:56 and had people talking about us in a way that was culturally relevant, important, and controversial,
    0:37:59 all of which you need to do from time to time.
    0:38:00 Context matters here.
    0:38:07 Madonna in 1989 had just released Like a Prayer: burning crosses, romantic scenes with a Black saint.
    0:38:09 Pepsi pulled her sponsorship.
    0:38:11 Religious groups wanted boycotts.
    0:38:15 She represented everything Vogue readers theoretically rejected.
    0:38:16 Anna put her on the May cover.
    0:38:21 You need to be culturally relevant, important, and controversial from time to time, she later
    0:38:22 explained.
    0:38:23 The numbers told the story.
    0:38:27 200,000 more copies sold than previous May.
    0:38:28 But that’s not the lesson.
    0:38:30 The real lesson is about information asymmetry.
    0:38:34 When everyone knows something would never work, they stop testing it.
    0:38:35 That creates an opportunity.
    0:38:39 Within five years, every fashion magazine featured celebrities.
    0:38:40 Anna didn’t predict the future.
    0:38:43 She created it by doing what nobody else would test.
    0:38:48 Sometimes the best strategy isn’t finding what people want, it’s showing them what they didn’t
    0:38:49 know they were allowed to want.
    0:38:54 The early 1990s belonged to supermodels, Naomi Campbell, Kate Moss.
    0:38:56 They commanded massive fees and magazine covers.
    0:38:58 Anna killed them off.
    0:38:59 It wasn’t personal, it was business.
    0:39:01 Here’s the lesson she understood.
    0:39:04 Models offer only one story, beauty.
    0:39:07 Celebrities, however, offer infinite stories.
    0:39:10 Marriage, divorce, scandals, politics.
    0:39:13 Every life event becomes content.
    0:39:18 The insight, people don’t buy aspirational images, they buy aspirational narratives.
    0:39:19 Think about the math.
    0:39:22 A supermodel gives you 12 beautiful covers a year.
    0:39:25 A celebrity gives you 12 chapters of an ongoing drama.
    0:39:27 Which do you think keeps readers coming back?
    0:39:29 But Anna also saw something deeper.
    0:39:32 Supermodels influenced how people wanted to look.
    0:39:35 Celebrities influenced how people wanted to live.
    0:39:39 Fashion wasn’t just about clothes anymore, it was about lifestyle.
    0:39:41 The proof came at every red carpet.
    0:39:44 Who are you wearing became the question, not what are you wearing.
    0:39:45 Who?
    0:39:49 Fashion became a character in every celebrity story.
    0:39:53 One of the most remarkable things I discovered researching Anna was that she didn’t just change
    0:39:55 magazine covers.
    0:39:58 She changed how culture talks about clothing.
    0:40:02 The supermodel era ended not because models became less beautiful.
    0:40:05 It ended because beauty without story is just wallpaper.
    0:40:08 And nobody subscribes to wallpaper.
    0:40:10 Anna was busy.
    0:40:12 Busier than she’d ever been.
    0:40:17 And she developed an assistant system that reveals something profound about how power operates in elite
    0:40:18 institutions.
    0:40:22 She would employ up to three assistants at any given time.
    0:40:27 Each with specific roles that collectively insulated her from administrative tasks that were not
    0:40:30 directly related to editorial decision making.
    0:40:31 This system worked like this.
    0:40:33 First assistant, schedule and communications.
    0:40:37 Second assistant, homes, screenings, and her dogs.
    0:40:42 Third, errands, tickets, and custom orders to designers for Anna’s personal clothing.
    0:40:46 While this may appear as an extravagance, it was math.
    0:40:49 Most executives spend 40% of their time on logistics.
    0:40:51 Anna spent zero.
    0:40:57 100% of her mental energy was spent on work, while an army of other assistants handled everything
    0:40:58 else.
    0:41:00 Think about how powerful that is.
    0:41:01 The system was brutal.
    0:41:03 Emails without subject lines.
    0:41:04 Just commands.
    0:41:05 Coffee, please.
    0:41:06 Get me Tom Ford.
    0:41:07 No niceties.
    0:41:08 No unnecessary words.
    0:41:13 Assistants arrived at 7:30 to prepare for her entrance, when orders would rain down without
    0:41:14 pause.
    0:41:17 There’s an elevator story that captures it perfectly.
    0:41:22 Rumors said Anna banned others from riding with her, but the truth is people avoided the
    0:41:26 elevator because she’d immediately start issuing orders they’d need to write down.
    0:41:28 Impossible while moving.
    0:41:33 One assistant would meet her at her car to collect the AW bag, her papers from home.
    0:41:38 Not because Anna was lazy, but because she understood every second carrying bags was a second not spent working.
    0:41:42 She wouldn’t learn assistants’ names until they proved they could last.
    0:41:44 Most burned out in weeks.
    0:41:45 Here’s the paradox.
    0:41:48 The survivors became fanatically loyal.
    0:41:48 Why?
    0:41:50 They weren’t just filing expenses.
    0:41:53 They were watching Anna negotiate with billionaires.
    0:41:57 They saw how she made split-second decisions that moved markets.
    0:42:00 They built relationships with every power player who walked through Vogue.
    0:42:03 One former assistant summed it up really nicely.
    0:42:04 The demands weren’t personal.
    0:42:09 When you’re affecting billion-dollar industries, there’s no room for casual execution.
    0:42:11 The lesson isn’t about having three assistants.
    0:42:14 It’s about understanding the value of your time.
    0:42:18 It’s about holding the people around you to the same unreasonable standards you hold yourself
    0:42:18 to.
    0:42:23 Anna calculated that her hour was worth more than three people’s days.
    0:42:24 She was right.
    0:42:27 While competitors managed calendars, she managed culture.
    0:42:29 Focus isn’t about doing one thing.
    0:42:32 It’s about doing only the things that you can do.
    0:42:38 By 1997, Anna had been editor-in-chief for nearly a decade, and Vogue was performing spectacularly.
    0:42:44 The magazine had its biggest March issue since 1990, with ad pages up 5.9%.
    0:42:52 The September issue that year weighed 4.3 pounds and was packed with 734 pages, mostly advertisements.
    0:42:57 It was the biggest issue in nine years and represented complete market dominance over its competitors.
    0:42:58 One problem?
    0:43:01 Anna’s publisher, Ron Galotti, wants more.
    0:43:07 Galotti was hired to maximize revenue, and Anna’s refusal to feature advertisers’ clothes
    0:43:09 in editorial spreads was making him very angry.
    0:43:11 His logic was simple.
    0:43:14 If you need a white shirt for a shoot, why not use Anne Klein’s?
    0:43:16 They’re paying us hundreds of thousands of dollars.
    0:43:18 Anna’s response was simple.
    0:43:20 If it’s ugly, it’s not in Vogue.
    0:43:23 This is the eternal war in creative businesses.
    0:43:25 The money people want compliance.
    0:43:27 The creative people want control.
    0:43:29 Galotti escalated to Si Newhouse.
    0:43:33 They prepared a list of editors who could replace Anna.
    0:43:34 Then they invited her to lunch.
    0:43:36 The ultimatum was blunt.
    0:43:39 Start featuring advertisers’ products or find another job.
    0:43:42 Newhouse’s exact words, I suggest you follow the money.
    0:43:45 Most editors would choose one of two paths.
    0:43:50 Cave completely, turn Vogue into a catalog, or fight and lose, maintain your principles, and
    0:43:50 get fired.
    0:43:52 Anna chose door number three.
    0:43:57 She’d photograph advertisers’ clothes, but only pieces that met her standards.
    0:43:59 Yes to commerce, but she kept the veto.
    0:44:03 The genius here was she made herself indispensable to both sides.
    0:44:08 Advertisers got more coverage than ever, but only Anna could guarantee it would elevate their
    0:44:09 brand and not embarrass it.
    0:44:15 She started taking advertiser meetings, building relationships that transcended the transactions.
    0:44:16 The result?
    0:44:18 Vogue kept its credibility.
    0:44:19 Advertisers got prestige.
    0:44:20 Anna got more powerful.
    0:44:22 The lesson?
    0:44:24 When forced to choose between X and Y, don’t.
    0:44:26 Find the narrow path where both can win.
    0:44:28 Follow the money wasn’t a defeat.
    0:44:29 It was data.
    0:44:32 Anna learned to speak money fluently while thinking in art.
    0:44:36 That’s how you survive four decades at the top.
    0:44:41 In 1994, just one year after the introduction of the very first web browser to seamlessly
    0:44:46 integrate text and images, many in the publishing world were still pretending the internet didn’t
    0:44:47 exist.
    0:44:51 Anna wasn’t, but she also wasn’t particularly tech-oriented at this point.
    0:44:56 When a new features editor sent an email to the entire Vogue staff in 1994 to introduce
    0:44:59 himself, he received a fax from Anna in Europe.
    0:45:08 But even as Anna dismissed email as impersonal, she was obsessively asking Condé Nast’s digital
    0:45:09 team, when can Vogue go online?
    0:45:14 It’s starting to get embarrassing that Vogue.com is not online.
    0:45:16 Why aren’t we online?
    0:45:17 Here’s what drove Anna’s urgency.
    0:45:23 The entire purpose of fashion is to be a reflection of the times, as one digital executive explained.
    0:45:28 Anna understood that a fashion magazine that felt antiquated or out of date would lose its
    0:45:32 cultural authority, authority that Anna had been building for six years now at Vogue,
    0:45:34 with no intention of stopping.
    0:45:39 While other editors saw the internet as a threat to their business model, Anna saw it as an
    0:45:41 opportunity to increase Vogue’s influence.
    0:45:44 The contradiction reveals her genius.
    0:45:45 Email was internal.
    0:45:46 The website was external relevance.
    0:45:55 When Vogue.com eventually launched in 1998, Anna made a radical decision.
    0:45:57 Post every runway show.
    0:45:58 Make it searchable.
    0:45:58 Make it free.
    0:46:00 The fashion world revolted.
    0:46:03 Fashion’s entire business model depended on scarcity.
    0:46:05 Invitation-only shows.
    0:46:06 90-day embargoes.
    0:46:09 Magazines charging premiums for exclusive access.
    0:46:11 Anna was about to give it all away.
    0:46:13 Half the designers said no.
    0:46:16 Many of the fashion houses didn’t even have internet yet.
    0:46:17 Anna published anyway.
    0:46:20 The tagline at the time captured her strategy.
    0:46:23 Before it’s in Vogue, it’s on Vogue.com.
    0:46:26 Her own team worried about this diminished role for print.
    0:46:27 Anna saw it differently.
    0:46:29 It makes our brand more modern.
    0:46:35 The first rule of disruption is if you’re going to get cannibalized, it’s better to eat yourself.
    0:46:38 But Anna didn’t just go online.
    0:46:39 She pushed it farther.
    0:46:45 She orchestrated what may have been high fashion’s first live stream for Chanel’s resort show in 2000.
    0:46:54 Clothes hit the runway, immediately photographed, instantly purchasable, the see-now, buy-now concept Burberry would be credited with inventing 13 years later.
    0:47:00 A partnership with Neiman Marcus represented another breakthrough that wouldn’t become standard until years later.
    0:47:09 Anna negotiated a deal where Condé Nast got a cut of all clothing purchases driven by the Vogue website, essentially inventing fashion e-commerce affiliate marketing.
    0:47:13 After that first season, designers discovered the hidden benefit.
    0:47:16 Digital slideshows replaced expensive lookbooks.
    0:47:18 Buyers could see collections instantly.
    0:47:20 Anna hadn’t just moved fashion online.
    0:47:24 She’d made Vogue indispensable to the entire supply chain.
    0:47:26 This was remarkable.
    0:47:29 And although it’s obvious in hindsight, it wasn’t at the time.
    0:47:36 She took a wild risk to use her name and reputation to push a very unwilling fashion industry into the digital age.
    0:47:40 The parallel to Andy Grove here really stands out.
    0:47:45 In episode 229, when memory chips got commoditized, Intel pivoted to microprocessors.
    0:47:49 When print got commoditized, Anna pivoted to platform.
    0:47:54 Her competitors spent the next decade protecting traditional revenue.
    0:47:58 By then, Anna owned the entire infrastructure that everyone needed to use.
    0:48:04 The woman who wouldn’t use email built fashion’s digital future because she understood something her competitors didn’t.
    0:48:07 The question isn’t whether your industry will be disrupted.
    0:48:10 It’s whether you’ll be the one doing the disruption.
    0:48:14 Let’s fast forward to 1999.
    0:48:16 Anna has been running Vogue for 11 years.
    0:48:19 Revenues are up to $149 million.
    0:48:21 Anna’s professional life, perfect.
    0:48:22 Her personal life, imploding.
    0:48:26 The divorce from David Shaffer should have been a disaster.
    0:48:27 Instead, it became rocket fuel.
    0:48:31 New York Magazine was preparing a hit piece about Anna’s breakups.
    0:48:34 Her husband and her deputy editor both leaving.
    0:48:38 When they asked for a cover photo, her former colleague, Jordan Schaps, gave her the playbook.
    0:48:43 We all know it’s going to be a piece of shit article, but a fabulous cover.
    0:48:45 That’s all people take away anyway.
    0:48:49 In a visual culture, perception beats reality.
    0:48:52 Control the image, and you control the narrative.
    0:48:54 No one understood this better than her.
    0:48:56 A colleague noticed something remarkable.
    0:49:01 She was remarkably good at compartmentalizing, which bothered some staff.
    0:49:02 Bothered them?
    0:49:03 It made her unstoppable.
    0:49:09 While others would have crumbled, Anna separated her personal pain from her professional persona,
    0:49:12 like removing one outfit and putting on another.
    0:49:14 The divorce wasn’t a distraction.
    0:49:15 It was rocket fuel.
    0:49:17 Work became her outlet.
    0:49:20 Avoid a crisis if you can, but perform through it if you can’t.
    0:49:23 The woman who emerged from this divorce would be different.
    0:49:26 No personal crisis could derail her professional momentum.
    0:49:28 She was done building a magazine.
    0:49:30 It was time to build an empire.
    0:49:35 With her personal life stabilized, Anna turned her attention to something more ambitious than
    0:49:37 just editing a magazine.
    0:49:42 She wanted to build what she called Big Vogue, a media empire that would extend her influence
    0:49:45 across multiple platforms and demographics.
    0:49:47 The strategy was simple.
    0:49:50 Anna understood that power in media comes from controlling an ecosystem.
    0:49:57 Teen Vogue launched in 2003, followed by Men’s Vogue in 2005, and Vogue Living shortly thereafter.
    0:50:03 Each publication served a different audience, but all carried the Vogue brand, and more
    0:50:05 importantly, all reported to Anna.
    0:50:09 Her philosophy for managing this empire was characteristically direct.
    0:50:13 She described editing multiple magazines like planning a dinner party.
    0:50:16 You need to have the pretty girl, the controversy, and something reassuring.
    0:50:21 By controlling the different elements of the cultural conversation, Anna ensured that the
    0:50:24 Vogue brand touched every significant demographic.
    0:50:27 Teen Vogue was chess, though, not checkers.
    0:50:29 Hook the readers at 15 and keep them for life.
    0:50:35 Plus, it became Anna’s digital laboratory, testing strategies too risky for the mothership.
    0:50:37 Men’s Vogue expanded her range.
    0:50:40 The flagship Vogue only featured people Anna wanted to celebrate.
    0:50:42 Men’s Vogue could criticize.
    0:50:44 Same brand, different roles.
    0:50:47 But the real genius was the talent pipeline.
    0:50:49 These magazines became Anna’s farm system.
    0:50:51 Train editors at Teen Vogue.
    0:50:52 Promote the best to Vogue.
    0:50:56 Her influence multiplied through protégés across the industry.
    0:50:59 The portfolio approach had another benefit, resilience.
    0:51:02 When Men’s Vogue folded in 2008, Anna shrugged.
    0:51:04 She had other pieces on the board.
    0:51:09 While competitors protected single titles, Anna built a portfolio that could absorb risk.
    0:51:14 Nothing revealed Anna’s approach to power more clearly than how she handled major crises.
    0:51:21 Her response to September 11th became legendary within Condé Nast and established a template she would follow for decades.
    0:51:27 On September 12th, 2001, while much of New York was still reeling from the 9-11 attacks, Anna went to work.
    0:51:32 Not out of routine, but because she had calculated that normalcy was a form of resilience.
    0:51:37 The message that trickled down to her staff: the best thing to do was keep going.
    0:51:38 If Vogue stopped, fashion stopped.
    0:51:41 If the world stopped, the terrorists would have won.
    0:51:43 Anna’s bias toward action revealed itself.
    0:51:49 She assigned a spring fashion preview celebrating the season 9-11 had canceled.
    0:51:52 While others froze, Vogue published.
    0:51:55 But 2008 revealed her true genius.
    0:52:01 While other executives partied through 2007, Anna and publisher Tom Florio were studying currency rates.
    0:52:04 The euro-dollar shift was crushing European luxury brands.
    0:52:06 They saw the canary in the coal mine.
    0:52:11 They built three scenarios, belt tightening, major cuts, or catastrophe mode.
    0:52:15 When Bear Stearns collapsed, Florio warned other Condé Nast publishers.
    0:52:16 They laughed.
    0:52:18 Anna and Tom executed their plan.
    0:52:19 The result?
    0:52:25 Condé Nast ad pages dropped 30% in 2009, wiping out nearly $1 billion in revenue.
    0:52:29 Vogue was one of only two magazines that stayed profitable.
    0:52:33 Famously, after the 2008 crisis, Anna said internally,
    0:52:35 we will not participate in the recession.
    0:52:38 The pattern never changed.
    0:52:40 Position yourself for multiple possible futures.
    0:52:42 Prepare to the extent possible.
    0:52:42 Execute.
    0:52:43 No emotion.
    0:52:47 Whether 9-11, 2008, or any crisis between,
    0:52:48 Anna’s approach was identical.
    0:52:49 See it coming.
    0:52:50 Build options.
    0:52:51 Stay focused.
    0:52:54 The 2008 lesson went deeper.
    0:52:56 When budgets shrink, profitable divisions survive.
    0:52:58 Unprofitable ones don’t.
    0:53:00 No matter how prestigious they are.
    0:53:02 Anna understood in good times, excellence matters.
    0:53:04 In bad times, only profit matters.
    0:53:06 Crises don’t build character.
    0:53:09 They reveal who was positioned and who was pretending.
    0:53:10 As Warren Buffett says,
    0:53:14 Only when the tide goes out do you discover who’s swimming naked.
    0:53:17 By keeping Vogue profitable when others bled,
    0:53:18 Anna made herself indispensable.
    0:53:21 While others around her were losing their jobs,
    0:53:23 no one could come at the queen.
    0:53:28 The digital revolution should have killed Anna’s tenure at Vogue.
    0:53:29 Instead, she weaponized it.
    0:53:34 Vogue.com traffic grew from 1 million to 10 million monthly visitors.
    0:53:40 Same principles, new medium, impeccable visuals, exclusive access, celebrity partnerships.
    0:53:43 But Teen Vogue revealed her real genius.
    0:53:45 In December 2016, Teen Vogue publishes,
    0:53:47 Donald Trump is gaslighting America.
    0:53:49 The media world gasps.
    0:53:53 A fashion magazine doing some serious political commentary.
    0:53:55 Anna’s response, more please.
    0:53:56 She understood,
    0:53:58 Controversy drives engagement.
    0:53:59 Engagement drives revenue.
    0:54:03 Teen Vogue’s traffic exploded from 2 million to 12 million.
    0:54:05 Print subscribers tripled.
    0:54:06 The lesson,
    0:54:09 Your sub-brands can take risks your main brand can’t.
    0:54:10 Use them as laboratories.
    0:54:13 Anna became obsessed with metrics.
    0:54:18 The woman who once cared only about aesthetics now lived for traffic reports.
    0:54:24 The 2015 Met Gala coverage set records she’d chase after every year.
    0:54:25 She found the holy grail,
    0:54:28 modernizing her greatest creation while expanding its reach.
    0:54:32 Even her Go Ask Anna YouTube series in 2018 was strategic.
    0:54:34 Answering random questions?
    0:54:37 No, humanizing her brand while maintaining mystique.
    0:54:41 Digital transformation isn’t about abandoning what made you successful.
    0:54:43 It’s about translating it into a new medium.
    0:54:46 Anna didn’t become a different person online.
    0:54:49 She became a more measurable version of herself.
    0:54:52 And in digital, what can’t be measured can’t be monetized.
    0:54:57 By 2008, Anna discovered fashion was just her vehicle.
    0:54:58 Power was the destination.
    0:55:01 She backed Obama, but not with just checks.
    0:55:05 She created events mixing fashion, entertainment, and political elites.
    0:55:08 Anna positioned herself as the essential connector.
    0:55:12 First principle of real power is don’t join other people’s networks.
    0:55:15 Create your own and be the one everything flows through.
    0:55:17 When ambassador rumors swirled, Anna stayed silent.
    0:55:20 The speculation alone increased her value.
    0:55:23 Why confirm or deny when mystery multiplies the leverage?
    0:55:25 She didn’t get the ambassadorship.
    0:55:26 She got something better.
    0:55:28 Artistic director of all of Condé Nast.
    0:55:29 Not just Vogue.
    0:55:29 Everything.
    0:55:33 But the Met Gala has been her masterpiece of power.
    0:55:37 In 1999, Anna inherits a stuffy charity dinner.
    0:55:40 Wealthy New Yorkers writing checks, patting themselves on the back.
    0:55:47 By 2018, she’s running a $12 million cultural phenomenon that determines who matters in America.
    0:55:49 The transformation reveals everything.
    0:55:51 Anna didn’t just change an event.
    0:55:54 She created a new currency.
    0:55:57 Met Gala invitations became more valuable than money.
    0:56:01 They signaled cultural relevance that no amount of wealth could buy.
    0:56:03 The Met Gala looks like a party.
    0:56:04 Look closer.
    0:56:06 It’s a machine for manufacturing power.
    0:56:09 Anna controls the three levers that matter.
    0:56:10 First, the guest list.
    0:56:14 Reality stars with millions of followers can’t buy their way in.
    0:56:18 By saying no to money, Anna created a currency more valuable than money.
    0:56:20 Second, the seating chart.
    0:56:23 Anna places emerging designers next to billionaire investors.
    0:56:25 Models next to beauty executives.
    0:56:28 She deliberately separates couples, forcing new connections.
    0:56:32 Anna wanted people to meet other people, a former planner revealed.
    0:56:34 That’s where a lot of business came from.
    0:56:36 Third, the content engine.
    0:56:39 One night generates 12 months of coverage.
    0:56:42 The anticipation, the arrivals, the analysis.
    0:56:47 When Lady Gaga spent 16 minutes changing outfits on the steps, that wasn’t spontaneous.
    0:56:49 That was strategy.
    0:56:53 Vogue.com breaks traffic records every Met Gala Monday.
    0:56:58 Ad sales follow eyeballs; the event pays for itself through the content it creates.
    0:57:00 Watch how the flywheel spins.
    0:57:03 Anna’s Vogue coverage can make a designer’s career.
    0:57:05 So when she calls, everyone says yes.
    0:57:07 Their presence makes the event matter.
    0:57:09 The coverage reinforces Vogue’s authority.
    0:57:12 That authority attracts next year’s guests.
    0:57:14 Each turn makes the wheel spin faster.
    0:57:19 The $12 million for the Met is impressive, but it’s a distraction from the real genius.
    0:57:23 Anna made herself essential to three industries at once.
    0:57:24 Fashion needs her platform.
    0:57:26 Museums need her funding.
    0:57:28 Entertainment needs her validation.
    0:57:33 If magazines vanish tomorrow, Anna would still control the room where culture gets decided.
    0:57:35 She didn’t just build a better magazine.
    0:57:37 She built better infrastructure.
    0:57:41 In 2020, the fashion industry was devastated by the pandemic.
    0:57:43 Condé Nast was bleeding money.
    0:57:45 Critics were circling Anna like vultures.
    0:57:47 Everyone predicted her fall.
    0:57:48 Instead, she got promoted.
    0:57:52 December 2020, Anna becomes chief content officer of everything.
    0:57:54 Every Condé Nast magazine.
    0:57:55 Every country.
    0:58:00 The New Yorker, Vanity Fair, GQ, they all report to the girl who couldn’t type.
    0:58:03 Looking back from today, that promotion wasn’t a reward.
    0:58:05 It was recognition of reality.
    0:58:08 Anna had built something that transcended job titles.
    0:58:10 Her power rested on three pillars.
    0:58:11 Anticipation.
    0:58:15 She saw the celebrity shift before supermodels peaked.
    0:58:18 Pushed digital while competitors protected print.
    0:58:21 And built platforms while others guarded pages.
    0:58:22 Adaptation.
    0:58:24 Her methods never changed.
    0:58:25 Control the environment.
    0:58:26 Maintain standards.
    0:58:27 Move fast.
    0:58:29 But her tactics evolved constantly.
    0:58:31 Indispensability.
    0:58:32 The magic formula.
    0:58:34 Even when controversial, she stayed profitable.
    0:58:36 Even when criticized, she delivered results.
    0:58:39 Revenue plus relevance equals irreplaceable.
    0:58:42 At 75, Anna controls more than she did at 40.
    0:58:46 Not because she’s holding on, but because she’s built the infrastructure everyone needs.
    0:58:47 Here’s what most people miss.
    0:58:49 Anna didn’t achieve power.
    0:58:50 She architected it.
    0:58:56 The 16-year-old who wrote Editor of Vogue on that form, she got that job in 1988.
    0:58:57 But that was just the beginning.
    0:59:01 She spent the next 40 years building something that couldn’t be taken away.
    0:59:03 Jobs can be lost.
    0:59:04 Titles can be stripped.
    0:59:09 But when you become the platform your entire industry runs on, when you control the room
    0:59:14 where culture gets decided, when three different multi-billion dollar industries need you to
    0:59:16 function, that’s not a career.
    0:59:17 That’s architecture.
    0:59:21 The fashion world that Anna entered in 1975 is dead.
    0:59:25 The magazines, the business model, the culture, all transformed beyond recognition.
    0:59:29 Yet Anna didn’t just survive each transformation.
    0:59:30 She caused them.
    0:59:33 True power isn’t controlling what exists today.
    0:59:35 It’s building what controls tomorrow.
    0:59:41 And tomorrow, like every tomorrow for 40 years, still belongs to Anna Wintour.
    0:59:49 Wow.
    0:59:56 I want to talk about some of my reflections from reading and learning about Anna and what a
    0:59:57 force this woman is.
    1:00:01 There’s a couple of things that didn’t make the episode that I really want to emphasize
    1:00:07 here, but I also want to point out one of her secrets is that she’s direct and clear.
    1:00:10 She is kind, but not nice.
    1:00:13 Kind people will tell you something a nice person won’t.
    1:00:14 She will give you the feedback.
    1:00:16 You know exactly what she’s thinking.
    1:00:18 There are no mixed messages.
    1:00:25 And I think a large part of her success is due to the fact that she’s decisive and clear.
    1:00:31 And I think it gets rid of the wrong people very quickly and the right people love it.
    1:00:32 Okay.
    1:00:36 I want to talk about one of the things that gets talked about a lot online with Anna, which
    1:00:37 is her daily routine.
    1:00:42 It’s practically become internet folklore among productivity geeks.
    1:00:43 She wakes up around five.
    1:00:45 She plays an hour of tennis at dawn.
    1:00:50 And then she consumes a whole bunch of news, British and US newspapers, by breakfast.
    1:00:55 By 8am, she’s in the office, perfectly coiffed, with her Starbucks cappuccino, which is her version
    1:00:57 of breakfast, in hand.
    1:00:59 I got that from the 80 questions video.
    1:01:04 But her routine’s most viral element is perhaps her wardrobe strategy.
    1:01:06 I have a wardrobe full of print dresses.
    1:01:10 So every morning I just go to one of my print dresses of choice and put it on.
    1:01:11 It makes decision making a lot easier.
    1:01:18 That simple hack from the queen of fashion revealed in her Go Ask Anna video series is cited as a
    1:01:21 brilliant way to just figure out what to wear in the morning.
    1:01:22 I mean, think of Steve Jobs.
    1:01:23 He always wore the same thing.
    1:01:26 And Anna Wintour is effectively doing the same thing.
    1:01:27 I think it’s brilliant.
    1:01:28 You don’t have to think too much.
    1:01:29 You can buy a whole bunch of them.
    1:01:30 You know what size fits.
    1:01:31 It’s great.
    1:01:34 I also want to say something a bit underrated with Anna.
    1:01:39 She cultivated immense loyalty by genuinely helping other people.
    1:01:45 It’s a reminder that behind what appears from the outside to be cold, there’s incredible acts
    1:01:47 of generosity in her.
    1:01:49 There are softer anecdotes.
    1:01:54 You know, they might not trend on TikTok, but they circulate in professional communities.
    1:01:58 For instance, you know, one story that sticks in mind that didn’t make this was she helped
    1:02:02 designer John Galliano get his career back on track.
    1:02:08 She gave countless people who didn’t have a name at the time, photographers and assistants,
    1:02:09 a shot.
    1:02:12 And, you know, I want to think about this for a second.
    1:02:16 She might come across as cold to some, but despite what you think from The Devil Wears Prada,
    1:02:20 my sources tell me she was never insulting.
    1:02:23 She valued clarity, speed, and directness.
    1:02:26 She gave direct and honest feedback.
    1:02:28 She’s kind, but not always nice.
    1:02:35 She says people work so much better when feedback is fast, direct, and honest, and they know where
    1:02:36 they are.
    1:02:40 Nobody works well when the atmosphere feels slow and lazy.
    1:02:45 Okay, I want to get into some of the lessons and, you know, some of the recurring themes that
    1:02:46 we see over and over again.
    1:02:50 The first is a taste for salt water.
    1:02:55 Anna spent five years at Harper’s on a skeleton crew of three people doing everything from market
    1:02:57 visits to layouts to captions.
    1:03:00 There was no coffee fetching or filing.
    1:03:03 She was just thrown in completely over her head.
    1:03:06 She said, I was thrown into my career, frankly, with ignorance.
    1:03:07 I knew nothing.
    1:03:12 She treated this grinding apprenticeship as education, not exploitation.
    1:03:15 Most people would have complained or stopped trying.
    1:03:19 That’s why most people don’t get the education that Anna got.
    1:03:21 Two, unreasonable standards.
    1:03:25 Anna returned every borrowed item with original tissue paper intact.
    1:03:30 She’d send steaks back three times for being insufficiently rare, then only eat two bites.
    1:03:35 At Vogue, she instituted the look, a daily assessment of every employee’s appearance from
    1:03:36 shoes to hair.
    1:03:41 Her AWOK system meant nothing, not even a comma, moved without her approval.
    1:03:43 Excellence is a tyrant you invite in.
    1:03:47 Once it moves in, mediocrity has no place to hide.
    1:03:48 Three, high agency.
    1:03:55 When passed over for fashion editor at Harper’s, despite doing the job’s work, Anna didn’t complain
    1:03:56 or negotiate.
    1:03:58 She resigned immediately, taking her assistant with her.
    1:04:02 She moved to New York without a job lined up, betting everything on her vision.
    1:04:04 The system won’t fix itself for you.
    1:04:08 When merit meets politics, choose exodus over argument.
    1:04:10 Four, burn the boats.
    1:04:16 At Viva, the porn-funded fashion magazine, Anna had total creative freedom but zero prestige.
    1:04:22 Rather than job hunting for something respectable, she used the disreputable platform to develop
    1:04:23 her aesthetic without interference.
    1:04:28 She studied European fashion magazines while working at a magazine sold behind the counter.
    1:04:31 Sometimes the worst address is the best classroom.
    1:04:34 Embrace opportunities others are too proud to take.
    1:04:37 Five, bias towards action.
    1:04:39 Anna’s meeting revolution at Vogue.
    1:04:41 Walk in, stand, ask, leave.
    1:04:42 You get two minutes.
    1:04:44 The second is a courtesy.
    1:04:49 The clothing run-throughs that took hours under Mirabella, Anna did them in minutes.
    1:04:51 Yes, no, yes, no, yes, no.
    1:04:52 Goodbye.
    1:04:55 No explanations, no committees, just decisions.
    1:04:59 When people avoided her in the elevator, it wasn’t because she banned them, it was because
    1:05:02 she’d immediately start issuing orders they’d need to write down.
    1:05:04 Decisiveness is a muscle.
    1:05:06 The more you use it, the faster you move.
    1:05:08 Velocity matters.
    1:05:10 Six, outthink, don’t just outwork.
    1:05:17 When her boss at Harper’s wanted advertiser-friendly spreads, Anna would meet photographers in the lobby,
    1:05:20 select only the best shots, and claim no others existed.
    1:05:25 She forced him to choose between her vision and expensive reshoots, and she won every time.
    1:05:26 Don’t fight the system.
    1:05:30 Architect situations where the system has to choose you.
    1:05:32 Seven, don’t care what they think.
    1:05:37 Putting Madonna on Vogue’s cover in 1989 horrified fashion purists.
    1:05:39 The woman had just released a video burning crosses.
    1:05:42 Pepsi had pulled her sponsorship.
    1:05:43 Religious groups wanted boycotts.
    1:05:48 Anna did it anyway because a businessman on a plane said Vogue would never feature Madonna.
    1:05:52 The issue sold 200,000 extra copies.
    1:05:57 When everyone agrees something will never work, that’s precisely when they’ve stopped testing it.
    1:05:59 Consensus kills innovation.
    1:06:01 Eight, positioning is leverage.
    1:06:07 Anna accepted a made-up creative director role at Vogue, officially Mirabella’s deputy,
    1:06:09 but in reality Liberman’s protégé.
    1:06:13 It wasn’t the job she wanted, but it got her foot in the door.
    1:06:17 For three years, she learned the operation while appearing to be number two.
    1:06:21 She’d sit in a meeting, shaking her head, obviously disagreeing with Mirabella,
    1:06:24 playing a longer game than office politics.
    1:06:26 When Mirabella was fired, Anna was ready.
    1:06:30 When you know what you want, the strongest form of positioning is preparation.
    1:06:33 Nine, be a talent collector.
    1:06:39 Anna championed unknown photographers who became legends and built a three-assistant system that
    1:06:42 created Fashion Magazine’s most powerful alumni network.
    1:06:44 Her protégés run fashion globally.
    1:06:48 They learned by watching her negotiate with billionaires and shape culture daily.
    1:06:53 Your legacy isn’t just what you build, it’s who you build with.
    1:06:55 And you can’t buy good company.
    1:06:57 Ten, overmatch.
    1:07:00 Anna didn’t just go digital.
    1:07:07 She forced the entire industry online in 1998, making Vogue.com the platform every designer needed.
    1:07:09 She didn’t compete with other magazines.
    1:07:11 She built the infrastructure they’d have to use.
    1:07:13 The Met Gala wasn’t improved.
    1:07:20 It was weaponized into a $12 million annual event of cultural dominance where she controls
    1:07:23 the guest list, seating charts, and cultural relevance itself.
    1:07:25 Don’t play their game.
    1:07:27 Build the game itself and then charge admission.
    1:07:29 11, win by not losing.
    1:07:35 During the 2008 financial crisis, while other Condé Nast magazines bled out, Vogue remained
    1:07:35 profitable.
    1:07:41 Anna and her publisher had watched euro-dollar exchange rates, built three scenarios, and executed
    1:07:44 their plan while others partied.
    1:07:47 When Bear Stearns collapsed, they were ready.
    1:07:48 They were well-positioned.
    1:07:51 In a crisis, profitable divisions survive.
    1:07:53 Unprofitable ones get cut.
    1:07:55 Excellence matters in good times.
    1:07:57 Profits matter in both.
    1:08:00 When you combine the two, you succeed no matter what.
    1:08:01 What a force.
    1:08:04 Anna, oh my God, I can’t even say enough about her.
    1:08:10 This woman is so amazing and incredible, and I hope you learned as much as I did listening
    1:08:15 to this episode, and I would love to have her as a guest on the podcast, so if you’re listening
    1:08:19 to this and you know how to get in touch with her, I would love to interview her and sit
    1:08:26 down and talk about her and Vogue, and man, what an amazing woman and an amazing story.
    1:08:38 Thanks for listening and learning with us, and be sure to sign up for my free weekly newsletter
    1:08:41 at fs.blog slash newsletter.
    1:08:45 I hope you enjoyed my reflections at the end of this episode; they’re normally reserved
    1:08:50 for members, but with this outlier series, I wanted to make them available to everyone.
    1:08:55 The Farnam Street website is where you can get more info on our membership program, which
    1:09:02 includes access to episode transcripts, reflections for all episodes, my updated repository featuring
    1:09:07 highlights from the books used in this series, and more. Plus, be sure to follow myself and
    1:09:13 Farnam Street on X, Instagram, and LinkedIn. If you like what we’re doing here, leaving a rating
    1:09:17 and review would mean the world. And if you really like us, sharing with a friend is the best way to
    1:09:20 grow this special series. Until next time.

    The job was editor-in-chief. The goal was to become the platform. And she did. 

    Once she made it to the top, she didn’t just edit Vogue. She reinvented the power structures beneath it. This episode unpacks how a British girl who couldn’t type built the most bulletproof career in media, survived five decades of disruption, and made herself indispensable to fashion, politics, and culture.  

    You’ll hear how she weaponized speed over perfection, fired half the Vogue staff in three days, and turned a porn-funded job into a fashion laboratory. Why she said “Your job” when asked what she wanted. Why she put Madonna on the cover at the peak of a scandal. Why standards—not popularity—are her real moat. It’s not about fashion. It’s about building systems no one can take from you.  

    Most people aim for realistic. Anna Wintour named her destination—Editor of Vogue—at sixteen, then built a ladder no one else could climb. 

    This episode is for informational purposes only and is based on Amy Odell’s Anna: The Biography. Simon & Schuster, 2022. 

    Check out highlights from these books in our repository, and find key lessons from Wintour here—https://fs.blog/knowledge-project-podcast/outliers-anna-wintour/

    Approximate timestamps: Subject to variation due to dynamically inserted ads:

    (03:48) PART 1: A Childhood Defined: The Girl Who Couldn’t Type
    (05:50) Anna Chooses Her Path
    (07:28) Learning by Drowning
    (09:46) The Tyranny of Standards
    (12:01) When Merit Meets Reality

    (13:44) PART 2: Conquering New York: The Quiet Revolutionary
    (16:05) Quiet Focus
    (18:10) The Best Worst Job
    (19:29) A Reputation from Nothing
    (21:00) In the Wilderness
    (22:39) The Preparation Advantage
    (25:40) The Audacity Play
    (27:22) The London Interlude
    (28:44) The Execution

    (30:19) PART 3: Vogue’s Transformation: The Devil in the Details
    (32:04) Speed as Strategy
    (34:56) The Celebrity Revolution
    (38:44) The Three-Assistant Solution
    (41:07) Balancing Art and Commerce
    (43:11) Cannibalizing Yourself First

    (46:46) PART 4: Anna’s Empire: The Power of Compartmentalization
    (48:05) The Empire Strategy
    (49:44) Crisis as Opportunity
    (51:58) The Digital Reinvention
    (53:27) The Currency of Influence
    (54:36) The Machine Anna Built
    (56:11) The Persistence of Power

    (58:23) Reflections, afterthoughts, and lessons

    Upgrade—If you want to hear my thoughts and reflections at the end of all episodes, join our membership: fs.blog/membership and get your own private feed.

    Newsletter—The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it’s completely free. Learn more and sign up at fs.blog/newsletter

    Follow Shane on X at: x.com/ShaneAParrish

  • Turn Your Idea Into a Working App With One Prompt (Live Demo)

    AI transcript
    0:00:03 Welcome to the Next Wave Podcast.
    0:00:09 I’m Matt Wolfe, and today’s guest is building what he calls the last piece of software the
    0:00:11 world will ever need.
    0:00:17 Anton OCK is the CEO of Lovable, a platform that lets anyone build fully working software
    0:00:19 just by describing what they want.
    0:00:23 No code, no dev team, just your ideas and AI.
    0:00:29 In this episode, we talk about what that means for the future of software development, entrepreneurship,
    0:00:30 and even AGI.
    0:00:37 Anton shows off a live demo of Lovable, shares how kids and solo founders are launching real
    0:00:42 businesses in just hours, and breaks down how this tech could level the playing field for
    0:00:43 creators everywhere.
    0:00:49 If you’re building anything, tools, products, businesses, this conversation might just change
    0:00:51 how you think about the whole process.
    0:00:54 So let’s dive in with Anton from Lovable.
    0:01:02 This episode is brought to you by HubSpot Inbound 2025, a three-day experience at the heart
    0:01:07 of San Francisco’s AI and startup scene, happening September 3rd through the 5th.
    0:01:13 With speakers like Amy Poehler, Marques Brownlee, and Dario Amodei, you’ll get tactical breakout
    0:01:18 sessions, product reveals, and networking with the people shaping the future of business.
    0:01:19 Don’t miss out.
    0:01:23 Visit inbound.com forward slash register to get your ticket today.
    0:01:27 Hey, Anton.
    0:01:28 Thanks for joining us on the show.
    0:01:29 How are you doing today?
    0:01:30 It’s great to see you, Matt.
    0:01:31 I’m good.
    0:01:31 Yeah.
    0:01:32 Awesome.
    0:01:33 Well, I don’t want to waste your time.
    0:01:34 Let’s just jump right into it.
    0:01:36 And let’s talk about Lovable.
    0:01:39 So I’m curious a little bit about your backstory.
    0:01:40 What were you doing before Lovable?
    0:01:42 How did Lovable come about?
    0:01:43 Why did you decide to build it?
    0:01:44 Let’s get the background a little bit.
    0:01:45 Yeah, sure.
    0:01:47 I go back to my childhood.
    0:01:52 No, but I was always a kid that picked apart technology and wanted to understand everything.
    0:01:57 And then I found this way to create games when I was like 12 and got books in the library
    0:01:58 to learn how to code.
    0:02:01 I decided to study at university, as everyone did then.
    0:02:04 And then I thought, computer science?
    0:02:05 No, I’m going to go into physics.
    0:02:11 Because that was where all the people who became the most generalist, both in academia and industry,
    0:02:13 studied here, where I’m from, in Stockholm, Sweden.
    0:02:15 That was amazing.
    0:02:19 I took way too many courses in machine learning, computer science, AI, and math.
    0:02:24 And then what I realized, I was at this place where they discovered the Higgs boson,
    0:02:27 the particle accelerator in CERN, for three months.
    0:02:28 I was like, amazing.
    0:02:31 There’s 10,000 super smart people here.
    0:02:35 But they’re trying to solve a problem that is very hard to solve.
    0:02:36 It’s very inelastic.
    0:02:38 You don’t have any real-world impact.
    0:02:41 And I’m obsessed about impact and making things happen.
    0:02:44 Then I understood, I’m not going to be staying on this track in academia.
    0:02:48 So I went into building things in the industry.
    0:02:51 And for the last 10 years, I’ve been building AI products.
    0:02:55 And I’ve been specifically building great teams that build AI products together
    0:02:59 at two of the very, very well-known AI startups here from Stockholm.
    0:03:03 And then a bit more than one and a half years ago, I decided to start a new company,
    0:03:04 which is what I’m building now.
    0:03:08 So with Lovable, can you sort of give the elevator pitch?
    0:03:10 Like when somebody asks you, what is Lovable?
    0:03:11 How do you describe it to them?
    0:03:15 It’s a way to take as the prompt an explanation of an application.
    0:03:20 And then AI will build that application for you like it was a software engineer and deploy it.
    0:03:21 I think it was you.
    0:03:25 I think you said at one point that you’re trying to build the last piece of software.
    0:03:27 What does that mean?
    0:03:28 Yeah, sure.
    0:03:32 I can take you back to what spurred us to start the company, Lovable.
    0:03:39 Look, it was early 2023 when it became clear to me that this next generation of AI can actually start to reason.
    0:03:41 And it’s specifically good at writing code.
    0:03:48 If you put it into an advanced system where the reasoning engine is used to take the decision on behalf of a human,
    0:03:55 then you’re going to be able to build a completely new type of interface to build software products.
    0:03:58 And this AI is going to help developers become more productive.
    0:04:05 But the more interesting unlock here is for the 99% who never learned how to code.
    0:04:09 I’m not sure about you, Matt, but you’ve probably been frustrated by the difficulty finding great software engineers.
    0:04:13 And so I was like my mom and everyone just asked me, how do I find a great software engineer?
    0:04:18 So this new interface, you talk to an AI and it builds your product.
    0:04:20 You build it together with the AI.
    0:04:25 It’s going to let the 99% go from zero to one and enable anyone to,
    0:04:31 unlock the creativity to build great companies, to just create things, create great software,
    0:04:34 and to build businesses on top of it.
    0:04:36 So that’s what we’re set out to do.
    0:04:37 And the last piece of software is this.
    0:04:40 It’s a platform to build software products.
    0:04:44 And it’s going to make sure that humans don’t need to write code anymore if they don’t want to.
    0:04:52 So with that in mind, what do you think the role of a software developer or software engineer, what does that look like in the future?
    0:04:55 I talk to people who ask me this, like, Anton, what should I do?
    0:04:57 I’m an engineer.
    0:05:06 And I think engineers should just always put on the hat of, and see themselves as, the person who translates a real-world problem into a technical solution.
    0:05:11 And that means different things depending on what type of engineer you are, and it changes over time what you do.
    0:05:16 And using AI is, of course, going to be a larger, larger part of that translation.
    0:05:25 And now I think what happens when you have AI that makes it faster to create software is that there’s going to be just much more software,
    0:05:30 and there’s going to be more iteration cycles to make each piece of software very, very good.
    0:05:34 And the jargon for that in some tech companies is to make them lovable.
    0:05:40 So I think that’s actually the end outcome of lowering the barriers through AI and new platforms like ours,
    0:05:45 making it very, very easy to take an idea, write it in, and you get a full working product.
    0:05:49 Right, right. I almost see it as like, you know, like with a symphony, right?
    0:05:54 It’s almost like taking the people that are playing the instruments and moving them to the conductor, right?
    0:05:59 Now everybody can be the conductor, and you’re telling the various instruments what to do now.
    0:06:03 Instead of actually being the player of the instrument, you get to become the conductor.
    0:06:05 Yeah, it hooks people.
    0:06:14 Yeah, and one thing I’ve loved about this sort of new era of like AI software development is I don’t need to necessarily
    0:06:22 build a SaaS product that I’m going out and trying to like raise capital on or, you know, sell it on a monthly fee or anything like that.
    0:06:28 I can find like little tiny bottlenecks in my business, little like holes of things that I’m like,
    0:06:30 all right, that is kind of a pain in the butt.
    0:06:31 I don’t like doing that every day.
    0:06:35 Let me build a little software for myself that just fixes that for me.
    0:06:40 I can just build it for myself and not have to worry about like trying to build a business around it.
    0:06:44 And I think that is one of the coolest things that software like this enables, in my opinion.
    0:06:47 Yeah, the personal software trend is also very big.
    0:06:48 It’s getting larger.
    0:06:49 Yeah, yeah.
    0:06:56 So let’s go ahead and maybe jump into Lovable and give a quick demo, show people what it’s sort of capable of.
    0:07:01 I know you have a project that you kind of already started working on that we can jump in and tweak with.
    0:07:01 Sure.
    0:07:05 I prepared a project right ahead of a previous call.
    0:07:08 And what I wanted to have in that call, I was going to be asking questions.
    0:07:10 And you can use it to ask me questions.
    0:07:12 It’s like a webinar Q&A app.
    0:07:14 So anyone can enter and input a question.
    0:07:18 And I’ll just focus first on what happened as I built this out.
    0:07:21 So I basically went to Lovable, which looks like this.
    0:07:25 And I put in the first prompt, which was mock-up, the webinar question app.
    0:07:33 So then I didn’t want it to be a fully working product where it works across devices, just a mock-up.
    0:07:36 And then Lovable went ahead and said like, okay, hey, I’m going to build this for you.
    0:07:37 I’m going to choose simple design.
    0:07:43 And then it tells me like, if I want to get back in functionality, you can use the Superbase to connect.
    0:07:45 And it lets me understand that better.
    0:07:51 So then what happened was I got this mock-up UI you can see here, but it doesn’t sync across devices.
    0:07:55 So if someone opens this website in one place and answers a question, I don’t see it.
    0:07:57 So I just asked AI, how do I do that?
    0:07:59 And that’s a big part of like, we’re working with AI.
    0:08:01 If you don’t understand, you can just ask.
    0:08:01 Yeah.
    0:08:07 One thing that I like about what you’re showing here too, is that it actually recommended Supabase, right?
    0:08:12 Like if you’re trying to develop software and you don’t necessarily know much about software,
    0:08:15 you might not know that you need this to be connected to a backend database.
    0:08:20 So it’s cool to me that it’s going, hey, we could build this for you, but you also need a database.
    0:08:21 Here’s what we recommend.
    0:08:22 Yeah.
    0:08:28 This is a native integration at this point, because most startups and simple projects that are successful,
    0:08:30 they start on Supabase.
    0:08:31 So it’s a very popular choice.
    0:08:36 And what it does is it told me like, okay, yeah, you need to just connect Supabase.
    0:08:39 And then what I did, I went up here and said, yeah, connect to a new project.
    0:08:40 Now it’s connected.
    0:08:48 And then it tells me, okay, now you can go ahead and add AI functionality, login, or just store data.
    0:08:50 So I said, add real-time sync as it explained.
    0:08:55 And then it says, I’m going to create the data table, like the place to store the questions.
    0:08:59 And I’ll change some in the UI to handle it, to connect to the database.
    0:09:02 And then I had to approve, like, okay, run this code.
    0:09:03 I got an error.
    0:09:04 And then I said, okay, fix the error.
    0:09:06 And then it worked.
    0:09:08 And so that’s what happened when I built this.
    0:09:12 And if I just open this application, I can send it to you as well.
    0:09:15 It’s Q&A dot lovable dot app.
    0:09:19 If you go in, you ask me a question, you can try to do it live.
    0:09:24 Then it should synchronize real-time across any number of devices, which is a really useful,
    0:09:25 simple tool.
    0:09:26 And no one has to log in.
    0:09:28 It’s just like, you go to the, you scan the QR code.
    0:09:29 I ask for a QR code afterwards.
    0:09:35 And then I can always pick up this app if I want to have like a Q&A session with a company
    0:09:36 or with someone I’m presenting to.
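    A quick aside for anyone wondering what the real-time sync Anton describes looks like in code: it maps onto Supabase’s client library. Here’s a minimal sketch in TypeScript, assuming a hypothetical questions table with a text column; Lovable generates its own schema, so the actual names and keys will differ.

        import { createClient } from '@supabase/supabase-js'

        // Placeholders: the project URL and anon key come from the Supabase dashboard.
        const supabase = createClient('https://YOUR-PROJECT.supabase.co', 'YOUR_ANON_KEY')

        // Submitting a question inserts one row into the hypothetical `questions` table.
        async function submitQuestion(text: string) {
          const { error } = await supabase.from('questions').insert({ text })
          if (error) console.error('insert failed:', error.message)
        }

        // Real-time sync: every subscribed device is pushed each new question
        // the moment any other device inserts one.
        supabase
          .channel('questions-feed')
          .on(
            'postgres_changes',
            { event: 'INSERT', schema: 'public', table: 'questions' },
            (payload) => console.log('new question:', payload.new)
          )
          .subscribe()

    Realtime also has to be enabled for the table on the Supabase side, which is the kind of wiring Lovable handles when it creates the data table for you.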
    0:09:37 So let me see.
    0:09:40 I’ve actually got it open on my screen right now.
    0:09:40 Let me see.
    0:09:42 If I, let’s see, what should I ask?
    0:09:44 Let’s see.
    0:09:45 Nothing too personal.
    0:09:46 Okay.
    0:09:47 Let’s see.
    0:09:51 What is the coolest app you’ve seen built with Lovable?
    0:09:52 Let’s see.
    0:09:53 Submit question.
    0:09:54 Okay.
    0:09:54 That worked.
    0:09:55 There it is.
    0:09:57 So I actually typed that on my screen,
    0:10:00 but you’re seeing it on Anton’s screen if you’re watching the video.
    0:10:00 Yeah.
    0:10:01 Awesome.
    0:10:03 So that’s a good question.
    0:10:05 What’s the coolest app I’ve seen built?
    0:10:08 I think I saw a better version of ChatGPT,
    0:10:09 which I liked.
    0:10:13 Like it had more keyboard shortcuts and way to like customize each thread.
    0:10:19 I really like that because a lot of the innovation right now actually happens on the AI intersection,
    0:10:22 the user interface, like the user experience side.
    0:10:25 And that was one of the really cool apps I saw.
    0:10:27 It’s also fun to see that people are launching,
    0:10:30 we built a lovable app for people to launch the things they built.
    0:10:31 Oh, cool.
    0:10:33 Almost got like a Reddit style upvote.
    0:10:35 And then you get like for projects that I’ve been around,
    0:10:38 there’s lots of people getting users through this.
    0:10:39 And I haven’t seen all of them,
    0:10:41 but there’s so many cool things that people build.
    0:10:44 That was just like me demoing how lovable works.
    0:10:47 What I could do next is to just show you like how the AI handles a change.
    0:10:50 But if I want something to happen instantly,
    0:10:51 because the AI is not instant,
    0:10:56 I could show you that something you can do here is that you can edit text and style
    0:10:58 by just selecting it similar to a website editor.
    0:11:01 So you don’t need to wait for the AI to make little sort of changes.
    0:11:02 Yeah, for small changes, exactly.
    0:11:04 Exactly, yeah.
    0:11:05 But let me first ask you something.
    0:11:08 Do you have any style you really like to see this in?
    0:11:13 I mean, I typically like my websites in dark mode,
    0:11:16 and I always like to use like blues and purples,
    0:11:17 like my background.
    0:11:23 So we will say something about that, like dark mode, blues and purples,
    0:11:29 and a cool hacker font, and just make it look better.
    0:11:33 And what I sometimes do is I just attach a screenshot to the AI.
    0:11:35 I pasted it in here for it to see like,
    0:11:36 how does it look now?
    0:11:38 Let’s make it look more beautiful.
    0:11:40 But this is just how lovable works.
    0:11:42 You can do some more things,
    0:11:44 like you can customize knowledge.
    0:11:45 if you wanted to remember,
    0:11:49 like always use this way of connecting to an API
    0:11:51 that you wanted to use for some type of integration.
    0:11:54 If you want your engineer colleague to edit it,
    0:11:57 it’s all kept two-way synchronized for them to go to GitHub,
    0:12:00 which is like how engineers build software to date.
    0:12:00 Right.
    0:12:02 And you can invite collaborators,
    0:12:04 so I could send you this link,
    0:12:08 and then you’ll be able to edit this project if you want to.
    0:12:09 That’s most of it.
    0:12:11 We’re waiting for the AI to run the change here.
    0:12:14 And the last part that a lot of people are missing here
    0:12:15 is to add the custom domain,
    0:12:16 which you might want to host.
    0:12:20 You might want to buy a website domain for your project.
    0:12:23 And I think you can even do that inside of Lovable now.
    0:12:25 So does Lovable host the whole thing,
    0:12:27 and then you’re just sort of pointing your domain name to Lovable?
    0:12:28 Yeah.
    0:12:29 I can just select the domain here.
    0:12:31 I don’t think this domain is free.
    0:12:33 I’ll pay inside of this flow.
    0:12:34 And then it’s all hosted.
    0:12:37 Like we’re using state-of-the-art hosting infrastructure.
    0:12:40 Yeah, but I imagine if somebody did want to export all the code
    0:12:42 and bring it over to their own host or whatever,
    0:12:43 they could do that as well.
    0:12:44 Yeah.
    0:12:45 So it’s all super flexible.
    0:12:49 You can do anything that a human engineer would like to do with this setup.
    0:12:49 Oh, nice.
    0:12:52 Here’s our new style.
    0:12:54 I’m not sure I love it, but…
    0:12:55 Hey, it put in what you asked.
    0:12:57 Yes, that’s true.
    0:12:58 That’s super cool.
    0:12:59 Yeah, so that’s it.
    0:13:06 The Hustle Daily Show, hosted by Jon Weigell, Juliet Bennett Rylah, and Mark Dent,
    0:13:09 is brought to you by the HubSpot Podcast Network,
    0:13:11 the audio destination for business professionals.
    0:13:15 The Hustle Daily Show brings you a healthy dose of irreverent,
    0:13:18 offbeat, and informative takes on business and tech news.
    0:13:23 They recently had an episode about advertisers wanting billboards in space.
    0:13:26 It was a really fun and informative episode.
    0:13:27 I suggest you check it out.
    0:13:30 Listen to The Hustle Daily Show wherever you get your podcasts.
    0:13:37 So one of the things, like, when it comes to, you know,
    0:13:40 using AI for code that I’ve ran into a few times,
    0:13:44 is like, I’ll have it build something, and then there’ll be a bug,
    0:13:46 and then I’ll say, hey, this bug is popping up.
    0:13:47 Can you fix it?
    0:13:50 It’ll fix that bug, and then maybe introduce a new bug.
    0:13:55 Or it’ll keep on, like, having that same bug over and over and over again.
    0:13:59 I know a lot of the LLMs have gotten better and better and better over time.
    0:14:00 We’ve now got Claude 4.
    0:14:02 We’ve got Gemini 2.5 Pro.
    0:14:04 A lot of these LLMs have gotten a lot better at coding.
    0:14:07 But I’m curious, like, how does Lovable specifically
    0:14:10 maybe help overcome some of that kind of frustration
    0:14:12 that, like, the Vibe coders might have?
    0:14:13 Yeah.
    0:14:17 So Lovable is not just a call to Cloud, like the new Cloud model.
    0:14:22 It does a few agentic chain, which is that it tries to understand,
    0:14:23 okay, what’s the context here?
    0:14:27 Like, exactly what information is most relevant
    0:14:30 to solve this specific problem that you’re having.
    0:14:32 If you’re seeing a repeated bug, like, that’s one type of situation.
    0:14:38 And then we’re applying best practices that we’ve been iterated ourselves towards
    0:14:41 to solve that specific type of context that you’re in.
    0:14:43 Okay, you’re stuck with the same type of bug.
    0:14:45 And we feed in some of those best practices
    0:14:48 that are, like, adapted to work
    0:14:53 for the specific technology stack that Lovable applications are built on.
    0:14:54 So that’s what we do to date.
    0:14:56 And another important thing is that we have to give access
    0:15:00 to, like, what human engineers use to debug,
    0:15:05 which is that the AI is able to read all the error messages
    0:15:08 and, like, all the logs that are created
    0:15:10 as the user interacts with the website.
    0:15:12 So that’s fed into the AI system.
    0:15:14 So then you can see that if there’s a bug,
    0:15:16 it can really get much more of a picture of, like,
    0:15:17 what actually happened here,
    0:15:21 and then use that in terms of figuring out the error.
    0:15:23 That’s what takes most time for most software engineers
    0:15:25 to understand what it is exactly that goes wrong.
    0:15:26 It doesn’t work
    0:15:30 is not sufficient information to fix it.
    0:15:31 So those are some of the things we’re doing,
    0:15:33 and we’re working on a lot more.
    0:15:33 Very cool.
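    The log-feeding approach Anton outlines is easy to picture in code. Here’s a minimal sketch in TypeScript of capturing browser errors so they can be handed to a model as debugging context; everything here is a hypothetical illustration, not Lovable’s actual internals.

        // Rolling buffer of recent runtime errors and console output.
        const logBuffer: string[] = []

        // Capture uncaught errors anywhere on the page.
        window.addEventListener('error', (e) => {
          logBuffer.push(`Error: ${e.message} at ${e.filename}:${e.lineno}`)
        })

        // Mirror console.error into the buffer without silencing it.
        const originalError = console.error
        console.error = (...args: unknown[]) => {
          logBuffer.push(args.map(String).join(' '))
          originalError(...args)
        }

        // Pair the user's bug report with the last 20 log lines, so the model
        // sees what actually happened instead of just "it doesn't work."
        function buildDebugPrompt(userReport: string): string {
          return [
            'The user reports a bug:',
            userReport,
            '',
            'Recent runtime errors and logs:',
            ...logBuffer.slice(-20),
          ].join('\n')
        }

    The point of the sketch: “it doesn’t work” becomes twenty concrete log lines the model can reason over.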
    0:15:35 This is sort of getting more into the, like,
    0:15:37 theoretical, philosophical kind of area.
    0:15:40 And I’m not sure if this is something you’ve thought about or not.
    0:15:43 So if you haven’t, we can just always skip over it.
    0:15:47 But I’m curious, like, when everybody has access
    0:15:50 to be able to develop any software,
    0:15:54 how do companies, like, actually build a moat?
    0:15:56 Like, have you thought about that at all?
    0:15:59 Like, how would a software company actually build something
    0:16:01 if, like, anybody else can just go and use a tool
    0:16:02 to build the same thing?
    0:16:04 Like, how do businesses get formed around this kind of thing?
    0:16:08 I mean, you don’t need a moat to build a great business.
    0:16:08 Right.
    0:16:09 I don’t think so.
    0:16:13 And, I mean, most of the moats are the same.
    0:16:18 I think there is a bit of a moat in terms of just trust
    0:16:19 and knowing who’s behind this,
    0:16:23 that these people who built this have my best intentions at heart.
    0:16:23 Right.
    0:16:25 That will always remain.
    0:16:28 Then there are, like, network effects moats.
    0:16:31 So I guess, like, one on that all your friends are using it,
    0:16:32 and then it becomes more productive to use this tool.
    0:16:36 And one on that this tool is connected to everything else out there,
    0:16:37 and that’s, like, a network platform effect.
    0:16:38 Right.
    0:16:42 And I think, like, maybe at some point there’s an economy of scale as well,
    0:16:45 like, that you can make it give a better value proposition
    0:16:45 because you’re larger.
    0:16:49 That hasn’t shown to be so useful in software businesses,
    0:16:51 but maybe that’s going to be the case in the future as well.
    0:16:52 Yeah, yeah, yeah.
    0:16:55 So I want to shift the conversation quickly to the topic of AGI,
    0:16:57 because I know in the past you’ve also mentioned
    0:17:00 that you want to help contribute to getting to AGI.
    0:17:03 So a couple questions there.
    0:17:04 A, how would you define AGI?
    0:17:08 Because I feel like, obviously, everybody kind of has a slightly different definition.
    0:17:11 And then the follow-up to that is, like, how does Lovable,
    0:17:14 or what you’re trying to build, sort of get us closer to that?
    0:17:15 Yeah.
    0:17:23 My favorite definition is that at the point when anything you could hire a human remote
    0:17:27 worker for can be done with AI, that’s when you have AGI.
    0:17:32 It hedges you against that humans can do some things that only humans can,
    0:17:34 because we humans don’t want to talk to a machine.
    0:17:35 We want to talk to a human.
    0:17:39 But if it’s a remote, like a remote and over Slack, I think,
    0:17:41 then it’s only cognitive labor.
    0:17:42 It’s a pure cognitive labor.
    0:17:45 So I think that’s a pretty clear way to define,
    0:17:50 is it intelligence and not just a human that has networks and connections with other humans?
    0:17:51 Right.
    0:17:53 But how do we get there?
    0:18:00 I think building systems that execute, write and execute code is a huge part of this.
    0:18:08 And increasingly, what I’m thinking is that we are not going to work on the foundation model layer.
    0:18:14 We’re creating the most delightful and intuitive interface to interface with this type of technology.
    0:18:21 And now it’s for spinning up software, and that’s hosted, available for anyone.
    0:18:26 In the future, we’re naturally adding a lot more types of interactions into our interface,
    0:18:29 like, oh, browse the web for me and find all these interfaces, these things,
    0:18:34 and then put them into a website, or even like, okay, can you check for all the feature requests
    0:18:36 from my email and then implement some of them?
    0:18:41 Like, that’s the type of direction that we would go for on the very long term.
    0:18:47 And that type of interface is going to be one of the most important things in how humans perceive
    0:18:49 the level that AI has reached, right?
    0:18:53 Like, it’s a good friend of mine told me way, way back, 10 years ago,
    0:18:55 like, the AI is never better than the UI.
    0:18:59 So if you can’t, as a human, get value from it, then it’s worthless, right?
    0:18:59 Yeah.
    0:19:04 Do you think that, like, the future of UI, the future of user interface is going to be
    0:19:10 what it is now, where people are sort of typing prompts, and we have these sort of visual user
    0:19:13 interfaces, or do you think it’s going to switch to some alternate modality?
    0:19:17 No, I think it’s going to be pure mind reading in the future, right?
    0:19:19 We don’t know what it’s going to be yet.
    0:19:25 It’s going to be a combination of things. We humans are really good at getting a lot
    0:19:31 of information visually, so that’s going to continue to be a big part, and we’re
    0:19:35 not as good at getting a lot of information by reading text, I think, as just looking
    0:19:37 at a picture and boom, you get a lot of information really fast.
    0:19:38 Right.
    0:19:42 That’s going to be a part of it, and then how you as a human communicate as much
    0:19:46 information as possible to an AI is going to be important as well.
    0:19:51 I mean, at some point, I do think we’re going to see more and more adoption of, like, brain
    0:19:57 computer interfaces, but just speaking or lip reading might be like an emerging pattern
    0:19:58 UX-wise with AI.
    0:19:59 Yeah, yeah.
    0:20:03 You know, I had some chats with people over at, like, Microsoft and Google and that sort
    0:20:10 of thing, and sort of their position is that they want AI to be much more predictive, right?
    0:20:14 Like, it knows what you want to do before you ask it to do the thing.
    0:20:18 It’s going to get to this point where it starts to understand you, it understands your patterns,
    0:20:23 it understands what you do on a daily basis, anticipates it, and then just gets ahead of
    0:20:23 you on it.
    0:20:28 So, I don’t know, to me, that’s like a real sort of fascinating future that we’re heading
    0:20:29 into with AI.
    0:20:30 Yeah, that’s huge.
    0:20:35 Is there anything else that we didn’t cover about, you know, Lovable and what you’ve been
    0:20:37 building that you think we should be covering?
    0:20:42 I could talk a bit about, like, where this technology is giving the most value today.
    0:20:48 We spoke to one of our users recently, Felipe. It was a very fun story: he had
    0:20:50 built large companies before.
    0:20:55 He raised $50 million, hired 130 engineers, and now he’s past that.
    0:20:59 And he’s just building a business himself using Lovable.
    0:21:04 And then he can take all his ideas and act on them instantly, and it moves so much faster, which is a bit
    0:21:08 paradoxical compared to having this large organization where there are many chains of communication.
    0:21:12 And it sounds very productive to have 130 engineers, right?
    0:21:17 But he’s making tens of thousands of dollars on this, like, small new business that he’s
    0:21:18 growing organically.
    0:21:23 This is like the stereotypical AI-native founder that I think we’re going to hear a lot more
    0:21:27 from, where one person can build much faster than larger companies.
    0:21:33 And we’re heading toward where they let AI do more and more of, you know, the building part, but also
    0:21:34 the marketing side.
    0:21:37 And all of that is going to be like one human and a lot of AI systems.
    0:21:39 So that’s what we’re seeing.
    0:21:43 What also inspires me a lot is that kids love to use Lovable because they are super creative,
    0:21:43 right?
    0:21:45 They love to create things.
    0:21:49 And I’ve seen many 14-year-olds and even younger who post that they’re selling
    0:21:54 something online, or offering services to walk dogs, with a website built on Lovable.
    0:22:01 And increasingly now, since we launched a Teams plan recently, Lovable is getting huge in
    0:22:05 larger companies, like Fortune 500 companies, where individuals and teams use it.
    0:22:11 It accelerates how the team and the entire company makes decisions, in terms of,
    0:22:12 like, okay, we should really build this thing.
    0:22:13 Look, I’ve already built it.
    0:22:14 It’s working.
    0:22:20 And then it’s used in engineering, and for building tools that accelerate finance,
    0:22:23 marketing, building landing pages and all of that.
    0:22:26 So it’s fun that it’s a tool that’s being used for so many different things.
    0:22:28 And we’re just keeping up on our side.
    0:22:33 Yeah, I think last year at some point, Sam Altman from OpenAI mentioned that he thinks
    0:22:37 within the next couple of years, we’re going to see the first billion-dollar company built
    0:22:38 by one person, right?
    0:22:43 And then Dario from Anthropic just said it again, like, two weeks ago, that he thinks within
    0:22:50 2026, we’ll probably see the first one-person billion-dollar company, which is absolutely wild.
    0:22:52 Also, you mentioned kids are building apps.
    0:22:57 I actually had a conversation with Kevin Scott, the CTO of Microsoft, and he told this whole
    0:23:01 story about how his daughter built an entire app for her school.
    0:23:05 And he was frustrated because she didn’t even consult with him.
    0:23:06 And he’s a software developer.
    0:23:08 She just went and built it herself.
    0:23:12 I mean, what you’re saying, we’re definitely seeing more and more of.
    0:23:18 It’s fun that kids are, of course, generally going to be better than older people at using AI.
    0:23:22 And it’s going to be such a difference in how productive you are if you’re good at using
    0:23:22 AI.
    0:23:23 Yeah, yeah.
    0:23:26 I have one small rabbit hole I want to go down with you really, really quickly.
    0:23:32 I’m curious how the developer community as a whole has received something like Lovable.
    0:23:36 Because I know some developers probably absolutely love it.
    0:23:37 It speeds up their time.
    0:23:43 But then there’s also that sort of existential fear that their skill that they’ve been building
    0:23:43 is no longer needed.
    0:23:45 How has the reception been so far?
    0:23:50 If you just look at the product, the features, a lot of developers love that you just create
    0:23:51 a fully working application.
    0:23:57 And then if you want to go in and customize, you use your normal IDE and just sync it with the
    0:23:58 code base.
    0:24:04 And then you’re getting more done and you’re shipping more value to your customer or your employer.
    0:24:07 And that’s like a very positive reception generally.
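
    For readers who want the shape of that IDE workflow: assuming the generated project is synced to an ordinary Git remote (the repository URL and directory below are made-up placeholders), the local loop is just standard Git. A sketch, not a description of Lovable's specific sync mechanism:

    ```python
    # Hypothetical local-edit loop for an AI-generated app whose code base is
    # synced to a Git remote. The repo URL is a placeholder; these are plain
    # Git commands, not Lovable-specific tooling.
    import subprocess

    REPO = "git@github.com:example/my-generated-app.git"  # placeholder URL
    DIR = "my-generated-app"

    def git(*args: str) -> None:
        # check=True raises if any Git command fails.
        subprocess.run(["git", *args], check=True)

    git("clone", REPO, DIR)              # pull down the generated code base
    # ... open DIR in your normal IDE and customize ...
    git("-C", DIR, "add", "-A")          # stage local edits
    git("-C", DIR, "commit", "-m", "Customize generated app")
    git("-C", DIR, "push")               # sync changes back to the remote
    ```
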
    0:24:10 If you do zoom out and you’re like, oh, but wait, where is this actually headed?
    0:24:11 Yeah.
    0:24:18 Then it is the case that people are like, wait, what’s my role in all of this?
    0:24:24 But I think it’s not so different for anyone working in white-collar jobs: everything
    0:24:26 is going to be easier and easier to automate.
    0:24:31 If you stay on top of these tools and master them, you’re going to be much, much more valuable in the
    0:24:32 workplace.
    0:24:37 But otherwise, you’re maybe not going to have as cushy a job as software engineering
    0:24:37 has been.
    0:24:41 You’re going to have to combine it with doing sales or doing something more manual
    0:24:42 as well.
    0:24:47 There’s potentially going to be a long-term reduction in how many people sit and build
    0:24:47 software.
    0:24:48 Yeah.
    0:24:49 No, I couldn’t agree more.
    0:24:53 And like you mentioned, a lot of the white collar work is in that same boat, right?
    0:24:57 Like if your job is sitting around looking at Excel spreadsheets all day or bookkeeping
    0:25:04 or doing research for a law firm, a lot of that work is also going to probably get automated
    0:25:07 away through AI fairly quickly too.
    0:25:12 And just one thing on software: the demand for software doesn’t end.
    0:25:17 There seems to be a lot of things that can be improved with software.
    0:25:22 And hence, there are going to be more people building software, but maybe fewer people writing code.
    0:25:23 That’s how I see it.
    0:25:24 Oh yeah.
    0:25:25 That makes a lot of sense to me.
    0:25:28 Well, Anton, this has been absolutely amazing.
    0:25:32 I know you have to get off to another meeting, so I don’t want to waste any more of your time.
    0:25:34 So the app is over at lovable.app.
    0:25:36 That’s the best place to go to get it?
    0:25:36 Yeah.
    0:25:38 Or lovable.dev is the normal one.
    0:25:39 Oh, lovable.dev.
    0:25:39 Okay.
    0:25:41 So head over to lovable.dev.
    0:25:45 Is there any place that you maybe want people to follow you on social media or anything like
    0:25:46 that after listening to this interview?
    0:25:53 I share fun takes on building from Europe and on the AI space and what’s happening with
    0:25:55 our advancements at my Twitter.
    0:25:57 That’s my first and last name combined.
    0:25:58 Awesome.
    0:26:01 Well, thank you so much for hanging out with me and having this conversation.
    0:26:06 It’s been really fun and, you know, really excited to see how lovable evolves over time.
    0:26:07 So really appreciate it.
    0:26:08 Thank you.
    0:26:08 Likewise.
    0:26:09 It was a pleasure.
    0:26:11 I’m looking forward to another chat in the future.
    0:26:12 Absolutely.
    0:26:27 We’ve got a major announcement.
    0:26:32 HubSpot is the first CRM to launch a deep research connector with ChatGPT.
    0:26:37 Customers can now bring their customer context into the HubSpot deep research connector and
    0:26:39 take action on those insights.
    0:26:43 Now you can do truly remarkable things for your business.
    0:26:48 Customer success teams can quickly surface inactive companies, identify expansion opportunities
    0:26:51 and receive targeted plays to re-engage pipelines.
    0:26:56 Then take those actions in the customer success workspace in HubSpot to drive retention.
    0:27:02 Support teams can analyze seasonal patterns and ticket volume by category to forecast staffing
    0:27:08 needs for the upcoming quarter and activate Breeze customer agents to handle spikes in support
    0:27:08 tickets.
    0:27:11 This truly is a game changer. For the first time ever,
    0:27:17 get the power of ChatGPT, fueled by your CRM data, with no complex setup.
    0:27:23 The HubSpot deep research connector will automatically be available to all HubSpot accounts across
    0:27:27 all tiers that have a ChatGPT team, enterprise, or Edu subscription.
    0:27:33 Turn on the HubSpot deep research connector in ChatGPT to get powerful PhD level insights from
    0:27:34 your customer data.
    0:27:36 Now let’s get back to the show.
    0:27:39 Thank you.

    Episode 63: What if you could turn your idea into a fully working app—just by describing it in plain English? Matt Wolfe (https://x.com/mreflow) sits down with Anton Osika (https://x.com/antonosika), CEO of Lovable, a revolutionary platform that lets anyone build and launch software using AI—no code or development team required.

    In this episode, Anton gives a live demo of Lovable, reveals how creators of all ages—including kids and solo founders—are launching real businesses in hours, and dives into how AI-powered platforms like Lovable will change the future of entrepreneurship, creativity, and even move us closer to AGI. If you’re a builder, maker, or curious about the next frontier in software creation, this conversation will reshape how you think about launching your next product.

    Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

    Show Notes:

    • (00:00) AI-Powered Code Revolution

    • (04:21) Engineers as Problem Translators

    • (07:50) Supabase Integration Simplifies Startups

    • (10:49) Enhancing Design and Collaboration

    • (16:46) Intuitive AI Interface Development

    • (19:31) AI Empowering Solo Entrepreneurs

    • (22:40) Future of Software Development: Automation Impact

    • (24:18) Lovable App

    Mentions:

    Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw

    Check Out Matt’s Stuff:

    • Future Tools – https://futuretools.beehiiv.com/

    • Blog – https://www.mattwolfe.com/

    • YouTube- https://www.youtube.com/@mreflow

    Check Out Nathan’s Stuff:

    The Next Wave is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

  • Warlords, Espionage, and Disinformation | Introducing Hot Money: Agent of Chaos

    AI transcript
    0:00:15 Hi, I’m Sam Jones, and I hope you don’t mind me dropping in to give you a quick preview
    0:00:21 of my new podcast, Hot Money, Agent of Chaos. It all started in 2020, when my colleagues
    0:00:26 at the Financial Times exposed the German company Wirecard as a huge fraud. But underneath
    0:00:32 that I discovered another, more elusive tale. Jan Marsalek was more than just Europe’s biggest
    0:00:39 financial con artist. He was someone who had other lives. And his shadow, it seemed to appear
    0:00:45 in the most unexpected places. In the investigation into a deadly poisoning, in the wake of an
    0:00:51 Austrian political scandal, in Libya’s refugee camps, with mercenaries in Syria, oligarchs
    0:00:56 on the French Riviera. Bulgarian criminals in a dishevelled English seaside resort.
    0:01:01 I’ve been pulling together all these threads to try and understand who Jan Marsalek was,
    0:01:07 and what it is that connects them all. And I think I’ve got an explanation for you. It’s
    0:01:13 a story that says as much about our own society as it does about the wild life of one rogue individual.
    0:01:20 It’s about power and corruption, and the secret front line of a huge geopolitical game
    0:01:26 that affects us all. I hope you enjoy this preview, and if you do, find Hot Money,
    0:01:29 Agent of Chaos, wherever you listen to your podcasts.
    0:01:58 It’s a winter’s day in 2018. Paul Murphy is standing in front of the mirror of the gents’
    0:02:01 lavatory at work. He’s changing for lunch.
    0:02:05 I kind of stopped wearing ties, but I think I put a tie on for that occasion.
    0:02:11 Paul is in his mid-fifties. He’s got a slightly grizzled look about him. You wouldn’t pick
    0:02:14 him out in a crowd, but that’s an advantage in his line of work.
    0:02:23 In his hands, Paul is holding a small silver disc about the size of a penny. He takes his shirt
    0:02:30 off, grabs a piece of medical tape, and fixes this disc onto his shoulder, because this disc
    0:02:38 is a tiny microphone. He slips his white shirt back on, puts a jacket on top, and with one last
    0:02:45 glance in the mirror, he’s ready for lunch. Paul is the head of investigations at the Financial Times
    0:02:54 in London. He takes a cab across town, to Mayfair, to a venue called 45 Park Lane.
    0:03:01 It’s, you know, it’s one of those places that is priced to keep out ordinary people. You know,
    0:03:09 it’s all glass windows and bling and mirrored interiors and very few customers. Very few.
    0:03:11 It’s Dubai style, essentially.
    0:03:18 As Paul walks in, he tries to keep his cool. Despite four decades in journalism, this is a
    0:03:22 first for him. He’s never actually worn a wire himself.
    0:03:27 It’s very, very nerve-wracking. You know, I’ve got a bug on me. You know, I didn’t want our
    0:03:32 undercover team to get discovered. That would be hugely embarrassing. So I was, you know,
    0:03:33 I was nervous.
    0:03:38 The maitre d’ escorts Paul across the room, and there, rising from his chair, smiling
    0:03:44 courteously and greeting Paul with a handshake, is the man he’s come to meet, Jan Marsalek.
    0:03:50 Very slim, athletic build, razor-sharp blue suit.
    0:03:56 Paul came here to set a trap, to get this successful businessman on tape.
    0:04:01 But by the time they finish their meal, he wonders if he’s the one who has walked into
    0:04:01 a trap.
    0:04:09 If I’m honest, I felt a bit amateurish, you know. We were out of our depth. This guy was
    0:04:19 very, very slick, controlled, careful, polished. And, you know, I’m not.
    0:04:33 My name is Sam Jones, and I’m a journalist with the Financial Times. I’m a foreign correspondent
    0:04:39 based in Central Europe. This lunch you’ve just heard about, it’s the unexpected beginning
    0:04:46 of an investigation that has, in one way or another, preoccupied me for the past five years.
    0:04:54 At the centre of it is the man in the sharp blue suit, Jan Marsalek. A man who, I discovered,
    0:05:02 is so fascinated by risk and deceit that one identity, one life, wasn’t enough for him.
    0:05:10 I find it’s often people like this, the most unusual people, who reveal universal truths,
    0:05:16 the fact that we’re all inventors of our own personal narratives, how fictions can be stitched
    0:05:18 together to create realities.
    0:05:27 This tale begins in London and Munich, but leaps across the globe, from Libya to Austria,
    0:05:32 from Bulgaria to Afghanistan, from the Côte d’Azur to Moscow.
    0:05:40 Jan Marsalek’s life is a window into a hidden world of geopolitical power games, games which,
    0:05:47 in ways big and small, govern our lives. Games which have never felt more relevant, or the
    0:05:56 players of them, harder to fathom. This is a story about espionage, about Europe, about Russia,
    0:06:04 and ultimately, America. From the Financial Times and Pushkin Industries, this is Hot Money,
    0:06:10 Season 3, Agent of Chaos. Episode 1, The Bribe.
    0:06:28 Paul Murphy hired me to work for the FT 17 years ago. It’s been a long time since Paul’s
    0:06:35 my actual boss, but he was, and still is, a mentor to me. All of my best habits in journalism,
    0:06:41 and some of my worst ones, I’ve picked up from Paul. Pretty much since starting my career,
    0:06:48 every couple of months or so, I end up at lunch with him, in Sweetings. It’s a noisy,
    0:06:54 crowded fish restaurant, deep in the city, London’s financial district. It’s distinctly old school,
    0:07:01 even a bowler hat wouldn’t look out of place. And coming here, it underscores lesson number one
    0:07:03 in the Paul Murphy School of Journalism.
    0:07:08 You have to get out of the bloody office. Get out of the bloody office. Young reporters in particular
    0:07:14 think that you can do everything digitally. But actually, you get a lot more information
    0:07:20 out of somebody face to face. You have to win people’s trust. And one way of doing that is to have lunch with
    0:07:29 people. It’s a great social setting to develop, you know, a relationship with somebody who you need
    0:07:35 to trust you. I want to paint a bit of a picture for you about Paul, because it pays in
    0:07:40 this story to try and get the measure of people’s character. Or at least, to try and understand the
    0:07:46 version of themselves people present to the world, and why. Although Paul spends a lot of time at lunch,
    0:07:52 he’s definitely not just another city soak. Most people tend to miss the little silver ring he’s
    0:07:58 wearing, a skull designed by his daughter. People miss a lot about Paul, but that’s part of the trick.
    0:08:05 He’s very good at being underestimated. And because of that, he’s also very good at getting people to
    0:08:09 trust him, to talk to him, and to give him information.
    0:08:16 To understand why I was drawn into this story, you need to know a bit about the reporting that was
    0:08:23 dominating Paul’s life back in 2018. He and his star reporter, Dan McCrum, were neck deep investigating a
    0:08:30 German company called Wirecard, a company that was run by the man in the razor-sharp blue suit,
    0:08:37 the man who Paul would eventually meet for lunch in Mayfair, Jan Marsalek. Wirecard ran the financial
    0:08:44 plumbing behind billions of online transactions. It was so successful at that point, it was even
    0:08:51 secretly plotting a takeover of Germany’s biggest bank. So to the world, Wirecard was a booming digital
    0:08:57 payments company. To Paul and his reporter, Dan, Wirecard was a huge fraud, and they were well on
    0:09:07 the way to proving it. But it was no normal fraud. Because for months, Paul and Dan, they suspected
    0:09:14 they’d been under intense surveillance, all directed by someone at Wirecard, from its base in southern
    0:09:20 German. I mean, it’s kind of like, almost sounds silly to recount it. But, you know, we were paranoid
    0:09:28 about being followed around London, we would get on and off tube trains quickly, just in case somebody
    0:09:34 was getting on the same tube train as us. We would turn off our phones, so that our location couldn’t
    0:09:41 be tracked. Dan had already had his emails hacked, and some of them leaked online. It was an attempt to
    0:09:46 embarrass and discredit him. There had been a mounting and seemingly coordinated attack on his
    0:09:51 reputation on social media. When Paul told me all of this over a series of lunches at Sweetings,
    0:09:57 I guess he was doing so because he wanted to know if I had any contact, in private intelligence or even
    0:10:04 in the actual intelligence services, people who might be able to help. Because the subject I really write
    0:10:13 about, the subject that has become my specialism at the FT, is spying. Paul was probably also telling me
    0:10:19 out of frustration, because back then he and Dan had hit a bit of a wall in their reporting. They’d
    0:10:23 published all they could about Wirecard based on the evidence they had gathered so far, but they still
    0:10:30 didn’t have a smoking gun. And Wirecard’s aggressive lawyers, Shillings, had meanwhile come down hard on
    0:10:35 them. Dan had only just avoided a ruinous lawsuit. It wasn’t a great time.
    0:10:44 It was this sense that, what have we got ourselves into? That was like a real low moment. Maybe I’ve
    0:10:51 got myself into a bit too much hot water here. You do start to worry what you’ve sort of brought down
    0:10:57 on your family. It was quite oppressive. There was this turning point for Dan. One of his sources rang
    0:11:02 him up to tell him he’d been roughed up on the street by two thugs right outside his children’s
    0:11:07 school. They demanded to know if this source had passed on confidential information about Wirecard.
    0:11:13 Hearing this sent Dan into a bit of a tailspin, because suddenly he was worrying about the safety
    0:11:20 of his own family. My first thing is I sort of go home and obsessively change every single one of my
    0:11:29 passwords. Start checking all the security on my house. I mean, the worst moment is we had just moved
    0:11:35 into this rented house. And I suddenly realized I haven’t checked the lock on this patio door at the
    0:11:41 back of the house, which we’d never used. And it just slides straight open. Like our house had
    0:11:46 essentially been unlocked for the last couple of months. And at that point, I really did start
    0:11:53 freaking out about security, who might be after us. I mean, I basically became really paranoid.
    0:11:58 It was right at the peak of this paranoia that something even stranger happened.
    0:12:06 Something that led to that lunch at 45 Park Lane. Paul was talking to one of his oldest sources.
    0:12:13 And we got onto the subject of Wirecard. Just a completely, you know, innocent, relaxed conversation.
    0:12:21 And this guy just suddenly said, you know that they’ll pay you a lot of money to stop writing about
    0:12:31 them. And I kind of laughed. And he stopped me and said, no, they will pay you $10 million to stop
    0:12:36 writing about them. I don’t know if you work in the kind of job or live the kind of life where you’ve
    0:12:43 ever been bribed. But even as a journalist for the FT, this doesn’t really happen, let alone for such a
    0:12:51 ridiculous sum of money. I mean, for $10 million, what would you do? And as such, it takes Paul a while
    0:12:57 to realise that this is a serious offer. How do you know this? He asks. Through my son, his source tells
    0:13:02 him. He’s got to know someone at Wirecard pretty well. They’ve been out together a few times, carousing.
    0:13:09 He’s called Jan Marsalek. And then Paul’s source, he says something which makes Paul clock that this
    0:13:17 offer is real. Marsalek is paying this guy more than $200,000 just to convey the message. You should meet
    0:13:24 him for lunch, he suggests. So what does Paul say? Tell me when and tell me where.
    0:13:31 Paul has no intention of taking the bribe. But this backchannel offer, it seems to confirm everything
    0:13:33 they suspect about Wirecard.
    0:13:40 absolutely confirmed all our suspicions. Which were that the company is a criminal enterprise.
    0:13:43 Absolutely. This was kind of tangible evidence.
    0:13:50 All they need now is for Marsalek to offer the bribe himself and to get that on tape. It’s time for the FT to
    0:13:52 mount its own surveillance operation.
    0:14:00 So that day at 45 Park Lane, the formal introduction’s over, it’s time to order.
    0:14:08 Steaks. The overpriced speciality of this place. Around £170 for a six-ounce filet mignon.
    0:14:14 Right from the start, though, Paul begins to feel that Marsalek isn’t quite what he was expecting.
    0:14:21 Paul is on edge, but he’s not alone. To his relief, it’s not long before he spots his undercover
    0:14:28 support team. Three FT colleagues who pose as wealthy ladies catching up over lunch.
    0:14:33 They’ve snagged a table just next to him, and they look pretty convincing.
    0:14:37 One of the reporters places her handbag on the back of a chair.
    0:14:43 Hidden inside, a camera films the lunch at an angle, catching Jan Marsalek in profile.
    0:14:51 You can hear the tenor of his voice, but the background noise means it’s impossible
    0:14:52 to make out his words.
    0:14:59 To me, watching this footage back, it’s striking how animated he is.
    0:15:03 He turns from side to side, addressing everyone at the table as he talks.
    0:15:10 His face lights up. He’s sort of holding court, emphasising his words with expansive hand gestures.
    0:15:12 He almost looks like a politician.
    0:15:18 The longer the conversation goes on like this, the more clear it becomes to Paul that
    0:15:20 Marsalek is the one in control.
    0:15:27 This guy is expansive and engaging, charming, but not at all defensive.
    0:15:32 There’s no trace of anger or guilt or care.
    0:15:38 He gently protests about the FT’s unfair coverage of Wirecard, as if it’s been an inconvenience.
    0:15:43 But his whole tone seems to be saying, let’s put this behind us.
    0:15:49 As they settle into the meal, Paul nudges the conversation into more dubious terrain.
    0:15:56 Eager to get something incriminating, even if it’s just a hint of something, on tape and on camera.
    0:16:02 I certainly talked about the kind of aggression that the business had shown us.
    0:16:06 And we also talked about whether journalists were corrupt.
    0:16:12 And he absolutely assured me that he knew that journalists could be bought.
    0:16:16 I remember saying, we don’t take bribes.
    0:16:19 And I remember him very specifically saying, I know that, Paul.
    0:16:20 I know you don’t.
    0:16:23 I’ve seen evidence that you don’t take bribes.
    0:16:27 And I thought, ah, you’ve seen my bank account.
    0:16:34 I remember the kind of jolting that he was kind of like stating this so openly.
    0:16:39 But the conversation continues in this vein, nothing concrete.
    0:16:42 The killer offer of a bribe Paul had been hoping for.
    0:16:49 Well, it’s clear that Marsalek is far too savvy an operator to make it here and now, at their first meeting.
    0:16:57 I pretty quickly, you know, came to the conclusion that I wasn’t going to be offered a bribe in front of these people.
    0:16:59 A bit of a damp squib, in a way.
    0:17:00 Yes, it was.
    0:17:05 So Paul is now left wondering, what does Marsalek want from him?
    0:17:09 Why has this meeting happened if he’s not actually going to make him some kind of offer?
    0:17:12 The lunch lasted about 90 minutes.
    0:17:15 And at the end, Marsalek insisted on paying.
    0:17:23 And pulled out a gold credit card, a novelty credit card of solid gold.
    0:17:24 Was he a bit of a show-off?
    0:17:26 Well, yes.
    0:17:32 You know, we’re in one of the most expensive restaurants in London, eating kind of 200 quid steak.
    0:17:38 And he was paying for the bill with a gold credit card.
    0:17:39 So, yeah.
    0:17:46 As Paul leaves the restaurant, he almost laughs at himself for having thought he’d be heading back with something explosive.
    0:17:50 But he also realises that this experience actually hasn’t been a busted flush.
    0:17:52 Far from it.
    0:17:56 Meeting Jan Marsalek has only intrigued Paul more.
    0:17:58 It’s put him into 3D.
    0:18:02 There’s something about Marsalek he can’t quite put his finger on.
    0:18:10 I felt I’d met somebody who was very controlled and confident, who was almost certainly corrupt.
    0:18:13 I basically said, can we do that again?
    0:18:17 And indeed, Paul does meet with him again.
    0:18:21 That’s coming up after the break.
    0:19:08 When Paul first started telling me about Wirecard, I think I treated it all as entertaining table talk.
    0:19:15 Paul is a great teller of stories, and I always enjoyed hearing the gossip about what his investigations team was up to.
    0:19:20 After he told me about meeting Marsalek, though, something began to needle at me.
    0:19:24 Just a feeling about what kind of person Marsalek was.
    0:19:26 A feeling I couldn’t pin down.
    0:19:29 Until I heard about the second lunch.
    0:19:38 One month after that lunch at Park Lane, Paul met Marsalek again, this time without undercover colleagues or secret cameras.
    0:19:40 It was just the two of them.
    0:19:44 They met at the Lanesborough, another high-end hotel in London.
    0:19:47 We talked about geopolitics.
    0:19:49 We talked about technology.
    0:19:51 We talked about finance.
    0:19:54 You know, we talked about the state of the world.
    0:19:59 He had interesting opinions and information on all these things.
    0:20:07 If I’m honest, at this stage, I’d become fascinated by this character because he seemed to know so many people.
    0:20:14 And I kind of, you know, I was thinking, well, you know, he’s probably not going to offer me a bribe.
    0:20:15 We’re not going to just catch him.
    0:20:18 He’s not that stupid.
    0:20:22 This guy is smart, and he knows people, and he has information.
    0:20:27 At this point, did it occur to you that he’d charmed you in any way?
    0:20:32 Yes, it did, but he was a charming man.
    0:20:33 Did you like him?
    0:20:35 Yeah.
    0:20:36 Yes, I liked him.
    0:20:43 If Wirecard, if you hadn’t have known it to be a fraud, do you think you would have sought to stay in touch with him?
    0:20:45 Absolutely, absolutely.
    0:20:51 I mean, in actual fact, you know, my thinking after that second lunch, I did.
    0:20:55 I actually thought I’m going to, you know, develop this guy as a source.
    0:21:01 What did you think he was hoping to get out of a relationship with you?
    0:21:04 Actually, it was very clear.
    0:21:07 We posed an existential risk to Wirecard.
    0:21:14 He knew that by, you know, building a relationship directly with me,
    0:21:21 that he could potentially stop us writing about them,
    0:21:26 or at least he’d get the kind of intel in advance about what we were thinking.
    0:21:35 So as Paul tells me about all of this, the feeling I get most is that a game is afoot.
    0:21:40 And both Paul and Marsalek are enjoying playing it.
    0:21:48 They’ve both established rapport, they’re both working to build trust, but they also test each other, push,
    0:21:52 try to implicate each other in this polite conversation.
    0:22:00 And all of this grips me because in it I see so much of the kind of psychology that I’ve spotted glimpses of covering intelligence and espionage.
    0:22:04 I recognise the shape of this kind of interaction.
    0:22:09 A certain amused, matter-of-fact detachment from things, despite the stakes.
    0:22:11 Think about it.
    0:22:17 Marsalek is lunching happily with a man who is trying to destroy the company he works for and put him in jail.
    0:22:18 And Paul?
    0:22:22 Well, in a funny way, Paul is being encouraged into a minor transgression.
    0:22:27 Something that almost felt to me like a textbook trick from an intelligence recruitment manual.
    0:22:30 An indiscretion that might later make you vulnerable.
    0:22:36 Because Paul does all of this, works Marsalek,
    0:22:41 behind the back of the lead reporter on the Wirecard project, Dan McCrum.
    0:22:46 Why were you dealing with Marsalek and not Dan?
    0:22:47 Dan and I are different characters.
    0:22:52 Dan is a guy, you know, he’s tall and he has all his features in the right place.
    0:22:55 And if your daughter brought him home as a boyfriend, he’d be really happy.
    0:23:00 You know, he’s a good guy, he’s intelligent, he’s articulate, he’s well-educated.
    0:23:06 But actually, actually, Dan is lethal.
    0:23:08 Dan’s like a kind of smiling axe man.
    0:23:09 He’s dangerous.
    0:23:10 He’s forensic.
    0:23:13 Yes, he’s absolute forensic and he won’t let it lie.
    0:23:16 And, you know, I have a different style, all right?
    0:23:20 I’m much softer and I, you know, chat people up and, you know,
    0:23:23 I present myself as being very kind of clubbable.
    0:23:26 You know, all journalists have different styles.
    0:23:30 I mean, I think you’re probably more comfortable playing a role as well, no?
    0:23:33 Possibly, yes.
    0:23:38 Reading between the lines, I think probably a doubting part of him
    0:23:41 was also wondering whether the Wirecard investigation was at a dead end.
    0:23:45 The threat of a lawsuit from shillings meant their reporting had stalled.
    0:23:50 And if that was the case, it might be worth Paul pursuing Marsalek as a source of his own.
    0:23:52 Someone who could help him with other stories.
    0:23:59 Then, around six months after that second meeting, Paul gets a call from an intermediary.
    0:24:03 Marsalek conveys that he has something very interesting to offer.
    0:24:05 Documents.
    0:24:10 He hints at what they’re about and it sounds outlandish.
    0:24:15 But it’s enough of a hint that Paul agrees to Marsalek’s suggestion
    0:24:19 that he fly out to Munich, where Marsalek lives, in order to get them.
    0:24:28 They meet at the Kiefer Schenker.
    0:24:32 It’s a Munich institution, patrician, reassuringly expensive,
    0:24:36 white tablecloths, panelled rooms, but warm and efficient service.
    0:24:39 And it’s practically Marsalek’s house restaurant.
    0:24:42 Jan was waiting for me outside.
    0:24:42 We went in.
    0:24:45 We had a little private room.
    0:24:47 I remember having salmon with caviar.
    0:24:52 And as they talked, Marsalek pushed a brown folder full of papers
    0:24:53 across the table towards Paul.
    0:24:57 But of course, he’s in a restaurant.
    0:24:59 I couldn’t pull them out and start reading through them.
    0:25:01 I just had to kind of politely say,
    0:25:02 thank you very much, I’ll have a read of those.
    0:25:07 And then we just had a kind of stilted, awkward lunch conversation.
    0:25:09 We talked about his bad back.
    0:25:13 If I’m honest, I was trying to get out of the lunch as quickly as possible
    0:25:15 because I wanted to see what was in the folder.
    0:25:17 They finished lunch.
    0:25:19 Marsalek said he had to go back to the office.
    0:25:24 The restaurant has lots of kind of separate bars and rooms.
    0:25:29 And so I literally went down some stairs and found myself a little corner
    0:25:32 and sat down and opened the folder.
    0:25:40 These documents, they related to something that happened in the UK that spring.
    0:25:44 Something awful, which had shocked the whole country.
    0:25:48 Yesterday afternoon, passers-by noticed two people,
    0:25:51 apparently unconscious, on a bench in Salisbury.
    0:25:52 The area…
    0:25:54 The Salisbury poisonings.
    0:25:57 As a police presence remains here in the city whilst they investigate,
    0:26:01 residents and visitors to the city have been reacting to the news.
    0:26:08 Yeah, just completely surprised and shocked that something could happen like this in Salisbury.
    0:26:15 An assassination attempt against a former spy using one of the deadliest nerve agents ever created,
    0:26:19 a chemical that only a handful of government specialists knew about,
    0:26:21 Novichok 234.
    0:26:26 The spy was found half-dead alongside his unconscious daughter.
    0:26:29 But thanks to some remarkable medical work, they both survived.
    0:26:34 Another local resident, a mother of three, did not.
    0:26:38 She died after coming into contact with the Novichok.
    0:26:41 It had been hidden by the assassins in a perfume bottle.
    0:26:48 The intended target was soon identified as a Russian intelligence officer who had fled to Britain in 2010.
    0:26:55 Prime Minister Theresa May announced to a shocked parliament that Moscow was to blame.
    0:27:02 The government has concluded that the two individuals named by the police and CPS
    0:27:06 are officers from the Russian military intelligence service,
    0:27:10 also known as the GRU.
    0:27:17 The GRU, the main directorate, Russia’s fearsome military intelligence agency,
    0:27:22 an organisation with goals that should have consigned it to Cold War history,
    0:27:27 misinformation, civil disorder, violence, assassinations.
    0:27:33 Under Vladimir Putin’s long watch, the GRU has quietly grown in power and influence.
    0:27:38 In the weeks that followed the poisoning, Russia aggressively denied its involvement.
    0:27:42 The Organisation for the Prohibition of Chemical Weapons, meanwhile,
    0:27:44 launched its own investigation.
    0:27:48 Sending its experts to Salisbury to pore over the evidence.
    0:27:56 They produced a highly classified dossier based on shared intelligence and chemical analysis from the site.
    0:28:00 The dossier also included Russia’s own version of events.
    0:28:04 These were the documents Paul now had in his hands.
    0:28:12 It was fascinating to read all this kind of close detail, you know, the Russian version of the story.
    0:28:20 And then the other very interesting part of the documents was the actual formula for Novichok.
    0:28:24 The chemical diagram for the poison.
    0:28:28 A technical outline for something that had been kept hidden from the world for decades.
    0:28:31 A weapon of mass destruction.
    0:28:46 Run a business and not thinking about podcasting?
    0:28:47 Think again.
    0:28:51 More Americans listen to podcasts than ad-supported streaming music from Spotify and Pandora.
    0:28:56 And as the number one podcaster, iHeart’s twice as large as the next two combined.
    0:28:59 So whatever your customers listen to, they’ll hear your message.
    0:29:03 Plus, only iHeart can extend your message to audiences across broadcast radio.
    0:29:05 Think podcasting can help your business?
    0:29:06 Think iHeart.
    0:29:08 Streaming, radio and podcasting.
    0:29:11 Call 844-844-IHEART to get started.
    0:29:13 That’s 844-844-IHEART.
    0:29:17 So what have we got?
    0:29:21 Part 1, 2, 3, 4, 5, sort of stapled sheaves of paper.
    0:29:27 Those documents that Marsalek handed over that day at the Käfer-Schänke, Paul showed them to me.
    0:29:33 And, well, they’re internal documents from the Organisation for the Prohibition of Chemical Weapons.
    0:29:36 And these have been sort of illegally photocopied, right?
    0:29:37 Or so I think they’re photocopies anyway.
    0:29:42 Yeah, they’re all kind of photocopies, except that one is a PowerPoint presentation.
    0:29:44 They’ve all got barcodes on them.
    0:29:48 And this sort of big stamped watermark, which says…
    0:29:52 This printout may contain OPCW confidential information warning.
    0:29:54 Yeah.
    0:29:57 They’re all different copy numbers, though, as well, aren’t they?
    0:29:58 Yeah, which is kind of curious.
    0:30:07 The Organisation for the Prohibition of Chemical Weapons is an international body based in The Hague.
    0:30:11 Almost all of the world’s big military powers are signatories.
    0:30:17 Its job is to police and monitor weapons like Novichok, to ensure they are never, ever used.
    0:30:24 What was going through your head when you kind of first pulled this out of the manila envelope that they were all in?
    0:30:27 Well, I was looking for a story.
    0:30:34 You know, the Salisbury poisoning had been headline news for weeks on end.
    0:30:43 Suddenly, I had, you know, what clearly were kind of classified documents pertaining specifically to that event.
    0:30:45 There had to be a story in it.
    0:30:46 You know, that’s what I was after.
    0:31:00 And I was struck at how detailed and careful and yet completely fanciful the Russian version of events was.
    0:31:06 In the documents, the Russians made the case that the British had manufactured Novichok.
    0:31:12 Because Salisbury is just down the road from Porton Down, a highly secure military research base.
    0:31:20 And the Russians, they argued that the British government had somehow leaked the Novichok from its own chemical research lab.
    0:31:24 You know, I asked him, you know, point blank, where did he get this information?
    0:31:25 What did he say?
    0:31:26 He said he got it from a friend.
    0:31:34 And he did actually say that, you know, if I wanted further information, I should try him in future.
    0:31:39 That I’d be quite surprised at the sort of information he could access.
    0:31:50 So this was sort of like a little bit of an opening, kind of showing his wares, you know, that if you wanted to keep him on side, then he could push other material your way.
    0:31:51 Yeah, absolutely that.
    0:31:56 He was basically saying, look, I have friends in interesting places.
    0:31:58 I can help you in the future.
    0:32:02 We were building a relationship on both sides.
    0:32:12 While all of this unfolded, Dan McCrum, the lead reporter on the Wirecard investigation, hadn’t been sitting still.
    0:32:16 In fact, he’d just found his very own treasure trove of documents.
    0:32:28 And these documents, they would change everything because they finally gave Dan the ammunition he needed to prove that Wirecard was a fraud and that Marsalek was at the centre of it.
    0:32:32 So when Paul got back to London and Dan told him all of this,
    0:32:37 Paul knew it was time to go back on the offensive against Wirecard directly.
    0:32:46 And also, therefore, that it was time to fess up to Dan and to tell him he’d been secretly lunching with Marsalek over the past few months.
    0:32:57 Paul, you know, he’d gone to meet Marsalek for lunch and he was kind of cultivating this parallel kind of, you know, relationship with Marsalek.
    0:33:00 When did you find out about that and what was your first thought?
    0:33:01 Oh, man.
    0:33:08 There are moments in life when you are taken by surprise.
    0:33:19 I basically think he hadn’t wanted to, like, blow my mind whilst I was focused on getting the story because the important thing was to get the story out.
    0:33:27 But it had reached the point where it was sort of becoming embarrassing that he hadn’t mentioned that he had quietly been dining with Jan Marsalek.
    0:33:29 I’m like, sorry, what?
    0:33:38 But then he goes, he’s been flashing around top secret documents with a recipe for Novichok on them.
    0:33:44 I think my reaction was if he had just tried to tell me that Marsalek had faked the moon landings.
    0:33:57 It was so completely out of left field that you’re like, sorry, what did you just say?
    0:34:09 To be clear, we had no evidence that Marsalek actually had anything to do with carrying out the poisonings.
    0:34:13 But the fact that he even had these documents was a bombshell.
    0:34:20 Not only because the documents made it clear that Marsalek was entangled with something besides just a huge corporate fraud,
    0:34:25 but also because Marsalek had effectively chosen to disclose this.
    0:34:28 Marsalek pulled the spotlight onto himself.
    0:34:31 And it made us realise how little we knew about him at all.
    0:34:39 At that point, we just kind of had this sense that Marsalek was this kind of man of action
    0:34:44 and was mixed up somehow in Viennese politics.
    0:34:49 Wachard’s aggressive surveillance of Paul and Dan intensified.
    0:34:53 And they managed to trace it back to a private security company in Vienna,
    0:34:57 the capital of Austria and Marsalek’s home city.
    0:35:01 Paul and Dan were now going to spend the next few months battling to prove the fraud
    0:35:04 with the new documents Dan had received.
    0:35:05 But me?
    0:35:10 I was about to start a foreign posting in Switzerland and in Austria.
    0:35:13 If I was going to be on the ground, Paul thought,
    0:35:16 then I could surely make some inquiries.
    0:35:23 We already knew that there was a big Vienna angle to all this.
    0:35:24 We just didn’t know what the angle was.
    0:35:27 We just didn’t know which doors you had to knock on.
    0:35:31 We didn’t know who you needed to get to.
    0:35:32 Yeah, well, it worked.
    0:35:34 I remember thinking you were mad.
    0:35:38 I just thought, OK, all right, I’m just going to go to Austria
    0:35:41 and start talking to people about Marsalek.
    0:35:42 But, you know, you were right.
    0:35:52 Sometimes it’s the smallest, most unpromising or unexpected little thread that you pull on that suddenly unravels something.
    0:36:00 Sometimes that thread is just an intuition, a feeling about someone, a sense that there’s definitely something more here I don’t know about,
    0:36:02 but that I recognise the shadow of.
    0:36:12 As it turned out, this particular trace, well, it would slowly unravel into a story that wasn’t just the sordid tale of one well-connected fraudster,
    0:36:18 but instead the tale of one of the biggest spy scandals to have hit Europe since the Cold War.
    0:36:27 To this day, I remember that first note coming back from you, just saying that you needed a secure channel to communicate.
    0:36:34 The detail you put in that first note was just mind-boggling, absolutely shocking.
    0:36:37 It was like a whole world just opened up.
    0:36:42 You know, this was no longer just about some weird German corporate.
    0:36:50 There was this kind of huge geopolitical kind of side to the story that was only just coming into view.
    0:36:57 Maybe you’ve felt in recent years that the world is a less certain place.
    0:37:03 That from the background, there are threats or worries you’d never had to think about before that are suddenly present.
    0:37:07 Wars that look like they might tip out of control.
    0:37:10 Radical politicians tearing at the threads of civil society.
    0:37:13 Lies turned into truth by money.
    0:37:16 Well, this story is, in some senses, an accounting of that.
    0:37:24 A story that can sometimes make you realise how tissue-thin the idea of a stable, law-abiding society can be.
    0:37:28 One that’s governed by economic, political and moral rules we’ve all agreed on.
    0:37:32 It’s a story about what kind of people get drawn into the world on the other side of that.
    0:37:34 And what kind of world that is.
    0:37:43 A space carved out by crime and corruption, where money and power are unchecked by laws, or borders, or markets.
    0:37:47 That kind of world might sound terrifying.
    0:37:50 But to some people, it’s irresistible.
    0:37:53 To some people, it’s not an alternative world at all.
    0:37:55 It’s the real world.
    0:38:01 Coming up this season on Hot Money.
    0:38:03 I know politics is corrupt.
    0:38:04 I know everything.
    0:38:04 I know that.
    0:38:05 I know that.
    0:38:06 I believe to know that.
    0:38:07 But this is too much.
    0:38:12 I thought, I hope that he will talk to you and you will be able to investigate on it.
    0:38:17 And perhaps misdeeds and misbehaviour is stopped.
    0:38:19 Very fast, actually.
    0:38:22 He started then talking about his experience in Syria.
    0:38:29 He definitely has a view that he’s operating with complete freedom to do whatever he likes.
    0:38:32 I don’t know if they followed me to my home.
    0:38:34 The decision was very simple.
    0:38:38 It was a choice between being killed or in prison.
    0:38:42 And the other option was just to try to get real freedom.
    0:38:45 How much of it was an act?
    0:38:46 How much was genius?
    0:38:47 How much was learned?
    0:38:48 How much was instinctive?
    0:38:54 I often ask myself now, did I know the true Jan at all?
    0:39:03 Hot Money is a production of the Financial Times and Pushkin Industries.
    0:39:07 It was written and reported by me, Sam Jones.
    0:39:11 The senior producer and co-writer is Peggy Sutton.
    0:39:13 Our producer is Izzy Carter.
    0:39:15 Our researcher is Maureen Saint.
    0:39:18 Our show is edited by Karen Shakurji.
    0:39:21 Fact-checking by Keira Levine.
    0:39:26 Sound design and mastering by Jake Gorski and Marcelo de Oliveira.
    0:39:29 With additional sound design by Izzy Carter.
    0:39:35 Original music from Matthias Bossi and John Evans of Stellwagen Symphonette.
    0:39:38 Our show art is by Sean Carney.
    0:39:44 Our executive producers are Cheryl Brumley, Amy Gaines McQuaid and Matthew Garrahan.
    0:39:47 Additional editing by Paul Murphy.
    0:39:55 Special thanks to Roula Khalaf, Dan McCrum, Laura Clark, Alistair Mackey, Manuele Zaragoza,
    0:40:02 Nigel Hansen, Vicky Merrick, Eric Sandler, Morgan Ratner, Jake Flanagan, Jacob Goldstein,
    0:40:05 Sarah Nix and Greta Cohn.
    0:40:06 I’m Sam Jones.
    0:40:19 This is an iHeart Podcast.

    In 2020, the Financial Times exposed a 2 billion euro fraud at Wirecard, a high-flying German fintech. Many thought that was the end of the story. But for reporter Sam Jones, it was just the beginning.

    This season on Hot Money: Agent of Chaos, from Pushkin Industries and the Financial Times, Jones investigates Wirecard’s chief operating officer, who vanished just as Wirecard collapsed and turned out to also be a Russian spy.

    Here’s episode 1. Listen to Hot Money: Agent of Chaos wherever you get your podcasts.

    See omnystudio.com/listener for privacy information.

  • Adam Neumann: This is How You Build Iconic Companies

    In this recent episode of The Ben & Marc Show, a16z co-founders Marc Andreessen and Ben Horowitz sit down with Adam Neumann—founder of WeWork and now Flow—to unpack one of the most unlikely comeback stories in tech.

    What began as a personal reckoning after a very public fall has become a bold new vision for how we live and belong. Flow isn’t just a real estate company—it’s an operating system for community, built on first-principles software, design, and soul.

    Joined by a16z General Partner Erik Torenberg, the group goes deep on:

    • Why Adam’s childhood shaped his obsession with community
    • Adam’s fall from WeWork—and how he found a new path to redemption
    • How Flow is re-architecting real estate from scratch
    • Why loneliness is the greatest design challenge of our time

    With reflections on dyslexia, the American dream, and the thin line between failure and greatness, this is a candid and wide-ranging conversation about redemption, vision, and building something that matters in this world. We hope you enjoy this deeply human conversation about the future of living.

    Timecodes

    00:00 Introduction 

    00:51 Adam’s Early Life and Family Background

    07:56 Military Service and Discipline

    10:08 Transition to the US and Education

    14:43 Entrepreneurial Journey Begins

    17:49 The Concept of Flow and Vision

    20:28 Meeting and Partnership Formation

    25:22 Overcoming Challenges and Resilience

    28:30 The Isolation Phenomenon

    30:03 Navigating Post-Crisis Relationships

    31:50 Real Estate Strategies During COVID

    33:47 The Genesis of a New Venture

    36:47 Lessons from WeWork

    38:49 Building Flow: The Vision

    41:44 The Importance of Alignment

    51:23 Technological Innovations in Real Estate

    55:44 Revolutionizing Real Estate Software and Flexible Living Solutions

    56:28 Challenges and Innovations in Multifamily Housing Rental Markets

    58:40 Global Housing Crisis and Solutions

    01:06:10 Expanding to Saudi Arabia

    01:08:49 Success in Saudi Arabia

    01:12:43 Real Estate Funds and Future Plans

    01:19:10 Why Is This an Opportunity?

    01:20:45 Impact of COVID on Living and Working

    01:26:14 Future Potential of Housing and Living

    Resources: 

    Read Marc’s blog post about Flow: https://a16z.com/announcement/flow/

    Marc on X: https://x.com/pmarca 

    Marc’s Substack: https://pmarca.substack.com/ 

    Ben on X: https://x.com/bhorowitz 

    Erik on X: https://x.com/eriktorenberg 

    Erik’s Substack: https://eriktorenberg.substack.com/

    Stay Updated: 

    Let us know what you think: https://ratethispodcast.com/a16z

    Find a16z on Twitter: https://twitter.com/a16z

    Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

    Subscribe on your favorite podcast app: https://a16z.simplecast.com/

    Follow our host: https://x.com/eriktorenberg

    Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

  • 677: 8 Ways to Use AI to Make More Money

    How are real side hustlers and entrepreneurs using AI to work smarter, save time, and make more money?

    Last month, I asked Side Hustle Nation email subscribers: “How are you using AI in your business?”

    The responses were awesome, revealing 8 key categories where entrepreneurs are leveraging artificial intelligence to scale their side hustles.

    Listen to Episode 677 of the Side Hustle Show to discover how Side Hustle Nation is using AI to:

    • speed up content creation and marketing
    • get strategic business coaching on demand
    • create and design profitable digital products
    • automate workflows and save hours of manual work

    Full Show Notes: 8 Ways to Use AI to Make More Money

    New to the Show? Get your personalized money-making playlist ⁠here⁠!

    Sponsors:

    Mint Mobile⁠⁠⁠ — Cut your wireless bill to $15 a month!

    ⁠⁠⁠Indeed⁠⁠⁠ – Start hiring NOW with a $75 sponsored job credit to upgrade your job post!

    ⁠⁠⁠OpenPhone⁠⁠⁠ — Get 20% off of your first 6 months!

    ⁠⁠⁠Shopify⁠⁠⁠ — Sign up for a $1 per month trial!

  • How Allies Should React to Trump, How to Calm Your Nerves, and First-Time Board Advice

    Scott shares his take on how traditional U.S. allies should navigate the Trump era — and what might come after. He then offers advice for managing nerves before a big meeting or pitch. Finally, what does meaningful engagement look like when you’re a first-time board member and full-time parent?

    Want to be featured in a future episode? Send a voice recording to officehours@profgmedia.com, or drop your question in the r/ScottGalloway subreddit.

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • How to Grow From Doing Hard Things | Michael Easter

    My guest is Michael Easter, a professor at the University of Nevada, Las Vegas and best-selling author. We discuss how particular daily life choices undermine our level of joy, our sense of purpose, our physical and our mental health and the daily, weekly, monthly and yearly steps we can all take to vastly increase our level of motivation, gratitude and overall life satisfaction. We discuss how effortful foraging for information, undistracted reflection and physical exercise are ways to ‘invest’ and therefore grow our levels of dopamine, energy and motivation, whereas low-friction activities are specifically designed to hijack or diminish them. We also discuss dopamine reward circuitry in the context of how to build and reset one’s energy levels and create a deeper sense of purpose in work, creative pursuits and relationships.

    Read the episode show notes at hubermanlab.com.

    Thank you to our sponsors

    AG1: https://drinkag1.com/huberman

    Maui Nui: https://mauinuivenison.com/huberman

    Helix Sleep: https://helixsleep.com/huberman

    Mateina: https://drinkmateina.com/huberman

    Function: https://functionhealth.com/huberman

    Timestamps

    00:00:00 Michael Easter

    00:02:14 Discomforts, Modern vs Ancient Life

    00:07:35 Sponsors: Maui Nui & Helix Sleep

    00:10:17 Modern Problems, Exercise, Trail vs Treadmill Running, Optic Flow, Hunting

    00:20:01 Risk & Rewards, Intellectual vs Experiential Understanding

    00:23:39 Modern Luxuries, First-World Problems, Gratitude, Tool: Volunteer

    00:34:33 Rites of Passage, Tool: Challenge, Narrative & Purpose; Embracing Discomfort

    00:40:43 Sponsors: AG1 & Mateina

    00:43:33 Choice, 2% Study, Silence, Tools: Do Slightly Harder Things; Notice Resistance

    00:54:05 Cognitive Challenges, Walking, Screens, Tool: Sitting with Boredom

    01:01:53 Capturing Ideas, Attractor States, Tool: Being in Nature

    01:06:50 2% Rule, Rites of Passage, Tool: Misogi Challenge

    01:14:12 Phones, Sharing with Others, Social Media, Tool: Reflection vs Screen Time

    01:23:23 Dopamine, Spending vs Investing, Guilt

    01:29:48 Sponsor: Function

    01:31:35 Relaxation, Shared Identities & Community, Music, Tool: In-Person Meeting

    01:38:58 Loss of Gathering Places, Internet & Distorted Views, Hitchhiking

    01:45:06 Misogi & Entry Points; Daily Schedule, Caffeine Intake

    01:54:37 Optimal Circadian Schedule, Work Bouts, Exercise

    01:59:12 Outdoor Adventures, Backpacking & Nutrition

    02:04:57 Camping & Sleeping, Nature, Three-Day Effect

    02:10:10 Sea Squirts; Misogi Adventures & Cognitive Vigor, Writing, Happiness

    02:17:55 Effort & Rewards, Addiction, Dopamine, Catecholamines

    02:22:36 Humans, Running & Carrying Weight, Fat Loss, Tool: How to Start Rucking

    02:32:32 Physical/Cognitive Pursuits & Resistance; Creative “Magic” & Foraging

    02:39:27 Motivation; Slot Machines, Loss Disguised as a Win, Speed

    02:46:06 Gambling, Dopamine, Addiction

    02:50:29 Tool: Avoid Frictionless Foraging; Sports Betting, Speed; Junk Food, Three V’s

    02:56:22 Conveniences, Technology; Upcoming Book, Satisfaction

    03:02:57 Substack Links, Zero-Cost Support, YouTube, Spotify & Apple Follow & Reviews, Sponsors, YouTube Feedback, Protocols Book, Social Media, Neural Network Newsletter

    Disclaimer & Disclosures

    Learn more about your ad choices. Visit megaphone.fm/adchoices

  • Is Trump winning?

    We’re nearly six months into Donald Trump’s second term as president, and a lot of us are still trying to figure out what that actually means. Not just politically. But culturally. What kind of country are we living in? And what kind of future are we heading toward?

    In today’s episode, Sean and Vox senior correspondent Zack Beauchamp try to answer these difficult questions. They discuss Trump’s successes and failures, how he appeals to his supporters, and how the left can respond to the Trump administration.

    Host: Sean Illing (@SeanIlling)

    Guest: Zack Beauchamp, Vox senior correspondent and the author of the On the Right newsletter. Sign up for the newsletter here.

    Listen to Sean’s previous interview with Zack about the state of right-wing politics here.

    Listen to The Gray Area ad-free by becoming a Vox Member: vox.com/members

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

  • Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control! Geoffrey Hinton

    He pioneered AI; now he’s warning the world. Godfather of AI Geoffrey Hinton breaks his silence on the deadly dangers of AI no one is prepared for.

    Geoffrey Hinton is a leading computer scientist and cognitive psychologist, widely recognised as the ‘Godfather of AI’ for his pioneering work on neural networks and deep learning. He received the 2018 Turing Award, often called the Nobel Prize of computing. In 2023, he left Google to warn people about the rising dangers of AI.

    He explains:

    • Why there’s a real 20% chance AI could lead to HUMAN EXTINCTION.

    • How speaking out about AI got him SILENCED.

    • The deep REGRET he feels for helping create AI.

    • The 6 DEADLY THREATS AI poses to humanity right now.

    • AI’s potential to advance healthcare, boost productivity, and transform education.

      00:00 Intro

      02:28 Why Do They Call You the Godfather of AI?

      04:37 Warning About the Dangers of AI

      07:23 Concerns We Should Have About AI

      10:50 European AI Regulations

      12:29 Cyber Attack Risk

      14:42 How to Protect Yourself From Cyber Attacks

      16:29 Using AI to Create Viruses

      17:43 AI and Corrupt Elections

      19:20 How AI Creates Echo Chambers

      23:05 Regulating New Technologies

      24:48 Are Regulations Holding Us Back From Competing With China?

      26:14 The Threat of Lethal Autonomous Weapons

      28:50 Can These AI Threats Combine?

      30:32 Restricting AI From Taking Over

      32:18 Reflecting on Your Life’s Work Amid AI Risks

      34:02 Student Leaving OpenAI Over Safety Concerns

      38:06 Are You Hopeful About the Future of AI?

      40:08 The Threat of AI-Induced Joblessness

      43:04 If Muscles and Intelligence Are Replaced, What’s Left?

      44:55 Ads

      46:59 Difference Between Current AI and Superintelligence

      52:54 Coming to Terms With AI’s Capabilities

      54:46 How AI May Widen the Wealth Inequality Gap

      56:35 Why Is AI Superior to Humans?

      59:18 AI’s Potential to Know More Than Humans

      1:01:06 Can AI Replicate Human Uniqueness?

      1:04:14 Will Machines Have Feelings?

      1:11:29 Working at Google

      1:15:12 Why Did You Leave Google?

      1:16:37 Ads

      1:18:32 What Should People Be Doing About AI?

      1:19:53 Impressive Family Background

      1:21:30 Advice You’d Give Looking Back

      1:22:44 Final Message on AI Safety

      1:26:05 What’s the Biggest Threat to Human Happiness?

    Follow Geoffrey:

    X – https://bit.ly/4n0shFf 

    The Diary Of A CEO:

    • Join DOAC circle here -https://doaccircle.com/
    • The 1% Diary is back – limited time only: https://bit.ly/3YFbJbt
    • The Diary Of A CEO Conversation Cards (Second Edition):
      https://g2ul0.app.link/f31dsUttKKb
    • Get email updates – https://bit.ly/diary-of-a-ceo-yt
      Follow Steven – https://g2ul0.app.link/gnGqL4IsKKb

    Sponsors:

    Stan Store – Visit https://link.stan.store/joinstanchallenge to join the challenge!

    KetoneIQ – Visit https://ketone.com/STEVEN  for 30% off your subscription order

    #GeoffreyHinton #ArtificialIntelligence #AIDangers

    Learn more about your ad choices. Visit megaphone.fm/adchoices

  • #472 – Terence Tao: Hardest Problems in Mathematics, Physics & the Future of AI

    AI transcript
    0:00:03 The following is a conversation with Terence Tao,
    0:00:07 widely considered to be one of the greatest mathematicians in history,
    0:00:10 often referred to as the Mozart of math.
    0:00:15 He won the Fields Medal and the Breakthrough Prize in Mathematics
    0:00:18 and has contributed groundbreaking work
    0:00:22 to a truly astonishing range of fields in mathematics and physics.
    0:00:27 This was a huge honor for me, for many reasons,
    0:00:32 including the humility and kindness that Terry showed to me
    0:00:34 throughout all our interactions.
    0:00:35 It means the world.
    0:00:38 And now, a quick few-second mention of each sponsor.
    0:00:43 Check them out in the description or at lexfriedman.com slash sponsors.
    0:00:45 It’s the best way to support this podcast.
    0:00:49 We’ve got Notion for teamwork, Shopify for selling stuff online,
    0:00:52 NetSuite for your business, Element for electrolytes,
    0:00:54 and AG1 for your health.
    0:00:55 Choose wisely, my friends.
    0:00:57 And now, on to the full ad reads.
    0:00:58 They’re all here in one place.
    0:01:03 I do try to make them interesting by talking about some random things
    0:01:04 I’m reading or thinking about.
    0:01:07 But if you skip, please still check out the sponsors.
    0:01:08 I enjoy their stuff.
    0:01:09 Maybe you will, too.
    0:01:11 To get in touch with me, for whatever reason,
    0:01:13 go to lexfriedman.com slash contact.
    0:01:15 All right, let’s go.
    0:01:17 This episode is brought to you by Notion,
    0:01:20 a note-taking and team collaboration tool.
    0:01:24 I use Notion for everything, for personal notes, for planning these podcasts,
    0:01:27 for collaborating with other folks,
    0:01:30 and for super boosting all of those things with AI,
    0:01:34 because Notion does a great job of integrating AI into the whole thing.
    0:01:38 You know what’s fascinating is the mechanisms of human memory
    0:01:43 before we had widely adopted technologies and tools
    0:01:46 for writing and recording stuff,
    0:01:47 certainly before the computer.
    0:01:50 So you can look at medieval monks, for example,
    0:01:55 that would use the now well-studied memory techniques,
    0:01:57 like the memory palace,
    0:01:58 the spatial memory techniques,
    0:01:59 to memorize entire books.
    0:02:01 That is certainly the effect of technology,
    0:02:03 started by Google Search
    0:02:05 and moving to all the other things like Notion,
    0:02:08 that we’re offloading more and more and more
    0:02:10 of the task of memorization to the computers,
    0:02:15 which I think is probably a positive thing
    0:02:20 because it frees more of our brain to do deep reasoning,
    0:02:24 whether that’s deep dive, focused specialization,
    0:02:26 or the journalist type of thinking,
    0:02:28 versus memorizing facts.
    0:02:31 Although I do think that there’s a kind of
    0:02:34 background model that’s formed when you memorize a lot of things,
    0:02:39 and from there, from inspiration, arises discovery.
    0:02:40 So I don’t know.
    0:02:47 There could be a great cost to offloading most of our memorization to the machines.
    0:02:50 But it is the way of the world.
    0:02:53 Try Notion AI for free when you go to notion.com slash lex.
    0:02:56 That’s all lowercase notion.com slash lex
    0:02:58 to try the power of Notion AI today.
    0:03:01 This episode is also brought to you by Shopify,
    0:03:05 a platform designed for anyone to sell anywhere with a great looking online store.
    0:03:08 Our future, friends, has a lot of robots in it.
    0:03:10 Looking into that distant future,
    0:03:16 you have Amazon warehouses with millions of robots that move packages around.
    0:03:22 You have Tesla bots everywhere in the factories and in the home and on the streets, and as baristas.
    0:03:23 All of that.
    0:03:24 That’s our future.
    0:03:28 Right now you have something like Shopify that connects a lot of humans in the digital space.
    0:03:38 But more and more, there will be an automated, digitized, AI-fueled connection between humans in the physical space.
    0:03:43 Like a lot of futures, there’s going to be negative things and there’s going to be positive things.
    0:03:48 And like a lot of possible futures, there’s little we could do about stopping it.
    0:03:53 All we can do is steer it in the direction that enables human flourishing.
    0:04:04 Instead of hiding in fear or fear-mongering, be part of the group of people that are building the best possible trajectory of human civilization.
    0:04:10 Anyway, sign up for a $1 per month trial period at shopify.com slash lex.
    0:04:11 That’s all lowercase.
    0:04:15 Go to shopify.com slash lex to take your business to the next level today.
    0:04:22 This episode is also brought to you by NetSuite, an all-in-one cloud business management system.
    0:04:26 There’s a lot of messy components to running a business.
    0:04:34 And I must ask, and I must wonder, at which point there’s going to be an AI, AGI-like CFO of a company.
    0:04:43 An AI agent that handles most, if not all, of the financial responsibilities or all of the things that NetSuite is doing.
    0:04:49 At which point will NetSuite increasingly leverage AI for those tasks?
    0:05:06 I think probably it will integrate AI into its tooling, but I think there’s a lot of edge cases that we need the human wisdom, the human intuition grounded in years of experience in order to make the tricky decision around the edge cases.
    0:05:22 I suspect that running a company is a lot more difficult than people realize, but there’s a lot of sort of paperwork type stuff that could be automated, could be digitized, could be summarized, integrated, and used as a foundation for the said humans to make decisions.
    0:05:25 Anyway, that’s our future.
    0:05:30 Download the CFO’s Guide to AI and Machine Learning at netsuite.com slash lex.
    0:05:32 That’s netsuite.com slash lex.
    0:05:39 This episode is also brought to you by Element, my daily zero-sugar and delicious electrolyte mix.
    0:05:45 You know, I run along the river often and get to meet some really interesting people.
    0:05:50 One of the people I met was preparing for his first ultra-marathon.
    0:05:52 I believe he said it was 100 miles.
    0:05:59 And that, of course, sparked in me the thought that I need for sure to do one myself.
    0:06:10 Some time ago now, I was planning to do something with David Goggins, and I think that’s still on the sort of to-do list between the two of us, to do some crazy physical feat.
    0:06:16 Of course, the thing that is crazy for me is daily activity for Goggins.
    0:06:25 But nevertheless, I think it’s important in the physical domain, the mental domain, and all domains of life to challenge yourself.
    0:06:32 And athletic endeavors are one of the most sort of crisp, clear, well-structured ways of challenging yourself.
    0:06:34 But there’s all kinds of things.
    0:06:35 Writing a book.
    0:06:40 To be honest, having kids and marriage and relationships and friendships.
    0:06:46 All of those, if you take it seriously, if you go all in and do it right, I think that’s a serious challenge.
    0:06:50 Because most of us are not prepared for it.
    0:06:51 You can learn along the way.
    0:06:59 And if you have the rigorous feedback loop of improving, constantly growing as a person, and really doing a great job of the thing,
    0:07:04 I think that might as well be an ultra-marathon.
    0:07:07 Anyway, get a sample pack for free with any purchase.
    0:07:10 Try it at drinkelement.com slash lex.
    0:07:15 And finally, this episode is also brought to you by AG1.
    0:07:19 An all-in-one daily drink to support better health and peak performance.
    0:07:22 I drink it every day.
    0:07:28 I’m preparing for a conversation on drugs in the Third Reich.
    0:07:34 And funny enough, it’s a kind of way to analyze Hitler’s biography.
    0:07:37 It’s to look at what he consumed throughout.
    0:07:41 And Norman Ohler does a great job of analyzing all of that.
    0:07:47 And tells the story of Hitler and the Third Reich in a way that hasn’t really been touched by historians before.
    0:07:54 It’s always nice to look at key moments in history through a perspective that’s not often taken.
    0:08:00 Anyway, I mention that because I think Hitler had a lot of stomach problems.
    0:08:04 And so that was the motivation for getting a doctor.
    0:08:08 The doctor that eventually would fill him up with all kinds of drugs.
    0:08:15 But the doctor earned Hitler’s trust by giving him probiotics, which is a kind of revolutionary thing at the time.
    0:08:20 And so that really helped deal with whatever stomach issues that Hitler was having.
    0:08:24 All of that is a reminder that war is waged by humans.
    0:08:26 And humans are biological systems.
    0:08:31 And biological systems require fuel and supplements and all of that kind of stuff.
    0:08:36 And depending on what you put in your body will affect your performance in the short term and the long term.
    0:08:40 With meth, that was true with Hitler,
    0:08:43 to his last days in the bunker in Berlin,
    0:08:46 with the whole cocktail of drugs that he was taking.
    0:08:49 So, I think I got myself somewhere deep.
    0:08:53 I’m not sure how to get out of this.
    0:08:57 It deserves a multi-hour conversation versus a few seconds of mention.
    0:09:04 But yeah, all of that was sparked by my thinking of AG1 and how much I love it.
    0:09:07 I appreciate that you’re listening to this.
    0:09:12 And coming along for the wild journey that these ad reads are.
    0:09:19 Anyway, AG1 will give you a one-month supply of fish oil when you sign up at drinkag1.com slash Lex.
    0:09:22 This is the Lex Friedman podcast.
    0:09:28 To support it, please check out our sponsors in the description or at lexfriedman.com slash sponsors.
    0:09:32 And now, dear friends, here’s Terrence Tao.
    0:09:54 What was the first really difficult research-level math problem that you encountered?
    0:09:56 One that gave you pause, maybe?
    0:10:02 Well, I mean, in your undergraduate education, you learn about the really hard impossible problems.
    0:10:05 Like the Riemann hypothesis, the twin primes conjecture.
    0:10:07 You can make problems arbitrarily difficult.
    0:10:08 That’s not really a problem.
    0:10:10 In fact, there’s even problems that we know to be unsolvable.
    0:10:17 What’s really interesting are the problems just on the boundary between what we can do easily and what are hopeless.
    0:10:25 But what are problems where existing techniques can do like 90% of the job and then you just need that remaining 10%?
    0:10:30 I think as a PhD student, the Kakeya problem certainly caught my eye.
    0:10:32 And it just got solved, actually.
    0:10:34 It’s a problem I’ve worked on a lot in my early research.
    0:10:41 Historically, it came from a little puzzle by the Japanese mathematician Sōichi Kakeya in like 1918 or so.
    0:10:48 So the puzzle is that you have a needle on the plane.
    0:10:52 Well, think of like driving on a road or something.
    0:10:54 And you want to execute a U-turn.
    0:10:55 You want to turn the needle around.
    0:10:59 But you want to do it in as little space as possible.
    0:11:03 So you want to use this little area in order to turn it around.
    0:11:06 But the needle is infinitely maneuverable.
    0:11:09 So you can imagine just spinning it around.
    0:11:10 It’s a unit needle.
    0:11:12 You can spin it around its center.
    0:11:15 And I think that gives you a disk of area, I think, pi over 4.
    0:11:22 Or you can do a 3-point U-turn, which is what we teach people in their driving schools to do.
    0:11:24 And that actually takes area pi over 8.
    0:11:27 So it’s a little bit more efficient than a rotation.
    0:11:31 And so for a while, people thought that was the most efficient way to turn things around.
    0:11:37 But Besicovitch showed that, in fact, you could actually turn the needle around using as little area as you wanted.
    0:11:47 So 0.001, there was some really fancy multi-back-and-forth U-turn thing that you could do, that you could turn the needle around.
    0:11:50 And in so doing, it would pass through every intermediate direction.
    0:11:51 Is this in the two-dimensional plane?
    0:11:53 This is in the two-dimensional plane.
    0:11:55 So we understand everything in two dimensions.
    0:11:57 So the next question is what happens in three dimensions.
    0:12:01 So suppose the Hubble Space Telescope is a tube in space.
    0:12:04 And you want to observe every single star in the universe.
    0:12:07 So you want to rotate the telescope to reach every single direction.
    0:12:09 And here’s the unrealistic part.
    0:12:11 Suppose that space is at a premium, which it totally is not.
    0:12:18 You want to occupy as little volume as possible in order to rotate your needle around in order to see every single star in the sky.
    0:12:22 How small a volume do you need to do that?
    0:12:25 And so you can modify Besicovitch’s construction.
    0:12:30 And so if your telescope has zero thickness, then you can use as little volume as you need.
    0:12:32 That’s a simple modification of the two-dimensional construction.
    0:12:37 But the question is that if your telescope is not zero thickness, but just very, very thin,
    0:12:44 some thickness delta, what is the minimum volume needed to be able to see every single direction as a function of delta?
    0:12:49 So as delta gets smaller, as your needle gets thinner, the volume should go down.
    0:12:50 But how fast does it go down?
    0:12:59 And the conjecture was that it goes down very, very slowly, like logarithmically, roughly speaking.
    0:13:01 And that was proved after a lot of work.
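
    For reference, here are the numbers from that story in LaTeX (the elementary areas are standard; the precise logarithmic form in the last line is an assumption, reading "goes down logarithmically" in the usual way):

      A_{\text{spin about center}} = \pi \left(\tfrac{1}{2}\right)^2 = \frac{\pi}{4}, \qquad
      A_{\text{3-point U-turn}} = \frac{\pi}{8}, \qquad
      \inf_{\text{Besicovitch sets}} A = 0,

      V(\delta) \sim \frac{c}{\log(1/\delta)} \quad \text{(conjectured minimal volume for a } \delta\text{-thick tube set in } \mathbb{R}^3\text{)}.
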
    0:13:04 So this seems like a fun puzzle. Why is it interesting?
    0:13:08 So it turns out to be surprisingly connected to a lot of problems in partial differential equations,
    0:13:12 in number theory, in geometry, combinatorics.
    0:13:16 For example, in wave propagation, you splash some water around, you create water waves,
    0:13:17 and they travel in various directions.
    0:13:22 But waves exhibit both particle and wave type behavior.
    0:13:29 So you can have what’s called a wave packet, which is like a very localized wave that is localized in space and moving a certain direction in time.
    0:13:34 And so if you plot it into space and time, it occupies a region which looks like a tube.
    0:13:43 And so what can happen is that you can have a wave which initially is very dispersed, but it all focuses at a single point later in time.
    0:13:47 Like you can imagine dropping a pebble into a pond and ripples spread out.
    0:13:52 But then if you time reverse that scenario, and the equations of wave motion are time reversible,
    0:13:59 you can imagine ripples that are converging to a single point, and then a big splash occurs, maybe even a singularity.
    0:14:07 And so it’s possible to do that, and geometrically what’s going on is that there’s always sort of light rays.
    0:14:12 So like if this wave represents light, for example, you can imagine this wave as a superposition of photons,
    0:14:15 all traveling at the speed of light.
    0:14:18 They all travel on these light rays, and they’re all focusing at this one point.
    0:14:24 So you can have a very dispersed wave, focus into a very concentrated wave at one point in space and time,
    0:14:27 but then it defocuses again, and it separates.
    0:14:30 But potentially, if the Kakeya conjecture had a negative solution,
    0:14:36 so what that means is that there’s a very efficient way to pack tubes pointing in different directions
    0:14:40 into a very, very narrow region of a very narrow volume.
    0:14:43 Then you would also be able to create waves that start out,
    0:14:46 there’ll be some arrangement of waves that start out very, very dispersed,
    0:14:49 but they would concentrate not just at a single point,
    0:14:55 but there’ll be a lot of concentrations in space and time.
    0:15:01 And you could create what’s called a blow-up, where these waves, their amplitude becomes so great
    0:15:05 that the laws of physics that they’re governed by are no longer wave equations,
    0:15:06 but something more complicated and non-linear.
    0:15:11 And so in mathematical physics, we care a lot about whether certain equations
    0:15:15 and wave equations are stable or not, whether they can create these singularities.
    0:15:19 There’s a famous unsolved problem called the Navier-Stokes regularity problem.
    0:15:22 So the Navier-Stokes equations, equations that govern the fluid flow,
    0:15:24 for incompressible fluids like water.
    0:15:28 The question asks, if you start with a smooth velocity field of water,
    0:15:32 can it ever concentrate so much that the velocity becomes infinite at some point?
    0:15:33 That’s called a singularity.
    0:15:37 We don’t see that in real life.
    0:15:40 If you splash around water in a bathtub, it won’t explode on you,
    0:15:44 or have water leaving at the speed of light.
    0:15:46 But potentially, it is possible.
    0:15:55 And in fact, in recent years, the consensus has drifted towards the belief that,
    0:16:00 in fact, for certain very special initial configurations of, say, water,
    0:16:02 that singularities can form.
    0:16:05 But people have not yet been able to actually establish this.
    0:16:08 The Clay Foundation has these seven Millennium Prize problems,
    0:16:11 with a million-dollar prize for solving any one of these problems.
    0:16:12 This is one of them.
    0:16:14 Of these seven, only one of them has been solved.
    0:16:16 That’s the Poincaré conjecture.
    0:16:22 So, the Kakeya conjecture is not directly related to the Navier-Stokes problem,
    0:16:28 but understanding it would help us understand some aspects of things like wave concentration,
    0:16:31 which would indirectly probably help us understand the Navier-Stokes problem better.
    0:16:33 Can you speak to the Navier-Stokes?
    0:16:37 So, the existence and smoothness, like you said, Millennial Prize problem.
    0:16:38 Right.
    0:16:39 You’ve made a lot of progress on this one.
    0:16:43 In 2016, you published a paper, Finite Time Blow-Up,
    0:16:46 for an averaged three-dimensional Navier-Stokes equation.
    0:16:46 Right.
    0:16:51 So, we’re trying to figure out if this thing usually doesn’t blow up.
    0:16:52 Right.
    0:16:55 But, can we say for sure it never blows up?
    0:16:56 Right.
    0:16:56 Yeah.
    0:16:58 So, yeah, that is literally the million-dollar question.
    0:16:59 Yeah.
    0:17:03 So, this is what distinguishes mathematicians from pretty much everybody else.
    0:17:11 Like, if something holds 99.99% of the time, that’s good enough for most, you know, for most things.
    0:17:20 But, mathematicians are one of the few people who really care about whether, like, 100%, really 100% of all situations are covered by, yeah.
    0:17:24 So, most fluid, most of the time, water does not blow up.
    0:17:28 But, could you design a very special initial state that does this?
    0:17:33 And, maybe we should say that this is a set of equations that govern fluids, in the field of fluid dynamics.
    0:17:34 Yes.
    0:17:36 Trying to understand how fluid behaves.
    0:17:42 And, it actually turns out to be a really complicated, you know, fluid is an extremely complicated thing to try to model.
    0:17:42 Yeah.
    0:17:44 So, it has practical importance.
    0:17:48 So, this Clay Prize problem concerns what’s called the incompressible Navier-Stokes, which governs things like water.
    0:17:51 There’s something called the compressible Navier-Stokes, which governs things like air.
    0:17:53 And, that’s particularly important for weather prediction.
    0:17:56 Weather prediction, it does a lot of computational fluid dynamics.
    0:17:59 A lot of it is actually just trying to solve the Navier-Stokes equations as best they can.
    0:18:05 Also, gathering a lot of data so that they can get, they can initialize the equation.
    0:18:06 There’s a lot of moving parts.
    0:18:08 So, it’s a very important problem, practically.
    0:18:12 Why is it difficult to prove general things?
    0:18:17 About the set of equations like it not blowing up.
    0:18:18 The short answer is Maxwell’s Demon.
    0:18:21 So, Maxwell’s Demon is a concept in thermodynamics.
    0:18:24 Like, if you have a box of two gases, you know, oxygen and nitrogen.
    0:18:27 And, maybe you start with all the oxygen on one side and nitrogen on the other side.
    0:18:29 But, there’s no barrier between them.
    0:18:30 Then, they will mix.
    0:18:32 And, they should stay mixed.
    0:18:35 There’s no reason why they should unmix.
    0:18:40 But, in principle, because of all the collisions between them, there could be some sort of weird conspiracy.
    0:18:50 Like, maybe there’s a microscopic demon called Maxwell’s Demon that will, every time an oxygen and nitrogen atom collide, they will bounce off in such a way that the oxygen sort of drifts onto one side and the nitrogen goes to the other.
    0:18:56 And, you could have an extremely improbable configuration emerge, which we never see.
    0:19:00 And, statistically, it’s extremely unlikely.
    0:19:03 But, mathematically, it’s possible that this can happen.
    0:19:05 And, we can’t rule it out.
    0:19:09 And, this is a situation that shows up a lot in mathematics.
    0:19:11 A basic example is the digits of pi.
    0:19:13 3.14159, and so forth.
    0:19:16 The digits look like they have no pattern.
    0:19:17 And, we believe they have no pattern.
    0:19:21 On the long term, you should see as many ones and twos and threes as fours and fives and sixes.
    0:19:26 There should be no preference in the digits of pi to favor, let’s say, 7 over 8.
    0:19:35 But, maybe there’s some demon in the digits of pi that, like, every time you compute more and more digits, it biases one digit to another.
    0:19:39 And, this is a conspiracy that should not happen.
    0:19:40 There’s no reason it should happen.
    0:19:45 But, there’s no way to prove it with our current technology.
    0:19:47 Okay, so, getting back to Navier-Stokes.
    0:19:49 A fluid has a certain amount of energy.
    0:19:52 And, because the fluid is in motion, the energy gets transported around.
    0:19:54 And, water is also viscous.
    0:20:02 So, if the energy is spread out over many different locations, the natural viscosity of the fluid will just damp out the energy and it will go to zero.
    0:20:08 And, this is what happens when we actually experiment with water.
    0:20:11 You splash around, there’s some turbulence and waves and so forth.
    0:20:13 But, eventually, it settles down.
    0:20:18 And, the lower the amplitude, the smaller the velocity, the more calm it gets.
    0:20:26 But, potentially, there is some sort of demon that keeps pushing the energy of the fluid into a smaller and smaller scale.
    0:20:27 And, it will move faster and faster.
    0:20:31 And, at faster speeds, the effect of viscosity is relatively less.
    0:20:41 And, it could happen that it creates some sort of, what’s called a self-similar blow-up scenario, where, you know, the energy of the fluid starts off at some large scale.
    0:20:53 And, then, it all sort of transfers the energy into a smaller region of the fluid, which then, at a much faster rate, moves into an even smaller region and so forth.
    0:20:59 And, each time it does this, it takes maybe half as long as the previous one.
    0:21:07 And, then, you could actually converge to all the energy concentrating in one point in a finite amount of time.
    0:21:12 And, that scenario is called finite-time blow-up.
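
    The "finite amount of time" here is just a convergent geometric series: if the first energy transfer takes time T and each subsequent one takes half as long,

      T + \frac{T}{2} + \frac{T}{4} + \cdots = \sum_{n=0}^{\infty} \frac{T}{2^n} = 2T < \infty,

    so infinitely many transfer steps fit inside a finite window of length 2T.
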
    0:21:14 So, in practice, this doesn’t happen.
    0:21:17 So, water is what’s called turbulent.
    0:21:23 So, it is true that, if you have a big eddy of water, it will tend to break up into smaller eddies.
    0:21:26 But, it won’t transfer all the energy from one big eddy into one smaller eddy.
    0:21:28 It will transfer into maybe three or four.
    0:21:31 And, then, those ones split up into maybe three or four small eddies of their own.
    0:21:37 And, so, the energy gets dispersed to the point where the viscosity can then keep everything under control.
    0:21:50 But, if it can somehow concentrate all the energy, keep it all together, and do it fast enough that the viscous effects don’t have enough time to calm everything down, then this blow-up can occur.
    0:21:57 So, there are papers that have claimed that, oh, you just need to take into account conservation of energy and just carefully use the viscosity.
    0:22:02 And, you can keep everything under control for not just Navier-Stokes, but for many, many types of equations like this.
    0:22:10 And, so, in the past, there have been many attempts to try to obtain what’s called global regularity for Navier-Stokes, which is the opposite of finite-time blow-up, that the velocity stays smooth.
    0:22:12 And, it all failed.
    0:22:15 There was always some sign error or some subtle mistake, and it couldn’t be salvaged.
    0:22:24 So, what I was interested in doing was trying to explain why we were not able to disprove finite-time blow-up.
    0:22:28 I couldn’t do it for the actual equations of fluids, which were too complicated.
    0:22:38 But, if I could average the equations of motion of Navier-Stokes, so, basically, if I could turn off certain types of ways in which water interacts, and only keep the ones that I want.
    0:22:58 So, in particular, if there’s a fluid, and it could transfer its energy from a large eddy into this small eddy, or this other small eddy, I would turn off the energy channel that would transfer energy to this one, and direct it only into this smaller eddy, while still preserving the law of conservation of energy.
    0:22:59 So, you’re trying to make a blow-up.
    0:22:59 Yeah.
    0:23:06 So, I basically engineer a blow-up by changing the laws of physics, which is one thing that mathematicians are allowed to do.
    0:23:07 We can change the equation.
    0:23:10 How does that help you get closer to the proof of something?
    0:23:10 Right.
    0:23:13 So, it provides what’s called an obstruction in mathematics.
    0:23:26 So, what I did was, basically, turn off certain parts of the equation; usually, when you turn off certain interactions, it makes the equation less non-linear, more regular, and less likely to blow up.
    0:23:35 But, I found that by turning off a very well-designed set of interactions, I could force all the energy to blow up in finite time.
    0:23:51 So, what that means is that, if you wanted to prove global regularity for Navier-Stokes, for the actual equation, you must use some feature of the true equation, which my artificial equation does not satisfy.
    0:23:54 So, it rules out certain approaches.
    0:24:04 So, the thing about math is, it’s not just about finding, you know, taking a technique that is going to work and applying it, but you need to not take the techniques that don’t work.
    0:24:17 And, for the problems that are really hard, often there are dozens of ways that you might think might apply to solve the problem, but it’s only after a lot of experience that you realize there’s no way that these methods are going to work.
    0:24:30 So, having these counter-examples for nearby problems kind of rules out, it saves you a lot of time because you’re not wasting energy on things that you now know cannot possibly ever work.
    0:24:37 How deeply connected is it to that specific problem of fluid dynamics, or is it some more general intuition you build up about mathematics?
    0:24:38 Right, yeah.
    0:24:43 So, the key phenomenon that my technique exploits is what’s called supercriticality.
    0:24:48 So, in partial differential equations, often these equations are like a tug-of-war between different forces.
    0:24:54 So, in Navier-Stokes, there’s the dissipation force coming from viscosity, and it’s very well understood.
    0:24:55 It’s linear.
    0:24:56 It calms things down.
    0:25:00 So, if viscosity was all there was, then nothing bad would ever happen.
    0:25:09 But there’s also transport, that energy in one location of space can get transported because the fluid is in motion to other locations.
    0:25:13 And that’s a non-linear effect, and that causes all the problems.
    0:25:19 So, there are these two competing terms in the Navier-Stokes equation, the dissipation term and the transport term.
    0:25:24 If the dissipation term dominates, if it’s large, then basically you get regularity.
    0:25:29 And if the transport term dominates, then we don’t know what’s going on.
    0:25:30 It’s a very non-linear situation.
    0:25:31 It’s unpredictable.
    0:25:31 It’s turbulent.
    0:25:38 So, sometimes these forces are in balance at small scales, but not in balance at large scales, or vice versa.
    0:25:40 So, Navier-Stokes is what’s called supercritical.
    0:25:45 So, at smaller and smaller scales, the transport terms are much stronger than the viscosity terms.
    0:25:48 So, the viscosity terms are the things that calm things down.
    0:25:53 And so, this is why the problem is hard.
    0:26:00 In two dimensions, the Soviet mathematician Ladyzhenskaya showed in the 60s that there was no blow-up.
    0:26:03 And in two dimensions, the Navier-Stokes equation is what’s called critical.
    0:26:08 The effect of transport and the effect of viscosity are about the same strength, even at very, very small scales.
    0:26:13 And we have a lot of technology to handle critical and also subcritical equations and prove regularity.
    0:26:17 But for supercritical equations, it was not clear what was going on.
    0:26:21 And I did a lot of work, and then there’s been a lot of follow-up,
    0:26:26 showing that for many other types of supercritical equations, you can create all kinds of blow-up examples.
    0:26:31 Once the nonlinear effects dominate the linear effects at small scales, you can have all kinds of bad things happen.
    0:26:40 So, one of the main insights of this line of work is that supercriticality versus criticality and subcriticality makes a big difference.
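
    One standard way to make that distinction concrete (a sketch of the usual scaling heuristic, not a computation from the conversation): Navier-Stokes has a scaling symmetry, and the conserved energy picks up a dimension-dependent power of the zoom factor lambda:

      u_\lambda(x,t) = \lambda\, u(\lambda x, \lambda^2 t), \qquad
      \int_{\mathbb{R}^d} |u_\lambda|^2 \, dx = \lambda^{2-d} \int_{\mathbb{R}^d} |u|^2 \, dx.

    In d = 2 the exponent is zero, so the energy constrains every scale equally (critical); in d = 3 it is -1, so the constraint weakens at finer and finer scales (supercritical).
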
    0:26:46 I mean, that’s a key qualitative feature that distinguishes some equations for being sort of nice and predictable,
    0:26:47 and, you know, like planetary motion.
    0:26:53 I mean, there’s certain equations that you can predict for millions of years, or thousands at least.
    0:26:54 Again, it’s not really a problem.
    0:26:59 But there’s a reason why we can’t predict the weather past two weeks into the future,
    0:27:00 because it’s a supercritical equation.
    0:27:03 Lots of really strange things are going on at very fine scales.
    0:27:12 So, whenever there’s some huge source of nonlinearity, that can create a huge problem for predicting what’s going to happen.
    0:27:13 Yeah.
    0:27:17 And if nonlinearity is somehow more and more featured and interesting at small scales.
    0:27:23 I mean, there’s many equations that are nonlinear, but in many equations, you can approximate things by the bulk.
    0:27:29 So, for example, planetary motion, you know, if you wanted to understand the orbit of the Moon or Mars or something,
    0:27:35 you don’t really need the microstructure of, like, the seismology of the Moon or, like, exactly how the Mars is distributed.
    0:27:39 You just, basically, you can almost approximate these planets by point masses.
    0:27:43 And just the aggregate behavior is important.
    0:27:50 But if you want to model a fluid, like the weather, you can’t just say, in Los Angeles, the temperature is this, the wind speed is this.
    0:27:54 For supercritical equations, the fine scale information is really important.
    0:27:57 So, if we can just linger on the Navier-Stokes equations a little bit.
    0:28:12 So, you’ve suggested, maybe you can describe it, that one of the ways to solve it or to negatively resolve it would be to sort of to construct a liquid, a kind of liquid computer.
    0:28:12 Right.
    0:28:17 And then show that the halting problem from computation theory has consequences for fluid dynamics.
    0:28:20 So, show it in that way.
    0:28:22 Can you describe this idea?
    0:28:22 Right, yeah.
    0:28:27 So, this came out of this work of constructing this average equation that blew up.
    0:28:33 So, as part of how I had to do this, so, there’s sort of this naive way to do it.
    0:28:35 You just keep pushing.
    0:28:41 Every time you get energy at one scale, you push it immediately to the next scale as fast as possible.
    0:28:44 This is sort of the naive way to force blow up.
    0:28:46 It turns out in five and higher dimensions, this works.
    0:28:50 But in three dimensions, there was this funny phenomenon that I discovered.
    0:28:59 That if you change the laws of physics so that you just always keep trying to push the energy into smaller and smaller scales,
    0:29:04 What happens is that the energy starts getting spread out into many scales at once.
    0:29:12 So, you have energy at one scale, you’re pushing it into the next scale, and then as soon as it enters that scale, you also push it to the next scale.
    0:29:15 But there’s still some energy left over from the previous scale.
    0:29:16 You’re trying to do everything at once.
    0:29:19 And this spreads out the energy too much.
    0:29:26 And then it turns out that it makes it vulnerable for viscosity to come in and actually just damp out everything.
    0:29:30 So, it turns out this directive motion doesn’t actually work.
    0:29:34 There was a separate paper by some other authors that actually showed this in three dimensions.
    0:29:38 So, what I needed was to program a delay.
    0:29:40 So, kind of like airlocks.
    0:29:46 So, I needed an equation which would start with a fluid doing something at one scale.
    0:29:48 It would push its energy into the next scale.
    0:29:54 But it would stay there until all the energy from the larger scale got transferred.
    0:29:58 And only after you pushed all the energy in, then you sort of opened the next gate.
    0:30:00 And then you push that in as well.
    0:30:07 So, by doing that, the energy inches forward scale by scale in such a way that it’s always localized at one scale at a time.
    0:30:11 And then it can resist the effects of viscosity because it’s not dispersed.
    0:30:18 So, in order to make that happen, I had to construct a rather complicated non-linearity.
    0:30:24 And it was basically like, you know, it was constructed like an electronic circuit.
    0:30:28 So, I actually thanked my wife for this because she was trained as an electrical engineer.
    0:30:34 And, you know, she talked about, you know, she had to design circuits and so forth.
    0:30:45 And, you know, if you want a circuit that does a certain thing, like maybe have a light that flashes on and then turns off and then on and then off, you can build it from more primitive components, you know, capacitors and resistors and so forth.
    0:30:54 And these diagrams, you can sort of follow up with your eyeballs and say, oh, yeah, the current will build up here and then it will stop and then it will do that.
    0:31:00 So, I knew how to build the analog of basic electronic components, you know, like resistors and capacitors and so forth.
    0:31:07 And I would stack them together in such a way that I would create something that would open one gate and then there would be a clock.
    0:31:10 And then once the clock hits a certain threshold, it would close it.
    0:31:13 It would become a Rube Goldberg type machine, but described mathematically.
    0:31:15 And this ended up working.
    0:31:19 So, what I realized is that if you could pull the same thing off for the actual equations.
    0:31:38 So, if the equations of water support a computation, so, like, you can imagine kind of a steampunk, but it’s really waterpunk type of thing where, you know, so modern computers are electronic, you know, they’re powered by electrons passing through very tiny wires and interacting with other electrons and so forth.
    0:31:44 But instead of electrons, you can imagine these pulses of water moving at a certain velocity.
    0:31:49 And maybe it’s, there are two different configurations corresponding to a bit being up or down.
    0:32:03 Probably that if you had two of these moving bodies of water collide, they would come out with some new configuration, which is, which would be something like an AND gate or OR gate, you know, that the output would depend in a very predictable way on the inputs.
    0:32:07 And like, you could chain these together and maybe create a Turing machine.
    0:32:11 And then you could, you have computers, which are made completely out of water.
    0:32:17 And if you have computers, then maybe you can do robotics, you know, hydraulics and so forth.
    0:32:25 And so you could create some machine which is basically a fluid analog of what’s called a von Neumann machine.
    0:32:32 So von Neumann proposed, if you want to colonize Mars, the sheer cost of transporting people and machines to Mars is just ridiculous.
    0:32:47 But if you could transport one machine to Mars, and this machine had the ability to mine the planet, create some more materials, smelt them, and build more copies of the same machine, then you could colonize the whole planet over time.
    0:32:55 So if you could build a fluid machine, yeah, it’s a fluid robot.
    0:32:56 Okay.
    0:32:58 And what it would do, it’s, it’s purpose in life.
    0:33:03 It’s programmed so that it would create a smaller version of itself in some sort of cold state.
    0:33:04 It wouldn’t start just yet.
    0:33:10 Once it’s ready, the big robot conviction of water would transfer all its energy into the smaller configuration and then power down.
    0:33:11 Okay.
    0:33:12 And then like, like clean itself up.
    0:33:18 And then what’s left is this newest state, which would then turn on and do the same thing, but smaller and faster.
    0:33:20 And then the equation has a certain scaling symmetry.
    0:33:22 Once you do that, it can just keep iterating.
    0:33:26 So this in principle would create a blow up for the actual Navier-Stokes.
    0:33:29 And this is what I managed to accomplish for this average Navier-Stokes.
    0:33:32 So it provided this sort of roadmap to solve the problem.
    0:33:39 Now, this is a pipe dream because there are so many things that are missing for this to actually be a reality.
    0:33:44 So I, I, I can’t create these basic logic gates.
    0:33:48 I don’t, I don’t have these in these special configurations of water.
    0:33:59 So, I mean, there are candidates that include vortex rings that might possibly work, but also, you know, analog computing is really nasty compared to digital computing.
    0:34:00 I mean, because there’s always errors.
    0:34:04 You have to do a lot of error correction along the way.
    0:34:12 I don’t know how to completely power down the big machine so that it doesn’t interfere with the running of the smaller machine, but everything in principle can happen.
    0:34:14 Like it doesn’t contradict any of the laws of physics.
    0:34:18 Um, so it’s sort of evidence that this thing is possible.
    0:34:26 There are other groups who are now pursuing ways to make Navier-Stokes blow up, which are nowhere near as ridiculously complicated as this.
    0:34:39 They are actually pursuing something much closer to the direct self-similar model, which doesn’t quite work as is, but there could be some simpler scheme than what I just described to make this work.
    0:34:46 There is a real leap of genius here to go from Navier-Stokes to this Turing machine.
    0:35:03 So it goes from the self-similar blob scenario, where you’re trying to get a smaller and smaller blob, to now having a liquid Turing machine that gets smaller and smaller, and somehow seeing how that could be used
    0:35:06 to say something about a blow-up.
    0:35:07 I mean, that’s a big leap.
    0:35:08 So there’s precedent.
    0:35:18 I mean, the thing about mathematics is that it’s really good at spotting connections between what you might think of as completely different problems.
    0:35:23 But if the mathematical form is the same, you can draw a connection.
    0:35:28 Um, so, um, there’s a lot of work previously on what’s called cellular automator.
    0:35:31 Um, the most famous of which is Conway’s Game of Life.
    0:35:33 This is an infinite discrete grid.
    0:35:36 And at any given time, each point of the grid is either occupied by a cell or it’s empty.
    0:35:40 And there’s a very simple rule that, uh, tells you how these cells evolve.
    0:35:42 So sometimes cells live and sometimes they die.
    0:35:51 Um, and there’s, um, you know, um, when I was a, uh, a student, uh, it was a very popular screensaver to actually just have these, these animations going on and they look very chaotic.
    0:35:57 In fact, they look a little bit like turbulent flow sometimes, but at some point people discovered more and more interesting structures within this game of life.
    0:36:00 Um, so for example, they discovered this thing called a glider.
    0:36:05 So a glider is a very tiny configuration of like four or five cells, which evolves and just moves in a certain direction.
    0:36:07 And that’s like these vortex rings.
    0:36:10 Um, yeah, so this is an analogy.
    0:36:19 The Game of Life is kind of like a discrete equation and, and, um, the fluid Navier-Stokes is a continuous equation, but mathematically they have some similar features.
    0:36:27 Um, and, um, so over time people discovered more and more interesting things that you could build within the Game of Life.
    0:36:28 Game of Life is a very simple system.
    0:36:34 It only has like three or four rules, um, to, to do it, but, but you can design all kinds of interesting configurations inside it.
    0:36:38 Um, there’s something called a glider gun that does nothing to spit out gliders one at a, one, one at a time.
    0:36:47 And then after a lot of effort, people managed to create AND gates and OR gates for gliders.
    0:36:55 Like there’s this massive, ridiculous structure which, if you have a stream of gliders coming in here and a stream of gliders coming in here,
    0:36:57 then you may produce a stream of gliders coming out.
    0:37:04 If both of the streams have gliders, then there’ll be an output stream.
    0:37:06 But if only one of them does, then nothing comes out.
    0:37:08 So they could build something like that.
    0:37:17 And once you can build these basic gates, then just from software engineering, you can build almost anything.
    0:37:19 Um, you can build a Turing machine.
    0:37:22 I mean, it’s, it’s like an enormous steampunk type things.
    0:37:28 They look ridiculous, but then people also generated self-replicating objects in the game of life.
    0:37:36 A massive machine, a von Neumann machine, which, over a huge period of time, with what always looked like glider guns inside doing these very steampunk calculations,
    0:37:40 would create another version of itself, which could replicate.
    0:37:41 It’s so incredible.
    0:37:45 A lot of this was like community crowdsourced by like amateur mathematicians, actually.
    0:37:48 Um, so I knew about that, that, that work.
    0:37:52 And so that is part of what inspired me to propose the same thing with Navier-Stokes.
    0:37:57 Which is much harder because, as I said, analog is much worse than digital.
    0:38:03 You can’t just directly take the constructions in the Game of Life and plunk them in.
    0:38:05 But again, it just shows it’s possible.
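
    For concreteness, the entire rule set being discussed fits in a few lines of Python. A minimal sketch (the unbounded grid is stored as a set of live-cell coordinates; the glider below is the standard five-cell one, which reappears shifted diagonally after four steps):

      # Conway's Game of Life on an unbounded grid.
      # A live cell survives with 2 or 3 live neighbors;
      # a dead cell becomes live with exactly 3; everything else is dead.
      from collections import Counter

      def step(live):
          counts = Counter(
              (x + dx, y + dy)
              for (x, y) in live
              for dx in (-1, 0, 1)
              for dy in (-1, 0, 1)
              if (dx, dy) != (0, 0)
          )
          return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

      # The five-cell glider: after 4 generations it is the same shape shifted by (1, 1).
      glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
      state = glider
      for _ in range(4):
          state = step(state)
      print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
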
    0:38:10 You know, there’s a kind of emergence that happens with these cellular automata.
    0:38:14 Local rules, maybe it’s similar to fluids.
    0:38:15 I don’t know.
    0:38:24 But local rules operating at scale can create these incredibly complex dynamic structures.
    0:38:28 Do you think any of that is amenable to mathematical analysis?
    0:38:33 Do we have the tools to say something profound about that?
    0:38:38 The thing is, you can get these emergent, very complicated structures, but only with very carefully prepared initial conditions.
    0:38:44 Yeah, so these glider guns and gates and software machines, if you just plunk down
    0:38:47 some cells randomly, you will not see any of these.
    0:38:57 And that’s the analogous situation with Navier-Stokes again, you know, that with typical initial conditions, you will not have any of this weird computation going on.
    0:39:06 But basically through engineering, you know, by specially designing things in a very special way, you can pick out these clever constructions.
    0:39:15 I wonder if it’s possible to prove the sort of the negative of like, basically prove that only through engineering can you ever create something interesting.
    0:39:21 This is a recurring challenge in mathematics, what I call the dichotomy between structure and randomness.
    0:39:24 That most objects that you can generate in mathematics are random.
    0:39:26 They look random, like the digits of pi,
    0:39:28 which we believe is a good example.
    0:39:31 Um, but there’s a very small number of things that have patterns.
    0:39:41 But now, you can prove that something has a pattern by just exhibiting it; like if something has a simple pattern and you have a proof that it does something like repeat itself every so often, you can do that.
    0:39:48 And you can prove, for example, that most sequences of digits have no pattern.
    0:39:52 So like, if you just pick digits randomly, there’s something called the law of large numbers.
    0:39:55 It tells you you’re going to get as many ones as twos in the long run.
    0:40:02 But we have a lot fewer tools when I give you a specific sequence.
    0:40:06 Like the digits of pi: how can I show that this doesn’t have some weird pattern to it?
    0:40:14 Some other work that I have spent a lot of time on is to prove what are called structure theorems or inverse theorems that give tests for when something is very structured.
    0:40:17 So some functions are, what’s called additive.
    0:40:20 Like if you have a function that maps the natural numbers to the natural numbers.
    0:40:24 So maybe, um, you know, two maps to four or three maps to six and so forth.
    0:40:30 Um, some functions are what’s called additive, which means that if you add, if you add two inputs together, the output gets, gets added as well.
    0:40:32 Uh, for example, I’m multiplying by a constant.
    0:40:40 If you multiply a number by 10, um, if you, if you, if you, if you multiply a plus b by 10, that’s the same as multiplying a by 10 and b by 10 and then adding them together.
    0:40:42 So some, um, functions are additive.
    0:40:46 Some functions are kind of additive, but not completely additive.
    0:40:53 So for example, if I take a number n, I multiply it by the square root of two, and I take the integer part of that.
    0:40:56 So 10 times the square root of two is like 14 point something.
    0:40:59 So 10 maps to 14, and 20 maps to 28.
    0:41:03 So in that case, additivity holds:
    0:41:05 10 plus 10 is 20, and 14 plus 14 is 28.
    0:41:08 But because of this rounding, sometimes there’s roundoff errors.
    0:41:16 And sometimes when you add a plus b, this function doesn’t quite give you the sum of the two individual outputs, but the sum plus or minus one.
    0:41:19 So it’s almost additive, but not quite additive.
    0:41:33 Um, so there’s a lot of useful results in mathematics and I’ve worked a lot on developing things like this to the effect that if, if a function exhibits some structure like this, then, um, it’s basically, there’s a reason for why it’s true.
    0:41:42 And the reason is because there’s, there’s some other nearby function, which is actually, um, completely structured, which is explaining this sort of partial pattern that you have.
    0:41:54 Um, and so if you have these sort of inverse theorems, it, um, it creates this sort of dichotomy that, that either the objects that you study are either have no structure at all, or they are somehow related to something that is structured.
    0:41:59 Um, and in either way, in either, um, uh, in either case, you can make progress.
    0:42:06 A good example of this is that there’s this old theorem in mathematics called Szemerédi’s theorem, proven in the 1970s.
    0:42:09 It concerns trying to find a certain type of pattern in a set of numbers.
    0:42:14 The pattern is arithmetic progression, things like three, five, and seven, or, or, or 10, 15, and 20.
    0:42:26 And Szemeredi, André, Szemeredi proved that, um, any set of numbers that are sufficiently big, um, what’s called, what’s called positive density, has, um, arithmetic progressions in it of, of any length you wish.
    0:42:33 Um, so for example, um, the odd numbers have a set of density one half, um, and they contain arithmetic progressions of any length.
    0:42:37 Um, so in that case, it’s obvious because the, the, the odd numbers are really, really structured.
    0:42:40 I can just take, uh, 11, 13, 15, 17.
    0:42:44 I just, I can, I can easily find arithmetic progressions in, in, in that set.
0:42:48 But Szemerédi's theorem also applies to random sets.
0:42:56 Say I take the set of all numbers, and I flip a coin for each number, and I only keep the numbers for which I got heads.
0:43:00 So I just flip coins, I randomly take out half the numbers, and I keep one half.
0:43:02 So that's a set that has no patterns at all.
0:43:10 But just from random fluctuations, you will still get a lot of arithmetic progressions in that set.
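For reference, a standard statement of the theorem under discussion:

```latex
% Szemerédi's theorem (1975): if a set A of natural numbers has positive
% upper density,
\limsup_{N\to\infty} \frac{\lvert A \cap \{1,\dots,N\} \rvert}{N} > 0,
% then A contains arithmetic progressions
a,\; a+d,\; a+2d,\; \dots,\; a+(k-1)d
% of every finite length k.
```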
0:43:17 Can you prove that there are arithmetic progressions of arbitrary length within a random set?
0:43:19 Yes. Have you heard of the infinite monkey theorem?
0:43:23 Usually mathematicians give boring names to theorems, but occasionally they give colorful names.
0:43:32 The popular version of the infinite monkey theorem is that if you have an infinite number of monkeys in a room, each with a typewriter, they type out text randomly.
0:43:37 Almost surely, one of them is going to generate the entire script of Hamlet, or any other finite string of text.
0:43:40 It will just take some time, quite a lot of time, actually.
0:43:42 But if you have an infinite number, then it happens.
0:43:53 So basically, the theorem says that if you take an infinite string of digits or whatever, eventually any finite pattern you wish will emerge.
0:43:56 It may take a long time, but it will eventually happen.
0:43:59 In particular, arithmetic progressions of any length will eventually appear.
0:44:03 Okay, but you need an extremely long random sequence for this to happen.
    0:44:05 I suppose that’s intuitive.
    0:44:07 It’s just infinity.
    0:44:08 Yeah.
    0:44:10 Infinity absorbs a lot of sins.
    0:44:11 Yeah.
    0:44:13 How are we humans supposed to deal with infinity?
0:44:26 Well, you can think of infinity as an abstraction of a finite number for which you do not have a bound. Nothing in real life is truly infinite.
0:44:35 But you can ask yourself questions like, what if I had as much money as I wanted, or what if I could go as fast as I wanted?
0:44:45 And the way mathematicians formalize that is that mathematics has found a formalism to idealize something extremely large or extremely small as being exactly infinite or exactly zero.
0:44:49 And often the mathematics becomes a lot cleaner when you do that.
0:45:01 I mean, in physics, we joke about assuming spherical cows. Real-world problems have all kinds of real-world effects, but you can idealize, send certain things to infinity, send certain things to zero.
0:45:06 And the mathematics becomes a lot simpler to work with there.
0:45:16 I wonder how often using infinity forces us to deviate from the physics of reality.
    0:45:16 Yeah.
0:45:18 So there are a lot of pitfalls.
0:45:30 We spend a lot of time in undergraduate math classes teaching analysis, and analysis is often about how to take limits, and whether, for example, a plus b is always b plus a.
0:45:34 When you have a finite number of terms and you add them, you can swap them and there's no problem.
0:45:43 But when you have an infinite number of terms, there are these sorts of shell games you can play, where you can have a series which converges to one value, but you rearrange it and it suddenly converges to another value.
0:45:45 And so you can make mistakes.
0:45:55 You have to know what you're doing when you allow infinity. You have to introduce these epsilons and deltas, and there's a certain type of reasoning that helps you avoid mistakes.
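A classic instance of the rearrangement phenomenon mentioned here (a standard example, added for clarity): the alternating harmonic series converges conditionally, and reordering its terms, two positive for each negative, changes the sum.

```latex
1 - \tfrac{1}{2} + \tfrac{1}{3} - \tfrac{1}{4} + \cdots = \ln 2,
\qquad\text{but, rearranged,}\qquad
1 + \tfrac{1}{3} - \tfrac{1}{2} + \tfrac{1}{5} + \tfrac{1}{7} - \tfrac{1}{4} + \cdots = \tfrac{3}{2}\ln 2.
```

Riemann's rearrangement theorem says that, in fact, any target value can be reached by a suitable reordering.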
0:46:06 In more recent years, people have started taking results that are true in infinite limits and what's called finitizing them.
0:46:11 So you know that something's true eventually, but you don't know when. Now give me a rate.
0:46:11 Okay.
0:46:18 If I don't have an infinite number of monkeys, but a large finite number of monkeys, how long do I have to wait for Hamlet to come out?
0:46:21 And that's a more quantitative question.
0:46:28 And this is something you can attack by purely finite methods, and you can use your finite intuition.
0:46:33 And in this case, it turns out to be exponential in the length of the text that you're trying to generate.
0:46:38 And so this is why you never see the monkeys create Hamlet.
0:46:41 You can maybe see them create a four-letter word, but nothing that big.
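A minimal simulation of this quantitative point (an illustrative sketch, not from the episode; it assumes a 26-letter alphabet and uniformly random typing). The waiting time grows roughly like 26 to the power of the target's length, which is why short words show up but Hamlet never does.

```python
import random
import string

def mean_wait(target: str, trials: int = 100) -> float:
    """Estimate the average number of random keystrokes until
    `target` first appears in the typed stream."""
    total = 0
    for _ in range(trials):
        window, steps = "", 0
        while not window.endswith(target):
            # Keep only the last len(target) characters typed so far.
            window = (window + random.choice(string.ascii_lowercase))[-len(target):]
            steps += 1
        total += steps
    return total / trials

# Roughly 26, 26**2 = 676, and 26**3 = 17576 keystrokes on average:
for word in ["a", "ab", "abc"]:
    print(word, mean_wait(word))
```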
0:46:50 And I personally find that once you finitize an infinite statement, it does become much more intuitive, and it's no longer so weird.
0:46:56 So even if you're working with infinity, it's good to finitize so that you can have some intuition.
    0:46:57 Yeah.
0:47:00 The downside is that the finitized versions are just much, much messier.
0:47:07 And so the infinite results are usually found first, often decades earlier, and then later on people finitize them.
0:47:16 Since we've mentioned a lot of math and a lot of physics, what is the difference between mathematics and physics as disciplines, as ways of understanding and seeing the world?
0:47:19 Maybe we can throw engineering in there.
0:47:22 You mentioned your wife is an engineer, giving a new perspective on circuits.
    0:47:22 Right.
0:47:28 So there are these different ways of looking at the world, and given that you've done mathematical physics, you've worn all the hats.
    0:47:29 Right.
0:47:33 So I think science in general is an interaction between three things.
0:47:35 There's the real world.
0:47:43 There's what we observe of the real world, our observations, and then our mental models as to how we think the world works.
0:47:47 So we can't directly access reality.
0:47:48 Okay.
0:47:52 All we have are the observations, which are incomplete, and they have errors.
0:47:59 And there are many, many cases where we want to know something, for example, what is the weather like tomorrow?
0:48:02 We don't yet have the observation, and we'd like a prediction.
0:48:08 And then we have these simplified models, sometimes making unrealistic assumptions, spherical cow type things.
0:48:10 Those are the mathematical models.
0:48:12 Mathematics is concerned with the models.
0:48:19 Science collects the observations and proposes the models that might explain these observations.
0:48:24 What mathematics does is stay within the model and ask, what are the consequences of that model?
0:48:32 What predictions would the model make of future observations or past observations?
0:48:33 Does it fit observed data?
0:48:35 So there's definitely a symbiosis.
0:48:48 I guess mathematics is unusual among disciplines in that we start from hypotheses, the axioms of a model, and ask what conclusions come from that model.
0:48:54 In almost any other discipline, you start with the conclusions.
0:48:57 I want to build a bridge, I want to make money.
0:48:57 I want to do this.
0:48:58 Okay.
0:49:01 And then you find the path to get there.
0:49:07 There's a lot less speculation about, suppose I did this, what would happen?
0:49:14 Planning and modeling. Speculative fiction maybe is one other place.
0:49:16 But that's about it, actually.
0:49:20 Most of the things we do in life are conclusions-driven, including physics and science.
0:49:22 I mean, they want to know, where is this asteroid going to go?
0:49:24 What is the weather going to be tomorrow?
0:49:31 But math also has this other direction of going from the axioms.
0:49:32 What do you think of
0:49:36 this tension in physics between theory and experiment?
    0:49:41 What do you think is the more powerful way of discovering truly novel ideas about reality?
    0:49:43 Well, you need both top down and bottom up.
0:49:46 Yeah, it's really an interaction between all these things.
0:49:53 So over time, the observations and the theory and the modeling should both get closer to reality.
0:49:59 But initially, and this is always the case, they're always far apart to begin with.
0:50:04 And you need one to figure out where to push the other.
0:50:15 So if your model is predicting anomalies that are not picked up by experiment, that tells experimenters where to look to find more data to refine the models.
0:50:18 So it goes back and forth.
0:50:23 Within mathematics itself, there's also a theory and an experimental component.
    0:50:28 It’s just that until very recently, theory has dominated almost completely.
    0:50:30 Like 99% of mathematics is theoretical mathematics.
    0:50:33 And there’s a very tiny amount of experimental mathematics.
0:50:40 I mean, people do do it; if they want to study prime numbers or whatever, they can just generate large data sets.
0:50:45 And so once we had computers, we began to do it a little bit.
0:50:55 Although even before computers, Gauss, for example, conjectured the most basic theorem in number theory, which is called the prime number theorem, and which predicts how many primes there are up to a million, or up to a trillion.
0:50:57 It's not an obvious question.
0:51:13 And basically what he did was compute, mostly by himself, but he also hired human computers, people whose professional job it was to do arithmetic, to compute the first hundred thousand primes or something, and he made tables and made a prediction.
0:51:16 And that was an early example of experimental mathematics.
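The conjecture Gauss extracted from those tables, in modern notation (a standard statement, added for reference):

```latex
% Prime number theorem (conjectured by Gauss, proved in 1896):
\pi(x) \;\sim\; \frac{x}{\ln x} \qquad (x \to \infty),
% where \pi(x) counts the primes up to x. Gauss's tables also suggested
% the sharper approximation
\pi(x) \;\approx\; \mathrm{Li}(x) = \int_2^x \frac{dt}{\ln t}.
```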
0:51:23 But until very recently, theoretical mathematics was just much more successful.
0:51:30 Because doing complicated mathematical computations just wasn't feasible until very recently.
0:51:36 And even nowadays, even though we have powerful computers, only some mathematical things can be explored numerically.
    0:51:38 There’s something called the combinatorial explosion.
0:51:43 Say you want to study all possible subsets of the numbers one to a thousand.
0:51:45 There are only a thousand numbers.
0:51:46 How bad could it be?
0:51:56 It turns out the number of different subsets of one to a thousand is two to the power of one thousand, which is way bigger than anything any computer could ever enumerate.
0:52:06 So there are certain math problems that very quickly become intractable to attack by direct brute force computation.
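The scale of this particular explosion is easy to verify directly; a two-line illustration (not from the episode):

```python
# Number of subsets of {1, ..., 1000}: a 302-digit number, far beyond
# anything any computer could ever enumerate one by one.
n_subsets = 2 ** 1000
print(len(str(n_subsets)))  # prints 302
```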
0:52:09 Chess is another famous example.
0:52:14 The number of chess positions is more than we can get a computer to fully explore.
0:52:28 But now we have AI; we have tools to explore this space, not with 100% guarantees of success, but with experiment. So we can empirically solve chess now.
0:52:37 For example, we have very good AIs that don't explore every single position in the game tree, but they have found some very good approximations.
0:52:50 And people are actually using these chess engines to do experimental chess; they're revisiting old chess theories about, oh, with this type of opening, this is a good type of move, this is not.
0:52:57 And they can use these chess engines to refine, and in some cases overturn, conventional wisdom about chess.
0:53:04 And I do hope that mathematics will have a larger experimental component in the future, perhaps powered by AI.
0:53:07 We'll, of course, talk about that. But in the case of chess,
0:53:17 and there's a similar thing in mathematics, the engine isn't providing a kind of formal explanation of the different positions.
0:53:20 It's just saying which position is better or not, which you can intuit as a human being.
0:53:25 And then from that, we humans can construct a theory of the matter.
0:53:29 You've mentioned Plato's allegory of the cave.
    0:53:30 Mm-hmm.
    0:53:37 So, in case people don’t know, it’s where people are observing shadows of reality, not reality itself.
    0:53:41 And they believe what they’re observing to be reality.
0:53:50 Is that, in some sense, what mathematicians, and maybe all humans, are doing: looking at shadows of reality?
    0:53:54 Is it possible for us to truly access reality?
    0:53:57 Well, there are these three ontological things.
0:54:02 There's actual reality, there are our observations, and there are our models.
0:54:07 And technically they are distinct, and I think they will always be distinct.
0:54:07 Right.
0:54:11 But they can get closer over time.
0:54:20 And the process of getting closer often means that you have to discard your initial intuitions.
0:54:36 Astronomy provides great examples. An initial model of the world is that it's flat, because it looks flat, and that it's big, while the rest of the universe, the skies, the sun, for example, looks really tiny.
0:55:05 And so you start off with a model which is actually really far from reality, but it fits the observations that you have, so things look good. But over time, as you make more and more observations, bringing them closer to reality, the model gets dragged along with it. And so over time, we had to realize that the Earth was round, that it spins, that it goes around the sun, the solar system goes around the galaxy, and so on and so forth, and likewise the universe is expanding, and the expansion is even accelerating. And in fact,
0:55:11 very recently, this year, I saw that there's even evidence that the acceleration of the universe itself is non-constant.
0:55:15 And the explanation behind why that is, is…
0:55:16 It's catching up.
0:55:18 It's catching up.
0:55:21 I mean, it's still, you know, the dark matter, dark energy, this kind of thing.
    0:55:21 Yes.
0:55:42 We have a model that fits the data really well; it just has a few parameters that you have to specify. So people say, oh, those are fudge factors, and with enough fudge factors you can explain anything. But the mathematical point of the model is that you want to have fewer parameters in your model than data points in your observational set.
    0:55:48 So, if you have a model with 10 parameters that explains 10 observations, that is a completely useless model.
    0:55:49 It’s what’s called overfitted.
0:55:59 But if you have a model with, say, two parameters that explains a trillion observations, that's different. The dark matter model, I think, has something like 14 parameters.
0:56:15 And it explains the petabytes of data that the astronomers have. One way to think about a physical or mathematical theory is that it's a compression of the universe, a data compression.
0:56:24 So you have these petabytes of observations, and you'd like to compress them to a model which you can describe in five pages, with a certain number of parameters to specify.
0:56:31 And if it can fit almost all of your observations to reasonable accuracy, then the more compression you make, the better your theory.
    0:56:37 In fact, one of the great surprises of our universe and of everything in it is that it’s compressible at all.
    0:56:39 That’s the unreasonable effectiveness of mathematics.
    0:56:40 Yeah.
    0:56:41 Einstein had a quote like that.
    0:56:44 The, the most incomprehensible thing about the universe is that it is comprehensible.
    0:56:45 Right.
0:56:49 And not just comprehensible; you can capture it in an equation like E equals mc squared.
0:56:52 There is actually a possible mathematical explanation for that.
0:56:56 So there's this phenomenon in mathematics called universality.
0:57:01 Many complex systems at the macroscale emerge out of lots of tiny interactions at the microscale.
0:57:11 And normally, because of the combinatorial explosion, you would think that the macroscale equations must be exponentially more complicated than the microscale ones.
0:57:14 And they are, if you want to solve them completely exactly.
0:57:22 Like if you want to model all the atoms in a box of air, that's Avogadro's number of particles, which is humongous, right?
0:57:23 There's a huge number of particles.
0:57:26 If you actually had to track each one, it would be ridiculous.
0:57:34 But certain laws emerge at the macroscopic scale that almost don't depend on what's going on at the microscale, or only depend on a very small number of parameters.
0:57:44 So if you want to model a gas of quintillions of particles in a box, you just need to know its temperature and pressure and volume, a few parameters, like five or six.
0:57:51 And that models almost everything you need to know about these 10 to the 23 or however many particles.
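The textbook instance of this compression is the ideal gas law, stated here for concreteness (a standard equation, not quoted from the conversation):

```latex
PV = nRT
% One relation among pressure P, volume V, amount of gas n, and
% temperature T, with a single universal constant R, summarizes the
% behavior of on the order of 10^{23} individual molecules.
```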
0:58:05 We don't understand universality anywhere near as well as we would like mathematically, but there are much simpler toy models where we do have a good understanding of why universality occurs.
0:58:14 The most basic one is the central limit theorem, which explains why the bell curve shows up everywhere in nature, why so many things are distributed by what's called a Gaussian distribution.
0:58:18 The famous bell curve. There's even a meme with this curve.
0:58:22 And even the meme applies broadly; there's universality to the meme.
0:58:26 Yes, you can go meta if you like. But there are many, many processes.
0:58:33 For example, you can take lots and lots of independent random variables and average them together in various ways.
0:58:40 You can take a simple average or a more complicated average, and we can prove in various cases that these bell curves, these Gaussians, emerge.
0:58:42 And it is a satisfying explanation.
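A minimal sketch of the emergence being described (illustrative code, assuming uniform inputs): average many independent random variables whose individual distribution is flat, not bell-shaped, and a bell curve appears in the averages.

```python
import random

# Average 100 independent uniform random variables, 10,000 times over.
# By the central limit theorem the averages cluster in a bell curve
# around 0.5, even though each individual input is flat.
averages = [
    sum(random.random() for _ in range(100)) / 100
    for _ in range(10_000)
]

# Crude text histogram: the counts rise and fall like a Gaussian.
for lo in (0.35, 0.40, 0.45, 0.50, 0.55, 0.60):
    count = sum(lo <= a < lo + 0.05 for a in averages)
    print(f"{lo:.2f}-{lo + 0.05:.2f} {'#' * (count // 100)}")
```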
0:58:44 But sometimes they don't.
0:58:51 If you have many different inputs, and they are correlated in some systemic way, then you can get something very far from a bell curve showing up.
0:58:54 And it's also important to know when universality fails.
0:58:58 So universality is not a 100% reliable thing to rely on.
0:59:03 The global financial crisis was a famous example of this.
0:59:14 People thought that mortgage defaults had this sort of Gaussian type behavior, that if you ask, for a population of
0:59:22 100,000 Americans with mortgages, what proportion of them default on their mortgages, then if everything was decorrelated, it would be a nice bell curve.
0:59:26 And then you can manage risk with options and derivatives and so forth.
0:59:29 And it is a very beautiful theory.
0:59:37 But if there are systemic shocks in the economy that can push everybody to default at the same time, that's very non-Gaussian behavior.
0:59:42 And this wasn't fully accounted for in 2008.
0:59:48 Now I think there's more awareness that systemic risk is actually a much bigger issue.
0:59:53 And just because a model is pretty and nice, it may not match reality.
0:59:59 And so the mathematics of working out what models do is really important.
1:00:07 But also the science of validating when the models fit reality and when they don't; I mean, you need both.
1:00:20 And mathematics can help, because, for example, these central limit theorems tell you that under certain axioms, like non-correlation, if all the inputs were not correlated to each other, then you get this Gaussian behavior.
1:00:24 And if things are not fine, it tells you where to look for weaknesses in the model.
1:00:40 So if you have a mathematical understanding of the central limit theorem, and someone proposes to use these Gaussian copulas or whatever to model default risk, then if you're mathematically trained, you would ask, okay, but what are the systemic correlations between all your inputs?
1:00:45 And then you can ask the economists, how much risk is there of that?
1:00:47 And then you can go look for that.
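A toy version of the 2008 point, under loudly simplified, hypothetical assumptions (nothing here models real mortgages): independent defaults give a tight bell curve, while a rare systemic shock that correlates everyone produces a fat tail no Gaussian predicts.

```python
import random

def default_counts(n_loans=10_000, p=0.05, shock_prob=0.0, runs=1_000):
    """Simulate runs of a loan pool; with probability `shock_prob` a
    systemic shock raises every loan's default rate to 50% at once."""
    counts = []
    for _ in range(runs):
        rate = 0.5 if random.random() < shock_prob else p
        counts.append(sum(random.random() < rate for _ in range(n_loans)))
    return counts

independent = default_counts(shock_prob=0.0)   # tight bell curve near 500
correlated = default_counts(shock_prob=0.02)   # mostly near 500, but rare
                                               # catastrophic runs near 5,000
print(max(independent), max(correlated))
```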
1:00:51 So there's always this synergy between science and mathematics.
    1:00:54 A little bit on the topic of universality.
    1:01:03 You’re known and celebrated for working across an incredible breadth of mathematics reminiscent of Hilbert a century ago.
1:01:11 In fact, the great Fields Medal-winning mathematician Tim Gowers has said that you are the closest thing we get to Hilbert.
    1:01:15 He’s a colleague of yours.
    1:01:16 Good friend.
    1:01:21 But anyway, so you are known for this ability to go both deep and broad in mathematics.
    1:01:30 So you’re the perfect person to ask, do you think there are threads that connect all the disparate areas of mathematics?
1:01:35 Is there a kind of deep underlying structure to all of mathematics?
1:01:49 There are certainly a lot of connecting threads, and a lot of the progress of mathematics can be represented by stories of two fields of mathematics that were previously not connected becoming connected.
1:01:54 An ancient example is geometry and number theory.
1:01:57 In the time of the ancient Greeks, these were considered different subjects.
1:01:59 I mean, mathematicians worked on both.
1:02:04 You could work on geometry, most famously, but also on numbers.
1:02:08 But they were not really considered related.
1:02:15 I mean, a little bit: you could say that this length was five times this length, because you could take five copies of this length, and so forth.
1:02:26 But it wasn't until Descartes, who developed what we now call analytic geometry, that people realized you can parametrize the plane, a geometric object, by two real numbers.
1:02:31 Every point can be, and so geometric problems can be turned into problems about numbers.
1:02:36 And today this feels almost trivial.
1:02:39 Like there's almost no content to it.
1:02:45 Of course the plane is x and y; that's what we teach, and it's internalized.
1:02:51 But it was an important development that these two fields were unified.
1:02:55 And this process has just gone on throughout mathematics, over and over again.
1:03:00 Algebra and geometry were separate, and now we have a subject, algebraic geometry, that connects them, and so on, over and over again.
    1:03:04 And that’s certainly the type of mathematics that I enjoy the most.
1:03:07 So I think there are different styles of being a mathematician.
1:03:13 There are hedgehogs and foxes: a fox knows many things a little bit, but a hedgehog knows one thing very, very well.
1:03:17 And in mathematics, there are definitely both hedgehogs and foxes.
1:03:20 And then there are people who can play both roles.
1:03:27 And I think the ideal collaboration between mathematicians involves some diversity.
1:03:31 Like a fox working with many hedgehogs, or vice versa.
1:03:35 But I identify mostly as a fox, certainly.
1:03:48 I like arbitrage, somehow: learning how one field works, learning the tricks of that field, and then going to another field which people don't think is related, where I can adapt the tricks.
1:03:51 So you see the connections between the fields.
    1:03:52 Yeah.
1:03:55 So there are other mathematicians who are far deeper than I am.
1:03:57 They're really hedgehogs.
1:04:04 They know everything about one field, and they're much faster and more effective in that field, but I can give them these extra tools.
1:04:10 You've said that you can be both the hedgehog and the fox, depending on the context and the collaboration.
1:04:17 So what, if it's at all possible, is the difference between those two ways of thinking about a problem?
1:04:25 Say you're encountering a new problem: searching for the connections versus a very singular focus.
1:04:30 I'm much more comfortable with the fox paradigm.
1:04:35 So yeah, I like looking for analogies, narratives.
1:04:37 I spend a lot of time on that.
1:04:41 If there's a result I see in one field, and I like the result,
1:04:43 it's a cool result, but I don't like the proof,
1:04:47 like it uses types of mathematics that I'm not super familiar with,
1:04:51 I often try to reprove it myself using the tools that I favor.
1:04:53 Often my proof is worse.
1:05:00 But by the exercise of doing so, I can say, oh, now I can see what the other proof was trying to do.
1:05:07 And from that, I can get some understanding of the tools that are used in that field.
1:05:13 So it's very exploratory, doing crazy things in crazy fields, and reinventing the wheel a lot.
    1:05:13 Yeah.
1:05:23 Whereas the hedgehog style is, I think, much more scholarly. It's very knowledge-based; you stay up to speed on all the developments in the field.
1:05:29 You know all the history, and you have a very good understanding of exactly the strengths and weaknesses of each particular technique.
1:05:37 I think you'd rely a lot more on calculation than on trying to find narratives.
1:05:43 So yeah, I could do that too, but there are other people who are extremely good at that.
1:05:52 Let's step back and maybe look at a bit of a romanticized version of mathematics.
1:06:03 I think you've said that early on in your life, math was more like a puzzle-solving activity when you were young.
    1:06:10 When did you first encounter a problem or proof where you realized math can have a kind of elegance and beauty to it?
    1:06:14 That’s a good question.
1:06:20 When I came to graduate school at Princeton, John Conway was there at the time.
1:06:21 He passed away a few years ago.
1:06:27 But I remember one of the very first research talks I went to was a talk by Conway on what he called extreme proofs.
1:06:33 Conway just had this amazing way of thinking about all kinds of things in a way you wouldn't normally think of.
1:06:42 He thought of proofs themselves as occupying some sort of space. So if you want to prove something, let's say that there are infinitely many primes,
1:06:45 okay, there will be different proofs, but you can rank them on different axes.
1:06:50 Like some proofs are elegant, some proofs are long, some proofs are elementary, and so forth.
1:06:52 And so there's this cloud.
1:06:55 The space of all proofs itself has some sort of shape.
1:07:00 And he was interested in the extreme points of this shape.
1:07:08 Like out of all these proofs, which one is the shortest, at the expense of everything else, or the most elementary, or whatever.
1:07:12 And so he gave some examples of well-known theorems.
1:07:16 And then he would give what he thought was the extreme proof along these different axes.
1:07:38 I just found that really eye-opening: it's not just that getting a proof of a result was interesting, but once you have that proof, trying to optimize it in various ways, that proving itself had some craftsmanship to it.
1:07:41 That did something for my writing style.
1:07:49 You know, when you do your math assignments as an undergraduate, your homework and so forth, you're sort of encouraged to just write down any proof that works.
1:07:50 Okay.
1:07:53 You hand it in, and as long as it gets a tick mark, you move on.
1:08:01 But if you want your results to actually be influential and be read by people, they can't just be correct.
1:08:09 They should also be a pleasure to read: motivated, and adaptable to generalize to other things.
1:08:12 It's the same in many other disciplines, like coding.
1:08:14 There are a lot of analogies between math and coding.
1:08:16 I like analogies, if you haven't noticed.
1:08:28 You can write spaghetti code that works for a certain task, and it's quick and dirty and it works, but there are lots of good principles for writing code well,
1:08:32 so that other people can use it and build upon it, and so that it has fewer bugs and whatever.
1:08:37 And there are similar things with mathematics.
1:08:45 Yeah, first of all, there are so many beautiful things there, and Conway is one of the great minds ever in mathematics and computer science.
1:08:49 Just even considering the space of proofs.
1:08:49 Yeah.
1:08:54 And saying, okay, what does this space look like, and what are the extremes?
1:09:01 Like you mentioned, coding as an analogy is interesting, because there's also this activity called code golf.
1:09:02 Oh yeah, yeah.
1:09:11 Which I also find beautiful and fun, where people use different programming languages to try to write the shortest possible program that accomplishes a particular task.
1:09:11 Yeah.
1:09:13 And I believe there are even competitions on this.
1:09:14 Yeah, yeah.
1:09:25 And it's also a nice way to stress test not just the programs, or in this case the proofs,
1:09:31 but also the different languages, maybe the different notations, used to accomplish a task.
1:09:31 Yeah, you learn a lot.
1:09:42 It may seem like a frivolous exercise, but it can generate all these insights which, if you didn't have this artificial objective to pursue, you might not see.
1:09:47 What do you think is the most beautiful or elegant equation in mathematics?
1:09:53 I mean, one of the things that people often look to for beauty is simplicity.
1:10:04 So if you look at E equals mc squared, it's when a few concepts come together. That's why the Euler identity is often considered the most beautiful equation in mathematics.
1:10:08 Do you find beauty in that one, in the Euler identity?
    1:10:08 Yeah.
1:10:16 Well, as I said, what I find most appealing is connections between different things. So, e to the pi i equals minus one.
1:10:19 People say, oh, these are all the fundamental constants.
1:10:19 Okay.
1:10:21 That's cute.
1:10:37 But to me, the exponential function measures exponential growth: compound interest or decay, anything which is continuously growing or continuously decreasing, growth and decay, dilation and contraction, is modeled by the exponential function.
1:10:42 Whereas pi comes from circles and rotation.
1:10:45 If you want to rotate a needle, for example, by 180 degrees, you need to rotate by pi radians.
1:10:53 And i, the complex numbers, represents the swapping between the real and imaginary axes, so a 90 degree rotation, a change in direction.
1:10:58 So the exponential function represents growth and decay in the direction that you already are.
1:11:09 When you stick an i in the exponential, instead of motion in the same direction as your current position, it's motion at a right angle to your current position, so rotation.
1:11:17 And then e to the pi i equals minus one tells you that if you rotate for a time pi, you end up in the opposite direction.
1:11:25 So it unifies geometry and dynamics, dilation and exponential growth, through this act of complexification, rotation by i.
1:11:28 So it connects together all these tools of mathematics.
1:11:36 Dynamics, geometry, and the complex numbers are all considered next-door neighbors in mathematics because of this identity.
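For reference, the identity under discussion, and the general formula it specializes (standard statements):

```latex
e^{i\pi} = -1,
\qquad\text{the case } \theta = \pi \text{ of }\qquad
e^{i\theta} = \cos\theta + i\sin\theta,
% so multiplying by e^{i\theta} rotates the complex plane by the angle
% \theta: growth along the real axis becomes rotation once an i enters
% the exponent.
```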
1:11:46 Do you think the thing you mentioned is cute, the collision of notations from these disparate fields, is just a frivolous side effect?
1:11:53 Or do you think there is legitimate value in all of our old friends coming together in the one identity?
1:11:56 Right, well, it's confirmation that you have the right concepts.
1:12:11 So when you first study anything, you have to measure things and give them names. And initially, sometimes, because your model is again too far off from reality, you give the wrong things the best names.
1:12:14 And you only find out later what's really important.
1:12:15 Physics does this sometimes.
1:12:22 So, actually, with physics: E equals mc squared. One of the big things there was the E, right?
1:12:31 When Aristotle first came up with his laws of motion, and then Galileo and Newton and so forth, they worked with the things they could measure.
1:12:34 They could measure mass and acceleration and force and so forth.
1:12:39 And so in Newtonian mechanics, for example, F equals ma was the famous Newton's second law of motion.
1:12:41 So those were the primary objects.
1:12:43 So they gave them the central billing in the theory.
1:12:50 It was only later, after people started analyzing these equations, that there always seemed to be these quantities that were conserved.
1:12:52 So, in particular, momentum and energy.
1:12:57 And it's not obvious that things have an energy.
1:13:01 It's not something you can directly measure the same way you can measure mass and velocity and so forth.
1:13:04 But over time, people realized that this was actually a really fundamental concept.
1:13:14 Hamilton, eventually, in the 19th century, reformulated Newton's laws of physics into what's called Hamiltonian mechanics, where the energy, which is now called the Hamiltonian, was the dominant object.
1:13:21 Once you know how to measure the Hamiltonian of any system, you can describe completely the dynamics, what happens to all the states.
1:13:25 It really was a central actor, which was not obvious initially.
1:13:33 And this change of perspective really helped when quantum mechanics came along.
1:13:46 Because the early physicists who studied quantum mechanics had a lot of trouble trying to adapt their Newtonian thinking, where everything was a particle and so forth, to quantum mechanics.
1:14:09 But it turns out that the Hamiltonian, which was secretly behind the scenes in classical mechanics, is also the key object in quantum mechanics; there's also an object called the Hamiltonian there.
1:14:10 It's a different type of object.
1:14:12 It's what's called an operator, rather than a function.
1:14:16 But again, once you specify it, you specify the entire dynamics.
1:14:22 So there's something called Schrödinger's equation that tells you exactly how quantum systems evolve once you have the Hamiltonian.
1:14:29 So side by side, they look like completely different objects: one involves particles, one involves waves, and so forth.
1:14:35 But with this centrality, you could start actually transferring a lot of intuition and facts from classical mechanics to quantum mechanics.
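Side by side, in standard notation (added for reference, not quoted from the conversation), the two formulations look completely different but are both driven entirely by the Hamiltonian H:

```latex
% Classical mechanics (Hamilton's equations; H is a function of
% position q and momentum p):
\frac{dq}{dt} = \frac{\partial H}{\partial p},
\qquad
\frac{dp}{dt} = -\frac{\partial H}{\partial q}.
% Quantum mechanics (Schrödinger's equation; H is now an operator
% acting on the state \psi):
i\hbar\,\frac{\partial \psi}{\partial t} = H\psi.
```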
    1:14:39 So, for example, in classical mechanics, there’s this thing called Noether’s theorem.
    1:14:43 Every time there’s a symmetry in a physical system, there is a conservation law.
    1:14:46 So, the laws of physics are translation invariant.
1:14:52 Like, if I move 10 steps to the left, I experience the same laws of physics as if I was here, and that corresponds to the conservation of momentum.
1:14:57 If I turn around by some angle, again, I experience the same laws of physics.
1:14:59 This corresponds to the conservation of angular momentum.
1:15:03 If I wait for 10 minutes, I still have the same laws of physics.
1:15:04 So there's time translation invariance.
1:15:06 This corresponds to the law of conservation of energy.
1:15:11 So there's this fundamental connection between symmetry and conservation.
1:15:16 And that's also true in quantum mechanics, even though the equations are completely different.
1:15:19 But because they're both coming from a Hamiltonian, and the Hamiltonian controls everything,
1:15:23 every time the Hamiltonian has a symmetry, the equations will have a conservation law.
1:15:31 So once you have the right language, it actually makes things a lot cleaner.
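The dictionary just listed, compactly (the standard correspondences of Noether's theorem):

```latex
\text{space translation} \;\longleftrightarrow\; \text{conservation of momentum},
\\
\text{rotation} \;\longleftrightarrow\; \text{conservation of angular momentum},
\\
\text{time translation} \;\longleftrightarrow\; \text{conservation of energy}.
```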
1:15:37 One of the reasons why we can't unify quantum mechanics and general relativity yet is that we haven't figured out what the fundamental objects are.
1:15:42 For example, we have to give up the notion of space and time being these almost Euclidean type spaces.
1:15:49 And we kind of know that at very tiny scales, there are going to be quantum fluctuations.
1:15:56 There's a spacetime foam, and trying to use Cartesian coordinates x, y, z there is going to be a non-starter.
1:16:05 But we don't know what to replace it with; we don't actually have the mathematical concepts.
1:16:08 The analogue of the Hamiltonian that sort of organizes everything.
1:16:18 Does your gut say that there is a theory of everything, that it's even possible to unify, to find this language that unifies general relativity and quantum mechanics?
    1:16:19 I believe so.
1:16:24 I mean, the history of physics has been one of unification, much like mathematics, over the years.
1:16:28 Electricity and magnetism were separate theories, and then Maxwell unified them.
1:16:33 Newton unified the motions of the heavens with the motions of objects on the earth, and so forth.
1:16:35 So it should happen.
1:16:41 It's just that, again, to go back to this model of observations and theory,
1:16:44 part of our problem is that physics is a victim of its own success.
1:16:51 Our two big theories of physics, general relativity and quantum mechanics, are so good now.
1:17:10 Together, they cover 99.9% of all the observations we can make, and you have to go to extremely insane particle accelerator energies, or the early universe, or things that are really hard to measure, in order to get any deviation from either of these two theories, to the point where you could actually figure out how to combine them together.
1:17:18 But I have faith: we've been doing this for centuries, we've made progress before, and there's no reason why we should stop.
1:17:23 Do you think it will be a mathematician that develops a theory of everything?
1:17:35 What often happens is that when the physicists need some theory of mathematics, there's often some precursor that the mathematicians worked out earlier.
1:17:45 So when Einstein started realizing that space was curved, he went to some mathematician and asked, is there some theory of curved space that the mathematicians already came up with that could be useful?
1:17:48 And he was told, oh yeah, I think Riemann came up with something.
1:18:00 And indeed, Riemann had developed Riemannian geometry, which is precisely a theory of spaces that are curved in various general ways, and which turned out to be almost exactly what was needed for Einstein's theory.
1:18:03 This is going back to Wigner's unreasonable effectiveness of mathematics.
1:18:11 I think the theories that work well to explain the universe tend to also involve the same mathematical objects that work well to solve mathematical problems.
1:18:14 Ultimately, they're both just ways of organizing data
1:18:16 in useful ways.
1:18:22 It just feels like you might need to go to some weird land that's very hard to intuit.
1:18:24 Like, you know, you have string theory.
1:18:27 Yeah, that was a leading candidate for many decades.
1:18:32 I think it's slowly falling out of fashion because it's not matching experiment.
1:18:37 So one of the big challenges, of course, like you said, is that experiment is very tough.
1:18:37 Yes.
1:18:41 Because of how effective both theories are.
1:18:49 But the other is, you're not just deviating from space-time.
1:18:51 You're going into some crazy number of dimensions.
1:18:52 Yeah.
1:18:58 You're doing all kinds of weird stuff. To us, we've gone so far from the flat earth that we started at.
1:19:09 And now it's very hard to use our limited, evolved cognition to intuit what that reality really is like.
1:19:16 This is why analogies are so important. I mean, the round earth is not intuitive because we're stuck on it.
1:19:23 But round objects in general, we have pretty good intuition about, and we have intuition about how light works and so forth.
1:19:35 And it's actually a good exercise to work out how eclipses and the phases of the sun and the moon can be really easily explained by round earth and round moon models.
1:19:42 And you can just take a basketball and a golf ball and a light source and actually do these things yourself.
1:19:46 So the intuition is there, but yeah, you have to transfer it.
1:19:54 That is a big intellectual leap for us, to go from flat to round earth, because our life is mostly lived in flatland.
    1:19:55 Yeah.
    1:19:56 To load that information.
1:19:57 And we all just take it for granted.
1:20:03 We take so many things for granted because science has established a lot of evidence for this kind of thing.
1:20:06 But, you know, we're on a round rock.
    1:20:07 Yeah.
    1:20:09 Flying through space.
    1:20:09 Yeah.
    1:20:10 Yeah.
    1:20:11 That’s a big leap.
1:20:15 And you have to take a chain of those leaps, more and more, as we progress.
    1:20:15 Right.
    1:20:15 Yeah.
1:20:24 So modern science is maybe, again, a victim of its own success, in that, in order to be more accurate, it has to move further and further away from your initial intuition.
1:20:30 And so, for someone who hasn't gone through the whole process of a science education, it looks more and more suspicious because of that.
1:20:33 So we need more grounding.
1:20:41 I mean, there are scientists who do excellent outreach, but there are also lots of science things that you can do at home.
1:20:42 I mean, there are lots of YouTube videos.
1:20:50 I did a YouTube video recently with Grant Sanderson, who we talked about earlier, on how the ancient Greeks were able to measure things like the distance to the moon and the distance to the sun.
1:20:54 And, you know, using techniques that you can also replicate yourself.
1:21:00 It doesn't all have to be fancy space telescopes and very intimidating mathematics.
    1:21:01 Yeah.
1:21:02 Yeah, I highly recommend that.
1:21:06 I believe you gave a lecture, and you also did an incredible video with Grant.
1:21:14 It's a beautiful experience to try to put yourself in the mind of a person from that time, shrouded in mystery.
    1:21:14 Right.
1:21:19 You know, you're on this planet, you don't know the shape of it, the size of it.
1:21:24 You see some stars, you see some things, and you try to localize yourself in this world.
    1:21:25 Yeah.
    1:21:25 Yeah.
    1:21:28 And try to make some kind of general statements about distance to places.
    1:21:30 Change of perspective is really important.
1:21:31 They say travel broadens the mind.
1:21:36 This is intellectual travel. Put yourself in the mind of the ancient Greeks, or some other,
1:21:42 some other time period. Make hypotheses, spherical cows, whatever; speculate.
1:21:47 And this is what mathematicians do, and what some artists do, actually.
    1:21:52 It’s just incredible that given the extreme constraints, you could still say very powerful things.
    1:21:54 That’s why it’s inspiring.
1:22:01 Looking back in history, how much could be figured out when you didn't have much to figure stuff out with?
1:22:05 If you propose axioms, the mathematics lets you follow those axioms to their conclusions.
1:22:09 And sometimes you can get quite a long way from your initial hypotheses.
    1:22:12 If we stay in the land of the weird, you mentioned general relativity.
1:22:18 You've contributed to the mathematical understanding of Einstein's field equations.
1:22:19 Can you explain this work?
1:22:30 And from a mathematical standpoint, what aspects of general relativity are intriguing to you, challenging to you?
1:22:32 I have worked on some equations in this area.
1:22:44 There's something called the wave maps equation, or the sigma model, which is not quite the equation of space-time gravity itself, but of certain fields that might exist on top of space-time.
1:22:51 So Einstein's equations of relativity just describe space and time itself, but then there are other fields that live on top of that.
1:23:01 There's the electromagnetic field, there are things called Yang-Mills fields, and there's this whole hierarchy of different equations, of which Einstein's is considered one of the most nonlinear and difficult.
1:23:05 But relatively low on the hierarchy is this thing called the wave maps equation.
1:23:10 So it's a wave which, at any given point, is constrained to lie on a sphere.
1:23:17 You can think of a bunch of arrows in space and time, pointing in different directions.
1:23:19 But they propagate like waves.
1:23:26 If you wiggle an arrow, it will propagate and make all the other arrows move, kind of like sheaves of wheat in a wheat field.
1:23:34 And I was interested in the global regularity problem for this equation: is it possible for all the energy here to collect at a point?
1:23:40 The equation I considered is what's called a critical equation, where the behavior at all scales is roughly the same.
1:23:48 And I was able, barely, to show that you couldn't actually force a scenario where all the energy concentrated at one point.
1:23:53 The energy had to disperse a little bit, and the moment it dispersed a little bit, it would stay regular.
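For orientation, one common way of writing the wave maps equation into the sphere (sign and metric conventions vary; this is a sketch, not a formula quoted from the episode): for a map φ from spacetime to the unit sphere,

```latex
\Box\varphi = -\bigl(\partial^\alpha \varphi \cdot \partial_\alpha \varphi\bigr)\,\varphi,
\qquad \lvert\varphi\rvert = 1,
% where \Box is the wave operator; the nonlinear right-hand side is
% exactly the term forced by the constraint that \varphi stays on the
% sphere.
```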
1:23:55 Yeah, this was back in 2000.
1:23:58 That was part of why I got interested in Navier-Stokes afterwards, actually.
1:24:02 So I developed some techniques to solve that problem.
1:24:07 Part of it is that this problem is really nonlinear, because of the curvature of the sphere.
1:24:12 There was a certain nonlinear effect which was a non-perturbative effect.
1:24:17 When you looked at it normally, it looked larger than the linear effects of the wave equation.
1:24:22 And so it was hard to keep things under control, even when the energy was small.
1:24:24 But I developed what's called a gauge transformation.
1:24:30 So the equation is kind of like an evolution of sheaves of wheat, and they're all bending back and forth.
1:24:32 And so there's a lot of motion.
1:24:41 But imagine stabilizing the flow by attaching little cameras at different points in space, which try to move in a way that captures most of the motion.
1:24:45 And under this sort of stabilized flow, the flow becomes a lot more linear.
1:24:52 I discovered a way to transform the equation to reduce the amount of nonlinear effects.
1:24:55 And then I was able to solve the equation.
1:25:03 I found this transformation while visiting my aunt in Australia. I was trying to understand the dynamics of all these fields, and I couldn't do it with pen and paper.
1:25:07 And I had not enough facility with computers to do any computer simulations.
1:25:21 So I ended up closing my eyes, lying on the floor, and just imagining myself to actually be this vector field, rolling around to try to see how to change coordinates in such a way that things in all directions would behave in a reasonably linear fashion.
1:25:27 And, yeah, my aunt walked in on me while I was doing that, and she asked what I was doing.
1:25:29 It's complicated, is the answer.
1:25:32 Yeah, and she said, okay, fine, you're a young man, I don't ask questions.
1:25:39 I have to ask: how do you approach solving difficult problems?
1:25:51 If it's possible to go inside your mind when you're thinking, are you visualizing in your mind the mathematical objects?
    1:25:52 Symbols, maybe?
    1:25:56 What are you visualizing in your mind usually when you’re thinking?
1:25:57 A lot of pen and paper.
1:26:02 One thing you pick up as a mathematician is what I call cheating strategically.
1:26:10 The beauty of mathematics is that you get to change the problem, change the rules as you wish.
1:26:13 You don't get to do this in any other field.
1:26:20 Like, if you're an engineer and someone says, build a bridge over this river, you can't say, I want to build this bridge over here instead, or I want to build it out of paper instead of steel.
1:26:23 But in math, you can do whatever you want.
1:26:31 It's like trying to solve a computer game where there are unlimited cheat codes available.
1:26:37 So you can say, there's a dimension that's large?
1:26:38 I'll set it to one.
1:26:39 I'll solve the one-dimensional problem first.
1:26:41 There's a main term and an error term?
1:26:43 I'm going to make a spherical cow assumption.
1:26:44 I'll assume the error term is zero.
1:26:49 And so the way you should solve these problems is not in this Iron Man mode where
1:26:50 you make things maximally difficult.
1:26:56 The way you should approach any reasonable math problem is that
1:27:00 if there are 10 things that are making life difficult, find a version of the problem
1:27:02 that turns off nine of the difficulties but only keeps one of them.
1:27:08 So you install nine cheats.
1:27:09 Okay.
1:27:12 If you install 10 cheats, then the game is trivial. But with nine cheats, you solve
1:27:15 one problem, and that teaches you how to deal with that particular
1:27:16 difficulty.
1:27:19 Then you turn that one off and you turn something else on, and then you
1:27:20 solve that one.
1:27:24 And after you know how to solve the 10 difficulties separately, then you
1:27:26 have to start merging them a few at a time.
1:27:32 As a kid, I watched a lot of these Hong Kong action movies, from my
1:27:33 culture.
1:27:37 And one thing is that in every fight scene, maybe the
1:27:43 hero gets swarmed by a hundred bad guy goons or whatever, but it will always be choreographed
1:27:46 so that he's only fighting one person at a time; he defeats that person
1:27:47 and moves on.
1:27:50 And because of that, he can defeat all of them.
1:27:50 Right.
1:27:55 Whereas if they had fought a bit more intelligently and just swarmed the guy at once, it would
1:28:00 make for much worse cinema, but they would win.
1:28:04 Are you usually a pen-and-paper person?
1:28:07 Or are you working with a computer and LaTeX?
1:28:09 I'm mostly pen and paper, actually.
1:28:11 So in my office, I have four giant blackboards.
1:28:16 And sometimes I just have to write everything I know about the problem on the four blackboards
1:28:19 and then sit on my couch and just look at the whole thing.
1:28:23 Is it all symbols, like notation, or are there some drawings?
1:28:27 Oh, there's a lot of drawing, and a lot of bespoke doodles that only make sense to
1:28:27 me.
1:28:32 I mean, and that's the beauty of blackboards: you erase, and it's
1:28:33 a very organic thing.
1:28:38 I'm beginning to use computers more and more, partly because AI makes it much easier
1:28:43 to do simple coding things. If I wanted to plot a function before, something
1:28:46 moderately complicated, like an iteration, I'd have to remember
1:28:50 how to set up a Python program, and how a for loop works,
1:28:53 and debug it, and it would take two hours and so forth.
1:28:58 And now I can do it in 10 or 15 minutes. So yeah, I'm using
1:29:00 computers more and more to do simple explorations.
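A minimal sketch of the kind of quick exploration being described (Python; the logistic map here is an arbitrary stand-in, not an example from the conversation):

```python
# Plot the first few iterates of a map f on [0, 1].
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return 3.7 * x * (1 - x)  # logistic map, chosen only for illustration

x = np.linspace(0.0, 1.0, 500)
y = x.copy()
for i in range(1, 4):      # compute f, f∘f, f∘f∘f
    y = f(y)
    plt.plot(x, y, label=f"iterate {i}")

plt.xlabel("x")
plt.legend()
plt.show()
```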
1:29:03 Let's talk about AI a little bit, if we could.
1:29:09 So maybe a good entry point is just talking about computer-assisted proofs in general.
1:29:18 Can you describe the Lean formal proof programming language and how it can help as a proof assistant,
1:29:24 and maybe how you started using it and how it has helped you?
1:29:31 So Lean is a computer language, much like standard languages like Python and C and so forth,
1:29:36 except that in most languages, the focus is on producing executable code.
1:29:41 Lines of code do things: they flip bits, or they make a robot move, or they deliver
1:29:43 your text on the internet or something.
1:29:46 So Lean is a language that can also do that.
1:29:51 It can be run as a standard traditional language, but it can also produce
1:29:52 certificates.
1:29:56 So a software language like Python might do a computation and give you the answer: it's seven.
1:30:01 It computes that the sum of three plus four is equal to seven. But Lean can produce not just
1:30:05 the answer, but a proof of how it got the answer of seven as three plus
1:30:11 four, with all the steps involved. So it creates these
1:30:14 more complicated objects: not just statements, but statements with proofs attached to them.
1:30:20 And every line of code is just a way of piecing together previous statements
1:30:21 to create new ones.
1:30:23 So the idea is not new.
1:30:24 These things are called proof assistants.
1:30:29 And they provide languages in which you can create quite complicated, intricate
1:30:30 mathematical proofs.
1:30:37 And they produce these certificates that give a 100% guarantee that your arguments are
1:30:37 correct,
1:30:42 if you trust the compiler of Lean. But they made the compiler really small, and
1:30:43 there are several different compilers available for the same language.
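As a tiny illustration (a Lean 4 sketch, not from the conversation): where Python would just print 7, Lean states the fact and certifies it.

```lean
-- Not just the answer 7, but a machine-checked proof that 3 + 4 = 7.
-- `rfl` asks the kernel to verify both sides compute to the same value.
example : 3 + 4 = 7 := rfl
```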
1:30:49 Can you give people some intuition about the difference between writing on pen and paper
1:30:52 versus using the Lean programming language?
1:30:55 How hard is it to formalize a statement?
1:30:59 So a lot of mathematicians were involved in the design of Lean.
1:31:05 It's designed so that individual lines of code resemble individual lines of
1:31:05 mathematical argument.
1:31:07 Like, you might want to introduce a variable.
1:31:08 You might want to prove a contradiction.
1:31:13 There are various standard things that you can do, and it's
1:31:17 written so that ideally there would be a one-to-one correspondence. In practice,
1:31:22 there isn't, because Lean is like explaining a proof to an extremely pedantic colleague who will
1:31:24 point out: okay, did you really mean this?
1:31:26 Like, what happens if this is zero?
1:31:28 How do you justify this?
1:31:34 So Lean has a lot of automation in it, to try to be less annoying.
1:31:38 So, for example, every mathematical object has to come with a type.
1:31:45 Like, if I talk about X, is X a real number, or a natural number, or a function,
1:31:45 or something?
1:31:50 If you write things informally, it's often clear from context.
1:31:56 You say: let X be the sum of Y and Z, and Y and
1:31:57 Z were already real numbers.
1:31:58 So X should also be a real number.
1:32:00 Lean can do a lot of that.
1:32:05 But every so often it says: wait a minute, can you tell me more about what this object
1:32:07 is, what type of object it is?
1:32:12 You have to think at a more philosophical level, about not just the computations
1:32:16 you're doing, but what each object actually is, in some sense.
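A minimal sketch of this bookkeeping (Lean 4, with Mathlib assumed for the real numbers; the names are illustrative):

```lean
import Mathlib

-- y and z are declared as real numbers, so Lean infers that y + z : ℝ;
-- no separate "let x be a real number" declaration is needed.
example (y z : ℝ) : ∃ x : ℝ, x = y + z := ⟨y + z, rfl⟩
```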
1:32:22 Is it using something like LLMs to do the type inference, like you mentioned with
1:32:22 the real number?
1:32:26 It's using much more traditional, what's called good old-fashioned AI.
1:32:29 You can represent all these things as trees, and there are algorithms to match one tree
1:32:30 to another tree.
1:32:36 So it's actually doable to figure out if something is a real number or a natural number.
1:32:39 Every object comes with a history of where it came from, and you can kind
1:32:39 of trace it.
1:32:40 Oh, I see.
1:32:43 So it's designed for reliability.
1:32:47 So modern AIs are not used in it; it's a disjoint technology.
1:32:50 People are beginning to use AIs on top of Lean.
1:32:55 So when a mathematician tries to program a proof in Lean, often there's
1:32:59 a step: okay, now I want to use the fundamental theorem of calculus, say, to do the next step.
1:33:05 So the Lean developers have built this massive project called Mathlib, a collection
1:33:08 of tens of thousands of useful facts about mathematical objects.
1:33:12 And somewhere in there is the fundamental theorem of calculus, but you need to find it.
1:33:15 So the bottleneck now is actually lemma search.
1:33:20 There's a tool that you know is in there somewhere, and you need to find
1:33:20 it.
1:33:24 And so there are various search engines specialized for Mathlib that you can
1:33:24 use.
1:33:28 But there are now these large language models where you can say: okay, I need the fundamental
1:33:29 theorem of calculus at this point.
1:33:34 For example, when I code, I have GitHub Copilot installed
1:33:39 as a plugin to my IDE, and it scans my text and sees what I need.
1:33:41 I might even type:
1:33:43 now I need to use the fundamental theorem of calculus.
1:33:45 And then it might suggest, okay, try this.
1:33:48 And maybe 25% of the time, it works exactly.
1:33:53 And then another 10 to 15% of the time, it doesn't quite work, but it's close enough that I can say, oh yeah,
1:33:55 if I just change it here and here, it will work.
1:33:57 And then like half the time, it gives me complete rubbish.
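A hedged sketch of what built-in lemma search looks like in practice (Lean 4 with Mathlib; `exact?` is Mathlib's search tactic, distinct from the Copilot-style LLM suggestions described above):

```lean
import Mathlib

-- Instead of hunting through tens of thousands of Mathlib facts by hand,
-- ask Lean to search: `exact?` closes the goal and reports the lemma it
-- found, here `exact add_comm a b`.
example (a b : ℝ) : a + b = b + a := by exact?
```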
1:34:02 So people are beginning to use AIs a little bit on top,
1:34:09 mostly at the level of basically fancy autocomplete: you can type half of one line
1:34:10 of a proof, and it will complete it for you.
1:34:10 Yeah.
1:34:16 But a fancy autocomplete, fancy with a capital F, that removes
1:34:17 some of the friction
1:34:22 a mathematician might feel when they move from pen and paper to formalizing.
1:34:23 Yes.
1:34:27 So right now, I estimate that the time and effort taken to formalize a proof is about
1:34:29 10 times the amount taken to write it out.
1:34:30 Yeah.
1:34:35 So it's doable, but it's annoying.
1:34:38 But doesn't it kill the whole vibe of being a mathematician,
1:34:42 having a pedantic coworker like that?
1:34:44 Yeah, if that was the only aspect of it.
1:34:49 But there are some cases where it's actually more pleasant to do things formally.
1:34:54 So there was a theorem I formalized, and there was a certain constant, 12, that
1:34:56 came out in the final statement.
1:34:57 And so this 12 had to be carried all through the proof.
1:35:00 And everything had to be checked:
1:35:03 all these other numbers had to be consistent with this
1:35:04 final number 12.
1:35:07 And then we wrote a paper with this theorem, with this number 12.
1:35:10 And then a few weeks later, someone said, oh, we can actually improve this 12 to an 11
1:35:12 by reworking some of these steps.
1:35:16 And when this happens with pen and paper, every time you change a parameter, you
1:35:20 have to check line by line that every single line of your proof still works.
1:35:23 And there can be subtle things you didn't quite realize: some property of the number
1:35:25 12 that you didn't even realize you were taking advantage of.
    1:35:27 So a proof can break down at a subtle place.
1:35:31 So we had formalized the proof with this constant 12.
1:35:35 It took like three weeks, and like 20 people, to formalize this original proof.
1:35:44 And then, when this new paper came out, we said, oh, let's update the 12 to 11.
1:35:49 And what you can do with Lean: in your headline theorem, you change the 12 to 11,
1:35:54 you run the compiler, and of the thousands of lines of code you have, 90% of them still
1:35:55 work.
1:35:57 And there are a couple that are flagged in red:
1:36:01 now I can't justify these steps. But it immediately isolates which steps you need to change, and you
1:36:03 can skip over everything which works just fine.
1:36:09 And if you program things correctly, with good programming practices, most of your
1:36:09 lines will not be red.
1:36:14 There will just be a few places to fix: if you don't hard-code your constants,
1:36:18 but use smart tactics and so forth,
1:36:22 you can localize the things you need to change to a very
1:36:24 small portion of the code.
1:36:28 So within a day or two, we had updated our proof, because it's a very quick process.
1:36:30 You make a change.
1:36:33 There are ten things that now don't work. For each one,
1:36:36 you make a change, and now there are five more things that don't work. But the process
1:36:39 converges much more smoothly than with pen and paper.
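A toy sketch of this workflow in Lean 4 (hypothetical names, vastly simpler than the real theorem): keep the constant in one definition, and only the proofs that used its specific value break on recompilation.

```lean
-- The headline constant lives in one place; change 12 to 11 and recompile.
def C : ℕ := 12

-- This proof never uses the specific value of C, so it survives the change.
theorem bound (n : ℕ) : n ≤ n + C := Nat.le_add_right n C

-- This proof hard-codes the value, so after the change the compiler
-- flags exactly this line in red, isolating the rework needed.
example : C = 12 := rfl
```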
1:36:40 So that's for writing.
1:36:41 Are you able to read it?
1:36:46 Like, if somebody else has a proof, what is reading it like,
1:36:47 versus a paper?
1:36:47 Yeah.
1:36:52 So the proofs are longer, but each individual piece is easier to read.
1:36:58 If you take a math paper and you jump to page 27 and you look at paragraph six and
1:37:04 you have a line of text or math, I often can't read it immediately, because it assumes
1:37:08 various definitions which I have to go back and find, maybe ten pages earlier, where they were
1:37:12 defined. The proof is scattered all over the place, and you're basically forced
1:37:13 to read fairly sequentially.
1:37:19 It's not like, say, a novel, where in theory you could open it
1:37:20 halfway through and start reading.
1:37:24 There's a lot of context. But with a proof in Lean, if you put your cursor on a line of
1:37:29 code, you can hover over every single object there, and it will say what it is, where it
1:37:30 came from, where it's justified.
1:37:34 You can trace things back much more easily than by flipping through a math paper.
1:37:39 So one thing that Lean really enables is actually collaborating on proofs at a really atomic
1:37:41 scale, which you really couldn't do in the past.
1:37:45 So traditionally, with pen and paper, when you want to collaborate with another mathematician,
1:37:50 either you do it at a blackboard, where you can really interact, or, if you're
1:37:54 doing it by email or something, you basically have to segment it.
1:37:58 So: I'm going to finish section three, you do section four. But you can't
1:38:02 really work on the same thing collaboratively at the same time.
1:38:06 But with Lean, you can be trying to formalize some portion of the proof and say, oh, I got
1:38:07 stuck at line 67 here.
1:38:10 I need to prove this thing, but it doesn't quite work.
1:38:12 Here are the three lines of code I'm having trouble with.
1:38:16 And because all the context is there, someone else can say, oh, okay, I recognize what you
1:38:17 need to do.
1:38:22 You need to apply this trick or this tool. And you can have extremely atomic-level conversations.
1:38:27 So because of Lean, I can collaborate with dozens of people across the world.
1:38:29 Most of them I have never met in person.
1:38:34 And I may not actually even know how reliable they are in
1:38:38 the proofs they give me, but Lean gives me a certificate
1:38:39 of trust.
1:38:42 So I can do trustless mathematics.
1:38:44 So there are so many interesting questions.
1:38:49 You're known for being a great collaborator.
1:38:56 What is the right way to approach solving a difficult problem in mathematics when you're
1:38:57 collaborating?
1:39:02 Are you doing a divide-and-conquer type of thing, or are you focused on
1:39:05 a particular part and brainstorming?
    1:39:07 There’s always a brainstorming process first.
    1:39:07 Yeah.
1:39:12 So math research projects, by their nature, are such that when you start, you don't really know how to
1:39:13 do the problem.
1:39:17 It's not like an engineering project, where the theory has been established for
1:39:20 decades and implementation is the main difficulty.
1:39:22 You have to figure out even what the right path is.
1:39:27 So this is what I said about cheating first.
1:39:31 To go back to the bridge-building analogy: first assume you have an infinite budget and
1:39:34 an unlimited workforce and so forth.
1:39:35 Now can you build this bridge?
1:39:38 Okay. Now you have an infinite budget, but only a finite workforce.
1:39:39 Now can you do that?
1:39:40 And so on.
1:39:45 I mean, of course, no engineer can actually do this.
1:39:47 Like I said, you have fixed requirements.
1:39:47 Yes.
1:39:52 There are these jam sessions at the beginning, where you try all kinds of crazy things and
1:39:55 you make all these assumptions that are unrealistic but that you plan to fix later.
1:40:01 And you try to see if there's even some skeleton of an approach that might work.
1:40:06 And then hopefully that breaks the problem up into smaller sub-problems, which you don't yet know
1:40:09 how to do either, but then you focus on those.
1:40:13 And sometimes different collaborators are better at working on certain things.
1:40:18 So one of the theorems I'm known for is a theorem with Ben Green, which is now called
1:40:19 the Green-Tao theorem.
1:40:23 It's the statement that the primes contain arithmetic progressions of any length.
1:40:25 It was a modification of an existing theorem.
1:40:30 And the way we collaborated was that Ben had already proven a similar result for progressions
1:40:31 of length three.
1:40:35 He showed that sets like the primes contain lots and lots of progressions of length three.
1:40:39 And even certain subsets of the primes do.
1:40:43 But his techniques only worked for length-three progressions.
1:40:44 They didn't work for longer progressions.
1:40:49 But I had these techniques coming from ergodic theory, which is something that I had
1:40:52 been playing with and knew better than Ben at the time.
1:40:58 And if I could justify certain randomness properties of some set relating to
1:41:03 the primes, a certain technical condition,
1:41:06 then, if Ben could supply me this fact, I could conclude the theorem.
1:41:12 But what I asked for was a really difficult question in number theory, and he said,
1:41:13 no, there's no way we can prove this.
1:41:17 So he said, can you prove your part of the theorem using a weaker hypothesis that
1:41:18 I have a chance of proving?
1:41:21 And he proposed something which he could prove, but it was too weak for me.
1:41:23 I couldn't use it.
1:41:28 So there was this conversation going back and forth, with
1:41:29 different cheats, too.
1:41:31 I want to cheat more; he wants to cheat less.
1:41:37 But eventually we found a property which, A, he could prove, and, B, I could
1:41:37 use.
1:41:39 And then we could prove our theorem.
1:41:44 And yeah, there are all kinds of dynamics, you
1:41:44 know?
1:41:49 Every collaboration has some story.
1:41:50 No two are the same.
1:41:55 And then, on the flip side of that, like you mentioned with Lean programming, it's
1:42:00 almost a different story, because you can create, I think you've mentioned,
1:42:07 a kind of blueprint for a problem, and then you can really do divide and conquer with Lean,
1:42:13 where you're working on separate parts and using the computerized proof
1:42:16 checker essentially to make sure that everything is correct along the way.
1:42:19 Yes, it makes everything compatible and, yeah, trustable.
1:42:26 So currently, only a few mathematical projects can be cut up in this way, at the current state
1:42:26 of the art.
1:42:30 Most of the Lean activity is on formalizing proofs that have already been proven by humans.
1:42:34 A math paper basically is a blueprint, in a sense.
1:42:38 It takes a difficult statement, like a big theorem, and breaks it up into maybe a hundred little
1:42:45 lemmas, but often not all written with enough detail that each one can be directly
1:42:45 formalized.
1:42:52 A blueprint is like a really pedantically written version of a paper, where every step is explained
1:42:54 in as much detail as possible,
1:43:00 trying to make each step kind of self-contained, or depending on only a very specific
1:43:05 number of previous statements that have been proven, so that each node of this blueprint
1:43:08 graph that gets generated can be tackled independently of all the others.
1:43:10 And you don't even need to know how the whole thing works.
1:43:14 So it's like a modern supply chain: if you want to create an iPhone or
1:43:20 some other complicated object, no one person can build the whole thing, but you can
1:43:24 have specialists who, if they're given some widgets from some other company,
1:43:26 can combine them together to form a slightly bigger widget.
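A schematic sketch of what such a blueprint skeleton looks like in Lean 4 (hypothetical names; `sorry` marks nodes whose proofs are still outstanding):

```lean
-- Node 1 of the blueprint: one contributor's independent task.
theorem stepA (n : ℕ) : n + 0 = n := by sorry

-- Node 2: a different contributor, who needs only the *statement*
-- of stepA, not its proof.
theorem stepB (n : ℕ) (h : n + 0 = n) : 0 + n = n := by sorry

-- The headline theorem glues the nodes together, and it compiles
-- as soon as the statements match, even while proofs are outstanding.
theorem main (n : ℕ) : 0 + n = n := stepB n (stepA n)
```

Each `sorry` is a widget someone can supply later without knowing how the rest of the proof works.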
1:43:31 I think that's a really exciting possibility, because if you can find problems
1:43:37 that can be broken down this way, then you can have thousands of contributors,
1:43:37 right?
1:43:37 Yes.
1:43:38 They'll be completely distributed.
1:43:42 So I told you before about the split between theoretical and experimental mathematics.
1:43:45 And right now, most mathematics is theoretical and only a tiny bit is experimental.
1:43:50 I think the platform that Lean and other software tools, GitHub and things like
1:43:56 that, provide will allow experimental mathematics to scale up to a much
1:43:57 greater degree than we can manage now.
1:44:04 So right now, if you want to do any mathematical exploration of some mathematical
1:44:06 pattern or something, you need some code to work out the pattern.
1:44:10 And sometimes there are computer algebra packages that help, but often it's
1:44:13 just one mathematician coding lots and lots of Python or whatever.
1:44:19 And because coding is such an error-prone activity, it's not practical to have other people
1:44:23 collaborate with you on writing a module for your code, because if one of the modules has
1:44:25 a bug in it, the whole thing is unreliable.
1:44:33 So you get this bespoke spaghetti code, written by non-professional
1:44:37 programmers, mathematicians, and it's clunky and slow.
1:44:42 And so, because of that, it's hard to really mass-produce
1:44:43 experimental results.
1:44:50 But I think with Lean, I'm already starting some projects
1:44:54 where we are not just experimenting with data, but experimenting with proofs.
1:44:56 So I have this project called the Equational Theories Project.
1:45:00 Basically, we generated about 22 million little problems in abstract algebra.
1:45:02 Maybe I should back up and tell you what the project is.
1:45:03 Okay.
1:45:07 So abstract algebra studies operations like multiplication and addition and their abstract properties.
1:45:10 So multiplication, for example, is commutative:
1:45:12 X times Y is always Y times X, at least for numbers.
1:45:14 And it's also associative:
1:45:17 X times (Y times Z) is the same as (X times Y) times Z.
1:45:23 So these operations obey some laws and don't obey others.
1:45:25 For example, X times X is not always equal to X.
1:45:26 So that law is not always true.
1:45:30 So given any operation, it obeys some laws and not others.
1:45:36 And so we generated about 4,000 of these possible laws of algebra that operations
1:45:36 can satisfy.
1:45:39 And our question is: which laws imply which other ones?
1:45:43 So, for example, does commutativity imply associativity?
1:45:47 And the answer is no, because it turns out you can describe an operation which obeys the
1:45:49 commutative law but doesn't obey the associative law.
1:45:53 So by producing an example, you can show that commutativity does not imply associativity.
1:45:57 But some other laws do imply other laws, by substitution and so forth,
1:45:59 and you can write down an algebraic proof.
1:46:04 So we look at all the pairs between these 4,000 laws, and there are over 22 million of these
1:46:04 pairs.
1:46:07 And for each pair, we ask: does this law imply that law?
1:46:10 If so, give a proof.
1:46:11 If not, give a counterexample.
1:46:17 So that's 22 million problems, each one of which you could give to an undergraduate
1:46:19 algebra student, and they'd have a decent chance of solving it.
1:46:23 Although out of the 22 million, there are like a hundred or so that are really
1:46:24 quite hard.
1:46:25 But a lot are easy.
1:46:30 And the project was just to determine the entire graph: which
1:46:30 laws imply which other ones.
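In that spirit, here is a small sketch of one such counterexample, formalized (Lean 4 with Mathlib; the operation x ∘ y = x·y + 1 is an illustrative choice, not necessarily one of the project's 4,000 laws):

```lean
import Mathlib

-- A toy operation on the integers: x ∘ y = x * y + 1.
def op (x y : ℤ) : ℤ := x * y + 1

-- It satisfies the commutative law for all inputs ...
example (x y : ℤ) : op x y = op y x := by
  unfold op; ring

-- ... but associativity already fails at concrete values:
-- op (op 0 0) 1 = 2, while op 0 (op 0 1) = 1.
example : op (op 0 0) 1 ≠ op 0 (op 0 1) := by decide
```

One counterexample like this settles the "does commutativity imply associativity?" cell of the implication graph in the negative.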
    1:46:32 That’s an incredible project, by the way.
    1:46:33 Such a good idea.
    1:46:37 Such a good test of the very thing we’ve been talking about on a scale that’s remarkable.
    1:46:37 Yeah.
1:46:39 So it would not have been feasible otherwise.
1:46:43 I mean, the state of the art in the literature was like 15 equations
1:46:46 and how they imply one another; that's at the limit of what a human with pen and paper
1:46:46 can do.
1:46:48 So you need to scale it up.
1:46:54 So you need to crowdsource, but you also need to trust all the contributions.
1:46:57 No one person can check 22 million of these proofs.
1:46:59 It needs to be computerized.
1:47:02 And so it only became possible with Lean.
1:47:05 We were hoping to use a lot of AI as well.
1:47:07 So the project is almost complete.
1:47:10 Of these 22 million, all but two have been settled.
1:47:15 And of those two, actually, we have a pen-and-paper proof of the
1:47:17 remaining pair, and we are formalizing it.
1:47:22 In fact, this morning I was working on it; we're almost done on this.
1:47:23 It's incredible.
1:47:23 Yeah.
1:47:26 How many people were involved?
1:47:30 About 50, which in mathematics is considered a huge number.
1:47:31 It's a huge number.
1:47:32 That's crazy.
1:47:32 Yeah.
1:47:37 So we're going to have a paper with 50 authors, and a big appendix of who contributed what.
1:47:41 Here's an interesting question, maybe to speak even more generally about it.
1:47:48 When you have this pool of people, is there a way to organize the contributions by the level
1:47:50 of expertise of the contributors?
1:47:56 Okay, I'm asking a lot of pothead questions here, but I'm imagining
1:47:59 a bunch of humans, and maybe, in the future, some AIs.
1:48:06 Can there be, like, an Elo-rating type of situation, a gamification of this?
1:48:10 The beauty of these Lean projects is that you automatically get all this data.
1:48:14 Everything's uploaded to GitHub, and GitHub tracks who contributed what.
1:48:20 So you could generate statistics at any later point in time; you could say,
1:48:23 oh, this person contributed this many lines of code, or whatever.
1:48:25 I mean, these are very crude metrics.
1:48:28 I would definitely not want this to become part of your tenure
1:48:29 review or something.
1:48:36 But I think already in enterprise computing, people do use
1:48:41 some of these metrics as part of the assessment of the performance of an employee.
1:48:45 Again, this is a direction which is a bit scary for academics to go down.
1:48:48 We don't like metrics so much.
1:48:55 And yet academics use metrics; they just use old ones, like the number of papers.
1:48:59 Yeah, that's true.
1:49:04 It feels like this metric, while flawed, is going more in the right direction,
1:49:04 right?
1:49:05 Yeah.
1:49:08 I mean, at least it's a very interesting metric.
1:49:10 Yeah, I think it's interesting to study.
1:49:13 I mean, you could do studies of whether these are better predictors.
1:49:15 There's this problem called Goodhart's Law.
1:49:19 If a statistic is actually used to incentivize performance, it becomes gamed.
1:49:21 And then it is no longer a useful measure.
1:49:25 Ah, humans, always. Yeah, no, I mean, it's rational.
1:49:28 So what we've done for this project is self-reporting.
1:49:34 There are actually these standard categories, from the sciences, of what types of contributions
1:49:34 people give.
1:49:40 So there's conceptualization, and validation, and resources, and coding, and so forth.
1:49:43 There's a standard list of 12 or so categories.
1:49:48 And we just ask each contributor, in this big matrix of all the authors and
1:49:51 all the categories, just to tick the boxes where they think they contributed,
1:49:57 and just give a rough idea: oh, so you did some coding, and
1:50:01 you provided some compute, but you didn't do any of the pen-and-paper verification, or whatever.
1:50:03 And I think that works out.
1:50:06 Traditionally, mathematicians just order authors alphabetically by surname.
1:50:10 So we don't have this tradition, as in the sciences, of a lead author and a second
1:50:14 author and so forth, which we're proud of: we make all the authors equal
1:50:17 status. But it doesn't quite scale to this size.
1:50:21 So a decade ago, I was involved in these things called Polymath projects.
1:50:24 That was crowdsourcing mathematics, but without the Lean component.
1:50:29 So it was limited in that you needed a human moderator to actually check that all the contributions coming
1:50:29 in were actually valid.
1:50:32 And this was a huge bottleneck, actually.
1:50:39 But still, we had projects that were 10 authors or so. But we had decided
1:50:44 at the time not to try to decide who did what, but to have a single pseudonym.
1:50:50 So we created this fictional character called D.H.J. Polymath, in the spirit of Bourbaki.
1:50:55 Bourbaki is the pseudonym of a famous group of mathematicians in the 20th century.
1:50:58 And so the paper was authored under the pseudonym.
1:50:59 So none of us got the author credit.
1:51:03 This actually turned out to be not so great, for a couple of reasons.
1:51:08 One is that if you actually wanted to be considered for tenure or whatever, you
1:51:14 could not submit this paper as one of your publications,
1:51:16 because you didn't have the formal author credit.
1:51:23 But the other thing that we recognized much later is that when people refer to
1:51:27 these projects, they naturally refer to the most famous person who was involved in the
1:51:27 project.
1:51:28 Oh, yeah.
1:51:29 So this was Tim Gowers's Polymath project,
1:51:34 this was Terence Tao's Polymath project, with no mention of the other 19 or however many
1:51:35 people who were involved.
1:51:36 Ah, yeah.
1:51:40 So we're trying something different this time around, where everyone's an author,
1:51:44 but we will have an appendix with this matrix, and we'll see how that works.
1:51:47 I mean, both projects are incredible, just the fact that you're involved in such huge collaborations.
1:51:52 And I think I saw a talk from Kevin Buzzard about the Lean programming language a few years ago, and he was saying
1:51:59 that this might be the future of mathematics.
1:52:05 So it's also exciting that you're embracing it, one of the greatest mathematicians in the
1:52:10 world embracing what seems like the paving of the future of mathematics.
1:52:18 So I have to ask you here about the integration of AI into this whole process.
1:52:24 DeepMind's AlphaProof was trained using reinforcement learning on both failed and
1:52:27 successful formal Lean proofs of IMO problems.
1:52:31 So this is sort of high-level high school...
1:52:32 Oh, very high level.
1:52:32 Yes.
1:52:35 Very high-level high-school mathematics problems.
1:52:36 What do you think about the system?
1:52:41 And what is the gap between this system, which is able to prove high-school-level
1:52:46 problems, and graduate-level problems?
1:52:46 Yeah.
1:52:52 The difficulty increases exponentially with the number of steps involved in the proof;
1:52:53 it's a combinatorial explosion.
1:52:57 The thing about large language models is that they make mistakes.
1:53:02 So if a proof has got 20 steps and your model has a 10% failure rate at
1:53:07 each step, of going in the wrong direction, it's just extremely unlikely
1:53:09 to actually reach the end.
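For a sense of the arithmetic (a back-of-envelope sketch, assuming the 10% figure and independent errors):

\[ (1 - 0.1)^{20} = 0.9^{20} \approx 0.12, \]

so a 20-step chain comes out error-free only about one time in eight, and each additional step shrinks that chance geometrically.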
1:53:16 Actually, just to take a small tangent here: how hard is the problem of mapping from natural
1:53:18 language to the formal program?
1:53:20 Well, yeah, it's extremely hard.
1:53:23 Natural language is very fault-tolerant.
1:53:27 You can make a few minor grammatical errors, and a speaker in a second language
1:53:28 can still get some idea of what you're saying.
1:53:31 But formal language, yeah:
1:53:35 if you get one little thing wrong, the whole thing
1:53:36 is nonsense.
1:53:39 Even formal-to-formal is very hard.
1:53:42 There are different, incompatible proof-assistant languages.
1:53:45 There's Lean, but also Coq and Isabelle and so forth.
1:53:48 And actually, even converting from one formal language to another formal language
1:53:51 is an unsolved problem.
1:53:52 That is fascinating.
1:53:53 Okay.
1:54:02 So, but once you have the formal language, they're using their RL-trained model,
1:54:07 something akin to AlphaZero, to then try to come up with
1:54:07 proofs.
1:54:08 They also have a model,
1:54:11 I believe it's a separate model, for geometric problems.
1:54:14 So what impresses you about the system?
1:54:17 And what do you think is the gap?
1:54:18 Yeah.
1:54:21 We talked earlier about how things that are amazing, over time, become kind of normalized.
1:54:26 So yeah, now somehow it's: of course, geometry is a solved problem.
1:54:26 Right.
1:54:27 That's true.
1:54:29 I mean, it's still beautiful.
1:54:31 No, it's great work.
1:54:32 It shows what's possible.
1:54:35 I mean, the approach doesn't scale currently.
1:54:36 Yeah.
1:54:41 Three days of Google's server time to solve one high-school math problem.
1:54:46 This is not a scalable prospect, especially with the exponential increase
1:54:51 as the complexity increases. You mentioned that they got a silver-medal performance,
1:54:57 the equivalent of one. I mean, first of all, they took way more time than was allotted.
1:55:01 And they had this assistance, where the humans helped by formalizing the problems.
1:55:07 But also, they were given full marks for the solutions, which I guess are formally
1:55:08 verified.
1:55:09 So I guess that's fair.
1:55:16 There are efforts: there will be a proposal at some point to actually
1:55:22 have an AI Math Olympiad, where at the same time as the human contestants get the actual
1:55:28 Olympiad problems, AIs will also be given the same problems, with the same time period.
1:55:31 And the outputs will have to be graded by the same judges,
1:55:36 which means they will have to be written in natural language rather than
1:55:37 formal language.
1:55:38 Oh, I hope that happens.
1:55:41 I hope the next one does.
1:55:42 It won't happen at this IMO.
1:55:45 The performance is not good enough in the time period.
1:55:51 But there are smaller competitions: there are competitions where the answer
1:55:54 is a number rather than a long-form proof.
1:56:00 And AI is actually a lot better at problems where there's a
1:56:01 specific numerical answer,
1:56:06 because it's easy to do reinforcement learning
1:56:06 on it.
1:56:07 You got the right answer.
1:56:08 You got the wrong answer.
1:56:12 It's a very clear signal. But a long-form proof either has to be formal,
1:56:16 and then Lean can give it a thumbs up or thumbs down, or it's informal,
1:56:22 but then you need a human to grade it. And if you're trying to do billions
1:56:27 of reinforcement-learning runs, you can't hire enough
1:56:29 humans to grade those.
1:56:33 I mean, it's already hard enough for the large language models to do reinforcement learning
1:56:38 on just the regular text that people produce; to actually hire people, not just to
1:56:42 give thumbs up or thumbs down, but to actually check the output mathematically,
1:56:43 yeah,
1:56:44 that's too expensive.
1:56:51 So if we just explore this possible future: what is the thing that humans do
1:56:54 that's most special in mathematics,
1:56:59 that you could see AI not cracking for a while?
1:57:05 Inventing new theories, so coming up with new conjectures versus proving the
1:57:13 conjectures; building new abstractions, new representations; maybe an AI Terence Tao,
1:57:16 seeing new connections between disparate fields?
1:57:17 That's a good question.
1:57:21 I think the nature of what mathematicians do has changed a lot over time.
1:57:26 A thousand years ago, mathematicians had to compute the date of Easter,
1:57:31 really complicated calculations, but it's all been automated, automated for
1:57:32 centuries.
1:57:33 We don't need that anymore.
1:57:37 They used to do spherical navigation, spherical trigonometry,
1:57:42 to navigate how to get from the Old World to the New or something, a very complicated
1:57:42 calculation.
1:57:47 Again, it's been automated. Even a lot of undergraduate mathematics, even before
1:57:52 AI: Wolfram Alpha, for example, is not a language model, but it can
1:57:54 solve a lot of undergraduate-level math tasks.
1:58:00 So on the computational side: verifying routine things, like taking a problem and saying,
1:58:02 here's a problem in partial differential equations.
1:58:04 Could you solve it using any of the 20 standard techniques?
1:58:07 And it says, yes, I've tried all 20.
1:58:10 Here are the 100 different permutations I tried, and here are my results.
1:58:13 That type of thing, I think, will work very well.
1:58:20 The type of scaling where, once you solve one problem, you make the AI attack a hundred adjacent
1:58:20 problems.
1:58:29 The things that humans still do: where the AI really struggles right now is knowing
1:58:33 when it's made a wrong turn. It can say, oh, I'm going to solve this problem.
1:58:37 I'm going to split up this problem into these two cases.
1:58:38 I'm going to try this technique.
1:58:43 And sometimes, if you're lucky and it's a simple problem, it's the right technique and
1:58:43 you solve the problem.
1:58:46 And sometimes it will have a problem:
1:58:49 it will propose an approach which is just complete nonsense.
1:58:52 But it looks like a proof.
    1:58:56 Um, so this is one annoying thing about LLM generated mathematics.
    1:59:03 So, um, yeah, we, we, we’ve had human generated mathematics as very low quality, um, uh, like,
    1:59:06 you know, submissions for people who don’t have the formal training and so forth.
    1:59:09 But if a human proof is bad, you can tell it’s bad pretty quickly.
    1:59:15 It makes really basic mistakes, but the AI generator proofs, they can look superficially
    1:59:16 flawless.
    1:59:19 Uh, and it’s partly because that’s what the reinforcement learning has like you train them
    1:59:25 to do, uh, to, to make things, to, to produce text that looks like, um, uh, what is correct,
    1:59:26 which for many applications is good enough.
    1:59:30 Um, uh, so the errors often really subtle.
    1:59:34 And then when you spot them that they’re really stupid, um, like, you know, like no
    1:59:35 human would have actually made that mistake.
    1:59:35 Yeah.
1:59:40 It's actually really frustrating in the programming context, because I program a lot, and when
1:59:45 a human writes low-quality code, there's something called code smell, right?
1:59:51 You can tell immediately: there are signs. But with AI-generated code,
1:59:53 you're right,
1:59:59 eventually you find an obviously dumb thing hiding in what looks like good code.
2:00:04 So it's very tricky, and frustrating for some reason, to have to work with.
2:00:08 So the sense of smell: this is one thing that humans have.
2:00:16 There's a metaphorical mathematical smell, and it's not clear how to get
2:00:17 the AI to duplicate that.
2:00:24 Eventually, maybe. I mean, the way AlphaZero and so forth made progress
2:00:28 on Go and chess and so forth is, in some sense, that they developed a sense of smell
2:00:31 for Go and chess positions: this position is good for white,
2:00:32 this one is good for black.
2:00:34 They can't enunciate why.
2:00:39 But just having that sense of smell lets them strategize.
2:00:45 So if AIs gain that ability to assess the viability of certain proof strategies, so that
2:00:50 you can say, I'm going to try to break up this problem into two small subtasks, and
2:00:52 they can say, oh, this looks good:
2:00:55 the two tasks look like they're simpler than your main task,
2:00:57 and they've still got a good chance of being true,
2:00:58 so this is good to try.
2:01:02 Or: no, you've made the problem worse, because each of the two subproblems
2:01:05 is actually harder than your original problem, which is what normally happens if
2:01:07 you try a random thing.
2:01:11 Normally, it's very easy to transform a problem into an even harder problem.
2:01:14 Very rarely do you transform it into a simpler problem.
2:01:21 So if they can pick up a sense of smell, then they could maybe start competing
2:01:23 with human-level mathematicians.
2:01:27 So this is a hard question, but: not competing, collaborating.
2:01:29 Hypothetically,
2:01:36 if I gave you an oracle that was able to do some aspect of what you do, and you could just
2:01:36 collaborate with it.
2:01:37 Yeah.
2:01:40 What would you like that oracle to be able to do?
2:01:44 Would you like it to maybe be a verifier, to check
2:01:51 your work: yes, Professor Tao, this is correct, this is a good,
2:01:54 promising, fruitful direction?
2:01:54 Yeah.
2:02:01 Or would you like it to generate possible proofs, and then you see which one is
2:02:02 the right one?
2:02:08 Or would you like it to generate different representations, totally different
2:02:10 ways of seeing this problem?
    2:02:10 Yeah.
2:02:11 I think all of the above.
2:02:15 A lot of it is that we don't know how to use these tools, because
2:02:21 it's a paradigm we have not had in the past: assistants that are
2:02:28 competent enough to understand complex instructions, that can work at massive scale, but are
2:02:29 also unreliable.
2:02:36 A bit unreliable in subtle ways, whilst providing sufficiently
2:02:37 good output.
2:02:39 It's an interesting combination.
2:02:43 You have graduate students that you work with who
2:02:45 are kind of like this, but not at scale.
2:02:51 And we have had previous software tools that can work at scale,
2:02:52 but are very narrow.
2:02:58 So we have to figure out how to use the new tools. I mean, Tim Gowers, whom you mentioned,
2:03:03 actually foresaw this: in 2000, he was envisioning what mathematics
2:03:06 would look like two and a half decades later.
2:03:14 And he wrote in his article a hypothetical conversation between a mathematical
2:03:18 assistant of the future and himself, trying to solve a problem. They
2:03:22 have a conversation where sometimes the human proposes an idea and the AI
2:03:24 evaluates it.
2:03:29 And sometimes the AI proposes an idea. And sometimes a computation
2:03:32 is required, and the AI just goes and says, okay, I've checked the 100 cases
2:03:37 needed here. Or: you said this is true for all N; I've checked
2:03:42 N up to 100, and it looks good so far. Or: hang on, there's a problem at N equals 46.
2:03:47 So it's a free-form conversation, where you don't know in advance where things are going
2:03:52 to go, where ideas can be proposed on both sides and calculations can
2:03:52 be proposed on both sides.
2:03:57 I've had conversations with AI where I say, okay, we're going to collaborate to solve
2:03:58 this math problem.
2:03:59 And it's a problem that I already know a solution to.
2:04:01 So I try to prompt it.
2:04:02 Okay, here's the problem.
2:04:06 I suggest using this tool. And it will find this lovely argument using a completely
2:04:10 different tool, which eventually goes into the weeds, and I say, no, no, no, try using this.
2:04:14 And it might start using it, and then it'll go back to the tool it wanted to use before.
2:04:18 And you have to keep railroading it onto the path you want.
2:04:21 And I could eventually force it to give the proof I wanted.
2:04:26 But it was like herding cats. And the amount of personal effort I had to
2:04:31 take, to not just prompt it but also check its output, because a lot of
2:04:32 what it produced looked like it was going to work
2:04:36 when I knew there was a problem at one step, and basically arguing with it,
2:04:40 it was more exhausting than doing it unassisted.
2:04:43 But that's the current state of the art.
2:04:49 I wonder if there's a phase shift that happens, where it no longer feels
2:04:53 like herding cats, and maybe it'll surprise us how quickly that comes.
2:04:55 I believe so.
2:04:59 So in formalization, I mentioned before that it takes 10 times longer to formalize
2:05:04 a proof than to write it by hand. With these modern AI tools, and also just better tooling,
2:05:10 the Lean developers are doing a great job adding more and more features and
2:05:11 making it user-friendly,
2:05:13 it's going from nine to eight to seven.
2:05:14 Okay.
2:05:14 No big deal.
2:05:17 But one day it will drop below one.
2:05:24 And that's the phase shift, because suddenly it makes sense, when you write a paper,
2:05:29 to write it in Lean first, or through a conversation with an AI which is formalizing
2:05:30 on the fly with you.
2:05:35 And it becomes natural for journals to accept it. Maybe they'll offer expedited
2:05:40 refereeing: if a paper has already been formalized in Lean,
2:05:44 they'll just ask the referee to comment on the significance of the results and how
2:05:48 they connect to the literature, and not worry so much about correctness, because that's
2:05:49 been certified.
2:05:53 Papers are getting longer and longer in mathematics, and it's harder and harder
2:05:57 to get good refereeing for the really long ones, unless they're really important.
2:06:01 It is actually an issue, and formalization is coming in at just the right
2:06:03 time.
2:06:07 And the easier it gets, because of the tooling and all the other factors,
2:06:11 the more you're going to see Mathlib grow, potentially exponentially.
2:06:15 It's a virtuous cycle.
2:06:15 Okay.
2:06:19 I mean, one phase shift of this type that happened in the past was the adoption of LaTeX.
2:06:22 LaTeX is the typesetting language that all mathematicians use now.
2:06:26 In the past, people used all kinds of word processors and typewriters and whatever.
2:06:31 But at some point, LaTeX became easier to use than all the other competitors.
2:06:36 And people switched within a few years. It was just dramatic.
2:06:47 It's a wild, out-there question, but how far away are we from an AI system
2:06:52 being a collaborator on a proof that wins the Fields Medal?
2:06:53 So, that level.
2:06:55 Okay.
2:06:57 Well, it depends on the level of collaboration.
2:07:00 I mean, enough that it deserves to get the Fields Medal.
2:07:02 Like, half and half.
2:07:05 Already, I can imagine a medal-winning paper
2:07:09 having some AI systems involved in writing it.
2:07:13 The autocomplete alone: I already use it, and it speeds up my own writing.
2:07:18 You can have a theorem, and the proof
2:07:22 has three cases, and I write down the proof of the first case, and the autocomplete just
2:07:24 suggests how the proof of the second case could work.
2:07:26 And it was exactly correct.
2:07:26 That was great.
2:07:29 It saved me five or ten minutes of typing.
2:07:32 But in that case, the AI system doesn't get the Fields Medal.
2:07:33 No.
2:07:40 Are we talking 20 years, 50 years, a hundred years?
2:07:41 What do you think?
2:07:41 Okay.
2:07:47 So I gave a prediction in print: by 2026, which is now next year, there
2:07:52 will be math collaborations with AI. So not Fields-Medal winning, but
2:07:53 actual research-level math papers.
2:07:57 Published ideas that are in part generated by AI?
2:08:02 Maybe not the ideas, but at least some of the computations, the
2:08:03 verifications.
2:08:03 Yeah.
2:08:04 I mean, that has already happened.
2:08:05 That's already happened.
    2:08:05 Yeah.
2:08:12 There are problems that were solved by a complicated process: conversing with
2:08:13 AI to propose things,
2:08:16 and then the human goes and tries it, and it doesn't quite work,
2:08:19 but it suggests a different idea.
2:08:22 It's hard to disentangle exactly.
2:08:28 There are certainly math results which could only have been accomplished because there
2:08:30 was a human mathematician and an AI involved.
2:08:35 But it's hard to disentangle credit.
2:08:43 I mean, these tools do not replicate all the skills needed to do mathematics,
2:08:47 but they can replicate some non-trivial percentage of them, you know, 30, 40%.
2:08:49 So they can fill in gaps.
2:08:56 Coding is a good example.
2:08:57 It's annoying for me to code in Python.
2:09:01 I'm not a native, professional programmer.
2:09:08 But with AI, the friction cost of doing it is
2:09:08 much reduced.
2:09:10 So it fills in that gap for me.
2:09:14 AI is getting quite good at literature review.
2:09:18 I mean, there's still a problem with hallucinating references that
2:09:19 don't exist.
2:09:22 But this, I think, is a solvable problem.
2:09:27 If you train it in the right way and so forth, and verify
2:09:33 using the internet, you should, in a few years, get to the
2:09:37 point where you have a lemma that you need, and you ask: has anyone proven this
2:09:38 lemma before?
2:09:43 And it will do basically a fancy web search and say: yeah, there are these
2:09:45 six papers where something similar has happened.
2:09:49 I mean, you can ask right now, and it will give you six papers, of which maybe one
2:09:51 is legitimate and relevant,
2:09:56 one exists but is not relevant, and four are hallucinated. It has a non-zero success
2:10:01 rate right now, but there's so much garbage, the signal-to-noise
2:10:07 ratio is so poor, that it's most helpful when you already somewhat know the
2:10:07 literature,
2:10:13 and you just need to be prompted, to be reminded of a paper that was subconsciously
2:10:13 in your memory already.
2:10:17 Or it's helping you discover a paper you were not even aware of, but that is the correct
2:10:18 citation.
2:10:19 Yeah.
2:10:24 That it can sometimes do. But when it does, it's buried in a
2:10:26 list of options of which the others are bad.
    2:10:30 I mean, being able to automatically generate a related work section that is correct.
    2:10:31 Yeah.
    2:10:36 That’s actually a beautiful thing that might be another phase shift because it assigns credit
    2:10:37 correctly.
    2:10:37 Yeah.
    2:10:38 It does.
2:10:40 It breaks you out of the silos.
2:10:40 Yeah, yeah.
2:10:44 No, I mean, yeah, there's a big hump to overcome right now.
2:10:49 It's like self-driving cars: the safety margin has to be really
2:10:52 high for it to be feasible.
2:10:56 So there's a last-mile problem with a lot of AI applications:
2:11:03 they can develop tools that work 20%, 80% of the time, but it's still
2:11:04 not good enough,
2:11:07 and in fact, in some ways worse than nothing.
2:11:13 I mean, another way of asking the Fields Medal question is: what year do you think you'll wake
2:11:15 up and be, like, really surprised?
2:11:22 You read the headline, the news, that AI did some real breakthrough,
2:11:23 something.
2:11:25 It doesn't have to be, you know, Fields Medal level, even a famous hypothesis.
2:11:31 It could be really just this AlphaZero moment, that kind of thing.
    2:11:31 Right.
2:11:39 Yeah, this decade. I can see it making a conjecture between
2:11:41 two things that people thought were unrelated.
2:11:42 Oh, interesting.
2:11:43 Generating a conjecture.
2:11:45 That's a beautiful conjecture.
2:11:45 Yeah.
2:11:49 And one that actually has a real chance of being correct and meaningful.
2:11:55 Because that's actually kind of doable, I suppose, but whether the data is there for it...
2:11:57 Yeah, no, that would be truly amazing.
2:12:00 The current models struggle a lot.
2:12:04 I mean, a version of this: the physicists have a dream of getting
2:12:06 the AIs to discover new laws of physics.
2:12:10 You know, the dream is you just feed it all this data,
2:12:15 and it says, here is a new pattern that we didn't see before. But
2:12:18 the current state of the art even struggles to discover old
2:12:20 laws of physics from the data.
2:12:25 Or if it does, there's a big concern of contamination: that it did it only
2:12:29 because somewhere in its training data it already somehow knew, you know,
2:12:32 Boyle's law or whatever you're trying to reconstruct.
2:12:37 Part of it is that we don't have the right type of training data for this.
2:12:41 For laws of physics, we don't have a million different universes
2:12:42 with a million different laws of nature.
2:12:50 And a lot of what we're missing in math is actually the negative space.
2:12:55 So we have published the things that people have been able to prove, and conjectures
2:13:00 that ended up being verified, or maybe counterexamples produced. But we don't
2:13:05 have data on things that were proposed and seemed a good thing to try, where
2:13:09 people quickly realized it was the wrong conjecture, and then said, oh, but
2:13:13 we should actually change our claim, modify it in this way, to make it
2:13:14 more plausible.
2:13:20 There's a trial-and-error process which is a real, integral part of human mathematical
2:13:23 discovery, which we don't record because it's embarrassing.
2:13:26 We make mistakes, and we only like to publish our wins.
2:13:31 And the AI has no access to this data to train on.
2:13:38 I sometimes joke that basically AI has to go through grad school:
2:13:44 take grad courses, do the assignments, go to office hours, make mistakes, get advice
2:13:46 on how to correct the mistakes, and learn from that.
2:13:52 Let me ask you, if I may, about Grigori Perelman.
2:13:57 You mentioned that you try to be careful in your work and not let a problem completely consume
2:13:58 you,
2:14:02 lest you really fall in love with the problem and cannot rest until you solve
2:14:03 it.
2:14:07 But you also hastened to add that sometimes this approach can actually be very successful.
2:14:14 An example you gave is Grigori Perelman, who proved the Poincaré conjecture, and did so by
2:14:19 working alone for seven years, with basically little contact with the outside world.
2:14:26 Can you explain the one Millennium Prize problem that's been solved, the Poincaré conjecture, and
2:14:30 maybe speak to the journey that Grigori Perelman has been on?
2:14:34 All right, so it's a question about curved spaces.
2:14:35 Earth is a good example.
2:14:36 So Earth you can think of as a 2D surface.
2:14:40 Instead of just being round, it could maybe be a torus with a hole in it, or have many
2:14:40 holes.
2:14:46 And there are many different topologies, a priori, that a surface could have, even if
2:14:49 you assume that it's bounded and smooth and so forth.
2:14:54 We have figured out how to classify surfaces. As a first approximation, everything's
2:14:56 determined by something called the genus: how many holes it has.
2:14:59 So a sphere has genus zero, a donut has genus one, and so forth.
2:15:03 And one way you can tell these surfaces apart is a property the sphere has, which is called being simply
2:15:04 connected.
2:15:09 If you take any closed loop on the sphere, like a big closed loop of rope, you can contract
2:15:11 it to a point while staying on the surface.
2:15:14 The sphere has this property, but a torus doesn't.
2:15:18 If you're on a torus and you take a rope that goes around, say, the outer diameter of the
2:15:21 torus, it can't get through the hole;
2:15:23 there's no way to contract it to a point.
2:15:29 So it turns out that the sphere is the only surface with this property of
2:15:31 contractibility, up to continuous deformations of the sphere,
2:15:35 that is, things that are topologically equivalent to the sphere.
2:15:38 So Poincaré asked the same question in higher dimensions.
2:15:43 This becomes hard to visualize, because a surface you can think of as embedded
2:15:47 in three dimensions, but for a curved three-dimensional space we don't have a good intuition of
2:15:49 four-dimensional space to embed it in.
2:15:52 And there are also 3D spaces that can't even fit into four dimensions;
2:15:54 you need five or six or higher.
2:15:59 But anyway, mathematically you can still pose this question: if you have a bounded
2:16:03 three-dimensional space which also has this simply connected property, that every
2:16:04 loop can be contracted,
2:16:06 can you turn it into a three-dimensional version of the sphere?
2:16:08 And so this is the Poincaré conjecture.
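For reference, the statement just sketched has a compact standard form (a textbook formulation, not a quote from the conversation):

```latex
% Poincaré conjecture (proved by Perelman): every simply connected,
% closed (compact, boundaryless) 3-manifold is homeomorphic to the 3-sphere.
M \ \text{simply connected, closed 3-manifold}
\;\Longrightarrow\; M \cong S^{3}
```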
    2:16:11 Weirdly in higher dimensions, four and five, it was actually easier.
2:16:14 So it was solved first in higher dimensions.
2:16:16 There's somehow more room to do the deformation;
2:16:19 it's easier to move things around to the sphere.
    2:16:21 But three was really hard.
    2:16:23 So people tried many approaches.
2:16:27 There are combinatorial approaches, where you chop up the surface into little triangles
2:16:31 or tetrahedra and you just try to argue based on how the faces interact with each other.
2:16:35 There were algebraic approaches.
2:16:38 There are various algebraic objects, like things called the fundamental group,
2:16:43 that you can attach to these, and homology and cohomology and all these very
2:16:44 fancy tools.
2:16:45 They also didn't quite work.
2:16:51 But Richard Hamilton proposed a partial differential equations approach.
2:16:56 So the problem is that you have this object which
2:17:02 is secretly a sphere, but it's given to you in a weird way.
2:17:05 So think of a ball that's been kind of crumpled up and twisted,
2:17:06 and it's not obvious that it's a ball.
2:17:12 But if you have some sort of surface which is a deformed
2:17:18 sphere, you could, for example, think of it as the surface of a balloon.
2:17:19 You could try to inflate it.
2:17:20 You blow it up,
2:17:25 and naturally, as you fill it with air, the wrinkles will sort of smooth out
2:17:28 and it will turn into a nice round sphere.
2:17:31 Unless, of course, it was a torus or something, in which case it would get stuck
2:17:32 at some point.
2:17:35 If you inflate a torus, there'd be a point in the middle:
2:17:38 when the inner ring shrinks to zero, you get a singularity, and you can't
2:17:39 blow up any further.
2:17:40 You can't flow any further.
2:17:45 So he created this flow, which is now called Ricci flow, which is a way of taking an
2:17:49 arbitrary surface or space and smoothing it out to make it rounder and rounder, to
2:17:50 make it look like a sphere.
2:17:56 And he wanted to show that either this process would give you a sphere, or it would
2:17:56 create a singularity.
2:18:00 Very much like how PDEs either have global regularity or finite-time
2:18:01 blowup.
2:18:03 Yeah, basically, it's almost exactly the same thing.
2:18:04 It's all connected.
2:18:10 And he showed that for two dimensions, two-dimensional surfaces, if you start
2:18:13 with something simply connected, no singularity is ever formed.
2:18:16 You never run into trouble; you can flow, and it will give you a sphere.
2:18:19 And so he got a new proof of the two-dimensional result.
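The flow described above has a compact standard form; writing it out may help (standard notation, not from the transcript):

```latex
% Hamilton's Ricci flow: the metric g evolves in the direction of minus
% twice its Ricci curvature, smoothing the geometry over time.
\frac{\partial g_{ij}}{\partial t} = -2\,\mathrm{Ric}_{ij}(g)
% In 2D this rounds any simply connected surface into a sphere; in 3D
% singularities can form, which Perelman classified and excised by surgery.
```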
2:18:23 Well, by the way, that's a beautiful explanation of Ricci flow and its application in this context.
    2:18:25 How difficult is the mathematics here?
    2:18:26 Like for the 2D case?
    2:18:27 Yeah.
    2:18:27 Yeah.
2:18:32 These are quite sophisticated equations, on par with the Einstein equations. Slightly simpler,
2:18:38 but they were considered hard nonlinear equations to solve.
2:18:41 And there are lots of special tricks in 2D that helped.
2:18:46 But in 3D, the problem was that this equation was actually supercritical.
2:18:47 So it has the same problems as Navier-Stokes:
2:18:52 as you blow up, maybe the curvature could get concentrated in finer and finer, smaller and smaller
2:18:52 regions,
2:18:57 and it looked more and more nonlinear, and things just looked worse and worse.
2:19:00 And there could be all kinds of singularities that showed up.
2:19:05 Some singularities, like these things called neck pinches, where
2:19:11 the surface behaves like a barbell and pinches
2:19:11 at a point,
2:19:14 are simple enough that you can sort of see what to do next:
2:19:17 you just make a snip, and then you can turn one surface into two and evolve them separately.
2:19:22 But there was the prospect that some really nasty, knotted
2:19:28 singularities would show up that you couldn't see how to resolve in any way, that
2:19:29 you couldn't do any surgery to.
2:19:34 So you need to classify all the singularities: what are all the possible ways things
2:19:34 can go wrong?
2:19:40 So what Perelman did was, first of all, he turned the problem
2:19:41 from a supercritical problem to a critical problem.
2:19:47 I said before how the invention of energy, the Hamiltonian,
2:19:50 really clarified Newtonian mechanics.
2:19:54 So he introduced something which is now called Perelman's reduced volume and
2:19:55 Perelman's entropy.
2:20:00 He introduced new quantities, kind of like energy, that looked the same at every single
2:20:04 scale, and turned the problem into a critical one, where the nonlinearities actually suddenly
2:20:06 looked a lot less scary than they did before.
2:20:10 And then he still had to analyze the singularities of this critical
2:20:10 problem.
2:20:14 And that itself was a problem similar to this wave maps thing I worked on, actually,
2:20:19 at that level of difficulty. So he managed to classify all the singularities
2:20:22 of this problem and show how to apply surgery to each of these.
2:20:31 So, quite a lot of really ambitious steps, and nothing that a large
2:20:37 language model today, for example, could do. At best, I could imagine a model
2:20:41 proposing this idea as one of hundreds of different things to try,
2:20:45 but the other 99 would be complete dead ends, and you'd only find out after months
2:20:52 of work. He must have had some sense that this was the right track to pursue, because it takes
2:20:53 years to get from A to B.
2:20:58 So you've done, like you said, not even strictly mathematically, but more
2:21:05 broadly in terms of the process, similarly difficult things.
2:21:08 What can you infer from the process he was going through?
2:21:09 Because he was doing it alone.
2:21:12 What are some low points in a process like that?
2:21:17 When you start to... you've mentioned hardship; AI doesn't know when it's
2:21:18 failing.
2:21:19 What happens to you,
2:21:24 sitting in your office, when you realize the thing you did the last few
2:21:27 days, maybe weeks, is a failure?
2:21:28 Well, for me, I switch to a different problem.
2:21:32 I'm a fox;
2:21:32 I'm not a hedgehog.
2:21:36 But legitimately, that is a break you can take: to step away and look at
2:21:37 a different problem.
    2:21:37 Yeah.
    2:21:39 You can modify the problem too.
2:21:43 I mean, yeah. If there's a specific thing that's blocking
2:21:49 you, some bad case that keeps showing up, for which your
2:21:53 tool doesn't work, you can just assume by fiat that this bad case doesn't occur.
2:21:58 So you do some magical thinking, but strategically,
2:22:02 to see if the rest of the argument goes through.
2:22:05 If there are multiple problems with your approach, then maybe you just give up.
2:22:09 But if this is the only problem, and everything else checks out, then it's
2:22:10 still worth fighting.
2:22:17 So yeah, you have to do some forward reconnaissance sometimes.
2:22:20 And that is sometimes productive, to assume, like, okay, we'll figure it out
2:22:21 eventually.
2:22:22 Oh yeah, yeah.
2:22:24 Sometimes, actually, it's even productive to make mistakes.
2:22:31 So there was a project, which we actually won some prizes
2:22:36 for, before other people. We worked on this PDE problem, again, actually
2:22:37 this blowup regularity type problem,
2:22:39 and it was considered very hard.
2:22:45 Jean Bourgain, who was another Fields Medalist, worked on a special case
2:22:47 of this, but he could not solve the general case.
2:22:51 And we worked on this problem for two months, and we thought we'd solved it.
2:22:56 We had this cute argument that everything fit, and we were excited. We were
2:22:59 planning a celebration, to all get together and have champagne or something,
2:23:02 and we started writing it up.
2:23:06 And one of us, not me actually, but another co-author, said, oh,
2:23:11 in this lemma here, we have to estimate these 13 terms that
2:23:15 show up in this expansion, and we estimate 12 of them, but in our notes I can't find the
2:23:16 estimation of the 13th.
2:23:17 Can someone supply that?
2:23:19 And I said, sure, I'll look at this.
2:23:21 And, like he said, yeah, we didn't cover that.
2:23:22 We completely omitted this term.
2:23:25 And this term turned out to be worse than the other 12 terms put together.
2:23:27 In fact, we could not estimate this term.
2:23:30 And we tried for a few more months, in all different permutations,
2:23:34 and there was always this one term that we could not control.
2:23:38 And so this was very frustrating.
2:23:44 But because we had already invested months and months of effort, we stuck
2:23:47 at this, and we tried increasingly desperate things and crazy things.
2:23:52 And after two years, we found an approach that was somewhat different, quite a bit different, from
2:23:57 our initial strategy, which didn't generate these problematic terms, and it
2:23:58 actually solved the problem.
2:24:03 So we solved the problem after two years. But if we hadn't had that initial false dawn
2:24:07 of nearly solving the problem, we would have given up by month two or something and worked
2:24:08 on an easier problem.
2:24:13 Yeah, if we had known it would take two years, I'm not sure we would have started the project.
    2:24:14 Yeah.
2:24:18 Sometimes, actually, having the incorrect belief helps. You know, it's like Columbus traveling to the
2:24:22 New World with an incorrect measurement of the size of the Earth.
2:24:27 He thought he was going to find a new trade route to India, or at least that was how
2:24:28 he sold it in his prospectus.
2:24:35 I mean, it could be that he secretly knew. But just on the psychological element: do you have
2:24:41 emotional moments, or self-doubt, that just overwhelm you in times like that?
2:24:44 Because this stuff, it feels like math
2:24:51 is so engrossing that it can break you when you invest so much of yourself
2:24:53 in the problem and then it turns out wrong.
2:24:58 You could start to break, in a similar way chess has broken some people.
    2:24:58 Yeah.
2:25:04 I think different mathematicians have different levels of emotional investment in
2:25:05 what they do.
2:25:08 I mean, I think for some people, it's just a job. You have a problem, and
2:25:10 if it doesn't work out, you go on to the next one.
2:25:12 Yeah.
2:25:18 So the fact that you can always move on to another problem reduces the emotional
2:25:18 connection.
2:25:23 I mean, there are cases, you know, certain problems that are what are called
2:25:27 mathematical diseases, where people just latch onto that one problem, and
2:25:30 they spend years and years thinking about nothing but that one problem.
2:25:34 And, you know, maybe the career suffers and so forth.
2:25:36 You say, oh, but I'll get this big win.
2:25:42 You know, once I finish this problem, it will make up for all the years
2:25:44 of lost opportunity.
2:25:51 And, I mean, occasionally it works. But I really don't recommend
2:25:53 it unless you have the right fortitude.
    2:25:54 Yeah.
2:25:57 So I've never been super invested in any one problem.
2:26:01 One thing that helps is that we don't need to call our shots in advance.
2:26:07 When we do grant proposals, we sort of say we will study this
2:26:12 set of problems, but we don't promise that definitely, in five years, I will supply a proof
2:26:13 of all these things.
2:26:18 You know, you promise to make some progress or discover some interesting phenomena.
2:26:23 And maybe you don't solve the problem, but you find some related problem that you can
2:26:23 say something new about.
2:26:26 And that's a much more feasible task.
2:26:29 But I'm sure for you there are problems like this.
2:26:36 You have made so much progress toward the hardest problems in the history of
2:26:37 mathematics.
2:26:41 So is there a problem that just haunts you?
2:26:46 It sits there in the dark corners. You know: twin prime conjecture, Riemann hypothesis,
2:26:47 Goldbach conjecture.
2:26:52 Twin prime, maybe. Well, again, I mean, problems like the Riemann hypothesis,
2:26:53 those are so far out of reach.
2:26:55 Do you think so?
2:26:57 Yeah, there's not even a viable strategy.
2:27:03 Like, even if I activate all the cheats that I know of for this problem, there's
2:27:04 just still no way to get from A to B.
2:27:12 I think it needs a breakthrough in another area of mathematics to
2:27:12 happen first,
2:27:17 and for someone to recognize that it would be a useful thing to transport into this problem.
2:27:22 So we should maybe step back for a little bit and just talk about prime numbers.
    2:27:22 Okay.
    2:27:25 So they’re often referred to as the atoms of mathematics.
2:27:30 Can you just speak to the structure that these atoms provide?
2:27:34 The natural numbers have two basic operations attached to them: addition and multiplication.
2:27:38 So if you want to generate the natural numbers, you can do one of two things.
2:27:41 You can just start with one and add one to itself over and over again,
2:27:42 and that generates you the natural numbers.
2:27:46 So additively, they're very easy to generate: one, two, three, four, five.
2:27:49 Or, if you want to generate them multiplicatively, you can take all the prime numbers: two, three,
2:27:51 five, seven, and multiply them together.
2:27:55 And together, that gives you all the natural numbers, except maybe
2:27:56 for one.
2:28:00 So there are these two separate ways of thinking about the natural numbers: an additive
2:28:02 point of view and a multiplicative point of view.
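A minimal sketch of the two generation procedures being contrasted; the function names here are ours, chosen for illustration:

```python
def naturals_additively(limit):
    """Generate 1, 2, 3, ... by starting at one and repeatedly adding one."""
    n = 1
    while n <= limit:
        yield n
        n += 1

def factor_into_primes(n):
    """Express n > 1 as a product of primes (the multiplicative generation)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(list(naturals_additively(5)))  # [1, 2, 3, 4, 5]
print(factor_into_primes(360))       # [2, 2, 2, 3, 3, 5]
```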
2:28:05 And separately, they're not so bad.
2:28:10 Any question about the natural numbers that only involves addition is relatively
2:28:10 easy to solve,
2:28:14 and any question that only involves multiplication is relatively easy to solve.
2:28:17 But what has been frustrating is when you combine the two together.
2:28:23 Suddenly you get something extremely rich. I mean, we know that there are statements in
2:28:25 number theory that are actually undecidable.
2:28:27 There are certain polynomials in some number of variables:
2:28:29 is there a solution in the natural numbers?
2:28:31 And the answer depends on an undecidable statement,
2:28:36 like whether the axioms of mathematics are consistent or not.
2:28:43 But even the simplest problems that combine something multiplicative,
2:28:48 such as the primes, with something additive, such as shifting by two... separately, we understand
2:28:52 both of them well. But if you ask, when you shift a prime by two, how often can you get
2:28:54 another prime?
2:28:58 It's been amazingly hard to relate the two.
2:29:04 And we should say that the twin prime conjecture simply posits that there are infinitely
2:29:06 many pairs of prime numbers that differ by two.
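In symbols (a standard formulation):

```latex
% Twin prime conjecture: the count of twin pairs grows without bound.
\#\{\, p \le x : p \text{ and } p+2 \text{ both prime} \,\}
\;\to\; \infty \quad \text{as } x \to \infty
```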
2:29:13 Now, the interesting thing is that you have been very successful at pushing forward the
2:29:18 field in answering these complicated questions of this variety.
2:29:23 Like you mentioned, the Green-Tao theorem proves that the prime numbers contain arithmetic
2:29:24 progressions of any length.
2:29:24 Right.
2:29:27 It's just mind-blowing that you can prove something like that.
    2:29:27 Right.
    2:29:28 Yeah.
2:29:33 So what we've realized because of this type of research is that different
2:29:36 patterns have different levels of indestructibility.
2:29:41 What makes the twin prime conjecture hard is that you can take all the primes in
2:29:44 the world, you know, three, five, seven, 11, and so forth.
2:29:46 There are some twins in there.
2:29:51 11 and 13 is a pair of twin primes, and so forth. But you could easily, if you
2:29:56 wanted to, redact the primes to get rid of these twins.
2:30:00 Like, the twins, they show up, and there are infinitely many of them, but they're actually reasonably
2:30:01 sparse.
2:30:04 I mean, initially there are quite a few, but once you get to the
2:30:07 millions, trillions, they become rarer and rarer.
2:30:12 And if someone was given access to the database
2:30:15 of primes and could just edit out a few primes here and there, they could make the twin
2:30:20 prime conjecture false by just removing, like, 0.01% of the primes or something,
2:30:23 well chosen to do this.
2:30:30 And so you could present a censored database of the primes which passes all of the statistical
2:30:34 tests of the primes; you know, it obeys things like the prime number theorem and other
2:30:37 things about the primes, but it doesn't contain any twin primes anymore.
2:30:40 And this is a real obstacle for the twin prime conjecture.
2:30:48 It means that any proof strategy to actually find twin primes in the actual primes must fail
2:30:51 when applied to these slightly edited primes.
2:30:57 And so it must be some very subtle, delicate feature of the primes that you can't just get
2:31:00 from, like, aggregate statistical analysis.
    2:31:01 Okay.
    2:31:02 So that’s all.
    2:31:02 Yeah.
2:31:06 On the other hand, arithmetic progressions have turned out to be much more robust.
2:31:10 Like, you can take the primes and you can eliminate 99% of the primes, actually,
2:31:13 and you can take any 99% you want.
2:31:17 And it turns out, and this is another thing we proved, that you still get arithmetic progressions.
2:31:22 Arithmetic progressions are much... you know, they're like cockroaches.
2:31:23 Of arbitrary length.
2:31:24 Yes, yes.
2:31:25 That's crazy.
2:31:30 I mean, so, for people who don't know, an arithmetic progression is a sequence of
2:31:31 numbers that differ by some fixed amount.
2:31:34 But it's, again, an infinite monkey type phenomenon.
2:31:38 For any fixed size of your set, you don't get arbitrarily long progressions;
2:31:40 you only get quite short progressions.
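A brute-force illustration of progressions of primes at toy scale; the bounds and function names are our choices, and this is exhaustive search, not the Green-Tao argument:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def prime_progressions(length, limit):
    """Find arithmetic progressions of primes with the given number of terms."""
    prime_set = set(primes_up_to(limit))
    found = []
    for p in sorted(prime_set):
        for step in range(2, limit // length, 2):  # even steps (for primes past 2)
            terms = [p + i * step for i in range(length)]
            if terms[-1] <= limit and all(t in prime_set for t in terms):
                found.append(terms)
    return found

# 5, 11, 17, 23, 29 is a five-term progression with common difference 6.
print(prime_progressions(5, 100))
```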
2:31:43 But you're saying twin primes is not an infinite monkey phenomenon.
2:31:47 I mean, it's very subtle... it's still an infinite monkey phenomenon.
2:31:47 Right.
2:31:48 Yeah.
2:31:53 If the primes were really genuinely random, if the primes were generated by monkeys,
2:31:56 then, yes, in fact, the infinite monkey theorem would give you twins.
2:32:02 Oh, but you're saying that for twin primes, you can't use the same tools.
2:32:04 It doesn't appear random, almost.
2:32:05 Well, we don't know.
2:32:09 Yeah, we believe the primes behave like a random set.
2:32:14 And the reason why we care about the twin prime conjecture is that it's a test case for whether
2:32:19 we can genuinely, confidently say, with 0% chance of error, that the primes behave like
2:32:20 a random set.
2:32:20 Okay.
2:32:23 Random versions of the primes, we know, contain twins,
2:32:29 at least with probability tending to 100% as you go
2:32:30 out further and further.
2:32:36 So the primes, we believe, are random. The reason why arithmetic progressions are
2:32:41 indestructible is that, regardless of whether your set looks random or looks
2:32:46 structured, like periodic, in both cases arithmetic progressions appear, but for different
2:32:47 reasons.
2:32:51 There are many proofs
2:32:55 of these arithmetic progression theorems, and they're all proven by some sort of dichotomy:
2:32:57 your set is either structured or random,
2:33:00 and in both cases you can say something, and then you put the two together.
2:33:06 But with twin primes, if the primes are random, then you're happy; you win.
2:33:11 If the primes are structured, they could be structured in a specific way that eliminates the
2:33:12 twins.
2:33:15 And we can't rule out that one conspiracy.
2:33:20 And yet you were able to make, as I understand, progress on the k-tuple version.
2:33:21 Right.
2:33:21 Yeah.
2:33:25 So the one funny thing about conspiracies is that any one conspiracy theory is really
2:33:26 hard to disprove.
2:33:27 Uh-huh.
2:33:30 You know, if you believe the world is run by lizards, and someone shows you evidence
2:33:34 that it's not run by lizards: well, that evidence was planted by lizards.
2:33:38 So you may have encountered this kind of phenomenon.
2:33:38 Yeah.
2:33:44 So there's almost no way to definitively rule out
2:33:44 a conspiracy.
2:33:49 And the same is true in mathematics, for a conspiracy that is solely devoted to eliminating
2:33:50 twin primes.
2:33:53 You know, you would have to also infiltrate other areas of mathematics,
2:33:56 but it could be made consistent, at least as far as we know.
2:34:02 But there's a weird phenomenon: you can make one conspiracy rule out other
2:34:03 conspiracies.
2:34:07 You know, if the world is run by lizards, it can't also be run by aliens.
2:34:08 Right.
2:34:09 Right.
2:34:12 So one unreasonable thing is hard to disprove, but more than one,
2:34:13 there are tools.
2:34:15 So, yeah.
2:34:19 So, for example, we know there are infinitely many pairs of primes which
2:34:24 differ by at most 246; that's actually
2:34:26 the current record.
2:34:27 So there's, like, a bound.
2:34:27 Yes.
2:34:28 On the gap.
2:34:28 Right.
2:34:33 So there are twin primes; there are things called cousin primes that differ by four;
2:34:35 there are things called sexy primes that differ by six.
2:34:37 What are sexy primes?
2:34:38 Primes that differ by six.
2:34:42 The concept is much less exciting than the name suggests.
    2:34:42 Got it.
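These named pairs are easy to enumerate; a quick sketch (the labels follow the conversation, and the bound of 50 is arbitrary):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def prime_pairs(gap, limit):
    """All pairs of primes (p, p + gap) with both members at most limit."""
    prime_set = set(primes_up_to(limit))
    return [(p, p + gap) for p in sorted(prime_set) if p + gap in prime_set]

print(prime_pairs(2, 50))  # twin primes:   (3, 5), (5, 7), (11, 13), ...
print(prime_pairs(4, 50))  # cousin primes: (3, 7), (7, 11), (13, 17), ...
print(prime_pairs(6, 50))  # sexy primes:   (5, 11), (7, 13), (11, 17), ...
```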
2:34:48 So you can make a conspiracy rule out one of these. But once you have, like, 50
2:34:50 of them, it turns out that you can't rule out all of them at once.
2:34:54 It just requires too much energy, somehow, in this conspiracy space.
2:34:56 How do you do the bound part?
2:35:02 How do you develop a bound for the difference between primes, given that there's
2:35:03 an infinite number of them?
2:35:05 So it's ultimately based on what's called the pigeonhole principle.
2:35:09 The pigeonhole principle is the statement that if you have a number of pigeons,
2:35:14 and they all have to go into pigeonholes, and you have more pigeons than pigeonholes, then
2:35:16 one of the pigeonholes has to have at least two pigeons in it.
2:35:17 So there have to be two pigeons that are close together.
2:35:22 So, for instance, if you have 101 numbers and they all range from one to a thousand,
2:35:27 then two of them have to be at most ten apart, because you can divide up the numbers from one
2:35:29 to a thousand into 100 pigeonholes of length ten,
2:35:37 and two of the numbers have to belong to the same pigeonhole.
2:35:43 So it's a basic principle in mathematics.
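The numeric example can be checked mechanically; a minimal sketch (the bucket width of ten comes from the example above):

```python
import random

# 101 distinct numbers drawn from 1..1000 must contain two that differ by
# at most 9: the 100 buckets [1..10], [11..20], ..., [991..1000] cannot
# hold 101 numbers without some bucket receiving two of them.
numbers = random.sample(range(1, 1001), 101)
buckets = {}
for x in numbers:
    buckets.setdefault((x - 1) // 10, []).append(x)

crowded = next(group for group in buckets.values() if len(group) >= 2)
print(crowded[0], crowded[1], "differ by", abs(crowded[0] - crowded[1]))
```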
2:35:48 So it doesn't quite work with the primes directly, because the primes get sparser and sparser
2:35:51 as you go out; fewer and fewer numbers are prime.
2:35:56 But it turns out that there's a way to assign weights to numbers.
2:36:00 There are numbers that are kind of almost prime:
2:36:04 they don't have no factors at all other than themselves and one, but they have very few
2:36:05 factors.
2:36:09 And it turns out that we understand almost primes a lot better than we understand
2:36:09 primes.
2:36:14 And so, for example, it was known for a long time that there were twin almost primes.
2:36:15 This has been worked out.
2:36:17 So almost primes are something we can understand.
2:36:22 So you can actually restrict attention to a suitable set of almost primes.
2:36:30 And whereas the primes are very sparse overall, relative to the almost primes they
2:36:31 are actually much less sparse.
2:36:35 You can set up a set of almost primes where the primes have density, like, say,
2:36:35 one percent.
2:36:41 And that gives you a shot at proving, by applying some sort of pigeonhole principle,
2:36:43 that there are pairs of primes only a hundred apart.
2:36:47 But in order to prove the twin prime conjecture, you need to get the density of primes inside the
2:36:49 almost primes up to a threshold of 50%.
2:36:52 Once you get up to 50%, you will get twin primes.
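To make the density idea concrete, here is a toy computation, taking "almost prime" to mean at most two prime factors counted with multiplicity (one common convention; the 1% and 50% figures above refer to much more carefully weighted sets, not this crude one):

```python
def num_prime_factors(n):
    """Count the prime factors of n with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            count += 1
            n //= d
        d += 1
    return count + (1 if n > 1 else 0)

N = 10_000
almost_primes = [n for n in range(2, N) if num_prime_factors(n) <= 2]
primes = [n for n in almost_primes if num_prime_factors(n) == 1]

# The primes form a much larger fraction of the almost primes
# than they do of all the numbers up to N.
print(len(primes) / len(almost_primes))  # density among almost primes
print(len(primes) / N)                   # density among all numbers
```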
2:36:54 But, unfortunately, there are barriers.
2:37:00 We know that no matter what kind of good set of almost primes you pick, the density
2:37:01 of primes can never get above 50%.
2:37:03 It's called the parity barrier.
2:37:05 And I would love to find a way to breach that barrier.
2:37:09 It's one of my long-term dreams, because it would
2:37:13 open up not only the twin prime conjecture; the Goldbach conjecture and many other problems
2:37:18 in number theory are currently blocked, because our current techniques would require going beyond
2:37:21 this theoretical parity barrier.
2:37:23 It's like pushing past the speed of light.
    2:37:23 Yeah.
2:37:27 So, we should say: the twin prime conjecture is one of the biggest problems in the history of
2:37:32 mathematics; the Goldbach conjecture also.
2:37:36 They feel like next-door neighbors. Have there been days when you felt you saw the path?
2:37:37 Oh yeah.
2:37:39 Yeah.
2:37:42 Sometimes you try something and it works super well.
2:37:48 Again, by the sense of mathematical smell we talked about earlier, you learn
2:37:53 from experience when things are going too well, because there are certain difficulties that
2:37:54 you sort of have to encounter.
2:38:01 I think the way a colleague might put it is: you know, if you are
2:38:06 on the streets of New York, and you're put in a blindfold and put in a car, and after some
2:38:11 hours the blindfold comes off and you're in Beijing... I mean, that was too
2:38:12 easy, somehow.
2:38:14 Like, there was no ocean being crossed.
2:38:19 Even if you don't know exactly what was done, you suspect that
2:38:20 there's something that wasn't right.
2:38:26 But is that still in the back of your head? Do you return
2:38:29 to the prime numbers every once in a while, to see?
2:38:29 Yeah.
2:38:33 When I have nothing better to do, which is less and less now; I get busy
2:38:37 with so many things these days. But yeah, when I have some free time, and I'm too
2:38:40 frustrated to work on my sort of real research projects,
2:38:44 and I also don't want to do my administrative stuff, and I don't want to do some errands
2:38:44 for my family,
2:38:48 I can play with these things for fun.
2:38:50 And usually you get nowhere.
2:38:50 Yeah.
2:38:52 You have to learn to just say, okay, fine.
2:38:54 Once again, nothing happened. I will move on.
2:39:01 Very occasionally, one of these problems I actually solve. Well, sometimes, as you
2:39:05 say, you think you've solved it, and then you're euphoric for maybe 15 minutes,
2:39:09 and then you think, I should check this, because this is too easy to be true.
2:39:10 And it usually is.
2:39:16 What does your gut say about when these problems will be solved: twin prime and Goldbach?
2:39:16 For twin prime,
2:39:19 I think we'll keep getting more partial results.
2:39:23 It does need at least one more breakthrough.
2:39:26 This parity barrier is the biggest remaining obstacle.
2:39:31 There are simpler versions of the conjecture where we are getting really close.
2:39:38 So I think in 10 years we will have many more, much closer results.
2:39:39 We may not have the whole thing.
2:39:42 So twin primes is somewhat close.
2:39:46 Riemann hypothesis, I have no idea; I mean, it has to happen by accident,
2:39:51 I think. So the Riemann hypothesis is a kind of more general conjecture about the distribution
2:39:52 of prime numbers?
2:39:52 Right.
2:39:53 Yeah.
2:39:56 It's sort of viewed more multiplicatively. Like, for questions only
2:40:01 involving multiplication, no addition, the primes really do behave as randomly as you
2:40:01 could hope.
2:40:07 So there's a phenomenon in probability called square root cancellation. You know,
2:40:13 if you want to poll, say, America on some issue, and you ask one or two voters,
2:40:17 you may have sampled a bad sample, and then you get a really imprecise measurement
2:40:22 of the full average. But if you sample more and more people, the accuracy gets better
2:40:22 and better.
2:40:27 And the accuracy improves like the square root of the number of people you sampled.
2:40:31 So yeah, if you sample a thousand people, you can get, like, a two or three percent margin
2:40:31 of error.
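A quick simulation of the square root law being invoked; the 50/50 electorate and the sample sizes are our toy choices:

```python
import random

TRUE_SUPPORT = 0.5  # hypothetical fraction of voters supporting the issue

def average_poll_error(sample_size, trials=500):
    """Average absolute error of a poll of the given size."""
    total = 0.0
    for _ in range(trials):
        yes = sum(random.random() < TRUE_SUPPORT for _ in range(sample_size))
        total += abs(yes / sample_size - TRUE_SUPPORT)
    return total / trials

for n in (10, 100, 1000, 10000):
    # The error shrinks roughly like 1/sqrt(n): 100x more people, ~10x less error.
    print(n, round(average_poll_error(n), 4))
```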
2:40:36 So, in the same sense, if you measure the primes in a certain multiplicative sense, there's a
2:40:40 certain type of statistic you can measure, and it's called the Riemann zeta function, and
2:40:41 it fluctuates up and down.
2:40:46 But in some sense, as you keep averaging more and more, if you sample more and more,
2:40:48 the fluctuations should go down, as if they were random.
2:40:50 And there's a very precise way to quantify that.
2:40:54 And the Riemann hypothesis is a very elegant way to capture this.
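The standard statement behind this description (textbook form, not a quote from the conversation):

```latex
% Riemann zeta function, defined for Re(s) > 1 and continued analytically:
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}
% Riemann hypothesis: every nontrivial zero has real part 1/2. Equivalently,
% the prime counting function has square-root-size fluctuations:
\pi(x) = \operatorname{Li}(x) + O\!\left(\sqrt{x}\,\log x\right)
```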
2:40:58 But, as with many other problems in mathematics, we have
2:41:02 very few tools to show that something really, genuinely behaves like it's random.
2:41:06 And this is not just asking for a little bit of randomness; it's asking that it behaves
2:41:08 as randomly as an actually random set:
2:41:10 this square root cancellation.
2:41:15 And we know, because of things related to the parity problem, actually, that most of our
2:41:18 usual techniques cannot hope to settle this question.
2:41:21 The proof has to come out of left field.
2:41:28 But, yeah, what that is, no one has any serious proposal.
2:41:32 And there are various ways, as I said: you can modify the
2:41:35 primes a little bit and you can destroy the Riemann hypothesis.
2:41:38 So it has to be very delicate.
2:41:41 You can't apply something that has huge margins of error.
2:41:43 It has to just barely work.
2:41:49 And there are all these pitfalls that you have to dodge very adeptly.
2:41:51 The prime numbers are just fascinating.
    2:41:52 Yeah, yeah, yeah.
2:41:57 What, to you, is most mysterious about the prime numbers?
    2:42:00 That’s a good question.
2:42:03 So, conjecturally, we have a good model of them.
2:42:06 I mean, as I said, they have certain patterns: the primes are usually
2:42:07 odd, for instance.
2:42:11 But apart from these sort of obvious patterns, they behave very randomly, and you can just assume
2:42:12 that they behave randomly.
2:42:16 So there's something called the Cramér random model of the primes: that after
2:42:18 a certain point, primes just behave like a random set.
2:42:22 And there are various slight modifications to this model, but this has been a very good
2:42:22 model.
2:42:24 It matches the numerics.
2:42:26 It tells us what to predict.
2:42:28 Like, I can tell you with complete certainty that the twin prime conjecture is true;
2:42:31 the random model gives overwhelming odds that it is true.
2:42:32 I just can't prove it.
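The Cramér model is easy to simulate: include each integer n independently with probability 1/log n and count the "twins" that survive. A toy version (the range is our choice):

```python
import math
import random

def cramer_random_set(limit):
    """Include each n >= 3 independently with probability 1/log(n)."""
    return {n for n in range(3, limit) if random.random() < 1 / math.log(n)}

pseudo_primes = cramer_random_set(1_000_000)
twin_pairs = sum(1 for n in pseudo_primes if n + 2 in pseudo_primes)

# The model predicts twins keep appearing forever, roughly C * x / (log x)^2
# of them below x, which is why the twin prime conjecture is believed.
print(len(pseudo_primes), "pseudo-primes,", twin_pairs, "twin pairs")
```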
2:42:37 Most of our mathematics is optimized for solving things with patterns,
2:42:46 and the primes have this anti-pattern, as does almost everything, really. But we can't
2:42:46 prove that.
2:42:47 Yeah.
2:42:50 I guess it's not mysterious that the primes would be random, because there's
2:42:57 sort of no reason for them to have any kind of secret pattern. But what is
2:43:01 mysterious is: what is the mechanism that really forces the randomness to happen?
2:43:03 And this is just absent.
2:43:09 Another incredibly, surprisingly difficult problem is the Collatz conjecture.
2:43:09 Oh, yes.
2:43:17 Simple to state, beautiful to visualize in its simplicity, and yet extremely difficult
2:43:18 to solve.
2:43:20 And yet you have been able to make progress.
2:43:26 Paul Erdős said about the Collatz conjecture that mathematics may not be ready for
2:43:27 such problems.
2:43:32 Others have stated that it is an extraordinarily difficult problem, completely out of reach,
2:43:35 this is in 2010, of present-day mathematics.
2:43:37 And yet you have made some progress.
2:43:39 Why is it so difficult to make progress?
2:43:41 Can you actually even explain what it is?
    2:43:41 Oh, yeah.
    2:43:41 Yeah.
2:43:43 So it's a problem that you can explain.
2:43:49 Yeah, it helps with some visual aids. But yeah,
2:43:53 you take any natural number, like, say, 13, and you apply the following procedure to it.
2:43:58 If it's even, you divide it by two, and if it's odd, you multiply it by three and add
2:43:59 one.
2:44:01 So even numbers get smaller; odd numbers get bigger.
2:44:04 So 13 would become 40, because 13 times three is 39;
2:44:05 add one, you get 40.
2:44:09 So it's a simple process for odd numbers and even numbers.
2:44:10 They're both very easy operations.
2:44:11 And then you put it together,
2:44:13 and it's still reasonably simple.
2:44:16 But then you ask what happens when you iterate it:
2:44:18 you take the output that you just got and feed it back in.
2:44:20 So 13 becomes 40.
2:44:22 40 is now even; divided by two is 20.
2:44:27 20 is still even; divided by two is 10, then 5. And then 5 times three plus one is 16.
2:44:29 And then eight, four, two, one.
2:44:33 And then from one, it goes one, four, two, one, four, two, one.
2:44:33 It cycles forever.
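The procedure is one line of code; a minimal sketch reproducing the orbit of 13 described above:

```python
def hailstone(n):
    """Collatz iteration: halve if even, 3n + 1 if odd, stop at one."""
    orbit = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        orbit.append(n)
    return orbit

print(hailstone(13))  # [13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
```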
2:44:39 So the sequence I just described, you know, 13, 40, 20, 10, and so forth: these are also
2:44:44 called hailstone sequences, because there's an oversimplified model of hailstone formation,
2:44:47 which is not actually quite correct, but is still somehow taught to high school
2:44:53 students as a first approximation, in which a little nugget of ice, a nice little
2:44:53 crystal,
2:44:57 forms in a cloud, and it goes up and down because of the wind.
2:45:02 And sometimes, when it's cold, it acquires a bit more mass, and maybe it melts a little bit.
2:45:06 And this process of going up and down creates this sort of partially melted ice, which eventually
2:45:07 becomes a hailstone,
2:45:09 and eventually it falls down to the earth.
2:45:14 So the conjecture is that no matter how high you start up, like, you take a number which
2:45:18 is in the millions or billions, this process, which goes up if you're odd and down
2:45:22 if you're even, eventually goes down to earth all the time.
2:45:27 No matter where you start, with this very simple algorithm you end up at one, though you
2:45:28 might climb for a while.
    2:45:28 Right.
    2:45:29 Yeah.
2:45:30 So, yeah.
2:45:33 If you plot these sequences, they look like Brownian motion.
2:45:37 They look like the stock market: they just go up and down in a seemingly
2:45:38 random pattern.
2:45:43 And, in fact, usually that's what happens: if you plug in a random number, you can actually
2:45:46 prove, at least initially, that it will look like a random walk.
2:45:49 And it's actually a random walk with a downward drift.
2:45:55 It's like you're always gambling on roulette at the casino, with odds slightly
2:45:55 weighted against you.
2:46:00 So sometimes you win, sometimes you lose, but in the long run you lose a bit more
2:46:01 than you win,
2:46:04 and so normally your wallet will go to zero
2:46:06 if you just keep playing over and over again.
    2:46:08 So statistically it makes sense.
    2:46:09 Yes.
2:46:16 So the result that I proved, roughly speaking, is that statistically, like 90% of all inputs
2:46:21 would drift down to, maybe not all the way to one, but much, much smaller
2:46:22 than what you started with.
2:46:27 So it's like, if I told you that when you go to a casino, most of the time you end
2:46:31 up, if you keep playing long enough, with a smaller amount in your wallet
2:46:31 than when you started.
2:46:34 That's kind of like the result that I proved.
2:46:36 So why can't that result,
2:46:41 like, can you continue down that thread to prove the full conjecture?
2:46:46 Well, the problem is that I used arguments from probability theory,
2:46:48 and there's always this exceptional event.
2:46:53 So, you know, in probability we have this law of large numbers, which tells you
2:46:58 things like: if you play a game at a casino with a losing expectation
2:47:04 over time, you are guaranteed, or almost surely, with probability as close to 100% as
2:47:05 you wish, guaranteed to lose money.
2:47:08 But there's always this exceptional outlier.
2:47:13 Like, it is mathematically possible that, even when the odds are not in your
2:47:13 favor,
2:47:18 you could just keep winning slightly more often than you lose. Very much like how, in Navier-
2:47:22 Stokes, it could be, you know, most of the time your waves can disperse,
2:47:27 but there could be just one outlier choice of initial conditions that would lead to blowup.
2:47:34 And there could be one outlier choice of a special number you stick in that
2:47:38 shoots off to infinity, while all other numbers crash to earth, crash to one.
2:47:44 In fact, there are some mathematicians, Alex Kontorovich, for instance,
2:47:50 who have proposed that these Collatz iterations are like cellular
2:47:51 automata.
2:47:55 And actually, if you look at what happens in binary, they do look a little
2:47:57 bit like these Game of Life type patterns.
2:48:03 And in analogy to how the Game of Life can create these massive self-replicating
2:48:07 objects and so forth, possibly you could create some sort of heavier-than-air flying machine:
2:48:13 a number which is actually encoding this machine, whose job
2:48:16 is to create a version of itself which is larger.
2:48:22 A heavier-than-air machine encoded in a number that flies forever.
    2:48:22 Yeah.
2:48:25 So Conway, in fact, worked on this problem as well.
2:48:25 Oh, wow.
2:48:30 So similar, in fact, that it was one of my inspirations for the Navier-Stokes
2:48:35 project. Conway studied generalizations of the Collatz problem where, instead of
2:48:39 multiplying by three and adding one, or dividing by two, you have more complicated branching
2:48:43 rules. Instead of having two cases, maybe you have 17 cases, and then you go up
2:48:43 and down.
2:48:49 And he showed that once your iteration gets complicated enough, you can actually encode
2:48:52 Turing machines, and you can actually make these problems undecidable and do things like
2:48:52 this.
2:48:58 In fact, he invented a programming language for these kinds of fractional linear transformations.
2:49:06 It was Turing complete, and he showed that you
2:49:10 could make a program such that, if the number you inserted
2:49:14 was encoded as a prime, it would sink down, and otherwise it would go
2:49:16 up, and things like that.
2:49:22 So the general class of problems is really as complicated as all of mathematics.
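The language is Conway's FRACTRAN: a program is a list of fractions, and at each step you multiply the current integer by the first fraction that gives an integer, halting when none does. Below is a small interpreter, together with the commonly quoted PRIMEGAME program, whose powers of two in the orbit of 2 are said to have exactly the prime exponents; the fraction list is reproduced from memory of the literature, so treat it as an assumption:

```python
# Conway's PRIMEGAME, as usually quoted: (numerator, denominator) pairs.
PRIMEGAME = [(17, 91), (78, 85), (19, 51), (23, 38), (29, 33), (77, 29),
             (95, 23), (77, 19), (1, 17), (11, 13), (13, 11), (15, 14),
             (15, 2), (55, 1)]

def fractran(n, program, max_steps):
    """Run a FRACTRAN program: repeatedly multiply n by the first
    fraction that yields an integer; halt when none applies."""
    for _ in range(max_steps):
        for num, den in program:
            if (n * num) % den == 0:
                n = n * num // den
                break
        else:
            return  # no fraction applies: the program halts
        yield n

# Powers of two in the orbit of 2 should appear as 2^2, 2^3, 2^5, 2^7, ...
for value in fractran(2, PRIMEGAME, 300_000):
    if value & (value - 1) == 0:        # value is a power of two
        print(value.bit_length() - 1)   # prints the primes 2, 3, 5, 7, ...
```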
2:49:27 So, some of the mystery of the cellular automata that we talked about, having a mathematical
2:49:33 framework to say anything about cellular automata: maybe the same kind of framework is required.
2:49:33 Yeah.
2:49:34 Yeah.
2:49:40 If you want to do it not statistically, but you really want 100% of all inputs
2:49:41 to fall to earth.
2:49:41 Yeah.
2:49:48 So what might be feasible is showing that 99%, you know, go to one. But everything,
2:49:50 you know, that looks hard.
2:49:56 What would you say, out of these within-reach famous problems, is the hardest problem we have
2:49:57 today?
2:49:58 Is it the Riemann hypothesis?
2:50:00 Riemann is up there.
2:50:05 P equals NP is a good one, because it's a meta-problem:
2:50:10 if you solve it in the positive sense, that is, you can find a P equals NP algorithm,
2:50:13 then potentially this solves a lot of other problems as well.
2:50:17 And we should mention that for some of the conjectures we've been talking about, a lot of
2:50:19 stuff is built on top of them now.
2:50:20 There are ripple effects.
2:50:23 P equals NP has more ripple effects than basically any other.
2:50:23 Right.
2:50:30 If the Riemann hypothesis is disproven, that would be a big mental shock to the number
2:50:35 theorists, but it would also have follow-on effects for cryptography,
2:50:41 because a lot of cryptography uses number theory constructions
2:50:42 involving primes and so forth,
2:50:47 and it relies very much on the intuition that number theorists have built over many, many
2:50:51 years about which operations involving primes behave randomly and which ones don't.
2:50:58 And in particular, encryption methods are designed to turn text with information
2:51:02 in it into text which is indistinguishable from random noise,
2:51:08 and hence, we believe, almost impossible to crack, at least mathematically.
2:51:16 But if something as core to our beliefs as the Riemann hypothesis is wrong, it means that
2:51:20 there are actual patterns in the primes that we're not aware of.
2:51:25 And if there's one, there's probably going to be more, and suddenly a lot of our crypto
2:51:26 systems are in doubt.
    2:51:27 Yeah.
2:51:32 But then how do you say stuff about the primes?
2:51:33 Yeah.
2:51:37 Like, you're going towards the Collatz conjecture again.
2:51:41 Because you want it to be random, right?
2:51:42 You want it to be random.
2:51:43 Yeah.
2:51:46 So, more broadly, I'm just looking for more tools, more ways to show that things
2:51:47 are random.
    2:51:49 How do you prove a conspiracy doesn’t happen?
    2:51:49 Right.
2:51:53 Is there any chance, to you, that P equals NP?
2:51:56 Is there some... can you imagine a possible universe where it is?
2:51:57 It is possible.
2:52:00 I mean, there are various scenarios.
2:52:04 I mean, there's one where it is technically possible, but in fact it's never
2:52:05 actually implementable.
2:52:10 The evidence is sort of slightly pushing in favor of no: that probably P is not equal
2:52:10 to NP.
2:52:14 I mean, it seems like it's one of those cases, similar to the Riemann hypothesis, where
2:52:19 the evidence is leaning pretty heavily on the no.
2:52:21 Certainly more on the no than on the yes.
2:52:25 The funny thing about P equals NP is that we also have a lot more obstructions than we do
2:52:26 for almost any other problem.
2:52:31 So while there's evidence, we also have a lot of results ruling out many, many
2:52:33 types of approaches to the problem.
2:52:36 This is the one thing that the computer scientists have actually been very good at:
2:52:39 saying that certain approaches cannot work.
2:52:40 No-go theorems.
2:52:41 It could be undecidable.
2:52:42 We don't know. Yeah, we don't know.
2:52:47 There's a funny story I read: when you won the Fields Medal, somebody from the internet
2:52:54 wrote you and asked, you know, what are you going to do now that you've won this prestigious
2:52:54 award?
2:53:00 And you quickly, very humbly, said that, you know, a shiny medal is not
2:53:01 going to solve any of the problems I'm currently working on,
2:53:04 so I'm just going to keep working on them.
2:53:08 First of all, it's funny to me that you would answer an email in that context.
2:53:14 And second of all, it just shows your humility. But anyway, maybe you could speak
2:53:21 to the Fields Medal; and it's another way for me to ask about Grigori Perelman.
2:53:26 What do you think about him famously declining the Fields Medal and the Millennium Prize, which
2:53:30 came with $1 million of prize money?
2:53:32 He stated: I'm not interested in money or fame.
2:53:35 The prize is completely irrelevant for me.
2:53:39 If the proof is correct, then no other recognition is needed.
    2:53:40 Yeah.
    2:53:45 No, he’s somewhat of an outlier, even among mathematicians, who tend to
    2:53:47 have somewhat idealistic views.
    2:53:48 I’ve never met him.
    2:53:51 I think I’d be interested to meet him one day, but I never had the chance.
    2:53:54 I know people who’ve met him. He’s always had strong views about certain things.
    2:53:58 You know, it’s not like he was completely isolated from the math community.
    2:54:01 I mean, he would give talks and write papers and so forth.
    2:54:04 But at some point he just decided not to engage with the rest of the community.
    2:54:07 He was disillusioned or something.
    2:54:08 I don’t know.
    2:54:15 And he decided to peace out and collect mushrooms in
    2:54:15 St. Petersburg or something.
    2:54:17 And that’s fine.
    2:54:19 You know, you can do that.
    2:54:21 I mean, that’s the other flip side.
    2:54:25 A lot of the problems that we solve, some of them do have
    2:54:27 practical application, and that’s great.
    2:54:33 But you can also just stop thinking about a problem. He hasn’t published
    2:54:35 in this field since, but that’s fine.
    2:54:37 There are many, many other people who’ve done so as well.
    2:54:39 Yeah.
    2:54:43 So I guess one thing I didn’t realize initially about the Fields Medal is that it sort of
    2:54:45 makes you part of the establishment.
    2:54:50 You know, most mathematicians are just career mathematicians.
    2:54:54 You just focus on publishing your next paper, maybe getting promoted
    2:54:59 one rank, starting a few projects, maybe taking some students or
    2:54:59 something.
    2:55:00 Yeah.
    2:55:04 But then suddenly people want your opinion on things, and you have to think a little
    2:55:07 bit about things you might otherwise say offhandedly, because you can no longer assume
    2:55:08 that no one’s going to listen to you.
    2:55:10 It matters more now.
    2:55:12 Is it constraining to you?
    2:55:14 Are you able to still have fun and be a rebel
    2:55:18 and try crazy stuff and play with ideas?
    2:55:22 I have a lot less free time than I had previously,
    2:55:24 mostly by choice.
    2:55:28 I mean, I always have the option to decline.
    2:55:29 So I decline a lot of things.
    2:55:33 I think I could decline even more, or I could acquire a reputation for being so
    2:55:35 unreliable that people don’t even ask anymore.
    2:55:39 I love the different algorithms here.
    2:55:41 It’s always an option.
    2:55:49 But I don’t spend as much time
    2:55:53 as I did as a postdoc, just working on one problem at a time, or fooling
    2:55:54 around.
    2:55:59 I still do that a little bit. But as you advance in your career, you need more of the
    2:56:03 soft skills; math somehow front-loads all the technical skills into the early stages
    2:56:03 of your career.
    2:56:09 As a postdoc, it’s publish or perish, so you’re incentivized
    2:56:14 to focus on proving very technical theorems, to prove yourself as well
    2:56:15 as prove the theorems.
    2:56:22 But then as you get more senior, you have to start mentoring and
    2:56:27 giving interviews and trying to shape the direction of the field, research-wise
    2:56:32 and otherwise, and sometimes you have to do various administrative
    2:56:32 things.
    2:56:37 And it’s kind of the right social contract, because you need to have worked in the trenches
    2:56:39 to see what can help mathematicians.
    2:56:43 The other side of the establishment, the really positive thing, is that
    2:56:48 you get to be a light, an inspiration, to a lot of young mathematicians or young people
    2:56:50 who are just interested in mathematics.
    2:56:53 Yeah, it’s just how the human mind works.
    2:57:01 This is where I would probably say that I like the Fields Medal, in that it does inspire
    2:57:03 a lot of young people somehow.
    2:57:05 This is just how human brains work.
    2:57:10 At the same time, I also want to give respect to somebody like Grigori Perelman,
    2:57:14 who is critical of awards on principle.
    2:57:19 Those are his principles, and any human who’s able to follow their principles and do the thing
    2:57:22 that most humans would not be able to do,
    2:57:24 it’s beautiful to see.
    2:57:26 Some recognition is necessary and important.
    2:57:31 But yeah, it’s also important to not let these things take over your life
    2:57:36 and only be concerned about getting the next big award or whatever.
    2:57:42 I mean, yeah, you see these people try to only solve the really big math problems
    2:57:48 and not work on things that are less sexy, if you wish, but actually
    2:57:50 still interesting and instructive.
    2:57:55 As you say, the way the human mind works, we understand things better when they’re
    2:57:56 attached to humans,
    2:58:01 and also if they’re attached to a small number of humans. Like I said, it’s
    2:58:04 the way our minds are wired.
    2:58:09 We can comprehend the relationships between 10 or 20 people, but once you
    2:58:12 get beyond a hundred people or so, there’s a limit.
    2:58:13 I figure there’s a name for it.
    2:58:16 Beyond that, it just becomes the other.
    2:58:22 So you have to simplify, and the whole mass of 99.9% of humanity
    2:58:22 becomes the other.
    2:58:27 And often these models are incorrect, and this causes all kinds of
    2:58:28 problems.
    2:58:33 So, to humanize a subject, if you identify a small number of
    2:58:37 people and say these are representative people of the subject, role models,
    2:58:45 for example, that has some role. But too much of it
    2:58:51 can be harmful, because I’ll be the first to say that my own career path is not that of
    2:58:52 a typical mathematician.
    2:58:56 I had a very accelerated education; I skipped a lot of classes.
    2:59:01 I had very fortunate mentoring opportunities, and I think I was in the
    2:59:07 right place at the right time. Just because someone doesn’t have my trajectory
    2:59:09 doesn’t mean that they can’t be a good mathematician.
    2:59:11 They could be just as good, in a very different
    2:59:14 style, and we need people with different styles.
    2:59:21 And sometimes too much focus is given to the person who does
    2:59:26 the last step to complete a project, in mathematics or elsewhere, that’s really
    2:59:30 taken centuries or decades, building on lots and lots of previous work.
    2:59:34 But that’s a story that’s difficult to tell if you’re not an expert, because
    2:59:38 it’s easier to just say one person did this one thing.
    2:59:39 It makes for a much simpler history.
    2:59:47 I think on the whole it’s a hugely positive thing to talk about Steve Jobs as a representative
    2:59:53 of Apple, when I personally know, and of course everybody knows, about the incredible design
    2:59:58 and engineering teams, the individual humans on those teams.
    3:00:01 They’re not just a team; they’re individual humans on a team.
    3:00:06 And there’s a lot of brilliance there, but it’s just a nice shorthand.
    3:00:07 Yeah.
    3:00:08 Steve Jobs.
    3:00:08 Yeah.
    3:00:13 As a starting point, as a first approximation.
    3:00:16 And then read some biographies and look much deeper into that first approximation.
    3:00:17 Yeah.
    3:00:17 That’s right.
    3:00:21 So, you mentioned you were at Princeton when Andrew Wiles was there.
    3:00:22 Oh yeah.
    3:00:23 He was a professor there.
    3:00:26 It’s a funny moment, how history is all interconnected.
    3:00:29 And at that time he announced that he’d proved Fermat’s Last Theorem.
    3:00:36 What did you think, maybe looking back now with more context, about that moment in math history?
    3:00:37 Yeah.
    3:00:38 So I was a graduate student at the time.
    3:00:43 I vaguely remember there was press attention, and we all
    3:00:47 had pigeonholes in the same mailroom, so we were all
    3:00:51 picking up mail, and suddenly Andrew Wiles’ mailbox exploded to overflowing.
    3:00:53 That’s a good metric.
    3:00:54 Yeah.
    3:00:58 So yeah, we all talked about it at tea and so forth.
    3:01:01 Most of us didn’t really understand the proof;
    3:01:04 we understood the high-level details.
    3:01:07 There’s an ongoing project to formalize it in Lean, right?
    3:01:08 Kevin Buzzard is actually…
    3:01:08 Yeah.
    3:01:10 Can we take that small tangent?
    3:01:12 How difficult is that?
    3:01:17 Because, as I understand it, the proof of Fermat’s Last Theorem
    3:01:19 involves super complicated objects.
    3:01:20 Yeah.
    3:01:22 It’s really difficult to formalize.
    3:01:23 I guess, yeah, you’re right.
    3:01:26 The objects that they use, you can define them.
    3:01:28 They’ve been defined in Lean.
    3:01:28 Okay.
    3:01:31 So just defining what they are can be done.
    3:01:33 That’s really not trivial, but it’s been done.
    3:01:39 But there are a lot of really basic facts about these objects that have taken decades
    3:01:41 to prove, and they’re spread across all these different math papers.
    3:01:44 And so lots of those have to be formalized as well.
    3:01:51 Kevin Buzzard’s goal, actually, he has a five-year grant to formalize Fermat’s
    3:01:55 Last Theorem, and his aim is, he doesn’t think he will be able to get all the way down
    3:02:00 to the basic axioms, but he wants to formalize it to the point where the only things that he
    3:02:05 needs to rely on as black boxes are things that were known by 1980 to number theorists
    3:02:06 at the time.
    3:02:11 And then some other person, or some other work, would have to get
    3:02:11 from there down to the axioms.
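    For a flavor of what this looks like: the theorem itself can be stated in Lean 4 using nothing but natural numbers, and all the advanced machinery is needed only to replace the placeholder proof. A sketch approximating (not quoting) the statement used in the project:

    ```lean
    -- Fermat's Last Theorem as a Lean proposition: stating it needs only Nat,
    -- powers, and inequalities. The decades of theory enter when one tries to
    -- eliminate the `sorry`.
    def FermatLastTheorem : Prop :=
      ∀ a b c n : Nat, 0 < a → 0 < b → 0 < c → 2 < n →
        a ^ n + b ^ n ≠ c ^ n

    -- The formalization effort amounts to proving this, with 1980s-era number
    -- theory allowed as black boxes in the first phase.
    theorem flt : FermatLastTheorem := by
      sorry
    ```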
    3:02:16 So it’s a different area of mathematics than the type of mathematics I’m used
    3:02:22 to. In analysis, which is kind of my area, the objects we study are
    3:02:23 much closer to the ground.
    3:02:28 I study things like prime numbers and functions, things that
    3:02:35 are within the scope of a high school math education, at least to define.
    3:02:39 But then there’s this very advanced algebraic side of number theory, where people have been
    3:02:41 building structures upon structures for quite a while.
    3:02:44 And it’s a very sturdy structure.
    3:02:48 At the base, at least, it’s extremely
    3:02:50 well developed, with textbooks and so forth.
    3:02:56 But it does get to the point where, if you haven’t taken these years
    3:03:00 of study and you want to ask what is going on at, like, level six of this
    3:03:04 tower, you have to spend quite a bit of time before you can even get to the point
    3:03:05 where you see something you recognize.
    3:03:13 What inspires you about his journey? That, as we talked about, it was seven years of mostly
    3:03:14 working in secret.
    3:03:15 Yeah.
    3:03:17 Yes, that is romantic.
    3:03:22 It kind of fits with the romantic image that I think people have of
    3:03:26 mathematicians, to the extent that they think of them at all, as these kind of eccentric
    3:03:28 wizards or something.
    3:03:34 So that certainly accentuated that perspective.
    3:03:36 You know, I mean, it is a great achievement.
    3:03:42 His style of solving problems is so different from my own, which is great.
    3:03:43 I mean, we need people like that.
    3:03:44 Can you speak to it?
    3:03:48 In terms of, you prefer the collaborative approach?
    3:03:52 I like moving on from a problem if it’s giving me too much difficulty.
    3:03:57 But you need the people who have the tenacity and the fearlessness.
    3:04:01 I’ve collaborated with people like that, where I wanted to give
    3:04:05 up because the first approach that we tried didn’t work, and the second approach
    3:04:09 didn’t either, but they were convinced, and they had a third, fourth, and fifth, one of which worked.
    3:04:12 And I’d have to eat my words.
    3:04:12 Okay.
    3:04:15 I didn’t think this was going to work, but yes, you were right all along.
    3:04:20 And we should say, for people who don’t know: not only are you known for the brilliance of
    3:04:25 your work, but for incredible productivity, just the number of papers, which are all of very
    3:04:25 high quality.
    3:04:30 So there’s something to be said for being able to jump from topic to topic.
    3:04:31 Yeah.
    3:04:31 It works for me.
    3:04:32 Yeah.
    3:04:35 I mean, there are also people who are very productive and focus very deeply
    3:04:35 on one thing.
    3:04:36 Yeah.
    3:04:38 I think everyone has to find their own workflow.
    3:04:43 One thing which is a shame in mathematics is that
    3:04:46 there’s sort of a one-size-fits-all approach to teaching mathematics.
    3:04:50 You know, we have a certain curriculum and so forth.
    3:04:54 Maybe if you do math competitions or something, you get a slightly different
    3:04:54 experience.
    3:05:01 But I think many people don’t find their native math language until
    3:05:03 very late, or usually too late.
    3:05:08 So they stop doing mathematics, after a bad experience with a teacher who was
    3:05:10 trying to teach them one way to do mathematics that they didn’t like.
    3:05:18 My theory is that evolution has not given us a math center in
    3:05:18 our brains directly.
    3:05:24 We have a vision center and a language center and some other centers, which evolution
    3:05:26 has honed, but we don’t have an innate sense of mathematics.
    3:05:35 But our other centers are sophisticated enough that different people can repurpose
    3:05:38 other areas of the brain to do mathematics.
    3:05:41 So some people have figured out how to use the visual center to do mathematics,
    3:05:43 and so they think very visually when they do mathematics.
    3:05:47 Some people have repurposed their language center, and they think very symbolically.
    3:05:52 Some people, if they’re very competitive and into
    3:05:57 gaming, there’s a part of your brain that’s very good
    3:06:01 at solving puzzles and games, and that can be repurposed.
    3:06:07 And when I talk to other mathematicians, I can
    3:06:10 tell that they’re using different styles of thinking than I am.
    3:06:14 Not disjoint, but they may prefer visual.
    3:06:16 I don’t actually prefer visual so much;
    3:06:18 I need lots of visual aids myself.
    3:06:23 You know, mathematics provides a common language, so we can still talk to each other,
    3:06:25 even if we are thinking in different ways.
    3:06:31 But you can tell there’s a different set of subsystems being used in the thinking process.
    3:06:33 They take different paths.
    3:06:35 They’re very quick at things that I struggle with, and vice versa.
    3:06:38 And yet they still get to the same goal.
    3:06:44 But the way we educate, unless you have a personalized tutor or
    3:06:48 something, education, just by its sheer scale, has to be mass-produced.
    3:06:52 You have to teach 30 kids, and they have 30 different styles.
    3:06:54 You can’t teach in 30 different ways.
    3:07:00 On that topic, what advice would you give to young students who are struggling
    3:07:04 with math but are interested in it and would like to get better?
    3:07:10 Is there something, in this complicated educational context, you would suggest?
    3:07:10 Yeah, it’s a tricky problem.
    3:07:15 One nice thing is that there are now lots of sources for mathematical enrichment outside the
    3:07:15 classroom.
    3:07:19 So in my day, there were already math competitions,
    3:07:22 and there were also popular math books in the library.
    3:07:26 But now you have YouTube, there are forums just
    3:07:32 devoted to solving math puzzles, and math shows up in other places.
    3:07:36 For example, there are hobbyists who play poker for fun,
    3:07:42 and they, for very specific reasons, are interested
    3:07:43 in very specific probability questions.
    3:07:50 And there’s actually a community of amateur probabilists in
    3:07:53 poker, in chess, in baseball.
    3:07:58 There’s math all over the place.
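    As an example of the kind of question such hobbyists chase, here is a short Python sketch computing the chance of being dealt a pocket pair in Texas hold’em, exactly and then by simulation (an invented illustration, not a calculation from the conversation):

    ```python
    import random
    from math import comb

    # Exact count: 13 ranks, C(4, 2) ways to choose two suits of that rank,
    # out of C(52, 2) possible two-card starting hands.
    exact = 13 * comb(4, 2) / comb(52, 2)
    print(f"exact: {exact:.4f}")  # ~0.0588, about 1 hand in 17

    # Monte Carlo check: deal two random cards and compare their ranks.
    deck = [(rank, suit) for rank in range(13) for suit in range(4)]
    trials = 200_000
    hits = 0
    for _ in range(trials):
        (r1, _), (r2, _) = random.sample(deck, 2)
        if r1 == r2:
            hits += 1
    print(f"simulated: {hits / trials:.4f}")
    ```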
    3:08:03 And I’m hoping, actually, that with these new tools, Lean and
    3:08:08 so forth, we can incorporate the broader public into math research projects.
    3:08:12 This almost doesn’t happen at all currently.
    3:08:17 So in the sciences, there’s some scope for citizen science: in astronomy there are
    3:08:21 amateurs who discover comets, and in biology there are people who can identify
    3:08:26 butterflies and so forth. And in math, there are a small number of activities
    3:08:30 where amateur mathematicians can, say, discover new primes.
    3:08:36 But previously, because we have to verify every single contribution, for most mathematical
    3:08:40 research projects it would not help to have input from the general public.
    3:08:45 In fact, it would just be time-consuming, because of all the error-checking.
    3:08:50 But one thing about these formalization projects is that they are
    3:08:52 bringing in more people.
    3:08:56 So I’m sure that high school students have already contributed to some of these formalization
    3:08:57 projects, contributed to Mathlib.
    3:09:02 You don’t need to be a PhD holder to just work on one atomic thing.
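    To illustrate, here is the sort of small, self-contained “atomic” step a newcomer could state and prove in Lean 4 (an invented toy example, not an actual Mathlib entry):

    ```lean
    -- A bite-sized fact with a one-tactic proof; contributions to a large
    -- formalization are often pieces of roughly this shape and size.
    theorem sum_swap (a b c : Nat) : a + b + c = c + b + a := by
      omega  -- Lean's decision procedure for linear arithmetic over Nat/Int
    ```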
    3:09:08 There’s something about the formalization here that also, as a very first step, opens it up
    3:09:10 to the programming community too.
    3:09:11 Yes.
    3:09:13 The people who are already comfortable with programming.
    3:09:18 It seems like programming, maybe it’s just a feeling, but it feels more accessible
    3:09:20 to folks than math.
    3:09:26 Math, especially modern mathematics, is seen as this extremely difficult
    3:09:29 area to enter, and programming is not.
    3:09:30 So that could be just an entry point.
    3:09:33 You can execute code and you can get results; you can print “hello world” pretty
    3:09:34 quickly.
    3:09:41 You know, if programming were taught as an almost entirely theoretical subject,
    3:09:47 where you were just taught the computer science, the theory of functions and
    3:09:50 routines and so forth, and outside of some very specialized homework assignments
    3:09:56 you never actually programmed, like on the weekend for fun, it would be considered
    3:09:56 as hard as math.
    3:10:04 Yeah, so as I said, there are communities of non-mathematicians who are
    3:10:08 deploying math for some very specific purpose, like optimizing their poker game,
    3:10:12 and for them, math becomes fun.
    3:10:16 What advice would you give in general to young people: how to pick a career, how
    3:10:17 to find themselves?
    3:10:20 That’s a tough, tough question.
    3:10:20 Yeah.
    3:10:25 So, there’s a lot of uncertainty in the world now. There was this period
    3:10:30 after the war where, at least in the West, if you came from a good demographic,
    3:10:35 there was a very stable path to a good career.
    3:10:40 You go to college, you get an education, you pick one profession, and you stick to it.
    3:10:42 That’s becoming much more a thing of the past.
    3:10:46 So I think you just have to be adaptable and flexible.
    3:10:50 I think people will have to get skills that are transferable. Learning
    3:10:53 one specific programming language or one specific subject of mathematics,
    3:10:58 that itself is not a super transferable skill. But knowing how
    3:11:05 to reason with abstract concepts, or how to problem-solve when things go wrong,
    3:11:10 these are things which I think we will still need, even as our tools get better
    3:11:13 and you’re working with AI as well, and so forth.
    3:11:15 But actually, you’re an interesting case study.
    3:11:22 I mean, you’re one of the great living mathematicians, right?
    3:11:26 You had a way of doing things, and then all of a sudden you start learning.
    3:11:31 I mean, first of all, you kept learning new fields, but you also learned Lean.
    3:11:33 That’s a non-trivial thing to learn.
    3:11:38 For a lot of people, that’s an extremely uncomfortable
    3:11:39 leap to take, right?
    3:11:40 Yeah.
    3:11:41 For a lot of mathematicians.
    3:11:44 First of all, I’ve always been interested in new ways to do mathematics.
    3:11:49 I feel like a lot of the ways we do things right now are inefficient.
    3:11:55 My colleagues and I spend a lot of time doing very routine computations,
    3:11:58 or doing things that other mathematicians would instantly know how to do
    3:12:02 and we don’t, and why can’t we just search and get a quick answer, and
    3:12:02 so on?
    3:12:07 So that’s why I’ve always been interested in exploring new workflows.
    3:12:13 About four or five years ago, I was on a committee that had to solicit ideas for interesting
    3:12:14 workshops to run at a math institute.
    3:12:19 At the time, Peter Scholze had just formalized one of his new theorems,
    3:12:25 and there were some other developments in computer-assisted proof that looked quite interesting.
    3:12:29 And I said, oh, we should run a workshop on this.
    3:12:29 This is a pretty good idea.
    3:12:33 And then I was a bit too enthusiastic about the idea,
    3:12:36 so I got voluntold to actually run it.
    3:12:41 So I did, with a bunch of other people: Kevin Buzzard and Jordan Ellenberg and a bunch
    3:12:42 of other people.
    3:12:45 And it was a nice success.
    3:12:49 We brought together a bunch of mathematicians and computer scientists and other people,
    3:12:51 and we got up to speed on the state of the art.
    3:12:57 There were really interesting developments that most mathematicians didn’t know
    3:12:57 were going on,
    3:13:02 lots of nice proofs of concept, just sort of hints of what was
    3:13:02 going to happen.
    3:13:06 This was just before ChatGPT, but even then there was one talk about language
    3:13:09 models and their potential capability in the future.
    3:13:12 So that got me excited about the subject.
    3:13:16 So I started giving talks about how this is something more of us should start
    3:13:22 looking at. Then, not long after I ran this conference, ChatGPT came out, and
    3:13:23 suddenly AI was everywhere.
    3:13:28 And so I got interviewed a lot about this topic,
    3:13:33 and in particular about the interaction between AI and formal proof assistants.
    3:13:34 And I said, yeah, they should be combined.
    3:13:38 This is a perfect synergy waiting to happen.
    3:13:42 And at some point I realized that I had to not just talk the talk, but walk
    3:13:45 the walk. You know, I don’t work in machine learning, and I don’t work
    3:13:49 in proof formalization, and there’s a limit to how much I can just rely on authority,
    3:13:51 saying, I’m a mathematician,
    3:13:52 just trust me
    3:13:55 when I say that this is going to change mathematics, while not doing
    3:13:56 any of it myself.
    3:14:02 So I felt like I had to actually justify it.
    3:14:07 A lot of what I get into, actually, I don’t quite see in advance how much
    3:14:08 time I’m going to spend on it.
    3:14:14 It’s only after I’m waist-deep in a project that I realize that, by
    3:14:14 that point, I’m committed.
    3:14:19 Well, that’s deeply admirable, that you’re willing to go into the fray, to be in some small
    3:14:21 way a beginner, right?
    3:14:26 Or have some of the challenges that a beginner would, right?
    3:14:29 It’s new concepts, new ways of thinking.
    3:14:36 Also sucking at a thing where others are better. I think in that talk you said you
    3:14:40 could be a Fields Medal-winning mathematician and an undergrad knows something better.
    3:14:41 Yeah.
    3:14:47 I think mathematics inherently, I mean, mathematics is so huge these days that nobody
    3:14:48 knows all of modern mathematics.
    3:14:55 And inevitably we make mistakes. And you can’t cover up your mistakes
    3:14:57 with just sort of bravado,
    3:15:01 because people will ask for your proofs, and if you don’t have the
    3:15:02 proofs, you don’t have the proofs.
    3:15:03 That’s what I love about math.
    3:15:04 Yeah.
    3:15:06 So it does keep us honest.
    3:15:11 I mean, it’s not a perfect panacea, but I think
    3:15:16 we do have more of a culture of admitting error, because we’re forced to all the time.
    3:15:18 Big, ridiculous question.
    3:15:19 I’m sorry for it.
    3:15:23 Once again, who is the greatest mathematician of all time?
    3:15:26 Maybe one who’s no longer with us.
    3:15:28 Uh, who are the candidates?
    3:15:32 Euler, Gauss, Newton, Ramanujan, Hilbert.
    3:15:35 So first of all, as I mentioned before, there’s some time dependence.
    3:15:37 On the day.
    3:15:37 Yeah.
    3:15:41 If you look cumulatively over time, for example, Euclid
    3:15:44 is one of the leading contenders,
    3:15:50 and then maybe some unnamed anonymous mathematicians before that, whoever came up with
    3:15:55 the concept of numbers.
    3:15:55 Do mathematicians today still feel the impact of
    3:16:00 Hilbert, just directly, in everything that’s happened in the 20th century?
    3:16:00 Yeah.
    3:16:01 Hilbert spaces.
    3:16:05 We have lots of things that are named after him, of course, and just the arrangement of
    3:16:07 mathematics, the introduction of certain concepts.
    3:16:10 I mean, his 23 problems have been extremely influential.
    3:16:16 There’s some strange power to declaring which problems are hard to solve.
    3:16:18 The statement of the open problems.
    3:16:19 Yeah.
    3:16:22 I mean, you know, there’s a bystander effect everywhere.
    3:16:27 If no one says you should do X, everyone just sort of mills around waiting for somebody
    3:16:30 else to do something, and nothing gets done.
    3:16:35 So, one thing that you actually have to
    3:16:39 teach undergraduates in mathematics is that you should always try something.
    3:16:45 You see a lot of paralysis in an undergraduate trying a math problem.
    3:16:49 If they recognize that there’s a certain technique that can be applied, they will try it.
    3:16:53 But there are problems for which they see that none of their standard techniques obviously applies,
    3:16:56 and the common reaction is then just paralysis:
    3:16:58 I don’t know what to do.
    3:17:01 I think there’s a quote from The Simpsons:
    3:17:03 “I’ve tried nothing and I’m all out of ideas.”
    3:17:11 So, you know, the next step then is to try anything, no matter how stupid. And in fact,
    3:17:12 it’s almost the stupider the better.
    3:17:18 It’s almost guaranteed to fail, but the way it fails
    3:17:19 is going to be instructive.
    3:17:23 Like, it fails because you’re not at all taking into account this hypothesis.
    3:17:24 Oh, this hypothesis must be useful.
    3:17:25 That’s a clue.
    3:17:30 I think you also suggested somewhere this fascinating approach, which really stuck with
    3:17:32 me. I’ve been using it,
    3:17:32 and it really works.
    3:17:35 I think you said it’s called structured procrastination.
    3:17:36 Oh, yes.
    3:17:38 It’s when you really don’t want to do a thing,
    3:17:41 you imagine a thing you want to do even less.
    3:17:42 Yes, yes, yes.
    3:17:43 Something that’s worse than that.
    3:17:47 And in that way you procrastinate on the worse thing by doing the original one.
    3:17:48 Yeah, yeah.
    3:17:50 It’s a nice hack.
    3:17:50 It actually works.
    3:17:52 Yeah.
    3:17:57 I mean, with anything, psychology is
    3:17:58 really important.
    3:18:02 You talk to athletes, like marathon runners and so forth, and
    3:18:05 they talk about what’s the most important thing: is it the training regimen, the diet,
    3:18:06 and so forth?
    3:18:11 And so much of it is your psychology, just tricking yourself into thinking that
    3:18:12 the problem is feasible,
    3:18:14 so that you’re motivated to do it.
    3:18:19 Is there something our human mind will never be able to comprehend?
    3:18:23 Well, I guess, as a mathematician, by induction,
    3:18:28 there must be some really large number that we can’t comprehend.
    3:18:30 That was the first thing that came to mind.
    3:18:36 But even more broadly, is there something about our mind that’s
    3:18:40 going to be limited, even with the help of mathematics?
    3:18:41 Well, okay.
    3:18:44 I mean, how much augmentation are you willing to allow?
    3:18:49 Like, for example, if I didn’t even have pen and paper, if I had no technology
    3:18:50 whatsoever, okay,
    3:18:51 so I’m not allowed a blackboard, pen, or paper.
    3:18:55 You’re already much more limited than you would be.
    3:18:56 Incredibly limited.
    3:18:57 Even language.
    3:18:58 The English language is a technology.
    3:19:02 It’s one that’s been very internalized.
    3:19:03 So you’re right.
    3:19:07 Really, the formulation of the problem is incorrect, because there really
    3:19:10 is no longer just a solo human.
    3:19:17 We’re already augmented in extremely complicated, intricate ways, right?
    3:19:17 Yeah.
    3:19:19 So we’re already like a collective intelligence.
    3:19:20 Yes.
    3:19:21 I guess.
    3:19:26 So humanity, plural, has much more intelligence, in principle, on its good days,
    3:19:29 than the individual humans put together.
    3:19:30 It can also have less.
    3:19:30 Okay.
    3:19:36 So yeah, the math community, plural, is an incredibly
    3:19:43 super-intelligent entity that no single human mathematician can come
    3:19:44 close to replicating.
    3:19:47 You see it a little bit on these question-and-answer sites.
    3:19:50 There’s MathOverflow, which is the math version of Stack Overflow.
    3:19:55 And sometimes you get these very quick responses to very difficult questions from
    3:19:56 the community.
    3:20:00 And it’s a pleasure to watch, actually, as an expert.
    3:20:06 I’m a fan, a spectator, of that site, just seeing the brilliance of the different
    3:20:12 people there, the depth of knowledge that some people have, and the willingness to engage
    3:20:15 with the rigor and the nuance of the particular question.
    3:20:16 It’s pretty cool to watch.
    3:20:17 It’s fun.
    3:20:18 It’s almost just fun to watch.
    3:20:23 What gives you hope about this whole thing we have going on, human civilization?
    3:20:29 I think, yeah, the younger generation is always really creative
    3:20:30 and enthusiastic and inventive.
    3:20:36 It’s a pleasure working with young students.
    3:20:43 You know, the progress of science tells us that problems that used to be really
    3:20:48 difficult can become trivial to solve.
    3:20:54 I mean, navigation, just knowing where you were on the planet,
    3:20:55 was this horrendous problem.
    3:21:00 People died, or lost fortunes, because they couldn’t navigate.
    3:21:03 And now we have devices in our pockets that do this automatically for us.
    3:21:06 It’s a completely solved problem.
    3:21:10 So things that seem unfeasible for us now could be just sort of homework exercises
    3:21:11 in the future.
    3:21:16 Yeah, one of the things I find really sad about the finiteness of life is that I won’t
    3:21:21 get to see all the cool things we create as a civilization in
    3:21:26 the next hundred years, two hundred years. Just imagine showing up in 200 years.
    3:21:26 Yeah.
    3:21:30 Well, already plenty has happened. Like, if you could go back in time and talk
    3:21:32 to your teenage self, you know what I mean?
    3:21:33 Yeah.
    3:21:39 Just the internet, and now AI. I mean, again, they’re beginning
    3:21:42 to be internalized: yeah, of course an AI can understand our voice
    3:21:47 and give reasonable, slightly incorrect answers to any question. But
    3:21:49 this was mind-blowing even two years ago.
    3:21:55 And in the moment, it’s hilarious to watch the drama on the internet and so on.
    3:21:57 People take everything for granted very quickly.
    3:22:01 We humans seem to entertain ourselves with drama.
    3:22:06 Well, out of anything that’s created, somebody needs to take one opinion and another person
    3:22:08 needs to take the opposite opinion and argue with each other about it.
    3:22:13 But when you look at the arc of things, even just the progress of robotics,
    3:22:18 you take a step back and think, wow, it’s beautiful that we humans are able to create
    3:22:18 this.
    3:22:19 Yeah.
    3:22:23 When the infrastructure and the culture are healthy, the community of humans
    3:22:29 can be so much more intelligent and mature and rational than the individuals
    3:22:30 within it.
    3:22:35 Well, one place I can always count on for rationality is the comment section of your blog, which
    3:22:36 I’m a big fan of.
    3:22:38 There are a lot of really smart people there.
    3:22:43 And thank you, of course, for putting those ideas out on the blog.
    3:22:50 And I can’t tell you how honored I am that you would spend your time with me today.
    3:22:52 I was looking forward to this for a long time.
    3:22:54 Terry, I’m a huge fan.
    3:22:57 You inspire me, and you inspire millions of people.
    3:22:58 Thank you so much for talking.
    3:22:58 Thank you.
    3:22:58 It was a pleasure.
    3:23:02 Thanks for listening to this conversation with Terrence Tao.
    3:23:07 To support this podcast, please check out our sponsors in the description or at lexfridman.com
    3:23:08 slash sponsors.
    3:23:13 And now let me leave you with some words from Galileo Galilei.
    3:23:19 Mathematics is the language with which God has written the universe.
    3:23:24 Thank you for listening and hope to see you next time.

    Terence Tao is widely considered to be one of the greatest mathematicians in history. He won the Fields Medal and the Breakthrough Prize in Mathematics, and has contributed to a wide range of fields, from fluid dynamics with the Navier-Stokes equations to mathematical physics & quantum mechanics, prime numbers & analytic number theory, harmonic analysis, compressed sensing, random matrix theory, combinatorics, and progress on many of the hardest problems in the history of mathematics.
    Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep472-sc
    See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

    Transcript:
    https://lexfridman.com/terence-tao-transcript

    CONTACT LEX:
    Feedback – give feedback to Lex: https://lexfridman.com/survey
    AMA – submit questions, videos or call-in: https://lexfridman.com/ama
    Hiring – join our team: https://lexfridman.com/hiring
    Other – other ways to get in touch: https://lexfridman.com/contact

    EPISODE LINKS:
    Terence’s Blog: https://terrytao.wordpress.com/
    Terence’s YouTube: https://www.youtube.com/@TerenceTao27
    Terence’s Books: https://amzn.to/43H9Aiq

    SPONSORS:
    To support this podcast, check out our sponsors & get discounts:
    Notion: Note-taking and team collaboration.
    Go to https://notion.com/lex
    Shopify: Sell stuff online.
    Go to https://shopify.com/lex
    NetSuite: Business management software.
    Go to http://netsuite.com/lex
    LMNT: Zero-sugar electrolyte drink mix.
    Go to https://drinkLMNT.com/lex
    AG1: All-in-one daily nutrition drink.
    Go to https://drinkag1.com/lex

    OUTLINE:
    (00:00) – Introduction
    (00:36) – Sponsors, Comments, and Reflections
    (09:49) – First hard problem
    (15:16) – Navier–Stokes singularity
    (35:25) – Game of life
    (42:00) – Infinity
    (47:07) – Math vs Physics
    (53:26) – Nature of reality
    (1:16:08) – Theory of everything
    (1:22:09) – General relativity
    (1:25:37) – Solving difficult problems
    (1:29:00) – AI-assisted theorem proving
    (1:41:50) – Lean programming language
    (1:51:50) – DeepMind’s AlphaProof
    (1:56:45) – Human mathematicians vs AI
    (2:06:37) – AI winning the Fields Medal
    (2:13:47) – Grigori Perelman
    (2:26:29) – Twin Prime Conjecture
    (2:43:04) – Collatz conjecture
    (2:49:50) – P = NP
    (2:52:43) – Fields Medal
    (3:00:18) – Andrew Wiles and Fermat’s Last Theorem
    (3:04:15) – Productivity
    (3:06:54) – Advice for young people
    (3:15:17) – The greatest mathematician of all time

    PODCAST LINKS:
    – Podcast Website: https://lexfridman.com/podcast
    – Apple Podcasts: https://apple.co/2lwqZIr
    – Spotify: https://spoti.fi/2nEwCF8
    – RSS: https://lexfridman.com/feed/podcast/
    – Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
    – Clips Channel: https://www.youtube.com/lexclips