AI transcript
0:00:10 “The future of the life sciences is computational.”
0:00:13 The through arc of my entire career:
0:00:21 I’ve been building digital twins of some financial or scientific or industrial reality.
0:00:26 We looked at that and thought, “Wow, we better do something about this very large, unhedged
0:00:28 position.”
0:00:31 That was the history of Dodd-Frank, like we don’t really know what went wrong in the
0:00:33 financial crisis.
0:00:34 So let’s just go regulate everything.
0:00:41 And I think 99% of it was red tape that did not make the world a better place.
0:00:45 This was one of the many early nuclear winters of AI.
0:00:47 I walked right into it.
0:00:48 Hello, everyone.
0:00:51 Welcome back to the A16Z podcast.
0:00:52 This is your host, Steph Smith.
0:00:58 Now, today we have a very special episode from a new series called In the Vault.
0:01:02 This series features some of the most influential voices across the finance ecosystem, including
0:01:06 of course our guest today, Marty Chavez.
0:01:10 Marty is now a partner and vice chairman of Sixth Street Partners; however, he’s long
0:01:15 had a knack for spotting how a healthy serving of technology can disrupt other industries.
0:01:20 From his PhD in applied artificial intelligence in medicine, to being one of the founding
0:01:23 engineers on the team that created SecDB.
0:01:27 That’s the software that perhaps couldn’t predict the global financial crisis, but famously
0:01:29 helped Goldman survive it.
0:01:34 So today, Marty sits down with A16Z general partner, David Haber, and they talk about
0:01:39 a lot more, including where the puck is moving in this new wave of technology and the role
0:01:41 of regulators and lawmakers within that.
0:01:45 And of course, if you like this episode, don’t forget to check out our new series, In the
0:01:46 Vault.
0:01:51 You can find that on our A16Z live feed, which we’ll also include in the show notes.
0:01:55 There you can find other episodes with Global Payments CEO Jeff Sloan and Marco Argenti, the
0:01:57 CIO of Goldman Sachs.
0:02:08 All right, David, take it away.
0:02:13 Hello and welcome to In the Vault, A16Z’s fintech podcast series, where we sit down
0:02:16 with the most influential leaders in financial services.
0:02:21 In these conversations, we offer a behind-the-scenes view of how these leaders guide and manage
0:02:24 some of the country’s most consequential companies.
0:02:28 We also dive into the key trends impacting the industry and, of course, discuss how AI
0:02:30 will shape the future.
0:02:33 Today we’re excited to have Marty Chavez on the show.
0:02:37 Marty is currently a partner and vice chairman of Sixth Street Partners, a global investment
0:02:41 firm with more than $75 billion in assets under management.
0:02:45 Prior to Sixth Street, Marty spent over two decades at Goldman Sachs, where he held a
0:02:50 variety of senior roles, including chief information officer, chief financial officer, head of
0:02:54 global markets and served as a senior partner on the firm’s management committee.
0:02:58 He was also one of the founding engineers behind the legendary software system SecDB,
0:03:03 which many believe helped Goldman avoid the worst of the global financial crisis.
0:03:07 In our conversation, Marty talks through the evolution of technology in financial services
0:03:10 and the potential impact of artificial intelligence.
0:03:11 Let’s get started.
0:03:15 As a reminder, the content here is for informational purposes only,
0:03:19 should not be taken as legal, business, tax, or investment advice, or be used to evaluate
0:03:24 any investment or security, and is not directed at any investors or potential investors in
0:03:26 any a16z fund.
0:03:29 For more details, please see a16z.com/disclosures.
0:03:31 Awesome, Marty.
0:03:32 Thank you so much for being here.
0:03:33 We really appreciate it.
0:03:34 David, it’s a pleasure.
0:03:36 I’ve been looking forward to this.
0:03:38 Marty, you’ve had a fascinating career.
0:03:43 Obviously, you’ve played a really pivotal role in turning the Wall Street trading business
0:03:46 into a software business, especially during your time at Goldman Sachs and also now at
0:03:47 Sixth Street.
0:03:52 But you also serve on the boards of the Broad Institute, Stanford Medicine, and a bunch
0:03:53 of amazing companies.
0:03:58 Maybe walk us through your career arc, and what is sort of the through line in those
0:03:59 experiences?
0:04:03 Well, let me talk about a few of the things I did, and then the arc will become apparent.
0:04:06 So, I grew up in Albuquerque, New Mexico.
0:04:13 I had a moment, really, like the movie The Graduate, when I was about 10, and my father
0:04:21 put his arm around my shoulder and said, “Martin, computers are the future, and you will be
0:04:23 really good at computers.”
0:04:29 And this was 1974, and it was maybe not obvious to everybody.
0:04:31 It was obvious to my father.
0:04:35 He was a technical illustrator at one of the national laboratories, and there was this
0:04:43 huge computer that they had just bought that his team used to draw these beautiful blueprints
0:04:48 for the weapons in the nuclear arsenal, and they really had the latest and greatest equipment
0:04:53 when it was very clunky and very expensive, and my dad knew where it was going.
0:04:59 So, in New Mexico, you don’t have a ton of choices, especially at that time.
0:05:03 It was basically tourism and the military-industrial complex.
0:05:10 And so, I went for the military-industrial complex, and my very first summer job when
0:05:15 I was 16 was at the Air Force Weapons Lab in Albuquerque.
0:05:22 The government had decided that blowing up bombs in the Nevada desert was really problematic
0:05:28 in a lot of ways, and some scientists had this idea, crazy at the time, that we could
0:05:34 simulate the explosion of bombs rather than actually detonating them.
0:05:39 And they had one of the early Cray One supercomputers, and so for a little computer geek kid, this
0:05:46 was an amazing opportunity and my very first job was working on these big Fortran programs
0:05:53 that would use Monte Carlo simulations, like an early baptism in that technique, and you
0:05:59 would simulate individual Compton electrons being scattered out of a neutron bomb explosion,
0:06:04 and then calculate the electromagnetic pulse that arose from all that scattering, and my
0:06:11 job was to convert this program from MKS units to electron rest mass units, and so that certainly
0:06:17 seemed more interesting to me than jobs in the tourism business, and so I did that, and
0:06:24 then the next big moment was I went to Harvard, was a kid, and I took sophomore standing.
0:06:26 And did you, by any chance?
0:06:27 Did you do sophomore standing?
0:06:31 I didn’t do sophomore standing. I also went to Harvard, and I think we also studied... you studied
0:06:34 biochemistry, so yeah.
0:06:39 So you have to declare a major, a concentration right away if you take sophomore standing,
0:06:43 and I didn’t know that, and I didn’t know what major I was going to declare, I was going
0:06:48 to be some kind of science, for sure, and I went to the science center, and the science
0:06:54 professors were recruiting for their departments, and I remember Steve Harrison sitting opposite
0:06:57 a table saying, “What are you?”
0:07:04 And it was a little bit like a Hogwarts question, I suppose, and I said, “I’m a computer scientist,”
0:07:10 and I cannot believe he said this to me in 1981, but he said, “The future of the life
0:07:17 sciences is computational,” and that was amazing, right, and so profound, and so prescient,
0:07:22 and I thought, “Wow, this must be true,” and he said, “We’ll construct a biochem major
0:07:28 just for you, and we’ll emphasize simulation, we’ll emphasize building digital twins of
0:07:34 living systems,” and so I walked right into his lab, which was doing some of the early
0:07:42 work on x-ray crystallography of protein capsids and working to set up the protein data bank,
0:07:47 and who knew that, well, even back then, he wanted to solve the protein folding problem,
0:07:51 and I remember he said it might take 50 years, it might take 100 years, and we might never
0:07:55 figure it out, and that’s obviously really important, because that protein data bank
0:08:01 was the raw data for AlphaFold, which later came in and solved the problem, and so the
0:08:03 through arc of my entire career is this:
0:08:11 I’ve been building digital twins of some financial or scientific or industrial reality, and the
0:08:16 amazing thing about a digital twin is you can do all kinds of experiments, and you can
0:08:22 ask all kinds of questions that would be dangerous or impossible to ask or perform in reality,
0:08:28 and then you can change your actions based on the answers to those questions, and so for
0:08:33 Wall Street, if you’ve got a high-fidelity model of your trading business, which was
0:08:39 something that I, with many other people, worked on as part of a huge team that made
0:08:45 SecDB happen, then you could take that model and you could ask all kinds of counterfactual
0:08:51 or what-if questions, and as the CEO of Goldman Sachs, Lloyd Blankfein, who really commissioned
0:08:58 and sponsored this work for decades, would say, “We are not predicting the future.
0:09:03 We are excellent predictors of the present,” and I’ve been doing some variation of that
0:09:04 ever since.
0:09:05 That’s fascinating.
0:09:09 I do want to spend more time kind of digging into SecDB, because that was also a
0:09:13 prescient decision, obviously, during the financial crisis, but maybe just going back.
0:09:18 I know you ended up doing some graduate work in healthcare and in AI, kind of how did you
0:09:19 go from that into Wall Street?
0:09:24 Maybe walk us through that transition, because it’s not probably obvious, maybe for most,
0:09:28 and then would love to kind of dig into your time at Goldman and as a founder, et cetera.
0:09:36 I got so excited about these problems of building digital twins of biology that it seemed obvious
0:09:41 to me that continuing that in grad school was the right thing to do.
0:09:46 I actually wanted to go ahead and start making money, and I really owe it to my mom, who convinced
0:09:50 me that if I didn’t get a PhD then I wasn’t going to do it.
0:09:53 I’m sure she was right about that, and so I applied to Stanford.
0:10:01 That was my dream school, and so what happened is I was working on this program, Artificial
0:10:09 Intelligence in Medicine, that had originated at Stanford under Ted Shortliffe, who was extremely
0:10:15 well known even back then for building one of the first expert systems to diagnose bacterial
0:10:18 blood infections.
0:10:26 I joined his program, and a bunch of my colleagues in the program and I took his work
0:10:32 and thought, “Can we put this work, this expert system inference, in a formal Bayesian probabilistic
0:10:33 framework?”
0:10:38 The answer is you can, but the downside is it’s computationally intractable.
0:10:46 My PhD was finding fast randomized approximations to get provably nearly correct answers in
0:10:47 a shorter period of time.
0:10:52 This was amazing as a project to work on, but we realized pretty early on that the computers
0:10:58 were way too slow to get anywhere close to the kinds of problems we wanted to solve.
0:11:03 The actual problem of diagnosis in general internal medicine is you’ve got about a thousand
0:11:10 disease categories and about 10,000 various clinical laboratory findings or manifestations
0:11:12 or symptoms.
0:11:16 The joint probability distribution that you have to calculate is therefore on the order
0:11:20 of 1,000 to the 10,000, and this is a big problem.
0:11:26 We made some inroads, but it was clear that the computers were just not fast enough.
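To make the “fast randomized approximations” idea concrete, here is a minimal likelihood-weighting sketch on a toy two-node disease-and-findings network; the probabilities, names, and structure are illustrative assumptions, not the actual diagnosis model from that PhD work.

```python
# Hedged sketch: likelihood weighting, one family of randomized approximations
# for Bayesian-network inference. Exact inference over ~1,000 diseases and
# ~10,000 findings is intractable, but sampling converges to P(disease | findings).
import random

P_DISEASE = 0.01                          # assumed prior: P(disease present)
P_FINDING = {                             # assumed P(finding present | disease state)
    ("fever", True): 0.80, ("fever", False): 0.05,
    ("cough", True): 0.60, ("cough", False): 0.10,
}

def likelihood_weighting(evidence, n_samples=100_000):
    """Estimate P(disease | evidence): sample the disease from its prior and
    weight each sample by the likelihood of the observed findings."""
    weight_present = weight_total = 0.0
    for _ in range(n_samples):
        disease = random.random() < P_DISEASE            # sample from the prior
        w = 1.0
        for finding, observed in evidence.items():       # weight by the evidence
            p = P_FINDING[(finding, disease)]
            w *= p if observed else (1.0 - p)
        weight_total += w
        if disease:
            weight_present += w
    return weight_present / weight_total

# With fever and cough both observed, the estimate converges to roughly 0.49.
print(likelihood_weighting({"fever": True, "cough": True}))
```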
0:11:31 We were all despondent, and this was one of the many early nuclear winters of AI.
0:11:33 I walked right into it.
0:11:35 I stopped saying artificial intelligence.
0:11:37 I was embarrassed.
0:11:41 This is not anything like artificial intelligence.
0:11:48 A bunch of us were casting around looking for other things to do, and I didn’t feel too
0:11:54 special as I got a letter in my box at the department, and the letter was from a head
0:11:57 hunter that Goldman Sachs had engaged.
0:11:58 I remember the letter.
0:11:59 I probably have it somewhere.
0:12:04 It said, “I’ve been asked to make a list of entrepreneurs in Silicon Valley with PhDs
0:12:09 in computer science from Stanford, and you are on my list.”
0:12:15 In 1993, before LinkedIn, he had to go do some digging to construct that list.
0:12:22 I thought, “I’m broke, and AI isn’t going anywhere anytime soon, and I have no idea
0:12:26 what to do, and I have a bunch of college friends in New York, and I’ll scam this bank
0:12:32 for a free trip,” and that’s how I ended up at Goldman Sachs, and it didn’t seem auspicious.
0:12:34 I just liked the idea.
0:12:37 They were doing a project that seemed insane.
0:12:45 The project was we’re going to build a distributed, transactionally protected, object-oriented
0:12:50 database that’s going to contain our foreign exchange trading business, which is inherently
0:12:55 a global business, so we can’t trade out of Excel spreadsheets, and we need somebody
0:13:02 to write a database from scratch in C, and fortunately, I had not taken the database
0:13:06 classes at Harvard, because if I had, I would have said, “That’s crazy.
0:13:10 Why would you write a database from scratch, and I don’t know anything about databases,”
0:13:17 and so I just had the fortune to join as the fourth engineer on the three-person core
0:13:22 SecDB design team, and then came a very lucky move.
0:13:27 One day, the boss comes into my office and said, “The desk strategist for the commodities
0:13:29 business has resigned.
0:13:30 Congratulations.
0:13:36 You are the new commodity strategist, and go out onto the trading desk and introduce yourself.”
0:13:41 He was never going to introduce me to them, and we were kind of scared of them, to be
0:13:46 honest, and so there I was in the middle of the oil trading desk, kind of an odd place
0:13:54 for a gay Hispanic computer geek to be in 1994 Wall Street.
0:13:57 It’s such an amazing story, and one of my favorite lines, which I believe and I repeat
0:14:02 often, is that opportunities live between fields of expertise, and I personally love
0:14:03 exploring those intersections.
0:14:06 I feel like your career has sort of been at these intersections.
0:14:09 Maybe fast forward kind of into the financial crisis.
0:14:13 Famously, my understanding is that SecDB really helped the firm navigate that period, and
0:14:15 really saved Goldman Sachs.
0:14:21 So what was it about SecDB that was different from other Wall Street firms who lost billions
0:14:24 of dollars in that moment, and how did you guys sort of navigate that?
0:14:25 Yes.
0:14:29 Well, this is where we’re going to start to get into the pop culture, because of course
0:14:33 you have to mention the big short when you start talking about these things, right?
0:14:41 And so, SecDB showed the legendary CFO of Goldman Sachs during the financial crisis,
0:14:48 David Vineer, that we and everybody else had a very large position in collateralized debt
0:14:52 obligations, CDOs that were rated AAA.
0:14:58 So in SecDB, it’s another thing, and it has a price, and that price can go up and down
0:15:02 and there’s simulations where it gets shocked according to probability distribution, and
0:15:09 then there’s nonparametric or scenario based shocks, and we looked at that and thought,
0:15:15 wow, we better do something about this very large unhedged position, namely, sell it down
0:15:16 or hedge it.
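To illustrate the kind of what-if question being described, here is a minimal sketch of a scenario shock on a toy book; the positions, sensitivities, and shock sizes are invented for illustration and are not SecDB, Goldman’s actual exposures, or real numbers.

```python
# Hedged sketch: nonparametric (scenario-based) shocks on a toy book.
# Reprice every position under a few shocks and read off the unhedged exposure.
from dataclasses import dataclass

@dataclass
class Position:
    name: str
    delta: float   # assumed P&L in USD per +1% move in the credit/housing factor

book = [
    Position("AAA CDO tranche", +90_000_000),   # long credit: loses when the factor falls
    Position("Index hedge",     -30_000_000),   # partial hedge: gains when the factor falls
]

def scenario_pnl(book, shock_pct):
    """Total P&L of the book if the risk factor moves by shock_pct percent."""
    return sum(p.delta * shock_pct for p in book)

for shock in (-5, -20, -40):                    # mild, severe, crisis-level scenarios
    print(f"{shock:>4}% shock -> P&L {scenario_pnl(book, shock):,.0f} USD")
```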
0:15:19 We didn’t know that the financial crisis was coming.
0:15:25 Of course, we got in the press and elsewhere accused of all kinds of crazy things.
0:15:30 Like, they were the only ones who hedged, so they must have known it was coming.
0:15:35 We were just predictors of the present and thought, better hedge this position, hence
0:15:36 the big short.
0:15:43 And the question was, if Lehman fails, what happens then?
0:15:53 And we talk about Lehman as if it is a single thing, we had risk on the books to 47 distinct
0:16:00 Lehman entities with complex subsidiary guarantee, non-guarantee, collateralized, non-collateralized
0:16:01 relationships.
0:16:07 And so, it was super complicated, but in SecDB, it was all in there, and you could just flip
0:16:08 it around.
0:16:11 You could just as easily run the report from the counterparty side.
0:16:13 Now, I make it sound like it was perfect.
0:16:14 It was a little less than perfect.
0:16:20 We had to write a lot of software that weekend, but the point is, we had everything in one
0:16:23 virtual place and it was a matter of bringing it together.
0:16:26 So, it’s also part of the legend, but it’s also factual.
0:16:37 We had our courier show up at Lehman’s headquarters within an hour of its filing for bankruptcy protection
0:16:45 for the 47 entities, and we had 47 sheets of paper with our closeout claim against each
0:16:50 of those entities rolled up firmwide across all the businesses.
0:16:58 And it took many of the major institutions on Wall Street months to do this.
0:17:04 And so, that was the power of SecDB, and of course, it was wildly imperfect, but it was
0:17:06 something that nobody else had.
0:17:12 Just to piggyback on that last point, what impact has regulation had historically on
0:17:14 technology’s impact on financial services?
0:17:19 And I think about the different asset classes, for example, in global markets that shifted
0:17:22 to be traded electronically, right?
0:17:30 Was it historically driven by regulatory change, by emergent technologies, or both? I’m curious
0:17:32 about that and also how it informs the future.
0:17:33 Yes.
0:17:38 Well, so regulation is a powerful driver of change, and so is technological change.
0:17:46 And some things are just inevitable. I’m a strong believer in capitalism with constraints and
0:17:51 rules, and we can and we will have a vigorous debate about the nature of the rules and the
0:17:56 depth of the rules and who writes the rules and how they’re implemented and all that matters
0:17:57 hugely.
0:18:01 But to say, oh, we don’t need any rules or trust us, we’ll look after ourselves.
0:18:04 I just haven’t seen that work very well.
0:18:08 And so, in some cases, the regulators will say something.
0:18:15 For instance, in the Dodd-Frank legislation, there’s a very short paragraph that says that
0:18:22 the Federal Reserve shall supervise a simulation, it was called the DFAST simulation, the Dodd-Frank,
0:18:25 and I don’t even remember what the rest stands for, right?
0:18:31 And that will be part of the job of the Federal Reserve, a simulation of how banks will perform
0:18:34 in a severely adverse scenario.
0:18:37 And that was a powerful concept, right?
0:18:42 You have to simulate the cash flow, the balance sheet, the income statement, several quarters
0:18:44 forward in the future.
0:18:49 None of this was specified in detail in the statute, but then the regulators came in and
0:18:54 really ran with it and said, you will simulate nine quarters in the future, nine quarters
0:18:56 in the future, right?
0:18:59 The whole bank, all of it, end to end.
0:19:06 And then, in a very important move, the acting supervisor for regulation at the time, Dan
0:19:13 Tarullo, the Federal Reserve Governor, said, we’re going to link that simulation to capital actions,
0:19:18 whether you get to pay a dividend or whether you get to buy your shares back or whether
0:19:20 you get to pay your people, right?
0:19:25 Because he knew that that would get everybody’s attention. If it’s just a simulation,
0:19:30 that’s one thing, but if you need to do it right before you can pay anybody, including
0:19:35 your shareholders and your people, then you’re going to put an awful lot of effort into it.
0:19:41 So that caused a massive change and made the system massively safer and sounder.
0:19:43 We saw that in the pandemic.
0:19:49 There’s actually a powerful lesson for us from the early days of electronic trading
0:19:56 for the early days of artificial intelligence, right. There was a huge effort by the regulators
0:20:02 to say, we’ve got to understand what these algos are thinking because they could manipulate
0:20:03 the market.
0:20:04 They could spoof the market.
0:20:06 They could crash the market.
0:20:11 And we would always argue, you’re never going to be able to figure out or understand what
0:20:12 they are thinking.
0:20:18 That’s a version of the halting problem, but at the boundary between a computer doing
0:20:24 some thinking and the real world, there’s some API, there’s some boundary.
0:20:30 And at the boundary, just like in the old days of railroad control, at those junctions,
0:20:35 you better make sure that two trains can’t get on a collision track, right?
0:20:38 And it’s the junction where it’s going to happen.
0:20:41 But then when the trains are just running on the track, just leave them running on the
0:20:42 track.
0:20:44 Just make sure they’re on the right track.
0:20:49 That’s going to be an important principle for LLMs and AIs generally.
0:20:53 As they start agenting and causing change in the world, we have to care a lot about
0:20:54 those boundaries.
0:20:58 And maybe that’s a good transition to the present day.
0:21:01 You were a huge force in the digitization of Goldman Sachs and Wall Street in general
0:21:05 and kind of the rise of the developer as decision maker.
0:21:09 Maybe talk a little bit about generative AI specifically today.
0:21:13 How is this technology different from the AI of your PhD in 1991?
0:21:17 And what are the impacts that you see, not just in financial services, but perhaps in
0:21:18 other industries as well?
0:21:25 Well, for full disclosure, I remember late ’80s, early ’90s, and this program at Stanford.
0:21:27 We were the Bayesians, right?
0:21:31 And then we would look at these connectionists, the neural network people.
0:21:33 And I hate to say it, but it’s true.
0:21:34 We felt sorry for them.
0:21:39 We thought, like, that won’t work, simulate neurons, you got to be kidding.
0:21:44 Well, so they just kept simulating those neurons and look what happened.
0:21:48 Now, in some ways, there’s nothing new under the sun.
0:21:53 I had a fantastic talk not so long ago with Yoshua Bengio, who’s really one of the four
0:22:01 or five luminaries in this renaissance of AI that’s delivering these incredible results.
0:22:09 And he was talking about how his work is based on taking those old Bayesian decision networks
0:22:14 and coupling them with neural networks, where the neural networks designed the Bayesian
0:22:16 networks and vice versa.
0:22:23 And so some of these ideas are coming back, but it is safe to say that the thread of research,
0:22:29 or the river of research that took this connectionist neural network approach is the one that’s
0:22:31 bearing all the fruit right now.
0:22:36 And David, the way I would describe all of those algorithms, because they are just software,
0:22:37 right?
0:22:39 Everything is Turing equivalent, right?
0:22:42 But they’re very interesting software.
0:22:47 They started off with images, images of cats on the internet, people are putting up pictures
0:22:48 of cats.
0:22:52 Well, now you’ve got billions of images that people have labeled as saying this image contains
0:22:53 a cat.
0:22:56 And you can assume all the other images don’t contain a cat.
0:23:00 And you can train a network to see whether there’s a cat or not.
0:23:02 And then all the versions of that, how old is this cat?
0:23:04 Is this cat ill?
0:23:05 What illness does it have?
0:23:12 With all of these things, starting maybe 10 years ago, you started to see amazing results.
0:23:17 And then after the transformer paper, now we’ve got another version of it, which is fill
0:23:22 in the blank or predict what comes next or predict what came before.
0:23:28 And these are the transformers and all the chat bots that we have right now.
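In standard language-modeling notation (nothing specific to this conversation), the “predict what comes next” training objective is to maximize the log-likelihood of each token given the tokens before it:

$$\max_{\theta}\;\sum_{t=1}^{T}\log p_{\theta}\!\left(x_{t}\mid x_{1},\dots,x_{t-1}\right)$$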
0:23:29 It’s amazing.
0:23:34 I wish we all understood in more detail how they do the things that they do.
0:23:36 And we’re starting to understand it.
0:23:38 It all depends on the training set.
0:23:43 And it also depends crucially on a stationary distribution, right?
0:23:48 So the reason all this works on is it a cat or not a cat is the cats change very slowly
0:23:50 in evolutionary time.
0:23:52 They don’t change from day to day.
0:23:56 For things that change from day to day, such as markets, it’s a lot less clear how these
0:23:58 techniques are going to be powerful.
0:24:02 But here they are, they’re doing amazing things.
0:24:09 We’re using this in my firm and we’re using it in production and we’re deeply aware of
0:24:10 all the risks.
0:24:12 And we have a lot of policies around it.
0:24:22 It reminds me a lot of the early Wild West days of electronic trading where we’re authorizing
0:24:29 a few of us to do some R&D, but very careful about what we put into production.
0:24:31 And we’re starting with the easy things.
0:24:36 It feels like a unique moment, or maybe it’s unique to me; there’s a lot of momentum happening
0:24:42 both bottoms up and top down, bottoms up because, you know, I don’t know, something like 40%
0:24:49 of the Fortune 100 is using maybe GitHub Copilot, in some part of the organization, or some Microsoft AI product.
0:24:54 And then conversely, every CEO or every board member, right, can plug a prompt into one
0:24:59 of these models and kind of understand intuitively the magic and imagine the impact that it could
0:25:00 have on their business.
0:25:04 And so it seems like the employees of many of these companies want the productivity gains
0:25:06 that you’re describing.
0:25:11 Boards are like, you know, how is this going to impact the human capital efficiency of
0:25:12 our company?
0:25:13 Like where can we deploy this technology?
0:25:18 I guess when other CEOs of large companies, you know, come to you for your advice, like
0:25:23 how are you advising them on how to deploy AI in their organizations?
0:25:24 Where within those companies?
0:25:26 Like what’s the opportunity you see maybe in the near term and, you know, in the middle
0:25:28 or long-term?
0:25:30 Really first order of business.
0:25:36 And this is something that we worked on at Goldman for a long time and I’m happy that
0:25:41 we left Goldman in a place where it’s going to be able to capitalize on Gen AI really,
0:25:47 really quickly, which is having a single source of truth for all the data across the
0:25:50 enterprise, a time-traveling source of truth.
0:25:52 So what is true today?
0:25:58 And what did we know to be true at this, at the close of business on some day, three
0:25:59 years ago, right?
0:26:04 And we have all of that and it’s cleaned and it’s curated and it’s named.
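A minimal sketch of that “time traveling” single source of truth, assuming a hypothetical bitemporal record; the keys, dates, and values are invented for illustration, not any firm’s actual schema.

```python
# Hedged sketch: a bitemporal record keeps both when a fact was true (valid time)
# and when we learned it (knowledge time), so you can ask "what is true today?"
# and "what did we know to be true at the close of business back then?".
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Fact:
    key: str
    value: float
    valid_on: date       # when the fact was true in the world
    recorded_on: date    # when we learned it

facts = [
    Fact("position.cdo_aaa_bn", 10.0, date(2007, 6, 30), date(2007, 7, 2)),
    Fact("position.cdo_aaa_bn",  4.0, date(2007, 6, 30), date(2008, 1, 15)),  # later restatement
]

def as_of(facts, key, valid_on, known_on):
    """Latest value for `key` valid on `valid_on` and already recorded by `known_on`."""
    candidates = [f for f in facts
                  if f.key == key and f.valid_on == valid_on and f.recorded_on <= known_on]
    return max(candidates, key=lambda f: f.recorded_on, default=None)

print(as_of(facts, "position.cdo_aaa_bn", date(2007, 6, 30), date(2007, 12, 31)).value)  # 10.0
print(as_of(facts, "position.cdo_aaa_bn", date(2007, 6, 30), date(2008, 6, 30)).value)   #  4.0
```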
0:26:10 And we know that we can rely on it because all of this training of AIs is still garbage
0:26:12 in, garbage out.
0:26:19 And so if you don’t have ground truth, then all you’re going to do is fret about hallucinations
0:26:26 and you’re just going to be caught in hallucinations and imaginings that are incorrect and not actionable.
0:26:32 And so getting your single source of truth right, that data engineering problem, I think
0:26:35 a lot of companies have done a terrible job of it.
0:26:42 I’m really excited about the new Gemini 1.5 context window, a million tokens like that
0:26:43 one.
0:26:46 I just want to shout that from the mountaintops, like if you’ve been in this game and you’ve
0:26:52 been using RAG, retrieval-augmented generation, which is powerful, but you run into this problem
0:26:58 of I’ve got to take a doc, a complicated doc that references pieces of itself and chunk
0:27:03 it where you’re going to lose all of that unless you have a really big context window.
0:27:10 Handling that quadratic time complexity in the length of the context window is just monumental.
0:27:14 And I think over the next few months, you’re going to see a lot of those changes: problems
0:27:16 that were really hard are going to become really easy.
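A minimal sketch of the chunking problem being described, with an invented fixed-size chunker and a toy self-referential document; the sizes and text are assumptions, not any particular RAG library’s defaults.

```python
# Hedged sketch: naive fixed-size chunking for RAG. A retrieved chunk that
# mentions "Section 12" may no longer contain the Section 3 definition it
# depends on, which is what a million-token context window avoids by letting
# you pass the whole document at once.
def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with a small overlap."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

doc = ("Section 3 defines Qualified Collateral. "
       "Section 12 applies only to Qualified Collateral as defined in Section 3. ") * 100
pieces = chunk(doc)
print(len(pieces), "chunks from", len(doc), "characters")
```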
0:27:17 I don’t know.
0:27:18 What do you think?
0:27:22 Look, I think every company needs to. Kind of using Goldman maybe as an analogy, so much
0:27:29 of the organization, but in particular even many parts of the Federation, I think, can and
0:27:33 should be leveraging software, and a lot of those workflows can be augmented with AI, right,
0:27:38 from legal to compliance, to vendor onboarding, to risk management, as we’re talking about.
0:27:42 But I think it’s going to have a profound impact on the enterprise, obviously, we’re
0:27:43 quite biased.
0:27:48 I guess one topic that people debate quite often is the impact of regulation on the adoption
0:27:49 of this technology.
0:27:54 I’m just curious about your view on the government’s role in this, in generative AI, and what advice
0:27:59 you have in kind of accelerating this versus what responsibility they have.
0:28:03 Well, one of the things that I learned during the financial crisis was a huge amount of
0:28:10 respect for the regulators and the lawmakers; they have a really tough job, and it’s really important
0:28:18 to collaborate with them and to become a trusted source of knowledge about how a business works.
0:28:23 And I just lament the number of people who just go into a regulator and they’re just
0:28:28 talking their own book and hoping that the regulator or lawmaker won’t understand it.
0:28:33 I think that is a terrible way to approach it and has the very likely risk of just making
0:28:36 them angry, right, which is definitely not the right outcome.
0:28:45 So I’ve been spending a lot of time with regulators and legislators in a bunch of different jurisdictions
0:28:51 and you already heard a bit of what I have to say, which is let’s please not take the
0:28:55 approach that we first took with electronic trading.
0:29:02 That approach was write a big document about how your electronic trading algo works.
0:29:07 And then step two was hand that document over to a control group who will then read the
0:29:11 document and assert the correctness of the algo, right?
0:29:13 This is the halting problem squared.
0:29:17 It’s not just a bad idea, it’s an impossible idea.
0:29:23 And instead, let’s put a lot of emphasis, a lot of standards and attestations at all
0:29:29 the places where there’s a real world interface, especially where there’s a real world interface
0:29:32 to another computer, right?
0:29:39 So the analogy is in electronic trading, there was not a lot you could do to prevent a trader
0:29:47 from shouting into a phone an order that would take your bank down, right?
0:29:51 How are you going to prevent that from happening, right?
0:29:58 But what you really worried about was computers that were putting in millions of those trades,
0:29:59 right?
0:30:03 Even if they were very small, they could do it very fast and you could cause terrible
0:30:04 things to happen.
0:30:10 And so another thing I’m always telling the regulators is, please, please, the concept
0:30:11 of liability, right?
0:30:18 They start with this idea, let’s make the LLM creators liable for every bad thing that
0:30:20 happens with an LLM.
0:30:28 To me, that is the exact equivalent of saying, let’s make Microsoft liable for every bad
0:30:31 thing that someone does on a Windows computer, right?
0:30:37 They’re fully general, and so these LLMs are a lot like operating systems.
0:30:41 And so I think the regulation has to happen at these boundaries, at these intersections,
0:30:45 at these control points first, and then see where we go.
0:30:50 And I would like to see some of these regulations in place sooner rather than later.
0:30:54 Unfortunately, the pattern of human history is we usually wait for something really bad
0:31:00 to happen and then go put in the cleanup regulations after the fact and generally overdo it.
0:31:03 That was the history of Dodd-Frank.
0:31:06 We don’t really know what went wrong in the financial crisis.
0:31:07 So let’s just go regulate everything.
0:31:13 And I think 99% of it was red tape that did not make the world a better place.
0:31:20 And some of it, such as the CCAR regulations, was profound and did make the system safer
0:31:21 and sounder.
0:31:26 And I would want us to do those things first and not just the red tape.
0:31:29 Well, I know you’re also very passionate about life sciences.
0:31:33 You started your graduate career there, and I believe you now sit on the board of Recursion,
0:31:34 you know, Pharmaceuticals.
0:31:35 I do, yes.
0:31:40 Yeah, maybe talk through kind of the implications that you’re seeing for generative AI in life sciences
0:31:41 and biotech in particular.
0:31:44 Well, it’s epic, isn’t it?
0:31:48 I had an amazing moment just a couple of months ago.
0:31:55 I had the opportunity of being the fireside chat host for Jensen Huang of NVIDIA at the JPMorgan
0:31:56 Healthcare event.
0:31:59 And it was a night that Recursion was sponsoring.
0:32:04 And we really talked about everything he learned from chip design.
0:32:11 So Jensen, incredibly modest, will say, well, he was just one of that generation of
0:32:17 chip designers who were the first to use software to design chips from scratch.
0:32:19 And it was really the only way he knew how to design it.
0:32:24 And he likes to say that NVIDIA is a software company, which it is, right?
0:32:27 But that seems counterintuitive, because it’s supposed to be a hardware company.
0:32:33 And he talks about the layers and layers of simulations that go into his business.
0:32:37 Those layers do not go all the way to Schrodinger’s equation.
0:32:40 And we can’t even do a good job on small molecules, right?
0:32:43 Solving Schrodinger’s equation for small molecules.
0:32:48 But it does go very low, and it goes very high to what algorithm is this chip running.
0:32:51 And that’s all software simulation.
0:32:57 And he said in that chat that at some point, he then has to press a button that says, “Take
0:33:03 this chip and fabricate it,” and the pressing of that button costs $500 million.
0:33:08 And so you really want to have a lot of confidence in your simulations.
0:33:14 Well, drugs have that flavor very much so, except they cost a lot more than $500 million
0:33:17 by the time they get through phase three.
0:33:25 And so it seems obvious to all of us that you ought to be able to do these kinds of simulations
0:33:26 and find the drugs.
0:33:33 Now, the first step is going to be just slightly improve the probability of success of a phase
0:33:34 two or phase three trial.
0:33:38 That’s going to be incredibly valuable, because right now, so many of them fail, and they’re
0:33:41 multi-billion-dollar failures.
0:33:45 But eventually, will we be able to just find the drug?
0:33:49 The needle in the haystack nature of this problem is mind-blowing.
0:33:53 There are, depending on the size of the carbon chain, but let’s just pick a size, there’s
0:34:00 about 10,000 trillion possible organic compounds, and there are 4,000 approved drugs globally.
0:34:03 So that’s a lot of zeros.
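Taking those quoted figures at face value, the “lot of zeros” works out to roughly:

$$\frac{10{,}000\ \text{trillion candidate compounds}}{4{,}000\ \text{approved drugs}}=\frac{10^{16}}{4\times10^{3}}\approx 2.5\times10^{12},$$

that is, on the order of one approved drug for every 2.5 trillion candidate compounds.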
0:34:07 And if the AIs can help us navigate that space, that’s going to be huge.
0:34:12 But I’m going to bet that we will map biology in this way.
0:34:18 It’s just, biology is so many orders of magnitude, more complicated than the most complicated
0:34:19 chip.
0:34:24 And we don’t even know how many orders of magnitude and how many layers of abstraction
0:34:25 are in there.
0:34:30 But the question is, do we have enough data so that we can train the LLMs to infer the
0:34:32 rest of biology?
0:34:35 Or do we need an awful lot more data?
0:34:38 And I think everybody’s clear we need more data.
0:34:44 I think what we’re less clear on is, do we need 10 orders of magnitude more data, or
0:34:46 100 orders of magnitude more?
0:34:47 We just don’t know.
0:34:49 Amazing time to be alive.
0:34:55 Best time ever, we say this at the Alphabet board, and I’d say, what an incredible group
0:34:56 of people.
0:35:01 And when I hear Sergey and Larry say, it’s the best time ever to be a computer scientist.
0:35:03 Of course, I agree with that.
0:35:04 It’s magical.
0:35:05 Totally.
0:35:06 Awesome.
0:35:07 Well, Marty, thank you so much for your time.
0:35:08 Always a pleasure.
0:35:11 You’ve had such a fascinating career, and we really appreciate you spending time with us.
0:35:12 David, great talking with you.
0:35:13 Be well.
0:35:14 Thanks.
0:35:18 I’d like to thank our guests for joining In the Vault.
0:35:24 You can hear all of our episodes by going to a16z.com/podcasts.
0:35:30 To learn more about the latest in fintech news, be sure to visit a16z.com/fintech and
0:35:33 subscribe to our monthly fintech newsletter.
0:35:33 Thanks for tuning in.
a16z General Partner David Haber talks with Marty Chavez, vice chairman and partner at Sixth Street Partners, about the foundational role he’s had in merging technology and finance throughout his career, and the magical promises and regulatory pitfalls of AI.
This episode is taken from “In the Vault”, a new audio podcast series by the a16z Fintech team. Each episode features the most influential figures in financial services to explore key trends impacting the industry and the pressing innovations that will shape our future.
Resources:
Listen to more of In the Vault: https://a16z.com/podcasts/a16z-live
Find Marty on X: https://twitter.com/rmartinchavez
Find David on X: https://twitter.com/dhaber
Stay Updated:
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.