Innovation Through Software Development and IT

AI transcript
0:00:05 Hi everyone, welcome to the a16z Podcast, I’m Sonal.
0:00:08 So one of the recurring themes we talk a lot about on this podcast is how software changes
0:00:11 organizations and vice versa.
0:00:16 More broadly, it’s really about how companies of all kinds innovate with the org structures
0:00:18 and tools that they have.
0:00:22 And today’s episode, a rerun of a very popular episode from a couple years ago, draws on
0:00:28 actual research and data from one of the largest studies of software and organizational
0:00:30 performance out there.
0:00:34 Joining me in this conversation are two of the authors of the book Accelerate: The Science
0:00:39 of Lean Software and DevOps by Nicole Forsgren, Jez Humble, and Gene Kim.
0:00:43 We have the first two authors, so Nicole, who did her PhD research trying to answer
0:00:48 the elusive, eternal questions around how to measure software performance in orgs, especially
0:00:52 given past debates around “does IT matter?”
0:00:57 She was the co-founder and CEO of DORA, which put out the annual State of DevOps report.
0:01:02 DORA was acquired by Google Cloud a little over a year ago, and she will soon be joining
0:01:05 GitHub as VP of Research and Strategy.
0:01:10 And then we also have Jez Humble, who was CTO at DORA, is currently in developer relations
0:01:17 at Google Cloud and is also the co-author of the books The DevOps Handbook, Lean Enterprise,
0:01:18 and Continuous Delivery.
0:01:23 In the conversation that follows, Nicole and Jez share their findings about high-performing
0:01:27 companies, even those that may not think they’re tech companies, and answer my questions about
0:01:32 whether there’s an ideal org type for this kind of innovation, whether it’s the size
0:01:36 of the organization, the software architecture they use, their culture or people, and where
0:01:39 the role of software and IT lives within that.
0:01:44 But first, we begin by talking briefly about the history of DevOps and where that fits
0:01:48 in the broader landscape of related software movements.
0:01:50 So I started as a software engineer at IBM.
0:01:54 I did hardware and software performance, and then I took a bit of a detour into academia
0:01:59 because I wanted to understand how to really measure and look at performance that would
0:02:04 be generalizable to several teams in predictable ways and in predictive ways.
0:02:10 And so I was looking at and investigating how to develop and deliver software in ways
0:02:15 that were impactful to individuals, teams, and organizations.
0:02:20 And then I pivoted back into industry because I realized this movement had gained so much
0:02:26 momentum and so much traction, and industry was desperate to really understand what types
0:02:30 of things are really driving performance outcomes and excellence.
0:02:32 And what do you mean by this movement?
0:02:38 This movement that now we call DevOps, so the ability to leverage software to deliver
0:02:44 value to customers, to organizations, to stakeholders.
0:02:47 And I think from a historical point of view, the best way to think about DevOps is: it’s
0:02:53 a bunch of people who had to solve this problem of how do we build large distributed systems
0:02:59 that are secure and scalable, and be able to change them really rapidly and evolve them.
0:03:02 And no one had had that problem before, certainly at the scale of companies like Amazon and
0:03:03 Google.
0:03:07 And that really is where the DevOps movement came from, trying to solve that problem.
0:03:11 And you can make an analogy to what Agile was about since the kind of software crisis
0:03:17 of the 1960s and people trying to build these defense systems at large scale, the invention
0:03:24 of software engineering as a field, Margaret Hamilton, her work at MIT on the Apollo program.
0:03:28 What happened in the decades after that was everything became kind of encased in concrete
0:03:33 in these very complex processes, this is how you develop software.
0:03:36 And Agile was kind of a reaction to that, saying we can develop software much more
0:03:40 quickly with much smaller teams in a much more lightweight way.
0:03:43 So we didn’t call it DevOps back then, but it’s also more than Agile.
0:03:45 Can you guys break down the taxonomy for a moment?
0:03:48 Because when I think of DevOps, I think of it in the context of the containerization
0:03:51 of code and virtualization.
0:03:56 I think of it in the context of microservices and being able to do modular teams around
0:03:57 different things.
0:03:59 There’s an organizational element, there’s a software element, there’s an infrastructure
0:04:03 component, like paint the big picture for me of those building blocks and how they all
0:04:04 kind of fit together.
0:04:07 Well, I can give you a very personal story. My first job after college was
0:04:12 in 2000 in London, working at a startup where I was one of two technical people.
0:04:17 And I would deploy to production by FTPing code from my laptop directly into production.
0:04:21 And if I wanted to roll back, I’d say, “Hey, Johnny, can you FTP your copy of this file
0:04:22 to production?”
0:04:23 And that was our rollback process.
0:04:27 And then I went to work in consultancy, where we were on these huge teams, and to deploy
0:04:30 to production, there was a whole team with a Gantt chart which put together the plan
0:04:31 to deploy to production.
0:04:32 And I’m like, this is crazy.
0:04:36 Fortunately, I was working with a bunch of other people who also thought it was crazy.
0:04:40 And then we came up with these ideas around deployment automation and scripting and stuff
0:04:41 like that.
0:04:44 And suddenly we saw the same ideas had popped up everywhere, basically.
0:04:48 I mean, it’s realising that if you’re working in a large complex organisation, Agile’s going
0:04:54 to hit a brick wall because unlike the things we were building in the ’60s, product development
0:04:56 means that things are changing and evolving all the time.
0:04:58 So it’s not good enough to get to production the first time.
0:05:00 You’ve got to be able to keep getting there on and on.
0:05:01 And that really is where DevOps comes in.
0:05:05 It’s like, well, Agile, we’ve got a way to build and evolve products, but how do we keep
0:05:10 deploying to production and running the systems in production in a stable, reliable way, particularly
0:05:12 in the distributed context?
0:05:16 So if I phrase it another way, sometimes there’s a joke that says day one is short and day
0:05:17 two is long.
0:05:18 What does that mean?
0:05:19 Right.
0:05:20 So day one is when we create all these–
0:05:21 That’s by the way sad that you have to explain the joke to me.
0:05:22 No, it’s–
0:05:26 No, which is great, though, because so day one is when we create all of these systems.
0:05:28 And day two is when we deploy to production.
0:05:33 We have to deploy and maintain forever and ever and ever and ever.
0:05:35 So day two is an infinite day.
0:05:36 Right, exactly.
0:05:37 Yeah.
0:05:38 For a successful product.
0:05:39 Hopefully.
0:05:41 We hope that day two is really, really long.
0:05:45 And we’re fond of saying Agile doesn’t scale.
0:05:48 And sometimes I’ll say this, and people shoot laser beams out of their eyes.
0:05:50 But when we think about it, Agile was meant for development.
0:05:53 Just like Jez said, it speeds up development.
0:05:58 But then you have to hand it over, especially to infrastructure and IT operations.
0:05:59 What happens when we get there?
0:06:02 So DevOps was sort of born out of this movement.
0:06:06 And it was originally called Agile System Administration.
0:06:10 And so then DevOps sort of came out of development and operations.
0:06:14 And it’s not just DevOps, but if we think about it, those are sort of the bookends of
0:06:15 this entire process.
0:06:17 Well, it’s actually like day one and day two combined into one phrase.
0:06:19 Day one and day two.
0:06:23 The way I think about this is I remember the stories of Microsoft in the early days and
0:06:29 the waterfall cascading model of development. Leslie Lamport once wrote a piece for me about
0:06:33 why software should be developed like houses, because you need a blueprint.
0:06:37 And I’m not a software developer, but it felt like a very kind of old way of looking at
0:06:38 the world of code.
0:06:40 I hate that metaphor.
0:06:41 Tell me why.
0:06:44 If the thing you’re building has well understood characteristics, it makes sense.
0:06:47 So if you’re building a truss bridge, for example, there are well-known, understood models
0:06:51 of building truss bridges; you plug the parameters into the model and then you get a truss bridge
0:06:52 and it stays up.
0:06:55 Have you been to Sagrada Familia in Barcelona?
0:06:56 Oh, I love Gaudi.
0:06:57 Okay.
0:07:00 So if you go into the crypt of the Sagrada Familia, you’ll see his workshop and there’s
0:07:05 a picture, in fact, a model that he built of the Sagrada Familia, but upside down with
0:07:07 the weight simulating the stresses.
0:07:10 And so he would build all these prototypes and small prototypes because he was fundamentally
0:07:12 designing a new way of building.
0:07:17 All Gaudi’s designs were hyperbolic curves and parabolic curves and no one had used that
0:07:18 before.
0:07:19 Things that had never been pressure tested.
0:07:20 Right.
0:07:21 Literally.
0:07:22 In that case.
0:07:23 Exactly.
0:07:24 He didn’t want them to fall down.
0:07:25 So he built all these prototypes and did all this stuff.
0:07:29 He built his blueprint as he went by building and trying it out, which is a very rapid prototyping
0:07:30 kind of model.
0:07:31 Absolutely.
0:07:34 So in the situation where the thing you’re building has known characteristics and it’s
0:07:38 been done before, yeah, sure, we can take a very phased approach to it.
0:07:42 And, you know, for designing these kinds of protocols that have to work in a distributed
0:07:46 context, where you can actually do formal proofs of them, again, that makes sense.
0:07:51 But when we’re building products and services where particularly we don’t know what customers
0:07:55 actually want and what users actually want, it doesn’t make sense to do that because you’ll
0:07:57 build something that no one wants.
0:07:58 You can’t predict.
0:08:00 And we’re particularly bad at that, by the way.
0:08:05 Even companies like Microsoft, where they are very good at understanding what their
0:08:09 customer base looks like, they have a very mature product line.
0:08:15 Ronny Kohavi has done studies there and only about one-third of the well-designed features
0:08:16 deliver value.
0:08:18 That’s actually a really important point.
0:08:22 The mere question of does this work is something that people really clearly don’t pause to
0:08:26 ask, but I do have a question for you guys to push back, which is, is this a little bit
0:08:27 of a cult?
0:08:31 Oh, my God, it’s like so developer-centric, let’s be agile, let’s do it fast, our way,
0:08:35 you know, two pizzas, that’s the ideal size of a software team and, you know, I’m not
0:08:36 trying to mock it.
0:08:41 I’m just saying that isn’t there an element of actual practical realities like technical
0:08:46 debt and accruing a mess underneath all your code and a system that you may be there for
0:08:49 two or three years and you can go after the next startup, but okay, someone else has to
0:08:51 clean up your mess.
0:08:53 Tell me about how this fits into that big picture.
0:08:55 This is what enables all of that.
0:08:56 Oh, right.
0:08:57 Interesting.
0:08:59 So it’s not actually just creating a problem because that’s how I’m kind of hearing it.
0:09:00 No, absolutely.
0:09:05 So you still need development, you still need test, you still need QA, you still need operations,
0:09:08 you still need to deal with technical debt, you still need to deal with re-architecting
0:09:12 really difficult large monolithic code bases.
0:09:17 What this enables you to do is to find the problems, address them quickly, move forward.
0:09:22 I think that the problem that a lot of people have is that we’re so used to couching these
0:09:26 things as trade-offs and as dichotomies, the idea that if you’re going to move fast, you’re
0:09:27 going to break things.
0:09:32 The one thing which I always say is, if you take one thing away from DevOps is this, high-performing
0:09:34 companies don’t make those trade-offs.
0:09:36 They’re not going fast and breaking things.
0:09:40 They’re going fast and making more stable, more high-quality systems, and this is one
0:09:44 of the key results in the book, in our research, is this fact that high-performers do better
0:09:49 at everything because the capabilities that enable high-performance in one field, if done
0:09:51 right, enable it in other fields.
0:09:55 If you’re using version control for software, you should also be using version control for
0:09:56 your production infrastructure.
0:10:00 If there’s a problem in production, we can reproduce the state of the production environment
0:10:05 in a disaster recovery scenario, again in a predictable way that’s repeatable.
0:10:07 I think it’s important to point out that this is something that happened in manufacturing
0:10:08 as well.
0:10:09 Give it to me.
0:10:13 I love when people talk about software as drawn from hardware analogies as my favorite
0:10:14 type of metaphor.
0:10:20 Okay, so Toyota didn’t win by making shitty cars faster, they won by making higher-quality
0:10:22 cars faster and having shorter time to market.
0:10:25 The lean manufacturing method, which by the way also spawned lean startup thinking and
0:10:26 everything else connected to it.
0:10:30 And DevOps pulls very strongly from lean methodologies.
0:10:34 So you guys are probably the only people to have actually done a large-scale study of
0:10:36 organizations adopting DevOps.
0:10:38 What is your research and what did you find?
0:10:39 Sure.
0:10:44 My research really is the largest investigation of DevOps practices around the world.
0:10:48 We have over 23,000 data points, all industries.
0:10:49 Give me like a sampling, like what are the range of industries?
0:10:56 So I’ve got entertainment, I’ve got finance, I have healthcare and pharma, I have technology.
0:10:57 Government.
0:10:58 Government, education.
0:11:00 You basically have every vertical.
0:11:01 And then you said around the world?
0:11:07 So we’re primarily in North America, we’re in EMEA, we have India, we have a small sample
0:11:08 in Africa.
0:11:09 Right.
0:11:13 And to break down the survey methodology questions that people have: in the ethnographic
0:11:17 world, the way we would approach it is that you can never trust what people say they do.
0:11:19 You have to watch what they do.
0:11:23 However, it is absolutely true, and especially in a more scalable sense, that there are really
0:11:25 smart surveys that give you a shit ton of useful data.
0:11:26 Yes.
0:11:30 And part two of the book covers this in almost excruciating detail.
0:11:31 We like knowing methodologies.
0:11:32 Yes.
0:11:33 So it’s nice to share that.
0:11:37 Well, and it’s interesting because Jez talked about in his overview of Agile how it changes
0:11:41 so quickly and we don’t have a really good definition, but what that does is it makes it difficult
0:11:42 to measure.
0:11:43 Right.
0:11:49 And so what we do is we’ve defined core constructs, core capabilities, so that we can then measure
0:11:50 them.
0:11:57 We go back to core ideas around things like automation, process, measurement, lean principles.
0:12:02 And then I’ll get that pilot set of data and I’ll run preliminary statistics to test for
0:12:06 discriminant validity, convergent validity, composite reliability.
0:12:09 Make sure that it’s not testing what it’s not supposed to test.
0:12:12 It is testing what it is supposed to test.
0:12:15 Everyone is reading it consistently the same way that I think it’s testing.
0:12:20 I even run checks to make sure that I’m not inadvertently inserting bias or collecting
0:12:23 bias just because I’m getting all of my data from surveys.
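The validity and reliability checks Nicole lists can be sketched concretely. Below is a minimal, hypothetical example (not the authors’ actual analysis code, and the construct and responses are invented) of one such internal-consistency check, Cronbach’s alpha, for a single multi-item survey construct:

```python
# Sketch: Cronbach's alpha for one survey construct, an internal-consistency
# check of the kind used alongside convergent validity and composite
# reliability. All data below is hypothetical.

def cronbach_alpha(responses):
    """responses: one list per respondent, each a list of item scores (e.g. 1-7 Likert)."""
    k = len(responses[0])                 # number of items in the construct
    items = list(zip(*responses))         # transpose into per-item score columns

    def var(xs):                          # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_var = sum(var(col) for col in items)
    total_var = var([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Hypothetical responses to a three-item "deployment automation" construct.
surveys = [
    [6, 7, 6],
    [2, 3, 2],
    [5, 5, 6],
    [7, 6, 7],
    [3, 2, 3],
]
alpha = cronbach_alpha(surveys)
print(round(alpha, 2))  # values above ~0.7 are conventionally considered acceptable
```

The point of the check is that respondents who score one item of a construct high should score its sibling items high too; if they don’t, the items aren’t measuring the same underlying thing.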
0:12:25 Sounds pretty damn robust.
0:12:28 So tell me then what were the big findings?
0:12:30 That’s a huge question, but give me the hit list.
0:12:31 Well, okay.
0:14:35 So let’s start with one thing that Jez already talked about: speed and stability go together.
0:14:39 This is where he was talking about that being a false dichotomy, and that’s
0:14:41 one of your findings, that you can actually accomplish both.
0:12:42 Yeah.
0:12:43 And it’s worth talking about how we measure those things as well.
0:12:48 So we measure speed or tempo as we call it in the book or sometimes people call it throughput
0:12:49 as well.
0:12:53 Which is a nice full circle manufacturing idea, like the semiconductor circuit throughput.
0:12:54 Yeah, absolutely.
0:12:56 I love hardware analogies for software, I told you.
0:12:57 A lot of it comes from lean.
0:13:01 So lead time, obviously one of the classic lean manufacturing measures we use.
0:13:02 How long does it take?
0:13:06 You look at the lead time from checking into version control to release into production.
0:13:09 So that part of the value stream because that’s more focused on the DevOps end of things.
0:13:11 And it’s highly predictable.
0:13:12 The other one is release frequency.
0:13:13 So how often do you do it?
0:13:17 And then we’ve got two stability metrics and one of them is time to restore.
0:13:21 So in the event that you have some kind of outage or some degradation in performance in
0:13:24 production, how long does it take you to restore service?
0:13:27 For a long time we focused on not letting things break.
0:13:30 And I think one of the changes, paradigm shifts we’ve seen in the industry, particularly
0:13:32 in DevOps, is moving away from that.
0:13:36 We accept that failure is inevitable because we’re building complex systems.
0:13:40 So not how do we prevent failure, but when failure inevitably occurs, how quickly can
0:13:41 we detect and fix it?
0:13:42 MTBF, right?
0:13:43 Mean time between failures.
0:13:48 If you only go down once a year, but you’re down for three days and it’s on Black Friday, that’s terrible.
0:13:52 But if you’re down very briefly, a very, very small blast radius, and you can come
0:13:57 back almost immediately and your customers almost don’t notice.
0:13:58 That’s fine.
0:14:00 The other piece around stability is change fail rate, right?
0:14:03 When you push a change into production, what percentage of the time do you have to fix
0:14:04 it?
0:14:05 Because something went wrong.
0:14:07 By the way, what does that tell you if you have a change fail?
0:14:10 So in the lean kind of discipline, this is called percent complete and accurate.
0:14:12 And it’s a measure of a quality of your process.
0:14:17 So in a high quality process, when I do something for Nicole, Nicole can use it rather than
0:14:21 sending it back to me and say, “Hey, there’s a problem with this.”
0:14:24 And in this particular case, what percentage of the time when I deploy something to production
0:14:27 is there a problem because I didn’t test it adequately.
0:14:29 My testing environment wasn’t production-like enough.
0:14:31 Those are the measures for finding this.
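The four measures described here (lead time, deploy frequency, time to restore, change fail rate) can be computed from a deployment log. A minimal sketch, with entirely hypothetical field names and data:

```python
# Sketch: computing the four speed/stability measures from a deployment log.
# The record format and all values here are hypothetical.
from datetime import datetime, timedelta

deploys = [
    {"commit_at": datetime(2024, 5, 1, 9),  "deployed_at": datetime(2024, 5, 1, 11), "failed": False},
    {"commit_at": datetime(2024, 5, 2, 14), "deployed_at": datetime(2024, 5, 2, 15), "failed": True},
    {"commit_at": datetime(2024, 5, 3, 10), "deployed_at": datetime(2024, 5, 3, 13), "failed": False},
    {"commit_at": datetime(2024, 5, 4, 9),  "deployed_at": datetime(2024, 5, 4, 10), "failed": False},
]
incidents = [  # hypothetical outages: (start, restored)
    (datetime(2024, 5, 2, 15), datetime(2024, 5, 2, 15, 30)),
]

# Lead time: check-in to version control -> release into production.
lead_times = [d["deployed_at"] - d["commit_at"] for d in deploys]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Release frequency: how often you deploy over the observed window.
days_observed = 4
deploy_frequency = len(deploys) / days_observed  # deploys per day

# Time to restore: outage start -> service restored.
restore_times = [end - start for start, end in incidents]
mean_time_to_restore = sum(restore_times, timedelta()) / len(restore_times)

# Change fail rate: fraction of deploys that required remediation.
change_fail_rate = sum(d["failed"] for d in deploys) / len(deploys)

print(mean_lead_time)        # 1:45:00
print(deploy_frequency)      # 1.0
print(mean_time_to_restore)  # 0:30:00
print(change_fail_rate)      # 0.25
```

Note how the first two are speed measures and the last two are stability measures, which is exactly the pairing the finding below is about.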
0:14:36 But the big finding is that you can have speed and stability together through DevOps.
0:14:38 Is that what I’m hearing?
0:14:39 Yes, yes.
0:14:40 High performers get it all.
0:14:42 Low performers kind of suck at all of it.
0:14:43 Medium performers hang out in the middle.
0:14:46 I’m not seeing trade-offs four years in a row.
0:14:50 So anyone who’s thinking, “Oh, I can be more stable if I slow down,” I don’t see it.
0:14:54 It actually breaks a very commonly held kind of urban legend around how people believe
0:14:55 these things operate.
0:14:58 So tell me, are there any other sort of findings like that?
0:14:59 Because that’s very counterintuitive.
0:15:01 Okay, so this one’s kind of fun.
0:15:07 One is that this ability to develop and deliver software with speed and stability drives organizational
0:15:08 performance.
0:15:09 Now, here’s the thing.
0:15:11 I was about to say, that’s a very obvious thing to say.
0:15:13 So it seems obvious, right?
0:15:17 Developing and delivering software with speed and stability drives things like profitability,
0:15:19 productivity, market share.
0:15:26 Okay, except if we go back to Harvard Business Review 2003, there’s a paper titled, “IT Doesn’t
0:15:27 Matter.”
0:15:32 We have decades of research, I want to say at least 30 or 40 years of research showing
0:15:37 that technology does not drive organizational performance.
0:15:38 It doesn’t drive ROI.
0:15:43 And we are now starting to find other studies and other research that backs this up.
0:15:48 Erik Brynjolfsson out of MIT, James Bessen out of Boston University, 2017.
0:15:50 Did you say James Bessen?
0:15:51 Yeah.
0:15:52 Oh, I used to edit him, too.
0:15:54 Yeah, it’s fantastic.
0:15:56 Here’s why it’s different.
0:16:01 Because before, right in like the 80s and the 90s, we did this thing where like, you’d
0:16:03 buy the tech and you’d plug it in and you’d walk away.
0:16:07 It was an on-prem sales model where you like deliver and leave as opposed to like software
0:16:09 as a service and the other ways that things happen.
0:16:11 And people would complain if you tried to upgrade it too often.
0:16:12 Oh, right.
0:16:17 The key is that everyone else can also buy the thing and plug it in and walk away.
0:16:22 How is that driving value or differentiation for a company?
0:16:27 If I just buy a laptop to help me do something faster, everyone else can buy a laptop to do
0:16:29 the same thing faster.
0:16:34 That doesn’t help me deliver value to my customers or to the market.
0:16:36 It’s a point of parity, not a point of distinction.
0:16:37 Right.
0:16:40 And you’re saying that point of distinction comes from how you tie together that technology
0:16:43 process and culture through DevOps.
0:16:44 Right.
0:16:46 And that it can provide a competitive advantage to your business.
0:16:50 If you’re buying something that everyone else also has access to, then it’s no longer a
0:16:51 differentiator.
0:16:54 But if you have an in-house capability and those people are finding ways to drive your
0:16:57 business, I mean, this is the classic Amazon model.
0:17:01 They’re running hundreds of experiments in production at any one time to improve the
0:17:02 product.
0:17:05 And that’s not something that anyone else can copy, that’s why Amazon keeps winning.
0:17:08 So what people are doing is copying the capability instead.
0:17:09 And that’s what we’re talking about.
0:17:10 How do you build that capability?
0:17:14 The most fascinating thing to me about all this is honestly not the technology per se,
0:17:17 but the organizational change part of it and the organizations themselves.
0:17:22 So of all the people you studied, is there an ideal organizational makeup that is ideal
0:17:23 for DevOps?
0:17:27 Or is it one of these magical formulas that has this ability to turn a big company into
0:17:31 a startup and a small company into, because that’s actually the real question.
0:17:34 From what I’ve seen, there might be two ideals.
0:17:39 The nice, happy answer is the ideal organization is the one that wants to change.
0:17:44 That’s, I mean, given this huge n equals 23,000 dataset, is it not tied to a particular profile
0:17:45 of a size of company?
0:17:47 They’re both shaking their head just for the listeners.
0:17:51 I see high performers among large companies.
0:17:52 I see high performers in small companies.
0:17:55 I see low performers in small companies.
0:17:57 I see low performers in highly regulated companies.
0:18:00 I see low performers in not regulated companies.
0:18:03 So tell me the answer you’re not supposed to say.
0:18:11 So that answer is it tends to be companies that are like, oh shit, and there are two profiles.
0:18:16 Number one, they’re like way behind, and oh shit, and they have some kind of funds.
0:18:25 Or they are like this lovely, wonderful bastion of like they’re these really innovative, high-performing
0:18:29 companies, but they still realize they’re a handful of like two or three companies ahead
0:18:31 of them, and they don’t want to be number two.
0:18:32 They are going to be number one.
0:18:33 So those are sort of the ideal.
0:18:35 I mean, just to anthropomorphize it a little bit.
0:18:41 It’s like the 35 to 40 year old who suddenly discovers you might be pre-diabetic, so you
0:18:43 better do something about it now before it’s too late.
0:18:47 But it’s not too late because you’re not so old where you’re about to reach sort of
0:18:50 the end of a possibility to change that runway.
0:18:54 And then there’s this person who’s sort of kind of already like in the game running in
0:18:57 the race and they might be two or three, but they want to be like number one.
0:19:02 And I think to extend your metaphor, the companies that do well are the companies that never got
0:19:05 diabetic in the first place because they always just ate healthily.
0:19:07 They were already glucose monitoring.
0:19:10 They had continuous glucose monitors on, which is like DevOps actually.
0:19:11 They were always athletes.
0:19:12 Right.
0:19:15 You know, diets are terrible because at some point you have to stop the diet.
0:19:18 And it has to start and start and stop as opposed to a way of life is what you’re saying.
0:19:19 Right, exactly.
0:19:24 So if you just always eat healthily and never eat too much or very rarely eat too much and
0:19:27 do a bit of exercise every day, you never get to the stage like, oh my God, now I can
0:19:29 only eat tofu.
0:19:39 So, like, my loving professor-ness, nurturer-Nicole side also has one more profile that, like, I love,
0:19:42 and I worry about them like a mother hen.
0:19:47 And it’s the companies that I talk to and they come to me and they’re struggling and
0:19:52 I haven’t decided if they want to change, but they’re like, so we need to do this transformation
0:19:53 and we’re going to do the transformation.
0:19:57 And it’s either because they want to or when they’ve been told that they need to.
0:20:01 And then they will insert this thing where they say, but I’m not a technology company.
0:20:08 I’m like, but we just had this 20 minute conversation about how you’re leveraging technology to drive
0:20:13 value to customers or to drive this massive process that you do.
0:20:15 And then they say, but I’m not a technology company.
0:20:19 I could almost see why they had that in their head because they were a natural resources
0:20:20 company.
0:20:23 But there was another one where they were a finance company.
0:20:27 I mean, an extension of software eats the world is really every company is a technology
0:20:28 company.
0:20:32 It’s fascinating to me that that third type exists, but it is a sign of this legacy world
0:20:38 moving into and I worry about them also, at least for me personally, you know, I lived
0:20:42 through this like mass extinction of several firms and I don’t want it to happen again.
0:20:46 And I worry about so many companies that keep insisting they’re not technology companies.
0:20:49 And I’m like, oh, honey child, you’re a tech company.
0:20:51 You know, one of the gaps in our data is actually China.
0:20:55 And I think China is a really interesting example because they didn’t go through the
0:20:58 whole, you know, IT doesn’t matter phase.
0:21:02 They’re jumping straight from no technology to Alibaba and Tencent, right?
0:21:07 I think US companies should be scared because Tencent and Alibaba are already
0:21:12 moving into other developing markets and they’re going to be incredibly competitive because
0:21:13 it’s just built into their DNA.
0:21:16 So the other fascinating thing to me is that you essentially were able to measure performance
0:21:20 of software and clearly productivity.
0:21:22 Is there any more insights on the productivity side?
0:21:23 Yes.
0:21:24 Yes.
0:21:25 I want to go.
0:21:26 This is his favorite rant.
0:21:27 He’s jumping around and like waving his hands.
0:21:31 So tell us. The reason the manufacturing metaphor breaks down is that in manufacturing you
0:21:32 have inventory.
0:21:33 Yes.
0:21:36 We do not have inventory in the same way in software.
0:21:39 In a factory, like the first thing your lean consultant is going to do, walking into the
0:21:42 factory is point to the piles of thing everywhere.
0:21:47 But I think if you walk into an office where there’s developers, where’s the inventory?
0:21:50 By the way, that’s what makes talking about this to executives so difficult.
0:21:51 They can’t see the process.
0:21:56 Well, it’s a hard question to answer because is the inventory the code that’s being written?
0:22:00 And people actually have done that and said, “Well, listen, lines of code are an accounting
0:22:04 measure and we’re going to capture that as, you know, capital.”
0:22:05 That’s insane.
0:22:08 It’s like an invitation to write crappy, unnecessarily long code.
0:22:09 That’s exactly what happens.
0:22:11 It’s like in the olden days, getting paid for a book by how long it is, and it’s like
0:22:14 actually really boring when you can actually write it in like one third of the length.
0:22:15 Let’s write it in German.
0:22:16 Right, you know.
0:22:17 I’m thinking of Charles Dickens.
0:22:19 In general, you know, you prefer people to write short programs because they’re easier
0:22:21 to maintain and so forth.
0:22:23 But lines of code have all these drawbacks.
0:22:25 We can’t use them as a measure of productivity.
0:22:27 So if you can’t measure lines of code, what can you measure?
0:22:30 Because I really want an answer like, how do you measure productivity?
0:22:31 So velocity is the other classic example.
0:22:38 Agile, there’s this concept of velocity, which is the number of story points a team manages
0:22:41 to complete in an iteration.
0:22:47 So before the start of an iteration in many agile, particularly scrum-based processes,
0:22:48 you’ve got all this work to do.
0:22:50 You’re like, “We need to build these five features.
0:22:51 How long will this feature take?”
0:22:54 And the developers fight over it and they’re like, “Oh, it’s five points.”
0:22:57 And then this one’s going to take three points, this one’s going to take two points.
0:23:00 And so you have a list of all these features and you don’t get through all of them.
0:23:03 At the end of the iteration, the customer signs off, “Well, I’m accepting this one.
0:23:04 This one’s fine.
0:23:05 This one’s fine.
0:23:06 This one’s a hot mess.
0:23:07 Go back and do it again.”
0:23:08 Whatever.
0:23:09 The number of points you complete in the iteration is the velocity.
0:23:12 So it’s like the speed at which you’re able to deliver those features.
0:23:16 So a lot of people treat it like that, but actually, that’s not really what it’s about.
0:23:20 It’s a relative measure of effort and it’s for capacity planning purposes.
0:23:23 So basically, for the next iteration, we’ll only commit to completing the same velocity
0:23:24 that we finished last time.
0:23:27 So it’s relative and it’s team dependent.
0:23:30 And so what a lot of people do is say they start comparing velocities across teams.
0:23:34 Then what happens is, a lot of work, you need to collaborate between teams.
0:23:38 But hey, if I’m going to help you with your story, that means I’m not going to get my
0:23:42 story points and you’re going to get your story points, right, people can game it as
0:23:43 well.
0:23:45 You should never use story points as a productivity measure.
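The legitimate use of velocity that Jez describes, as a per-team capacity-planning input rather than a productivity score, can be sketched like this (story names and point values are hypothetical):

```python
# Sketch: velocity as a capacity-planning input for ONE team.
# It is relative and team-dependent, so it should never be compared across
# teams or used as a productivity measure. All data here is hypothetical.

completed_last_iteration = [5, 3, 2, 3]   # points the team actually finished
velocity = sum(completed_last_iteration)  # 13 points

# Prioritized backlog of (story, estimated points).
backlog = [("checkout flow", 5), ("search filters", 8),
           ("audit log", 3), ("dark mode", 2)]

# Commit only to what fits within last iteration's velocity, in priority order.
planned, remaining = [], velocity
for story, points in backlog:
    if points <= remaining:
        planned.append(story)
        remaining -= points

print(planned)  # ['checkout flow', 'search filters']
```

Because the estimates are relative efforts agreed within one team, the same backlog would get different point values from a different team, which is precisely why cross-team velocity comparisons invite gaming.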
0:23:48 So lines of code doesn’t work, velocity doesn’t work, what works?
0:23:53 So this is why we like our measure, two things in particular: one, that it's a global measure.
0:23:57 And secondly, that it's not just one thing; it mixes two things together, which might
0:23:59 normally be in tension.
0:24:03 And so this is why we went for our measure of performance.
0:24:11 So measuring lead time, release frequency, and then time to restore and change fail rate.
0:24:15 Lead time is really interesting because lead time is on the way to production, right?
0:24:17 So all the teams have to collaborate.
0:24:21 It’s not something where I can go really fast in my velocity, but nothing ever gets delivered
0:24:22 to the customer.
0:24:23 It doesn’t count in lead time.
0:24:24 So it’s a global measure.
0:24:27 It takes care of that problem of the incentive alignment around the competitive dynamic.
0:24:30 Also, it’s an outcome.
0:24:31 It’s not an output.
0:24:33 There’s a guy called Jeff Patton.
0:24:36 He’s a really smart thinker in the kind of lead and agile space.
0:24:43 He says, minimize output, maximize outcomes, which I think is simple but brilliant.
0:24:45 It’s so simple because it just shifts the words to impact.
0:24:49 And even we don’t get all the way there because we’re not yet measuring, did the features
0:24:51 deliver the expected value to the organization or the customers?
0:24:58 Well, we do get there because we focus on speed and stability, which then deliver the
0:25:02 outcome to the organization, profitability, productivity, market share.
0:25:07 But the second half of this, which I am also hearing is, did it meet your expectations?
0:25:11 Did it perform to the level that you wanted it to?
0:25:13 Did it match what you asked for?
0:25:18 Or even if it wasn’t something you specified that you desired or needed, that seems like
0:25:19 a slightly open question.
0:25:20 So we did actually measure that.
0:25:24 We looked at non-profit organizations and these were exactly the questions we measured.
0:25:29 We asked people, did the software meet, I can’t remember what the exact questions were.
0:25:33 Effectiveness, efficiency, customer satisfaction, delivery, mission goals.
0:25:35 How fascinating that you did it with non-profits, because there's a larger move in the non-profit
0:25:38 measurement space to try to measure impact.
0:25:43 But we captured it everywhere because even profit seeking firms still have these goals.
0:25:47 In fact, as we know from research, companies that don’t have a mission other than making
0:25:49 money do less well than the ones that do.
0:25:54 I think, again, what the data shows is that companies that do well on the performance measures
0:25:58 we talked about outperform their low performing peers by a factor of two.
0:26:02 A hypothesis is what we’re doing when we create these high performing organizations in terms
0:26:06 of speed and stability is we’re creating feedback loops.
0:26:11 What it allows us to do is build a thin slice, a prototype of a feature, get feedback through
0:26:16 some UX mechanism, whether that’s showing people the prototype and getting their feedback,
0:26:19 whether it’s running A/B tests or multivariate tests in production.
0:26:23 It’s what creates these feedback loops that allow you to shift direction very fast.
0:26:25 I mean, that is the heart of Lean Startup.
0:26:29 It’s the heart of anything you’re putting out into the world is you have to kind of
0:26:30 bring it full circle.
0:26:33 It is a secret of success to Amazon, as you cited earlier.
0:26:35 I would distill it to just that.
0:26:37 I think I heard Jeff Bezos say the best line.
0:26:40 It was at the Internet Association dinner in DC last year, where he was asked
0:26:41 about innovation.
0:26:44 To him, an innovation is something that people actually use.
0:26:47 And that’s what I love about the feedback loop thing, is it actually reinforces that
0:26:49 mindset of that’s what innovation is.
0:26:50 Right.
0:26:54 So to sum up, the way you can frame this is DevOps is that technological capability
0:26:59 that underpins your ability to practice Lean Startup and all these very rapid iterative
0:27:00 processes.
0:27:02 So I have a couple of questions then.
0:27:07 So one is going back to this original taxonomy question, and you guys described that there
0:27:09 isn’t necessarily an ideal organizational type.
0:27:11 Which by the way, should be encouraging.
0:27:12 I agree.
0:27:17 It’s super encouraging and more importantly democratizing that anybody can become a hit
0:27:18 player.
0:27:19 We were doing this in the federal government.
0:27:20 I love that.
0:27:24 But one of my questions is: when we had Adrian Cockcroft on this podcast a couple of years
0:27:27 ago talking about microservices, the thing that I thought was so liberating about
0:27:34 the Netflix story he was describing was that it was a way for teams to essentially become
0:27:40 little mini product-management units and essentially self-organize, because the infrastructure
0:27:48 was broken down into these micro pieces versus, say, a monolithic, uniform architecture.
0:27:53 I would think that an organization that has containerized its code in that way, that has
0:27:58 this microservices architecture, would be more suited to DevOps.
0:28:00 Or is that a wrong belief?
0:28:04 I’m just trying to understand again that taxonomy thing of how these pieces all fit together.
0:28:07 So we actually studied this; there's a whole section on architecture in the book where we looked
0:28:09 at exactly this question.
0:28:12 Architecture has been studied for a long time and people talk about architectural characteristics.
0:28:16 There's the ATAM, the Architecture Tradeoff Analysis Method that the SEI developed.
0:28:21 There’s some additional things we have to care about, testability and deployability.
0:28:27 Can my team test its stuff without having to rely on this very complex integrated environment?
0:28:31 Can my team deploy its code to production without these very complex orchestrated deployments?
0:28:34 Basically, can we do things without dependencies?
0:28:38 One of the biggest predictors of IT performance in our cohort is the ability of
0:28:43 teams to get stuff done on their own, without dependencies on other teams, whether that's
0:28:46 testing or deploying or planning.
0:28:47 Even just communicating.
0:28:53 Can you get things done without having to do mass communication and checking in for permission?
0:28:57 A question I love, love, love asking on this podcast: we always revisit the 1937 Coase
0:29:02 paper about the theory of the firm and its idea that firms exist because they make transactions more efficient.
0:29:07 This is like the ultimate model for reducing friction and those transaction costs, communication,
0:29:08 coordination costs, all of it.
0:29:11 That’s what all the technical and process stuff is about that.
0:29:13 I mean, Don Robinson once came to one of my talks on continuous delivery.
0:29:18 At the end, he said, “So, continuous delivery, that’s just about reducing transaction costs,
0:29:19 right?”
0:29:20 And I’m like…
0:29:21 An economist view of DevOps.
0:29:22 I love it.
0:29:23 You’re right.
0:29:25 You’ve reduced my entire body of work to one sentence.
0:29:27 It’s so much Conway’s Law, right?
0:29:28 This would remind me what Conway’s Law is.
0:29:33 Organizations which design systems are constrained to produce designs which are copies of the
0:29:35 communication structures of these organizations.
0:29:36 Oh, right.
0:29:39 It’s that idea basically that your software code looks like the shape of the organization
0:29:40 itself.
0:29:41 Right.
0:29:42 And how we communicate, right?
0:29:46 So, which, you know, Jez just summarized, if you have to be communicating and coordinating
0:29:48 with all of these other different groups…
0:29:52 Command and control looks like waterfall, a more decentralized model looks like independent
0:29:53 teams.
0:29:54 Right.
0:29:55 So, the data shows that.
0:29:58 A lot of people jump on the microservices and containerization bandwagon.
0:30:03 There’s one thing that is very important to bear in mind, implementing those technologies
0:30:05 does not give you those outcomes we talked about.
0:30:07 We actually looked at people doing mainframe stuff.
0:30:10 You can achieve these results with mainframes.
0:30:16 Equally, you can use the, you know, Kubernetes and, you know, Docker and microservices and
0:30:17 not achieve these outcomes.
0:30:22 We see no statistical correlation with performance, whether you're on a mainframe or a greenfield
0:30:24 or a brownfield system,
0:30:28 whether you're building something brand new or working on an existing system.
0:30:31 And one thing I wanted to bring up that we didn’t before is I said, you know, day one
0:30:32 is short, day two is long.
0:30:36 And I talked about things that live on the internet and live on the web.
0:30:40 This is still a really, really smart approach for package software.
0:30:47 And I know people who are working in and running package software companies that use this methodology
0:30:51 because it allows them to still work in small, fast approaches.
0:30:56 And all they do is they push to a small package pre-production database.
0:31:01 And then when it’s time to push that code onto some media, they do that.
0:31:02 Okay.
0:31:05 So what I love hearing about this is that it’s actually not necessarily tied again to the
0:31:07 architecture or the type of company you are.
0:31:11 There’s this opportunity for everybody, but there is this mindset of like an organization
0:31:12 that is ready.
0:31:14 It’s like a readiness level for a company.
0:31:15 Oh, I hear that all the time.
0:31:19 I don’t know if I’d say there’s any such thing as readiness, right?
0:31:21 Like there’s always an opportunity to get better.
0:31:24 There’s always an opportunity to transform.
0:31:29 The other thing that really drives me crazy and makes my head explode is this whole maturity
0:31:30 model thing.
0:31:31 Okay.
0:31:33 Are you ready to start transforming?
0:31:38 Well, like you can just not transform and then maybe fail, right?
0:31:42 Maturity models, they’re really popular in industry right now, but I really can’t stress
0:31:47 enough that they’re not really an appropriate way to think about a technology transformation.
0:31:50 I was thinking of readiness in the context of like NASA technology readiness levels or
0:31:54 TRLs, which is something we use to think about a lot for very early stage things, but you’re
0:31:58 describing maturity of an organization and it sounds like there’s some kind of a framework
0:32:02 for assessing the maturity of an organization and you’re saying that doesn’t work, but first
0:32:05 of all, what is that framework and why doesn’t it work?
0:32:10 Well, so many people think that they want something that takes a snapshot of their DevOps or their technology
0:32:14 transformation and spits back a number, right?
0:32:18 And then you have one number to compare yourself against everything.
0:32:24 The challenge though is that a maturity model usually is leveraged to help you think about
0:32:27 arriving somewhere and then here’s the problem.
0:32:29 Once you’ve arrived, what happens?
0:32:30 Oh, we’re done.
0:32:31 You’re done.
0:32:33 And then the resources are gone.
0:32:38 And by resources, I don’t just mean money, I mean time, I mean attention.
0:32:44 We see year over year over year, the best, most innovative companies continue to push.
0:32:46 So what happens when you’ve arrived, I’m using my finger quotes.
0:32:47 You stop pushing.
0:32:48 You stop pushing.
0:32:54 What happens when executives or leaders or whomever decide that you no longer need resources
0:32:55 of any type?
0:33:00 I have to push back again, though. Doesn't this help? Because it is helpful to give executives
0:33:04 in particular, particularly those that are not tech-native, who didn't come up through
0:33:09 the engineering organization, some kind of metric to wrap your head around: where are we,
0:33:10 where are we at?
0:33:12 So you can use a capability model.
0:33:17 You can think about the capabilities that are necessary to drive your ability to develop
0:33:20 and deliver software with speed and stability.
0:33:24 Another limitation is that they’re often kind of a lockstep or a linear formula, right?
0:33:25 No, right.
0:33:28 It’s like a stepwise A, B, C, D, E, one, two, three, four.
0:33:32 And in fact, the very nature of anything iterative is it’s very nonlinear and circular.
0:33:33 Feedback loops are circular.
0:33:34 Right.
0:33:37 And maturity models just don’t allow that.
0:33:42 No, another thing that’s really, really nice is that capability models allow us to think
0:33:46 about capabilities in terms of these outcomes.
0:33:48 Capabilities drive impact.
0:33:53 Maturity models are just this thing where you have this level one, level two, level
0:33:54 three, level four.
0:33:55 It’s a bit performative.
0:34:02 And then finally, maturity models just sort of take this snapshot of the world and describe
0:34:03 it.
0:34:05 How fast is technology and business changing?
0:34:11 If we create a maturity model now, let’s wait, let’s say four years, that maturity model
0:34:14 is old and dead and dusty and gone.
0:34:16 Do new technologies change the way you think about this?
0:34:20 Because I’ve been thinking a lot about how product management for certain types of technologies
0:34:24 changes with the technology itself and that machine learning and deep learning might be
0:34:25 a different beast.
0:34:26 And I’m just wondering if you guys have any thoughts on that.
0:34:27 Yeah.
0:34:30 I mean, me and Dave Farley wrote the continuous delivery book back in 2010.
0:34:34 And since then, you know, there’s Docker and Kubernetes and large-scale adoption of the
0:34:37 cloud and all these things that you had no idea would happen.
0:34:40 People sometimes ask me, you know, isn’t it time you wrote a new edition of the book?
0:34:43 I mean, yeah, we would probably rewrite it.
0:34:45 Does it change any of the fundamental principles?
0:34:46 No.
0:34:50 Do these new tools allow you to achieve those principles in new ways?
0:34:51 Yes.
0:34:54 So, I think, you know, this is how I always come back to any problem is go back to first
0:34:55 principles.
0:34:56 Yeah.
0:34:59 And the first principles, I mean, they will change over the course of centuries.
0:35:04 I mean, we’ve got modern management versus kind of scientific management, but they don’t
0:35:06 change over the course of like a couple of years.
0:35:08 The principles are still the same.
0:35:11 These give you new ways to do them, and that’s what’s interesting about them.
0:35:13 Equally, things can go backwards.
0:35:17 A great example of this is one of the capabilities we talk about in the book is working off a
0:35:22 shared trunk or master in version control, not going on these long-lived feature branches.
0:35:26 And the reason for that is actually because of feedback loops.
0:35:29 You know, developers love going off into a corner, putting headphones on their head and
0:35:34 just coding something for like days, and then they try and integrate it into trunk, you
0:35:35 know, and that’s a total nightmare.
0:35:38 And not just for them; more critically, for everyone else who then has to merge their
0:35:41 code into whatever they're working on.
0:35:42 So that’s hugely painful.
0:35:45 Git is one of these examples of a tool that makes it very easy for people like, “Oh, I
0:35:46 can use feature branches.”
0:35:49 So I think, again, it’s non-linear in the way that you describe.
0:35:50 Right.
0:35:51 It gives you new ways to do things; are they good or bad?
0:35:52 It depends.
0:35:55 But the thing that strikes me about what you guys have been talking about as a theme in
0:35:59 this podcast, something that seems to lend itself well to the world of machine learning and deep
0:36:03 learning, where that technology might be different, is that it lends itself to a probabilistic
0:36:09 way of thinking: that things are not necessarily always complete, that there is not a beginning
0:36:13 and an end, and that you can actually live very comfortably in an environment where things
0:36:18 are by nature complex, and that complexity is not necessarily something to avoid.
0:36:22 So in that sense, I do think there might be something kind of neat about ML and deep learning
0:36:26 and AI for that matter, because it is very much lending itself to that sort of mindset.
0:36:27 Yeah.
0:36:30 And in our research, we talk about working in small batches.
0:36:35 There’s a great video by Brett Victor called Inventing on Principle, where he talks about
0:36:39 how important it is to the creative process to be able to see what you’re doing, and
0:36:43 he has this great demo of this game he’s building where he can change the code and the game
0:36:46 changes its behavior instantly when you’re doing things like that.
0:36:48 You don’t get to see that.
0:36:52 No, and the whole thing with machine learning is how can we get the shortest possible feedback
0:36:56 from changing the input parameters to seeing the effect so that the machine can learn,
0:37:01 and that the moment you have very long feedback loops, the ML becomes much, much harder because
0:37:04 you don’t know which of the input changes caused the change in output that the machine
0:37:06 is supposed to be learning from.
0:37:10 So the same thing is true of organizational change and process, and product development
0:37:14 as well, by the way, which is working in small batches so that you can actually reason about
0:37:15 causing effects.
0:37:16 I changed this thing.
0:37:17 It had this effect.
0:37:20 Again, that requires short feedback loops.
0:37:21 That requires small batches.
0:37:24 That’s one of the key capabilities we talk about in the book, and that’s what DevOps enables.
0:37:28 So we’ve been this hallway style conversation around all these themes of DevOps, measuring
0:37:31 it, why it matters, and what it means for organizations.
0:37:36 But practically speaking, if a company, and you guys are basically arguing it, any company,
0:37:40 not necessarily a “company” that thinks it’s a tech company, and necessarily a company
0:37:44 that has this amazing modern infrastructure stack, it could be a company that’s still
0:37:45 working off mainframes.
0:37:48 What should people actually do to get started, and how do they know where they are?
0:37:52 So what you need to do is take a look at your capabilities, understand what’s holding you
0:37:56 back, try to figure out what your constraints are.
0:38:02 But the thing that I love about much of this is you can start somewhere, and culture is
0:38:04 such a core, important piece.
0:38:09 We’ve seen across so many industries, culture is truly transformative.
0:38:13 In fact, we measure it in our work, and we can show that culture has a predictive effect
0:38:17 on organizational outcomes and on technology capabilities.
0:38:23 We use a model from a guy called Ron Westrum, who was a social scientist studying safety
0:38:27 outcomes, in fact, in safety-critical industries like healthcare and aviation.
0:38:33 He created a typology where he organizes organizations based on whether they’re pathological, bureaucratic
0:38:34 or generative.
0:38:35 That’s actually a great topology.
0:38:37 I wanted to apply that to people I date.
0:38:38 I know, right?
0:38:39 Too real.
0:38:40 I wanted to apply it to people.
0:38:41 Too real.
0:38:42 There’s a book in there, definitely.
0:38:46 I like how I’m trying to anthropomorphize all these organizational things into people.
0:38:47 But anyway, go on.
0:38:52 Instead of the five love languages, we can have the three relationship types.
0:38:55 Pathological organizations are characterized by low cooperation between different departments
0:38:58 and up and down the organizational hierarchy.
0:39:00 How do we deal with people who bring us bad news?
0:39:03 Do we ignore them, or do we shoot people who bring us bad news?
0:39:04 How do we deal with responsibilities?
0:39:08 Are they defined tightly so that when something goes wrong, we know whose fault it is, so
0:39:09 we can punish them?
0:39:12 Or do we share risks, because we know we’re all in it together, and it’s the team?
0:39:13 You all have to get in the game.
0:39:14 You’re all accountable, right?
0:39:15 Exactly.
0:39:16 We’re all in different departments.
0:39:18 And crucially, how do we deal with failure?
0:39:23 As we discussed earlier, in any complex system, including organizational systems, failure
0:39:24 is inevitable.
0:39:28 So failure should be treated as a learning opportunity, not whose fault was it, but why
0:39:32 did that person not have the information they needed, the tools they needed?
0:39:35 How can we make sure that when someone does something, it doesn’t lead to catastrophic
0:39:39 outcomes, but instead it leads to contained small blast radiuses?
0:39:40 Right.
0:39:41 Not an outage on Black Friday.
0:39:42 Right.
0:39:43 Exactly.
0:39:45 So how do we deal with novelty?
0:39:48 Is novelty crushed, or is it implemented, or does it lead to problems?
0:39:52 One of the pieces of research that kind of confirms what we were talking about was some
0:39:56 research that was done by Google, they were trying to find what makes the greatest Google
0:39:57 team.
0:40:01 You know, is it four Stanford graduates and no developer and fire all the managers?
0:40:04 Is it a data scientist and a Node.js programmer and a manager?
0:40:05 Right.
0:40:08 One product manager paired with one systems engineer?
0:40:14 And what they found was that the number one ingredient was psychological safety.
0:40:17 Does the team feel safe to take risks?
0:40:19 And this ties together failure and novelty.
0:40:25 If people don’t feel that when things go wrong, they’re going to be supported, they’re not
0:40:26 going to take risks.
0:40:29 And then you’re not going to get any novelty, because novelty by definition involves taking
0:40:30 risks.
0:40:34 So we see that one of the biggest things you can do is create teams where it’s safe to
0:40:39 go wrong and make mistakes, and where people will treat that as a learning experience.
0:40:42 This is a principle that applies, again, not just in product development, you know, the
0:40:46 Lean Startup's fail early, fail often, but also in the way we deal with problems at an
0:40:48 operational level as well.
0:40:50 And how we interact with our team when these things happen.
0:40:54 So just to kind of summarize that, you have pathological, this is a power oriented thing
0:40:58 where you know the people are scared, the messenger is going to be shot.
0:41:02 Then you have this bureaucratic kind of rule oriented world where the messengers aren’t
0:41:03 heard.
0:41:07 And then you have the sort of generative, and again, I really wish I could apply this
0:41:11 to people, but we’re talking about organizations here for culture, which is more performance
0:41:12 oriented.
0:41:15 And I just want to add one thing about this, you know, working in the federal government,
0:41:17 you would imagine that to be a very bureaucratic organization.
0:41:18 I would actually.
0:41:22 And actually, what was surprising to me was that yes, there’s lots of rules.
0:41:23 The rules aren’t necessarily bad.
0:41:26 That’s how we can operate at scale is by having rules.
0:41:28 But what I found was there was a lot of people who are mission oriented.
0:41:32 And I think that’s a nice alternative way to think about generative organizations.
0:41:34 You need to think about mission orientation.
0:41:38 The rules are there, but if it’s important to the mission, we’ll break the rules.
0:41:40 And we measure this at the team level, right?
0:41:45 Because you can be in the government and there were pockets that were very generative.
0:41:53 You can be in a startup and you can see startups that act very bureaucratic or very pathological.
0:41:54 Right.
0:41:55 The culture of the CEO.
0:41:59 Where it’s not charismatic, inspirational vision, but to the expense of actually being
0:42:01 heard and the messenger is shot, et cetera.
0:42:05 And we have several companies around the world now that are measuring their culture on a
0:42:09 quarterly cadence, because we show in the book how to measure it.
0:42:12 Westrum's typology was the table itself.
0:42:16 And so we turned that into a scientific, psychometric way to measure it.
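A Likert-style culture survey of this kind can be scored very simply. A hypothetical sketch: the item wordings are paraphrased illustrations rather than the validated instrument, and the ratings are invented:

```python
# Westrum-style culture items, each rated 1 (strongly disagree) to 7
# (strongly agree). Wordings here are paraphrased, not the actual survey.
ITEMS = [
    "On my team, information is actively sought.",
    "On my team, failures are treated as learning opportunities.",
    "On my team, new ideas are welcomed.",
    "On my team, responsibilities are shared.",
]

def respondent_score(ratings):
    """Mean of one person's 1-7 ratings across all items."""
    assert len(ratings) == len(ITEMS)
    assert all(1 <= r <= 7 for r in ratings)
    return sum(ratings) / len(ratings)

# Average over respondents for a team-level score: higher leans generative,
# lower leans pathological (any cut-offs would be illustrative, not canonical).
responses = [[6, 7, 6, 5], [5, 6, 7, 6], [7, 6, 6, 7]]
team_score = sum(respondent_score(r) for r in responses) / len(responses)
```

Scoring at the team level matches the point made below: culture varies by team, so a government team can score generative while a startup team scores pathological.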
0:42:19 Now this makes sense why I’m putting these anthropomorphic analogies because in this
0:42:22 sense organizations are like people.
0:42:23 They’re made of people.
0:42:24 Teams are organic entities.
0:42:28 And I love that you said that the unit of analysis is a team because it means you can
0:42:29 actually do something.
0:42:31 You can start there and then you can see if it actually spreads or doesn't spread,
0:42:34 bridges or doesn't bridge, et cetera.
0:42:38 And what I also love about this framework is it also moves away from this cult of failure
0:42:42 mindset that I think people tend to have where it’s like failing for the sake of failing.
0:42:44 And you actually want to avoid failure.
0:42:45 Right.
0:42:48 And the whole point of failing is to actually learn something and then be better and take
0:42:49 risks.
0:42:50 So you can implement these new things.
0:42:52 And very smart risks.
0:42:53 So what’s your final?
0:42:58 I mean, there’s a lot of really great things here, but like what’s your final sort of parting
0:43:02 take away for listeners or people who might want to get started or think about how they
0:43:03 are doing.
0:43:06 So I think, you know, we’re in a world where technology matters.
0:43:10 Anyone can do this stuff, but you have to get the technology part of it right.
0:43:15 That means investing in your engineering capabilities, in your process, in your culture, in your
0:43:17 architecture.
0:43:20 We dealt with a lot of things here that people think are intangible and we’re here to tell
0:43:21 you they’re not intangible.
0:43:22 You can measure them.
0:43:24 They will impact the performance of your organization.
0:43:29 So take a scientific approach to improving your organization and you will reap the dividends.
0:43:32 When you guys talk about, you know, anyone can do this, the teams can do this, but what
0:43:37 role in the organization is usually most empowered to be the owner of where to get started?
0:43:39 Is it like the VP of engineering?
0:43:41 Is it the CTO, the CIO?
0:43:46 I was going to say, don’t minimize the role of and the importance of leadership.
0:43:53 DevOps sort of started as a grassroots movement, but right now we’re seeing roles like VP and
0:43:58 CTO being really impactful in part because they can set the vision for an organization,
0:44:01 but also in part because they have resources that they can dedicate to this.
0:44:04 We see a lot of CEOs and CTOs and CIOs in our business.
0:44:05 We have like a whole briefing center.
0:44:08 We hear what’s top of mind for them all the time.
0:44:09 Everyone thinks they’re transformational.
0:44:13 So like what actually makes a visionary type of leader who has that, not just the purse
0:44:18 strings and the decision-making power, but the actual characteristics that are right
0:44:19 for this.
0:44:20 Right.
0:44:21 And that’s such a great question.
0:44:24 We dug into that in our research and we find that there are five characteristics that end
0:44:31 up being predictive of driving change and really amplifying all of the other capabilities
0:44:32 that we found.
0:44:38 And these five characteristics are vision, intellectual stimulation, inspirational communication,
0:44:40 supportive leadership, and personal recognition.
0:44:46 And so what we end up recommending to organizations is: absolutely invest in the technology,
0:44:51 but also invest in leadership and in your people, because that can really help drive your transformation
0:44:52 home.
0:44:56 Well, Nicole, Jez, thank you for joining the a16z Podcast.
0:45:02 The book, just out, is Accelerate: Building and Scaling High Performing Technology Organizations.
0:45:03 Thank you so much, you guys.
0:45:04 Thanks for having us.
0:45:04 Thank you.

One of the recurring themes we talk about a lot on the a16z Podcast is how software changes organizations, and vice versa… More broadly: it’s really about how companies of all kinds innovate with the org structures and tools that they have. 

But we’ve come a long way from the question of “does IT matter” to answering the question  of what org structures, processes, architectures, and roles DO matter when it comes to companies — of all sizes  — innovating through software and more. 

So in this episode (a re-run of a popular episode from a couple years ago), two of the authors of the book Accelerate: The Science of Lean Software and DevOps, by Nicole Forsgren, Jez Humble, and Gene Kim, join Sonal Chokshi to share best practices and large-scale findings about high performing companies (including those who may not even think they're tech companies). Nicole was co-founder and CEO of DORA, which was acquired by Google in December 2018; she will soon be joining GitHub as VP of Research & Strategy. Jez was CTO at DORA; is currently in Developer Relations at Google Cloud; and is the co-author of the books The DevOps Handbook, Lean Enterprise, and Continuous Delivery.
