AI transcript
So one of the recurring themes we talk a lot about on this podcast is how software changes
organizations and vice versa.
More broadly, it’s really about how companies of all kinds innovate with the org structures
and tools that they have.
And today’s episode, a rerun of a very popular episode from a couple years ago, draws on
actual research and data from one of the largest studies of software and organizational
performance out there.
Joining me in this conversation are two of the authors of the book Accelerate, the Science
of Lean Software and DevOps by Nicole Forsgren, Jez Humble, and Gene Kim.
We have the first two authors, so Nicole, who did her PhD research trying to answer
the elusive, eternal questions around how to measure software performance in orgs, especially
given past debates around “does IT matter?”
She was the co-founder and CEO of DORA, which put out the annual State of DevOps report.
DORA was acquired by Google Cloud a little over a year ago, and she will soon be joining
GitHub as VP of Research and Strategy.
And then we also have Jez Humble, who was CTO at DORA, is currently in developer relations
at Google Cloud and is also the co-author of the books The DevOps Handbook, Lean Enterprise,
and Continuous Delivery.
In the conversation that follows, Nicole and Jez share their findings about high-performing
companies, even those that may not think they’re tech companies, and answer my questions about
whether there’s an ideal org type for this kind of innovation, whether it’s the size
of the organization, the software architecture they use, their culture or people, and where
the role of software and IT lives within that.
But first, we begin by talking briefly about the history of DevOps and where that fits
in the broader landscape of related software movements.
So I started as a software engineer at IBM.
I did hardware and software performance, and then I took a bit of a detour into academia
because I wanted to understand how to really measure and look at performance that would
be generalizable to several teams in predictable ways and in predictive ways.
And so I was looking at and investigating how to develop and deliver software in ways
that were impactful to individuals, teams, and organizations.
And then I pivoted back into industry because I realized this movement had gained so much
momentum and so much traction, and industry was desperate to really understand what types
of things are really driving performance outcomes and excellence.
And what do you mean by this movement?
This movement that now we call DevOps, so the ability to leverage software to deliver
value to customers, to organizations, to stakeholders.
And I think from a historical point of view, the best way to think about DevOps, it’s
a bunch of people who had to solve this problem of how do we build large distributed systems
that were secure and scalable and be able to change them really rapidly and evolve them.
And no one had had that problem before, certainly at the scale of companies like Amazon and
Google.
And that really is where the DevOps movement came from, trying to solve that problem.
And you can make an analogy to what Agile was about. Since the software crisis
of the 1960s, when people were trying to build defense systems at large scale, we had the invention
of software engineering as a field, and Margaret Hamilton's work at MIT on the Apollo program.
What happened in the decades after that was everything became kind of encased in concrete
in these very complex processes, this is how you develop software.
And Agile was kind of a reaction to that, saying we can develop software much more
quickly with much smaller teams in a much more lightweight way.
So we didn’t call it DevOps back then; it was more on the Agile side.
Can you guys break down the taxonomy for a moment?
Because when I think of DevOps, I think of it in the context of the containerization
of code and virtualization.
I think of it in the context of microservices and being able to do modular teams around
different things.
There’s an organizational element, there’s a software element, there’s an infrastructure
component, like paint the big picture for me of those building blocks and how they all
kind of fit together.
Well, I can give you a very personal story, which was my first job after college was
in 2000 in London, working at a startup where I was one of two technical people in the startup.
And I would deploy to production by FTPing code from my laptop directly into production.
And if I wanted to roll back, I’d say, “Hey, Johnny, can you FTP your copy of this file
to production?”
And that was our rollback process.
And then I went to work in consultancy where we were on these huge teams and deploying
to production, there was a whole team with a Gantt chart which put together the plan
to deploy to production.
And I’m like, this is crazy.
Unfortunately, I was working with a bunch of other people who also thought it was crazy.
And then we came up with these ideas around deployment automation and scripting and stuff
like that.
And suddenly we saw the same ideas had popped up everywhere, basically.
I mean, it’s realising that if you’re working in a large complex organisation, Agile’s going
to hit a brick wall because unlike the things we were building in the ’60s, product development
means that things are changing and evolving all the time.
So it’s not good enough to get to production the first time.
You’ve got to be able to keep getting there on and on.
And that really is where DevOps comes in.
It’s like, well, Agile, we’ve got a way to build and evolve products, but how do we keep
deploying to production and running the systems in production in a stable, reliable way, particularly
in the distributed context?
So if I phrase it another way, sometimes there’s a joke that says day one is short and day
two is long.
What does that mean?
Right.
So day one is when we create all these–
That’s by the way sad that you have to explain the joke to me.
No, it’s–
No, which is great, though, because so day one is when we create all of these systems.
And day two is when we deploy to production.
We have to deploy and maintain forever and ever and ever and ever.
So day two is an infinite day.
Right, exactly.
Yeah.
First successful product.
Hopefully.
We hope that day two is really, really long.
And we’re fond of saying Agile doesn’t scale.
And sometimes I’ll say this, and people shoot laser beams out of their eyes.
But when we think about it, Agile was meant for development.
Just like Jez said, it speeds up development.
But then you have to hand it over and especially infrastructure and IT operations.
What happens when we get there?
So DevOps was sort of born out of this movement.
And it was originally called Agile System Administration.
And so then DevOps sort of came out of development and operations.
And it’s not just DevOps, but if we think about it, that’s sort of the bookends of
this entire process.
Well, it’s actually like day one and day two combined into one phrase.
Day one and day two.
The way I think about this is I remember the stories of Microsoft in the early days and
the waterfall cascading model of development, Leslie Lamport once wrote a piece for me about
why software should be developed like houses because you need a blueprint.
And I’m not a software developer, but it felt like a very kind of old way of looking at
the world of code.
I hate that metaphor.
Tell me why.
If the thing you’re building has well understood characteristics, it makes sense.
So if you’re building a truss bridge, for example, there are well-known, well-understood models
for building truss bridges; you plug the parameters into the model and then you get a truss bridge
and it stays up.
Have you been to Sagrada Familia in Barcelona?
Oh, I love Gaudi.
Okay.
So if you go into the crypt of the Sagrada Familia, you’ll see his workshop and there’s
a picture, in fact, a model that he built of the Sagrada Familia, but upside down with
the weight simulating the stresses.
And so he would build all these prototypes and small prototypes because he was fundamentally
designing a new way of building.
All Gaudi’s designs were hyperbolic curves and parabolic curves and no one had used that
before.
Things that had never been pressure tested.
Right.
Literally.
In that case.
Exactly.
He didn’t want them to fall down.
So he built all these prototypes and did all this stuff.
He built his blueprint as he went by building and trying it out, which is a very rapid prototyping
kind of model.
Absolutely.
So in the situation where the thing you’re building has known characteristics and it’s
been done before, yeah, sure, we can take a very phased approach to it.
And, you know, for designing these kind of protocols that have to work in a distributed
context and you can actually do formal proofs of them, again, that makes sense.
But when we’re building products and services where particularly we don’t know what customers
actually want and what users actually want, it doesn’t make sense to do that because you’ll
build something that no one wants.
You can’t predict.
And we’re particularly bad at that, by the way.
Even companies like Microsoft, where they are very good at understanding what their
customer base looks like, they have a very mature product line.
Ronny Kohavi has done studies there and only about one-third of the well-designed features
deliver value.
That’s actually a really important point.
The mere question of does this work is something that people really clearly don’t pause to
ask, but I do have a question for you guys to push back, which is, is this a little bit
of the cult?
Oh, my God, it’s like so developer-centric, let’s be agile, let’s do it fast, our way,
you know, two pizzas, that’s the ideal size of a software team and, you know, I’m not
trying to mock it.
I’m just saying that isn’t there an element of actual practical realities like technical
debt and accruing a mess underneath all your code and a system that you may be there for
two or three years and you can go after the next startup, but okay, someone else has to
clean up your mess.
Tell me about how this fits into that big picture.
This is what enables all of that.
Oh, right.
Interesting.
So it’s not actually just creating a problem because that’s how I’m kind of hearing it.
No, absolutely.
So you still need development, you still need test, you still need QA, you still need operations,
you still need to deal with technical debt, you still need to deal with re-architecting
really difficult large monolithic code bases.
What this enables you to do is to find the problems, address them quickly, move forward.
I think that the problem that a lot of people have is that we’re so used to couching these
things as trade-offs and as dichotomies, the idea that if you’re going to move fast, you’re
going to break things.
The one thing which I always say is, if you take one thing away from DevOps is this, high-performing
companies don’t make those trade-offs.
They’re not going fast and breaking things.
They’re going fast and making more stable, more high-quality systems, and this is one
of the key results in the book, in our research, is this fact that high-performers do better
at everything because the capabilities that enable high-performance in one field, if done
right, enable it in other fields.
If you’re using version control for software, you should also be using version control for
your production infrastructure.
If there’s a problem in production, we can reproduce the state of the production environment
in a disaster recovery scenario, again in a predictable way that’s repeatable.
I think it’s important to point out that this is something that happened in manufacturing
as well.
Give it to me.
I love when people talk about software as drawn from hardware analogies as my favorite
type of metaphor.
Okay, so Toyota didn’t win by making shitty cars faster, they won by making higher-quality
cars faster and having shorter time to market.
The lean manufacturing method, which by the way also spawned lean startup thinking and
everything else connected to it.
And DevOps pulls very strongly from lean methodologies.
So you guys are probably the only people to have actually done a large-scale study of
organizations adopting DevOps.
What is your research and what did you find?
Sure.
My research really is the largest investigation of DevOps practices around the world.
We have over 23,000 data points, all industries.
Give me like a sampling, like what are the range of industries?
So I’ve got entertainment, I’ve got finance, I have healthcare and pharma, I have technology.
Government.
Government, education.
You basically have every vertical.
And then you said it’s around the world?
So we’re primarily in North America, we’re in EMEA, we have India, we have a small sample
in Africa.
Right.
And we break down the survey methodology questions that people have. In the ethnographic
world, the way we would approach it is that you can never trust what people say they do.
You have to watch what they do.
However, it is absolutely true, and especially in a more scalable sense, that there are really
smart surveys that give you a shit ton of useful data.
Yes.
And part two of the book covers this in almost excruciating detail.
We like knowing methodologies.
Yes.
So it’s nice to share that.
Well, and it’s interesting because Jez talked about in his overview of Agile how it changes
so quickly and we don’t have a really good definition, and what that does is make it difficult
to measure.
Right.
And so what we do is we’ve defined core constructs, core capabilities, so that we can then measure
them.
We go back to core ideas around things like automation, process, measurement, lean principles.
And then I’ll get that pilot set of data and I’ll run preliminary statistics to test for
discriminant validity, convergent validity, composite reliability.
Make sure that it’s not testing what it’s not supposed to test.
It is testing what it is supposed to test.
Everyone is reading it consistently the same way that I think it’s testing.
I even run checks to make sure that I’m not inadvertently inserting bias or collecting
bias just because I’m getting all of my data from surveys.
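One common way to check the internal consistency Nicole describes is Cronbach's alpha, which is closely related to composite reliability. A minimal sketch in Python, assuming survey items scored on a 1 to 5 scale; all the data here is invented for illustration:

```python
from statistics import variance

def cronbach_alpha(responses):
    """Internal-consistency estimate for a set of survey items.

    responses: list of rows, one per respondent; each row holds that
    respondent's score on each item of a single construct.
    """
    k = len(responses[0])                      # number of items
    items = list(zip(*responses))              # one column per item
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in responses])
    return k / (k - 1) * (1 - item_vars / total_var)

# Five respondents answering three items meant to measure one construct
# (hypothetical scores).
scores = [
    [4, 5, 4],
    [3, 3, 3],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]
print(round(cronbach_alpha(scores), 3))  # values above ~0.7 are usually taken as acceptable
```

Convergent and discriminant validity need more machinery (factor loadings, average variance extracted), but this is the flavor of statistical check being run on the pilot data.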
Sounds pretty damn robust.
So tell me then what were the big findings?
That’s a huge question, but give me the hit list.
Well, okay.
So let’s start with one thing that Jez already talked about: speed and stability go together.
This is where he was talking about that being a false dichotomy, and that’s
one of your findings, that you can actually accomplish both.
Yeah.
And it’s worth talking about how we measure those things as well.
So we measure speed or tempo as we call it in the book or sometimes people call it throughput
as well.
Which is a nice full circle manufacturing idea, like the semiconductor circuit throughput.
Yeah, absolutely.
I love hardware analogies for software, I told you.
A lot of it comes from lean.
So lead time, obviously one of the classic lean manufacturing measures we use.
How long does it take?
You look at the lead time from checking into version control to release into production.
So that part of the value stream because that’s more focused on the DevOps end of things.
And it’s highly predictable.
The other one is release frequency.
So how often do you do it?
And then we’ve got two stability metrics and one of them is time to restore.
So in the event that you have some kind of outage or some degradation in performance in
production, how long does it take you to restore service?
For a long time we focused on not letting things break.
And I think one of the changes, paradigm shifts we’ve seen in the industry, particularly
in DevOps, is moving away from that.
We accept that failure is inevitable because we’re building complex systems.
So not how do we prevent failure, but when failure inevitably occurs, how quickly can
we detect and fix it?
MTBF, right?
Mean time between failures.
If you only go down once a year, but you’re down for three days and it’s on Black Friday, that’s terrible.
But if your failures have a very, very small blast radius and you can come
back almost immediately, your customers almost don’t notice.
That’s fine.
The other piece around stability is change fail, right?
When you push a change into production, what percentage of the time do you have to fix
it?
Because something went wrong.
By the way, what does that tell you if you have a change fail?
So in the lean kind of discipline, this is called percent complete and accurate.
And it’s a measure of a quality of your process.
So in a high quality process, when I do something for Nicole, Nicole can use it rather than
sending it back to me and say, “Hey, there’s a problem with this.”
And in this particular case, what percentage of the time when I deploy something to production
is there a problem because I didn’t test it adequately.
My testing environment wasn’t production like enough.
Those are the measures for finding this.
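The four measures described here (lead time, release frequency, time to restore, change fail rate) can be computed from plain deployment and incident records. A minimal sketch, with all names and numbers invented for illustration:

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment log: when each deploy happened, how many hours
# elapsed from commit to production, and whether it needed a fix afterward.
deploys = [
    {"at": datetime(2024, 1, 1), "lead_time_h": 4, "failed": False},
    {"at": datetime(2024, 1, 3), "lead_time_h": 6, "failed": True},
    {"at": datetime(2024, 1, 5), "lead_time_h": 2, "failed": False},
    {"at": datetime(2024, 1, 8), "lead_time_h": 5, "failed": False},
]
restore_hours = [1.5, 0.5]  # hours to restore service for each outage

def four_key_metrics(deploys, restore_hours):
    # Span of the observation window, in days (at least 1 to avoid div-by-zero).
    days = (max(d["at"] for d in deploys) - min(d["at"] for d in deploys)).days or 1
    return {
        "deploys_per_week": len(deploys) * 7 / days,
        "median_lead_time_h": median(d["lead_time_h"] for d in deploys),
        "change_fail_rate": sum(d["failed"] for d in deploys) / len(deploys),
        "median_time_to_restore_h": median(restore_hours),
    }

print(four_key_metrics(deploys, restore_hours))
```

The point of the sketch is that none of these require instrumenting the code itself, only the delivery pipeline and the incident log.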
But the big finding is that you can have speed and stability together through DevOps.
Is that what I’m hearing?
Yes, yes.
High performers get it all.
Low performers kind of suck at all of it.
Medium performers hang out in the middle.
I’m not seeing trade-offs four years in a row.
So anyone who’s thinking, “Oh, I can be more stable if I slow down,” I don’t see it.
It actually breaks a very commonly held kind of urban legend around how people believe
these things operate.
So tell me, are there any other sort of findings like that?
Because that’s very counterintuitive.
Okay, so this one’s kind of fun.
One is that this ability to develop and deliver software with speed and stability drives organizational
performance.
Now, here’s the thing.
I was about to say, that’s a very obvious thing to say.
So it seems obvious, right?
Developing and delivering software with speed and stability drives things like profitability,
productivity, market share.
Okay, except if we go back to Harvard Business Review 2003, there’s a paper titled, “IT Doesn’t
Matter.”
We have decades of research, I want to say at least 30 or 40 years of research, showing
that technology does not drive organizational performance.
It doesn’t drive ROI.
And we are now starting to find other studies and other research that backs this up.
Erik Brynjolfsson out of MIT, James Bessen out of Boston University, 2017.
Did you say James Bessen?
Yeah.
Oh, I used to edit him, too.
Yeah, it’s fantastic.
Here’s why it’s different.
Because before, right in like the 80s and the 90s, we did this thing where like, you’d
buy the tech and you’d plug it in and you’d walk away.
It was on-prem sales model where you like deliver and leave as opposed to like software
as a service and the other ways that things happen.
And people would complain if you tried to upgrade it too often.
Oh, right.
The key is that everyone else can also buy the thing and plug it in and walk away.
How is that driving value or differentiation for a company?
If I just buy a laptop to help me do something faster, everyone else can buy a laptop to do
the same thing faster.
That doesn’t help me deliver value to my customers or to the market.
It’s a point of parity, not a point of distinction.
Right.
And you’re saying that point of distinction comes from how you tie together that technology
process and culture through DevOps.
Right.
And that it can provide a competitive advantage to your business.
If you’re buying something that everyone else also has access to, then it’s no longer a
differentiator.
But if you have an in-house capability and those people are finding ways to drive your
business, I mean, this is the classic Amazon model.
They’re running hundreds of experiments in production at any one time to improve the
product.
And that’s not something that anyone else can copy, that’s why Amazon keeps winning.
So what people are doing is copying the capability instead.
And that’s what we’re talking about.
How do you build that capability?
The most fascinating thing to me about all this is honestly not the technology per se,
but the organizational change part of it and the organizations themselves.
So of all the people you studied, is there an ideal organizational makeup that is ideal
for DevOps?
Or is it one of these magical formulas that has this ability to turn a big company into
a startup and a small company into, because that’s actually the real question.
From what I’ve seen, there might be two ideals.
The nice, happy answer is the ideal organization is the one that wants to change.
That’s, I mean, given this huge n equals 23,000 dataset, is it not tied to a particular profile
of a size of company?
They’re both shaking their head just for the listeners.
I see high performers among large companies.
I see high performers in small companies.
I see low performers in small companies.
I see low performers in highly regulated companies.
I see low performers in not regulated companies.
So tell me the answer you’re not supposed to say.
So that answer is it tends to be companies that are like, oh shit, and there are two profiles.
Number one, they’re like way behind, and oh shit, and they have some kind of funds.
Or they are like this lovely, wonderful bastion of like they’re these really innovative, high-performing
companies, but they still realize they’re a handful of like two or three companies ahead
of them, and they don’t want to be number two.
They are going to be number one.
So those are sort of the ideal.
I mean, just like anthropomorphize it a little bit.
It’s like the 35 to 40 year old who suddenly discovers they might be pre-diabetic, so they’d
better do something about it now before it’s too late.
But it’s not too late, because you’re not so old that you’re at the end of the runway
for change.
And then there’s this person who’s sort of kind of already like in the game running in
the race and they might be two or three, but they want to be like number one.
And I think to extend your metaphor, the companies that do well are the companies that never got
diabetic in the first place because they always just ate healthily.
They were already glucose monitoring.
They had continuous glucose monitors on, which is like DevOps actually.
They were always athletes.
Right.
You know, diets are terrible because at some point you have to stop the diet.
And it has to start and start and stop as opposed to a way of life is what you’re saying.
Right, exactly.
So if you just always eat healthily and never eat too much or very rarely eat too much and
do a bit of exercise every day, you never get to the stage like, oh my God, now I can
only eat tofu.
So my loving, professorial, nurturing side also has one more profile that I love,
and I worry about them like a mother hen.
And it’s the companies that I talk to and they come to me and they’re struggling and
I haven’t decided if they want to change, but they’re like, so we need to do this transformation
and we’re going to do the transformation.
And it’s either because they want to or because they’ve been told that they need to.
And then they will insert this thing where they say, but I’m not a technology company.
I’m like, but we just had this 20 minute conversation about how you’re leveraging technology to drive
value to customers or to drive this massive process that you do.
And then they say, but I’m not a technology company.
I could almost see why they had that in their head because they were a natural resources
company.
But there was another one where they were a finance company.
I mean, an extension of software eats the world is really every company is a technology
company.
It’s fascinating to me that that third type exists, but it is a sign of this legacy world
moving into and I worry about them also, at least for me personally, you know, I lived
through this like mass extinction of several firms and I don’t want it to happen again.
And I worry about so many companies that keep insisting they’re not technology companies.
And I’m like, oh, honey child, you’re a tech company.
You know, one of the gaps in our data is actually China.
And I think big China is a really interesting example because they didn’t go through the
whole, you know, IT doesn’t matter phase.
They’re jumping straight from no technology to Alibaba and Tencent, right?
I think US companies should be scared, because Tencent and Alibaba are already
moving into other developing markets and they’re going to be incredibly competitive because
it’s just built into their DNA.
So the other fascinating thing to me is that you essentially were able to measure performance
of software and clearly productivity.
Is there any more insights on the productivity side?
Yes.
Yes.
I want to go.
This is his favorite rant.
Jumping around and like waving his hand.
So the reason the manufacturing metaphor breaks down is because in manufacturing you
have inventory.
Yes.
We do not have inventory in the same way in software.
In a factory, like the first thing your lean consultant is going to do, walking into the
factory is point to the piles of things everywhere.
But I think if you walk into an office where there’s developers, where’s the inventory?
By the way, that’s what makes talking about this to executives so difficult.
They can’t see the process.
Well, it’s a hard question to answer because is the inventory the code that’s being written?
And people actually have done that and said, “Well, listen, lines of code are an accounting
measure and we’re going to capture that as, you know, capital.”
That’s insane.
It’s like an invitation to write crappy, unnecessarily long code.
That’s exactly what happens.
It’s like in the olden days, getting paid for a book by how long it is, and it’s
actually really boring when you could have written it in one third of the length.
Let’s write it in German.
Right, you know.
I’m thinking of Charles Dickens.
In general, you know, you prefer people to write short programs because they’re easier
to maintain and so forth.
But lines of code have all these drawbacks.
We can’t use them as a measure of productivity.
So if you can’t measure lines of code, what can you measure?
Because I really want an answer like, how do you measure productivity?
So velocity is the other classic example.
Agile, there’s this concept of velocity, which is the number of story points a team manages
to complete in an iteration.
So before the start of an iteration in many agile, particularly scrum-based processes,
you’ve got all this work to do.
You’re like, “We need to build these five features.
How long will this feature take?”
And the developers fight over it and they’re like, “Oh, it’s five points.”
And then this one’s going to take three points, this one’s going to take two points.
And so you have a list of all these features and you don’t get through all of them.
At the end of the iteration, the customer signs off, “Well, I’m accepting this one.
This one’s fine.
This one’s fine.
This one’s a hot mess.
Go back and do it again.”
Whatever.
The number of points you complete in the iteration is the velocity.
So it’s like the speed at which you’re able to deliver those features.
So a lot of people treat it like that, but actually, that’s not really what it’s about.
It’s a relative measure of effort and it’s for capacity planning purposes.
So basically, for the next iteration, we’ll only commit to completing the same velocity
that we finished last time.
So it’s relative and it’s team dependent.
And so what a lot of people do is start comparing velocities across teams.
Then what happens is, a lot of work, you need to collaborate between teams.
But hey, if I’m going to help you with your story, that means I’m not going to get my
story points and you’re going to get your story points, right, people can game it as
well.
You should never use story points as a productivity measure.
So lines of code doesn’t work, velocity doesn’t work, what works?
So this is why we like it: two things in particular. One, it’s a global measure.
And secondly, it’s not just one thing; it mixes two things together, which might
normally be in tension.
And so this is why we went for our measure of performance.
So measuring lead time, release frequency, and then time to restore and change fail rate.
Lead time is really interesting because lead time is on the way to production, right?
So all the teams have to collaborate.
It’s not something where I can go really fast in my velocity, but nothing ever gets delivered
to the customer.
It doesn’t count in lead time.
So it’s a global measure.
It takes care of that problem of the incentive alignment around the competitive dynamic.
Also, it’s an outcome.
It’s not an output.
There’s a guy called Jeff Patton.
He’s a really smart thinker in the kind of lean and agile space.
He says, minimize output, maximize outcomes, which I think is simple but brilliant.
It’s so simple because it just shifts the focus to impact.
And even we don’t get all the way there because we’re not yet measuring, did the features
deliver the expected value to the organization or the customers?
Well, we do get there because we focus on speed and stability, which then deliver the
outcome to the organization, profitability, productivity, market share.
But the second half of this, which I am also hearing is, did it meet your expectations?
Did it perform to the level that you wanted it to?
Did it match what you asked for?
Or even if it wasn’t something you specified that you desired or needed, that seems like
a slightly open question.
So we did actually measure that.
We looked at non-profit organizations and these were exactly the questions we measured.
We asked people, did the software meet, I can’t remember what the exact questions were.
Effectiveness, efficiency, customer satisfaction, delivery, mission goals.
How fascinating that you do it non-profits because that is a larger move in the non-profit
measurement space to try to measure impact.
But we captured it everywhere because even profit seeking firms still have these goals.
In fact, as we know from research, companies that don’t have a mission other than making
money do less well than the ones that do.
I think, again, what the data shows is that companies that do well on the performance measures
we talked about outperform their low performing peers by a factor of two.
A hypothesis is that what we’re doing when we create these high performing organizations in terms
of speed and stability is we’re creating feedback loops.
What it allows us to do is build a thin slice, a prototype of a feature, get feedback through
some UX mechanism, whether that’s showing people the prototype and getting their feedback,
whether it’s running A/B tests or multivariate tests in production.
It’s what creates these feedback loops that allow you to shift direction very fast.
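The A/B-test feedback loop mentioned above boils down to comparing conversion between a control and a variant. A minimal sketch using a standard two-proportion z-test; all the numbers are invented for illustration:

```python
from math import sqrt

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control converts 200/1000 visitors, variant converts 260/1000.
z = two_proportion_z(200, 1000, 260, 1000)
print(z > 1.96)  # significant at the 5% level (two-sided ~1.96 threshold)
```

A real experimentation platform layers on sequential testing, guardrail metrics, and multiple-comparison corrections, but the feedback-loop idea is this simple at its core.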
I mean, that is the heart of Lean Startup.
It’s the heart of anything you’re putting out into the world is you have to kind of
bring it full circle.
It is a secret of success to Amazon, as you cited earlier.
I would distill it to just that.
I think I heard Jeff Bezos say the best line.
It was at the Internet Association dinner in DC last year where he was asked
about innovation.
He’s like, to him, an innovation is something that people actually use.
And that’s what I love about the feedback loop thing, is it actually reinforces that
mindset of that’s what innovation is.
Right.
So to sum up, the way you can frame this is DevOps is that technological capability
that underpins your ability to practice Lean Startup and all these very rapid iterative
processes.
So I have a couple of questions then.
So one is going back to this original taxonomy question, and you guys described that there
isn’t necessarily an ideal organizational type.
Which by the way, should be encouraging.
I agree.
It’s super encouraging and, more importantly, democratizing that anybody can become a high
performer.
We were doing this in the federal government.
I love that.
But one of my questions is this: when we had Adrian Cockcroft on this podcast a couple of years
ago talking about microservices, the thing that I thought was so liberating about the Netflix
story he was describing was that it was a way for teams to essentially become little mini
product-management units and self-organize, because the infrastructure was broken down into
these micro pieces versus, say, a monolithic, uniform architecture.
I would think that an organization that has containerized its code in that way, that has
this microservices architecture, would be more suited to DevOps.
Or is that a wrong belief?
I’m just trying to understand again that taxonomy thing of how these pieces all fit together.
So we actually studied this; there’s a whole section on architecture in the book where we
looked at exactly this question.
Architecture has been studied for a long time, and people talk about architectural characteristics.
There’s the ATAM, the Architecture Tradeoff Analysis Method that the SEI developed.
There are some additional things we have to care about: testability and deployability.
Can my team test its stuff without having to rely on this very complex integrated environment?
Can my team deploy its code to production without these very complex orchestrated deployments?
Basically, can we do things without dependencies?
That is one of the biggest predictors of IT performance in our cohort: the ability of
teams to get stuff done on their own, without dependencies on other teams, whether that’s
testing or deploying or planning.
Even just communicating.
Can you get things done without having to do mass communication and check for permission?
A question I love, love, love asking on this podcast: we always revisit the 1937 Coase
paper on the theory of the firm, and its idea that firms exist because they make transaction
costs more efficient.
This is like the ultimate model for reducing friction and those transaction costs: communication,
coordination costs, all of it.
That’s what all the technical and process stuff is about.
I mean, Don Robinson once came to one of my talks on continuous delivery.
At the end, he said, “So, continuous delivery, that’s just about reducing transaction costs,
right?”
And I’m like…
An economist view of DevOps.
I love it.
You’re right.
You’ve reduced my entire body of work to one sentence.
It’s so much Conway’s Law, right?
Remind me what Conway’s Law is.
Organizations which design systems are constrained to produce designs which are copies of the
communication structures of these organizations.
Oh, right.
It’s that idea basically that your software code looks like the shape of the organization
itself.
Right.
And how we communicate, right?
So, which, you know, Jez just summarized, if you have to be communicating and coordinating
with all of these other different groups…
Command and control looks like waterfall, a more decentralized model looks like independent
teams.
Right.
So, the data shows that.
A lot of people jump on the microservices and containerization bandwagon.
There’s one thing that is very important to bear in mind: implementing those technologies
does not give you the outcomes we talked about.
We actually looked at people doing mainframe stuff.
You can achieve these results with mainframes.
Equally, you can use, you know, Kubernetes and Docker and microservices and not achieve
these outcomes.
We see no statistical correlation with performance, whether you’re on a mainframe or a greenfield
or a brownfield system; that is, whether you’re building something brand new or working on an
existing codebase.
And one thing I wanted to bring up that we didn’t before is I said, you know, day one
is short, day two is long.
And I talked about things that live on the internet and live on the web.
This is still a really, really smart approach for packaged software.
And I know people who are working in and running packaged-software companies that use this
methodology, because it allows them to still work in small, fast batches.
And all they do is push to a small pre-production package, and then when it’s time to push
that code onto some media, they do that.
Okay.
So what I love hearing about this is that it’s actually not necessarily tied again to the
architecture or the type of company you are.
There’s this opportunity for everybody, but there is this mindset of like an organization
that is ready.
It’s like a readiness level for a company.
Oh, I hear that all the time.
I don’t know if I’d say there’s any such thing as readiness, right?
Like there’s always an opportunity to get better.
There’s always an opportunity to transform.
The other thing that really drives me crazy and makes my head explode is this whole maturity
model thing.
Okay.
Are you ready to start transforming?
Well, like you can just not transform and then maybe fail, right?
Maturity models, they’re really popular in industry right now, but I really can’t stress
enough that they’re not really an appropriate way to think about a technology transformation.
I was thinking of readiness in the context of like NASA technology readiness levels or
TRLs, which is something we use to think about a lot for very early stage things, but you’re
describing maturity of an organization and it sounds like there’s some kind of a framework
for assessing the maturity of an organization and you’re saying that doesn’t work, but first
of all, what is that framework and why doesn’t it work?
Well, so many people think they want something that takes a snapshot of their DevOps or
technology transformation and spits back a number, right?
And then you have one number to compare yourself against everything.
The challenge, though, is that a maturity model is usually leveraged to help you think about
arriving somewhere, and here’s the problem.
Once you’ve arrived, what happens?
Oh, we’re done.
You’re done.
And then the resources are gone.
And by resources, I don’t just mean money, I mean time, I mean attention.
We see year over year over year, the best, most innovative companies continue to push.
So what happens when you’ve arrived, I’m using my finger quotes.
You stop pushing.
You stop pushing.
What happens when executives or leaders or whomever decide that you no longer need resources
of any type?
I have to push back again, though: doesn’t this help? Because it is helpful to give executives
in particular, particularly those that are not tech-native and didn’t come up through
the engineering organization, some kind of metric to wrap your head around where we are,
where we’re at.
So you can use a capability model.
You can think about the capabilities that are necessary to drive your ability to develop
and deliver software with speed and stability.
Another limitation is that they’re often kind of a lockstep or a linear formula, right?
No, right.
It’s like a stepwise A, B, C, D, E, one, two, three, four.
And in fact, the very nature of anything iterative is that it’s very nonlinear and circular.
Feedback loops are circular.
Right.
And maturity models just don’t allow that.
No, another thing that’s really, really nice is that capability models allow us to think
about capabilities in terms of these outcomes.
Capabilities drive impact.
Maturity models are just this thing where you have this level one, level two, level
three, level four.
It’s a bit performative.
And then finally, maturity models just sort of take this snapshot of the world and describe
it.
How fast is technology and business changing?
If we create a maturity model now and wait, let’s say, four years, that maturity model
is old and dead and dusty and gone.
Do new technologies change the way you think about this?
Because I’ve been thinking a lot about how product management for certain types of technologies
changes with the technology itself and that machine learning and deep learning might be
a different beast.
And I’m just wondering if you guys have any thoughts on that.
Yeah.
I mean, me and Dave Farley wrote the continuous delivery book back in 2010.
And since then, you know, there’s Docker and Kubernetes and large-scale adoption of the
cloud and all these things that you had no idea would happen.
People sometimes ask me, you know, isn’t it time you wrote a new edition of the book?
I mean, yeah, we would probably rewrite it.
Does it change any of the fundamental principles?
No.
Do these new tools allow you to achieve those principles in new ways?
Yes.
So, I think, you know, this is how I always come back to any problem is go back to first
principles.
Yeah.
And the first principles, I mean, they will change over the course of centuries.
I mean, we’ve got modern management versus kind of scientific management, but they don’t
change over the course of like a couple of years.
The principles are still the same.
These give you new ways to do them, and that’s what’s interesting about them.
Equally, things can go backwards.
A great example of this is one of the capabilities we talk about in the book is working off a
shared trunk or master inversion control, not going on these long-lived feature branches.
And the reason for that is actually because of feedback loops.
You know, developers love going off into a corner, putting headphones on their head and
just coding something for like days, and then they try and integrate it into trunk, you
know, and that’s a total nightmare.
And not just for them; more critically, for everyone else who then has to merge their
code with whatever they’re working on.
So that’s hugely painful.
Git is one of these examples of a tool that makes it very easy for people to say, “Oh, I
can use feature branches.”
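The merge pain Jez describes can be sketched with a toy simulation. This is not from the book or the research; the repository size, edit rates, and developer count below are made-up assumptions, purely to illustrate why longer-lived branches make conflicting edits more likely.

```python
import random

random.seed(0)  # reproducible runs

FILES = 200        # files in the repository (assumed)
EDITS_PER_DAY = 5  # files each developer touches per day (assumed)
DEVS = 4           # developers branching off the same trunk (assumed)

def conflict_probability(days_between_merges, trials=5_000):
    """Estimate how often at least two developers have edited the
    same file by the time anyone merges back to trunk."""
    conflicts = 0
    for _ in range(trials):
        touched = set()  # files already edited by earlier devs
        clash = False
        for _ in range(DEVS):
            mine = {random.randrange(FILES)
                    for _ in range(EDITS_PER_DAY * days_between_merges)}
            if mine & touched:  # overlap means a likely merge conflict
                clash = True
            touched |= mine
        conflicts += clash
    return conflicts / trials

# Longer-lived branches make overlapping edits ever more likely.
for days in (1, 5, 20):
    print(f"merge every {days:2d} days: "
          f"conflict chance ~{conflict_probability(days):.0%}")
```

The exact numbers depend entirely on the assumed constants; the point is only the direction of the curve, which is the feedback-loop argument for merging to trunk frequently.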
So I think, again, it’s non-linear in the way that you describe.
Right.
It gives you new ways to do things; are they good or bad?
It depends.
But the thing that strikes me about what you guys have been talking about as a theme in
this podcast that seems to lend itself, well, to the world of machine learning and deep
learning where that technology might be different, is it sort of lends itself to a probabilistic
way of thinking and that things are not necessarily always complete, and that there is not a beginning
and an end, and that you can actually live very comfortably in an environment where things
are by nature complex, and that complexity is not necessarily something to avoid.
So in that sense, I do think there might be something kind of neat about ML and deep learning
and AI for that matter, because it is very much lending itself to that sort of mindset.
Yeah.
And in our research, we talk about working in small batches.
There’s a great video by Bret Victor called Inventing on Principle, where he talks about
how important it is to the creative process to be able to see what you’re doing, and
he has this great demo of a game he’s building where he can change the code and the game
changes its behavior instantly.
Normally, when you’re doing things like that, you don’t get to see that.
No, and the whole thing with machine learning is how can we get the shortest possible feedback
from changing the input parameters to seeing the effect so that the machine can learn,
and that the moment you have very long feedback loops, the ML becomes much, much harder because
you don’t know which of the input changes caused the change in output that the machine
is supposed to be learning from.
So the same thing is true of organizational change and process, and of product development
as well, by the way: working in small batches so that you can actually reason about
cause and effect.
I changed this thing; it had this effect.
Again, that requires short feedback loops.
That requires small batches.
That’s one of the key capabilities we talk about in the book, and that’s what DevOps enables.
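The cause-and-effect point can be illustrated with a tiny sketch. The three-knob `system` function and its coefficients below are entirely hypothetical, not anything from the book: change one input per small batch and each observed effect is attributable; change everything at once and it is confounded.

```python
# Hypothetical system whose output depends on three knobs; the
# coefficients are made up purely to illustrate attribution.
def system(params):
    a, b, c = params
    return 3 * a - 2 * b + c

baseline = (1.0, 1.0, 1.0)
base_out = system(baseline)

# Small batches: change one knob per experiment, so each observed
# effect is unambiguously attributable to that knob.
for i in range(3):
    trial = list(baseline)
    trial[i] += 0.1
    print(f"knob {i} alone: effect {system(tuple(trial)) - base_out:+.2f}")

# Big batch: change every knob at once. The combined effect gives
# no way to tell which change helped and which change hurt.
all_at_once = tuple(p + 0.1 for p in baseline)
print(f"all knobs at once: effect {system(all_at_once) - base_out:+.2f}")
```

The same logic applies whether the "knobs" are ML hyperparameters, process changes, or product features: short feedback loops on small batches are what make the learning possible.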
So we’ve been this hallway style conversation around all these themes of DevOps, measuring
it, why it matters, and what it means for organizations.
But practically speaking, if a company, and you guys are basically arguing it’s any company,
not necessarily a company that thinks of itself as a tech company, and not necessarily a company
that has this amazing modern infrastructure stack; it could be a company that’s still
working off mainframes.
What should people actually do to get started, and how do they know where they are?
So what you need to do is take a look at your capabilities, understand what’s holding you
back, try to figure out what your constraints are.
But the thing that I love about much of this is you can start somewhere, and culture is
such a core, important piece.
We’ve seen across so many industries, culture is truly transformative.
In fact, we measure it in our work, and we can show that culture has a predictive effect
on organizational outcomes and on technology capabilities.
We use a model from a guy called Ron Westrum, who was a social scientist studying safety
outcomes, in fact, in safety-critical industries like healthcare and aviation.
He created a typology that organizes organizations based on whether they’re pathological, bureaucratic,
or generative.
That’s actually a great typology.
I wanted to apply that to people I date.
I know, right?
Too real.
I wanted to apply it to people.
Too real.
There’s a book in there, definitely.
I like how I’m trying to anthropomorphize all these organizational things into people.
But anyway, go on.
Instead of the five love languages, we can have the three relationship types.
Pathological organizations are characterized by a low cooperation between different departments
and up and down the organizational hierarchy.
How do we deal with people who bring us bad news?
Do we ignore them, or do we shoot people who bring us bad news?
How do we deal with responsibilities?
Are they defined tightly so that when something goes wrong, we know whose fault it is, so
we can punish them?
Or do we share risks, because we know we’re all in it together, and it’s the team?
You all have to get in the game.
You’re all accountable, right?
Exactly.
Across all the different departments.
And crucially, how do we deal with failure?
As we discussed earlier, in any complex system, including organizational systems, failure
is inevitable.
So failure should be treated as a learning opportunity, not whose fault was it, but why
did that person not have the information they needed, the tools they needed?
How can we make sure that when someone does something, it doesn’t lead to catastrophic
outcomes, but instead to a contained, small blast radius?
Right.
Not an outage on Black Friday.
Right.
Exactly.
So how do we deal with novelty?
Is novelty crushed, or is it implemented, or does it lead to problems?
One of the pieces of research that kind of confirms what we were talking about was some
research done by Google: they were trying to find what makes the greatest Google
team.
You know, is it four Stanford graduates and no developer and fire all the managers?
Is it a data scientist and a Node.js programmer and a manager?
Right.
One product manager paired with one system engineer, with one.
And what they found was that the number one ingredient was psychological safety.
Does the team feel safe to take risks?
And this ties together failure and novelty.
If people don’t feel that when things go wrong, they’re going to be supported, they’re not
going to take risks.
And then you’re not going to get any novelty, because novelty by definition involves taking
risks.
So we see that one of the biggest things you can do is create teams where it’s safe to
go wrong and make mistakes, and where people will treat that as a learning experience.
This is a principle that applies, again, not just in product development, you know, the
lean start up, fail early, fail often, but also in the way we deal with problems at an
operational level as well.
And how we interact with our team when these things happen.
So just to kind of summarize that, you have pathological, this is a power oriented thing
where you know the people are scared, the messenger is going to be shot.
Then you have this bureaucratic kind of rule oriented world where the messengers aren’t
heard.
And then you have the sort of generative, and again, I really wish I could apply this
to people, but we’re talking about organizations here for culture, which is more performance
oriented.
And I just want to add one thing about this, you know, working in the federal government,
you would imagine that to be a very bureaucratic organization.
I would actually.
And actually, what was surprising to me was that yes, there’s lots of rules.
The rules aren’t necessarily bad.
That’s how we can operate at scale is by having rules.
But what I found was there was a lot of people who are mission oriented.
And I think that’s a nice alternative way to think about generative organizations.
You need to think about mission orientation.
The rules are there, but if it’s important to the mission, we’ll break the rules.
And we measure this at the team level, right?
Because you can be in the government and there were pockets that were very generative.
You can be in a startup and you can see startups that act very bureaucratic or very pathological.
Right.
The culture of the CEO.
Where it’s all charismatic, inspirational vision, but at the expense of people actually being
heard, and the messenger is shot, et cetera.
And we have several companies around the world now that are measuring their culture on a
quarterly cadence and basis because we show in the book how to measure it.
Westrum’s typology was the table itself.
And so we turned that into a scientific, psychometric way to measure it.
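As a rough illustration of what a psychometric measure looks like in practice, a Likert-style score can be averaged from survey responses. The item wordings below are paraphrases in the spirit of Westrum’s typology, written for this sketch; they are not the actual survey instrument from the research.

```python
# Illustrative Likert items in the spirit of Westrum's typology;
# the wording is paraphrased for this sketch, not the real survey.
ITEMS = [
    "On my team, information is actively sought.",
    "On my team, messengers are not punished when they deliver bad news.",
    "On my team, responsibilities are shared.",
    "On my team, cross-functional collaboration is encouraged.",
    "On my team, failures are treated primarily as learning opportunities.",
    "On my team, new ideas are welcomed.",
]

def culture_score(responses):
    """Average of 1-7 agreement ratings, one per item; higher
    averages lean generative, lower ones lean pathological."""
    if len(responses) != len(ITEMS):
        raise ValueError("expected one response per item")
    if not all(1 <= r <= 7 for r in responses):
        raise ValueError("responses must be on a 1-7 Likert scale")
    return sum(responses) / len(responses)

print(round(culture_score([6, 5, 6, 7, 5, 6]), 2))  # → 5.83
```

Measuring at the team level and on a regular cadence, as described above, is what turns the typology’s table into a number a team can track quarter over quarter.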
Now it makes sense why I keep making these anthropomorphic analogies, because in this
sense organizations are like people.
They’re made of people.
Teams are organic entities.
And I love that you said that the unit of analysis is a team, because it means you can
actually do something.
You can start there, and then you can see if it actually spreads or doesn’t spread,
bridges or doesn’t bridge, et cetera.
And what I also love about this framework is it also moves away from this cult of failure
mindset that I think people tend to have where it’s like failing for the sake of failing.
And you actually want to avoid failure.
Right.
And the whole point of failing is to actually learn something and then be better and take
risks.
So you can implement these new things.
And very smart risks.
So, what’s your final... I mean, there are a lot of really great things here, but what’s
your final sort of parting takeaway for listeners or people who might want to get started
or think about how they’re doing?
So I think, you know, we’re in a world where technology matters.
Anyone can do this stuff, but you have to get the technology part of it right.
That means investing in your engineering capabilities, in your process, in your culture, in your
architecture.
We dealt with a lot of things here that people think are intangible and we’re here to tell
you they’re not intangible.
You can measure them.
They will impact the performance of your organization.
So take a scientific approach to improving your organization, and you will reap the dividends.
When you guys talk about, you know, anyone can do this, the teams can do this, but what
role in the organization is usually most empowered to be the owner of where to get started?
Is it like the VP of engineering?
Is it the CTO, the CIO?
I was going to say, don’t minimize the role and the importance of leadership.
DevOps sort of started as a grassroots movement, but right now we’re seeing roles like VP and
CTO being really impactful in part because they can set the vision for an organization,
but also in part because they have resources that they can dedicate to this.
We see a lot of CEOs and CTOs and CIOs in our business.
We have like a whole briefing center.
We hear what’s top of mind for them all the time.
Everyone thinks they’re transformational.
So what actually makes a visionary type of leader, one who has not just the purse
strings and the decision-making power, but the actual characteristics that are right
for this?
Right.
And that’s such a great question.
We dug into that in our research and we find that there are five characteristics that end
up being predictive of driving change and really amplifying all of the other capabilities
that we found.
And these five characteristics are vision, intellectual stimulation, inspirational communication,
supportive leadership, and personal recognition.
And so what we end up recommending to organizations is: absolutely invest in the technology,
but also invest in leadership and in your people, because that can really help drive your
transformation home.
Well, Nicole, Jez, thank you for joining the a16z Podcast.
The book, just out, is Accelerate: Building and Scaling High-Performing Technology Organizations.
Thank you so much, you guys.
Thanks for having us.
Thank you.
One of the recurring themes we talk about a lot on the a16z Podcast is how software changes organizations, and vice versa… More broadly: it’s really about how companies of all kinds innovate with the org structures and tools that they have.
But we’ve come a long way from the question of “does IT matter” to answering the question of what org structures, processes, architectures, and roles DO matter when it comes to companies — of all sizes — innovating through software and more.
So in this episode (a re-run of a popular episode from a couple years ago), two of the authors of the book Accelerate: The Science of Lean Software and DevOps, by Nicole Forsgren, Jez Humble, and Gene Kim, join Sonal Chokshi to share best practices and large-scale findings about high-performing companies (including those who may not even think they’re tech companies). Nicole was co-founder and CEO of DORA, which was acquired by Google in December 2018; she will soon be joining GitHub as VP of Research & Strategy. Jez was CTO at DORA; is currently in Developer Relations at Google Cloud; and is the co-author of the books The DevOps Handbook, Lean Enterprise, and Continuous Delivery.