A practical short-cut to understanding Lean Thinking
I don’t know about you, but personally, I find the world pretty confusing. People go out of their way to make simple things complicated; easy things hard. They’d rather say “utilize” than “use.” Maybe that’s Superior Utilizification of English. I don’t know. I’m a bear of very little brain.
Why I didn’t go into psychology
Once they’ve made everything complicated and hard, people then rush through the day as fast as they can, so that they don’t have time to understand any of that complicated stuff on a deep level. All they can do is brush lightly against the surfaces of things as they hurry past.
<sigh/> People are the main reason I didn’t go into psychology.
A Thing people wish were complicated
One of the things people are trying to complicate these days is Lean Thinking. It’s getting to the point you need a PhD in statistics just to follow what Lean practitioners are saying. They’ll post a graphic on social media that looks like a colorized Rorschach inkblot and ask, “What does this trend tell us about throughput?” (I’m looking at you, Troy.)
A short tale
When I was a freshman in high school, we were asked to write a report in biology class about the human digestive system. Being rather less mature and serious than I am today, I wrote my paper in first person from the perspective of a morsel of food. I traced its journey from the time it entered the body to the time it <ahem>exited</ahem>. It made a splash. (The paper, I mean.)
A feature’s journey
It occurs to me this could be a way to get a handle on Lean Thinking.
What? No, not eating.
I meant tracing the journey of a software feature through the delivery pipeline, from the point of view of the feature itself rather than that of the people doing the work. It’s a way to take the focus off utilization and busy-ness, and visualize certain key Lean concepts like flow, lead time, cycle efficiency, muda, and mura…without all that bothersome math.
Do the math if you want to
A funny thing I’ve noticed about Reality is that it happens anyway, even if we didn’t build a mathematical model of it first. And it generally happens the way it wants to, over and over again. So, if we want to understand Reality on a practical level, we can just look at it. We don’t have to be able to describe it mathematically. But don’t worry: If you like math, there’s plenty of it around. Just not here.
What’s a feature?
We often think about the functionality of the software we’re building in a hierarchical way, with big ideas at the top (sometimes called epics or themes), smaller ideas decomposed from there (sometimes called features), and the smallest building blocks decomposed from there (sometimes called stories or backlog items).
Generally, something like a feature represents a logically cohesive set of functionality that could conceivably be meaningful to an end user of the software. So, let’s trace the journey of a feature, rather than an epic or a story, through the delivery pipeline.
What we’re looking for
As we (the feature) progress through the process, we want to capture two pieces of information:
- The time during which someone is moving us forward (toward delivery). We’ll call that value add time (VA), for lack of a better name.
- The time during which we are waiting for someone to move us forward. We’ll call that non value add time (NVA), so it has a different name than VA.
At the end, we’ll just look and see which number is bigger.
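If you’d rather see the bookkeeping than do it in your head, here’s a minimal sketch in code. Everything in it (the FeatureClock name, the sample numbers) is made up for illustration; it just tracks the two buckets described above.

```python
# A made-up sketch of the bookkeeping we'll do on this journey.

class FeatureClock:
    """Accumulates value-add (VA) and non-value-add (NVA) hours."""

    def __init__(self):
        self.va = 0.0   # hours someone spent moving us forward
        self.nva = 0.0  # hours we spent waiting

    def add_value(self, hours):
        self.va += hours

    def wait(self, hours):
        self.nva += hours

    def which_is_bigger(self):
        return "VA" if self.va > self.nva else "NVA"

clock = FeatureClock()
clock.add_value(1.0)  # an hour of refinement
clock.wait(2.0)       # phone call, drop-in visitor, meeting
print(clock.which_is_bigger())  # prints "NVA"
```

That’s the whole model: two buckets and a comparison at the end.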
The journey begins
We’re an idea on a backlog! That’s good news, because it means we exist! (I’m working on an unvalidated assumption it’s better to exist than not to exist. If you disagree, please play along for discussion purposes.)
Our times at this point are:
- VA: 0 hours
- NVA: 0 hours
We are pulled!
Someone pulled us off the backlog. Now we’re moving! The person is examining us closely and refining the description of our functionality so that others will be able to carry the work forward.
Our clock starts when we move from a ready queue into an “in progress” state. Let’s say the person adds value to us for half an hour, and then has to take a phone call. After the call, someone comes to the person’s office to discuss something. By the end of that discussion, it’s time to go to a meeting. An hour and a half later, the person returns. She refines us further, spending an additional 30 minutes adding value to us before she moves us to her “done” queue.
Per the canonical LeadingAgile model, this is something a Portfolio Team would do. (Of course, details may vary from one situation to another.)
So, we experienced 1 hour of VA and 2 hours of NVA time. Our accumulated times now look like this:
- VA: 1 hour
- NVA: 2 hours
Waiting, but not for Godot
The next thing that happens to us is we sit in that queue for six weeks (240 working hours, assuming 40-hour weeks). This is where a utilization-focused assessment of delivery performance misses useful information. You see, the lead time clock doesn’t stop while we’re in a wait state. It started when we were first dropped into the pipeline. It will stop when we’re delivered to production or (at a minimum) to a staging environment where full-system testing occurs. That is, the clock stops when a customer can use us or, at least, when there are no technical barriers to a customer using us (a business decision not to release doesn’t count as a technical barrier).
At the end of this period our accumulated times look like this:
- VA: 1 hour
- NVA: 242 hours
We’re pulled again!
We’re out of the queue at last! It sure feels good to have value added to us again, doesn’t it? This time, people are figuring out how we should be prioritized against other features and other kinds of work items, and scheduled for delivery.
Let’s say the group working on us pays attention to us for half an hour, and spends the remainder of their four-hour working session focusing on other parts of the backlog. When they’re done, they put us on their “done” queue. So we can add half an hour to VA time and 3.5 hours to NVA time.
Per the canonical LeadingAgile model, this is something a Program Team would do. They’re preparing work for Delivery Teams in their program.
At the end of this period, our times look like this:
- VA: 1.5 hours
- NVA: 245.5 hours
Here we go again
The length of time a feature remains in this queue will vary depending on how long the release cycle is and how many iterations or development cadences pass before a Delivery Team pulls us from the backlog. For this exercise, let’s assume Delivery Teams are using Scrum with two-week sprints, and each release cycle comprises five sprints. We’ll further assume we are pulled in sprint 3. Thus, we sit in the queue waiting for attention for 4 weeks (another 160 working hours).
- VA: 1.5 hours
- NVA: 405.5 hours
After the 4 weeks, a Delivery Team pulls us. For purposes of illustration, let’s say this team operates in a relatively immature way. One of the things they do is to move all the User Stories planned for the current sprint into the “in progress” state on the first day of the sprint. And we’ll further assume the team gets around to working on us on day 5 of the sprint. So, we’ve waited for attention an additional 4 days (32 working hours) beyond the 4-week wait.
Note that metrics culled from a conventional application lifecycle management tool would not detect the difference between VA and NVA while we are in an “in progress” state; we would obtain an unrealistically optimistic representation of cycle efficiency. Those metrics are only as good as Shemp’s watch in The Three Stooges: “What time does your watch say?” “It don’t say nuttin’. Ya gotta look at it!” (For Crimin’ Out Loud, 1956).
- VA: 1.5 hours
- NVA: 437.5 hours
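To make that tool distortion concrete, here’s a hypothetical sketch with made-up numbers: suppose a work item sits “in progress” for a 40-hour week, during which only 4 hours of actual value-add work happens. A tool that equates “in progress” with “being worked on” reports the watch without looking at it.

```python
# Hypothetical figures, for illustration only.
hours_marked_in_progress = 40.0  # what an ALM tool records
hours_of_actual_work = 4.0       # what really moved the work forward

tool_view = 1.0  # the tool treats all "in progress" time as work
reality = hours_of_actual_work / hours_marked_in_progress

print(f"Tool says {tool_view:.0%} efficient; reality is {reality:.0%}")
# prints: Tool says 100% efficient; reality is 10%
```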
Now the team pulls us, they do the Three Amigos thing for fifteen minutes, and a pair of developers heads off to their workstation to work on us. They happily test-drive us for an hour and 45 minutes. Then they must head off to a mandatory All Hands Meeting to listen to executives read PowerPoint slides while they catch up on text messages from their friends. Five hours later they return.
- VA: 3.5 hours
- NVA: 442.5 hours
They test-drive us for 30 minutes before they hit a snag. They need an answer to a technical question from the architecture team. Alas, no one is available just now. We’re still officially “in progress,” but no value is being added to us. Two hours later, the architect answers their question and they continue to add value to us.
- VA: 4 hours
- NVA: 444.5 hours
After two more hours of rigorous test-driving, we’ve had sufficient value added to us that we’re ready for acceptance testing. The developers stick us on a queue to wait for that to happen. Due to various organizational realities, we wait in that queue for a week. Then the testing team pulls us.
- VA: 6 hours
- NVA: 484.5 hours
Keeping us in an “in progress” state the whole time, the testing team devotes 30 minutes of testing activity to us, and spends the rest of their two-day acceptance test cycle focusing on other areas of the system. Then they put us in the “ready to deploy” queue.
Our organization happens to operate by deploying to a staging environment at the start of the fifth sprint in each release cycle. We were “completed” toward the end of sprint 3. So we wait an additional two weeks in the “ready to deploy” queue.
- VA: 6.5 hours
- NVA: 580 hours
Now we’re pulled again! Value is being added to us through the mechanism of full system testing. The entire process takes two weeks, of which about 1 hour is pertinent to our functionality. At that point, we’re finally “production ready.”
- VA: 7.5 hours
- NVA: 659 hours
What does it mean?
In Lean terms, we’ve experienced the following:
- Our Lead Time runs from the moment we were first dropped into the pipeline until we’re production ready. Counting only from the first pull at the Portfolio Team level, that’s already 666.5 working hours (nearly 17 forty-hour weeks), plus however long we sat on the backlog before that first pull.
- Our Cycle Time is VA + NVA time: 7.5 + 659, or 666.5 hours.
- Our Cycle Efficiency is the percentage of that time that was VA time: 7.5 / 666.5, or just over 1%.
- Flow isn’t so good. We stopped several times on the journey. We would need to examine the details of the process to understand the reasons why. Typically, reasons include high WIP, inappropriate team structure, cross-team dependencies, and inefficient technical practices.
- muda means “non value add activity.” We know we spent most of our time in the delivery pipeline that way.
- mura in the manufacturing context means variation in the widgets produced; in a software delivery context it means unevenness of flow in the delivery process. The stopping, starting, context switching, and waiting for answers all contributed to mura in our journey.
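For anyone who does want a little of the bothersome math after all, the journey’s final totals reduce to a few lines of arithmetic (figures taken from the running tallies above):

```python
# Tallying the journey's final figures.
va_hours = 7.5     # total value-add time
nva_hours = 659.0  # total non-value-add time

cycle_time = va_hours + nva_hours         # 666.5 hours
cycle_efficiency = va_hours / cycle_time  # fraction spent adding value

print(f"Cycle time: {cycle_time} hours")
print(f"Cycle efficiency: {cycle_efficiency:.1%}")  # prints 1.1%
```

That figure is the punchline: of all the working hours we spent in the pipeline, barely one in a hundred actually moved us forward.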
By walking through a software delivery process from the point of view of a piece of work, rather than from the point of view of a worker, we can get a gut-feel sense of several Lean concepts without first having to master an assortment of mathematical models and new buzzwords. The exercise doesn’t substitute for mindful, thorough study of the basics, but in a tl;dr world it can be a practical starting point for people to begin to understand Lean Thinking as it applies to software delivery.