Toyota Kata and The Menlo Way

I have been telling everyone who will listen to read Rich Sheridan’s book Joy, Inc. ever since I came across and read it in the fall of 2015.

Fast forward to earlier this year when Lean Frontiers sent out their request for suggested keynote speakers for KataCon. I wrote to Mike Rother and asked him “Do you think we could get Rich Sheridan?”

Skip ahead a bit more, and I spent four days last week at Menlo Innovations in Ann Arbor – two days “in the chalk circle” paying close attention to the actual day-to-day work there, and two days pairing with Rich Sheridan to work out the key beats for his KataCon keynote.

 

The “So What?” Test

Menlo is well known as a benchmark for a great working culture. But the question you may be asking (and, honestly, I hope you ARE asking it) is “What do Menlo Innovations and agile software development have to do with Toyota Kata?”

If you visit Menlo (and I really hope you do!) here is what you won’t see:

  • Learner storyboards.
  • “5 Questions” coaching cycles.
  • Obstacle parking lots.
  • Experiment Records (PDCA Records).

In other words, you won’t see the explicit artifacts that characterize an organization using Toyota Kata to learn how to think about improvement scientifically. In that sense, Menlo isn’t a “Toyota Kata” benchmark.

OK… and?

You don’t see those things at Toyota either. You don’t go to Toyota to see “Toyota Kata.”

The Underlying Thinking Pattern

What you will see (and hear… if you pay attention) at Menlo Innovations is an underlying pattern of scientific thinking and safe problem solving in everything they do.

Let’s review what Toyota Kata is really all about.

Rather than re-writing something elegant here, I am going to quote from my part of an email exchange between Mike Rother, Rich Sheridan and me:

Going back to Mike’s original research premise, we knew that Toyota has this pretty awesome culture thing, but didn’t really understand the “secret sauce” of the exact structure of their interactions. Put another way, we saw and understood all of the artifacts, but copying the artifacts doesn’t copy the culture.

Mike’s research was really the first that dug deeper into the interactions that the artifacts support.

Once he extracted that “secret sauce” he then boiled off all of the other stuff, and what remained at the bottom of the pot was the Improvement Kata steps and the Coaching Kata steps.

In practice at Toyota, those things are deeply embedded in the artifacts. Sometimes they aren’t even spoken.

My informal hypothesis was that if I spent time paying attention to, not just the artifacts, but the way those artifacts guided interactions at Menlo, and then boiled off the other stuff, what would remain at the bottom of the Menlo pot would also be the Improvement Kata and Coaching Kata steps. And, though I didn’t do this formally, and yes, I had confirmation bias working here, I believe I can safely say “I have no evidence to contradict this hypothesis.”

For example:

In our conversation on Friday, Mike pushed back a bit on “just run the experiment,” [context clarification: experiments to randomly try stuff, without a clear target condition, rarely get you anywhere] but the reality I observed and heard was that “purpose” (challenge and direction) and “current condition” are deeply embedded in the day-to-day interactions, and “just run the experiment” is, indeed, working on a specific obstacle in the way of a target condition of some kind.

[…]

“What problem are you trying to solve?” is Menlo jargon that I overheard many times just listening to people talk.

Within Menlo, that term is contextual. Sometimes it is about the higher-level direction and challenge.

Sometimes it is about an intermediate target condition.

Sometimes it is about an immediate problem or obstacle.

As we say in Kata world, it is fractal. It is truly fractal at Menlo as well, to the point where the words don’t change at various levels.

The words DO change at various levels in Toyota Kata’s jargon, but we can’t get hung up on the terms; we have to look at the structure of problem solving.

Menlo’s co-founders already had this thinking pattern, and deliberately sought to embed it into the culture of the company they were starting. There wasn’t really any need to explicitly teach it because they weren’t trying to change the default behavior of an organization. New Menlonians learn the culture through the interviewing and on-boarding process and adopt it very quickly because the very structure of the work environment drives the culture there.

In fact, spend any time there even just hanging out, and it is very difficult NOT to get pulled into The Menlo Way. Like everyone else, Rich and I were in the daily stand-up as pair-partners, reporting our work progress on his keynote.


What About Toyota Kata?

Menlo has had hundreds (thousands, actually) of visitors, and those who are “lean savvy” all ask if Menlo is “using lean” as their guideline. The answer is “no, we are just trying to solve problems.” While they have certainly incorporated most of the artifacts of “agile software production,” the purists push back that they aren’t “really doing it” because they didn’t copy those artifacts exactly. Nope. They used them as a baseline to solve Menlo’s problems.

When we see an awesome problem-solving culture, it is tempting to try to reverse engineer it by copying the physical mechanics, such as heijunka boxes (work authorization boards), kanban, “standard work” and the like.

But we have to dig down and look at the routines, the behavior that those artifacts and rituals support. When we do, we see the same patterns that Toyota Kata is intended to teach.

You need to begin with the thinking pattern. Use Toyota Kata to learn that.

As you do, take a look at your artifacts – the procedures, the policies, the control mechanics of your work. Reinforce the ones that are working to create the kind of culture you want. Challenge the ones that are getting in your way. Do both of those things as deliberate experiments toward a clear vision of the culture you want to create.

That is the benefit of studying companies like Menlo.

I hope to see you all at KataCon, hear what Rich has to say to our community, and establish a link between these two communities that have, up to now, been separate.

katasummit.com

Overproduction vs. Fast Improvement Cycles

A couple of weeks ago I posted the question “Are you overproducing improvements?” and compared a typical improvement “blitz” with a large monument machine that produces in large batches.

I’d like to dive a little deeper into some of the paradoxes and implications of 1:1 flow of anything, improvements included.

What is “overproduction” – really?

In the classic “7 wastes” context, overproduction is making something faster than your customer needs it. In practical terms, this means that the cycle time of the producing process is faster than the cycle time of the consuming process, and the producing process keeps making output after a queue has built up above a predetermined “stop point.”

If the cycle times are matched, then as an item is completed by the upstream process, it is consumed by the downstream process.

If the upstream process is cycling faster, then there must be an accumulation of WIP in the middle, and that accumulation must be dealt with. Further, those accumulated items are not yet verified as fit-for-use by the downstream process that uses them.
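To make that arithmetic concrete, here is a minimal sketch (in Python, with made-up cycle times) of the queue that forms between two processes when the upstream one cycles faster. The function name and the numbers are mine, for illustration only – not a model of any real process.

```python
# Minimal sketch: WIP that accumulates between two processes.
# Cycle times (in minutes) are made up for illustration.

def wip_after(producer_ct: float, consumer_ct: float, minutes: float) -> int:
    """Queue size after running both processes flat out for `minutes`."""
    produced = int(minutes // producer_ct)   # items completed upstream
    consumed = int(minutes // consumer_ct)   # items used downstream
    return max(produced - consumed, 0)       # items waiting in the middle

# Matched cycle times: each item is consumed as it is completed.
print(wip_after(producer_ct=2.0, consumer_ct=2.0, minutes=480))  # 0

# Upstream cycles twice as fast: the queue grows all shift long
# unless the producer stops at a predetermined "stop point."
print(wip_after(producer_ct=1.0, consumer_ct=2.0, minutes=480))  # 240
```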

The way this applies to my “Big Improvement Machine” metaphor is that we are generating “improvement ideas” faster than we can test and incorporate them into the process.

“Small Changes” Doesn’t Mean “Slow Changes”

No matter how good your solution or idea, it is just an academic exercise until it is anchored as an organizational norm. The rate limit on improvement is established by how quickly people can absorb changes to their daily, habitual routine.

Implementing and testing small changes one-by-one is generally faster than trying to make One Big Change all at once. When we do One Big Change, it is usually actually a lot of small changes.

I hear “we don’t have time to experiment,” but when I ask what really happens if a big change is made, what I hear almost every time is that they had to spend considerable time getting things to work. Why? Because no matter how well the Big Change was thought through, once you are actually trying it, the REAL problems will come up.

Key Point #1: Don’t waste time trying to develop paper solutions to every problem you can imagine. Instead, “go real” with enough of the new process to start revealing the real issues as quickly as possible.

In other words, the sooner you start actively learning vs. trying to design perfection, the quicker you’ll get something working.

Slow is Smooth, Smooth is Fast

Your other objective here is to develop the skill within the organization to test and anchor changes quickly, as a matter of routine. This will take time.

When we see a high-performance organization making rapid big changes, what we are typically seeing is an organization making small changes even more rapidly. They have learned, through practice over time, how to do this. It isn’t reasonable to expect any organization to immediately know how to do this.

Key Point #2: If managers, or professional change agents (internal or external consultants, for example) are telling people exactly what to do, this learning is not taking place.

It is critical for the organization to develop this learning skill, and they are only going to do it if they can practice. Learning something new always involves doing it slowly, and poorly at first. If your internal or external consultants are serving you, their primary focus is on developing this basic competence. Their secondary focus is on getting the changes into place. This is the only approach that actually strengthens the organization’s capability.

The same is true for an operational manager who “gets” lean, but tries to just direct people to implement the perfect flow. It will work pretty well for a while. But think about how you (the operational manager) learned this stuff: Likely you learned it by making mistakes and figuring things out. If you don’t give your people a chance to learn for themselves, you limit the organization in two ways:

  1. They will never be any better than you.
  2. They will wait to do what they are told, because that is what you are teaching them to do.

Think about what you want your people to be capable of doing without your help, and make sure you are giving them direction that requires them to practice doing those things. It will likely be different than telling them what the layout should look like.

Improve your Cycle Time for Change

Coming back to the original metaphor, if you want fast changes to last, you have to work on speeding up the organization’s cycle time for testing improvement ideas. Part of this is going to involve making that activity an inherent and deliberate part of the daily work, not a special exception to daily work.

Part of that is going to be paying attention to how people are working on testing their ideas. The Improvement Kata and Coaching Kata are one way to learn how to deliberately structure this work so that learning takes place. Like any exponential curve, progress seems painfully slow at first. Don’t let that fool you. Be patient, do this right, and the organization will slingshot itself past where you would be with a linear approach.

Small changes, applied smoothly and continuously, become big changes very quickly.

Are You Overproducing Improvements?

Imagine a factory with a large monument machine. It takes several days to set up. When it does run, it runs very fast, much faster than you can actually use its output. Therefore, you take the excess output and store it to use later. Actually, you don’t know how many items you need to make, so you make as many as you can while the machine is available to you.


Some of that excess output may prove to be less useful than you thought, but there is pressure to use it all anyway since it was so expensive to produce.

After a run making items for one process, you change it over to make items for a different process, and build up a queue of output there.

When all of the output is used, it all may or may not work the way you expected it to.

Most of us would see this as a classic case of “overproduction” – overwhelming the system with excess output that hasn’t been checked for quality, that isn’t needed right now, and might or might not be useful in the future. But it seemed like good efficiency to make it while we could.

Our lean thinking tells us we want to do a few things here. We want to work hard to break up the batches and make the value flow more evenly to each of the customer processes, and ultimately to the end customers. The work we must do to keep things moving smoothly and evenly will pay off in both tangible and intangible ways.

One of the primary reasons we push hard for 1:1 flow in the lean world is to enable one-by-one confirmation.

We want to test each item of output to confirm that it actually performs as expected rather than making lots of them without knowing if they are any good.

How Do You Produce Improvements?

Now, imagine doing improvements this way. What would it look like?

A process would be scheduled to receive the rapid output of the Improvement Machine.

The Improvement Machine would be set up over a period of several days to produce improvements for the target process.

Once it was running, we would run the Improvement Machine very fast for a week or so. It would produce improvements faster than the target process can really absorb them.

At the end of the run, we might test the improvements as a batch. We might not test them at all, but rather report that they have been implemented with the assumption that they will work. We would also have excess improvements stored on a to-do list for future use.

Those excess improvements might, or might not, prove to be useful, but we would have huge pressure to implement them because they are on the list.

Once the machine was done producing improvements for one process, it would be set up to produce improvements the same way for another.

We would measure how many times we were able to run the improvement machine in a year, not so much the actual sustained impact we were making.

Improvements that were made without the machine might not be measured or credited at all. Or worse, these rogue improvements might actually be discouraged since they are made by people who aren’t certified to run the machine.

Now… substitute “kaizen event” for “improvement machine” and see if it makes any sense.

Why Are Big Batches Necessary?

The reason the Large Monument Machine has to cycle batches of output to different customers is because those customer processes don’t have the internal capability to do what it does. We need an outside resource to do it.

The countermeasure we strive to apply in these cases is to identify the capability that the customer process does need. We then work hard to develop it on a scale they can incorporate into their daily work. This is typically smaller, more specialized, and scoped to their needs.

My purpose here, though, is to apply a metaphor, not to discuss the economics of large capital equipment.

When we “batch improvements” it is often for the same reason: The area that is being improved can’t do it themselves, so we have to dedicate a scarce outside resource – an improvement expert – to lead them through it. Since that improvement leader can’t be there 100% of the time, he has to work as hard and fast as he can when he IS there.

Making Improvements vs. Teaching How to Make Improvements

The countermeasure in both scenarios is the same: Develop the capability within the process. In the case of making improvements, this means asking ourselves “Why can’t they do it?”

All of these are harder than just doing it for them. But if we want improvement to flow, this is the work that must be done.

Executive Rounding: Taking the Organization’s Vitals

Background:

[image: cover of the AME Target Magazine issue]

I wrote an article appearing in the current (October 2017) issue of AME Target Magazine (page 20) that profiles two very different organizations that have both seen really positive shifts in their culture. (And yes, my wife pointed out the misspelling “continous” on the magazine cover.)

The second case study was about Meritus Health in Hagerstown, Maryland, and I want to go into a little more depth here about an element that has, so far, been a keystone to the positive changes they are seeing.

Sara Abshari and Eileen Jaskuta are presenting the Meritus story at the AME conference next week (October 9, 2017).

Sara is a manager (and excellent kata coach) in the Meritus CI office. Eileen is now at Main Line Health System, but was the Chief Quality Officer at Meritus at the time Joe was presenting at KataCon.

Their presentation is titled Death From Kaizen to Daily Improvement and outlines the journey at Meritus, including the development of executive rounding. If you are attending the conference, I encourage you to seek them out – as well as Craig Stritar – and talk to them about their experiences.

Mark’s Word Quibble

In addition, honestly, the Target Magazine editors made a single-word change in the article that I feel substantially changed the contextual meaning of the paragraph, and I am using this forum to explain the significance.

Here is paragraph from the draft as originally submitted. (Highlighting added to point out the difference):

[…][Meritus][…] executives follow a similar structure as they round several times a week to check-in with the front line and ensure there are no obstacles to making progress. Like the Managing Daily improvement meetings at Idex, the executive rounding at Meritus has evolved as they have learned how to connect the front-line improvements to the strategic priorities.

This is what appears in print in the magazine:

[…][Meritus][…] executives follow a similar structure as they visit several times a week to check in with the frontline and ensure there are no obstacles to making progress. Like the MDI meetings at Idex, the executive visiting at Meritus has evolved as they have learned how to connect the front-line improvements to the strategic priorities.

While this editing quibble can easily be dismissed as a pedantic author (me), the positive here is that it gives me an opportunity to highlight different meanings in context, go into more depth on the back-story than I could in the magazine article, and invite those of you who will be attending the upcoming AME conference to talk to some of the key people who will be presenting their story there.

Rounding vs. Visiting

In the world of healthcare, “rounding” is the standard work performed by nurses and physicians as they check on the status of each patient. During rounds, they should be deliberately comparing key metrics and indicators of the patient’s health (vital signs, etc.) against what is expected. If something is out of the expected range, that becomes a signal for further investigation or intervention.

“Visiting” is what the patient’s family and friends do. They stop by, and engage socially.

In industry, we talk about “gemba walks,” and if they are done well, they serve the same purpose as “rounding” on patients in healthcare. A gemba walk should be standard work that determines if things are operating normally, and if they are not, investigating further or intervening in some way.

I am speculating that if I had used the term “structured leader standard work” rather than “rounding” it would not have been changed to “visiting.”

Executive Rounding

Joe Ross, the CEO at Meritus Health, presented a keynote at the Kata Summit last February (2017). You can actually download a copy of his presentation here: http://katasummit.com/2017presentations/. The title of his presentation was “Creating Healthy Disruption with Kata.” More about that in a bit.

The keystone of his presentation was about the executives doing structured rounding on various departments several times a week. These are the C-Level executives, and senior Vice Presidents. They round in teams, and change the routes they are rounding on every couple of weeks. Thus, the entire executive team is getting a sense of what is going on in the entire hospital, not just in their departments.

Rather than just “visiting,” they have a formal structure of questions, built from the Coaching Kata questions + some additional information. Since everyone is asking the same basic questions, the teams can be well prepared and the actual time spent in a particular department is programmed to be about 5 minutes. The schedule is tight, so there isn’t time to linger. This is deliberate.

After the teams round, the executives meet to share what they have learned, identify system-wide issues that need their attention, and reflect on what they have learned.

In this case, rather than rounding on patients, the executives are rounding to check the operational health of the hospital. They are checking the vital signs and making sure nothing is impeding people from doing the right thing – do people know the right thing to do? If not, then the executives know they need to provide clarity. Do people know how to do the right thing? If not, then the executives need to work on building capability and competence.

In both cases, executives are getting information they need so they can ensure that routine things happen routinely, and the right people are working to improve the right things, the right way. In the long-term, spending this time building those capabilities and mechanisms for alignment deep into the operational hierarchy gives those executives more time to deal with real strategic issues. Simply put, they are investing time now to build a far more robust organization that can take on bigger and bigger challenges with less and less drama.

Results

Though they were only a little more than a year in when Joe presented at KataCon, he reported some pretty interesting results. I’ll let you look at the presentation to see the statistically significant positive changes in employee surveys, patient safety and patient satisfaction scores. What I want to bring attention to are the cultural changes that he reported:

[image: slide from Joe’s presentation listing the cultural changes he reported]

Leadership Development

Actually, points 1 and 2 above are both about leadership development. The executives are far more in touch with what is happening, not only in their own departments, but in others. Even if they don’t round on their own departments, they hear from executives who did, and get valuable perspectives and questions from outsiders. This helps break down silo walls, build more robust horizontal linkages, and give their people a stage to show what they are working on.

Since executives can’t be the ones with all of the solutions, they are (or should be) mostly concerned with developing the problem solving capabilities in their departments. At the same time, rounding gives them perspective on problems that only executive action can fix. In many organizations, a mid-manager facing these systemic obstacles would try to work around them, ignore them, or just accept “that’s the way it is,” and nothing gets done about these things. That breeds helplessness rather than empowerment.

On the other hand, if a manager should be able to solve the problem, then there is a leader development opportunity. That is the point when the executive should double down on ensuring the directors and upper managers are coaching well, have target conditions for developing their staff, and are aware of who is struggling and who is not. You can’t delegate knowing what is actually going on. Relying on reports from subordinates without ever checking in a couple of levels down invites well-meaning people to gloss over issues they don’t want to bother anyone about.

Breaking Down Silos by Providing Transparency

The side-benefit of this type of process is that the old cultures of “stay out of my area” silos get broken down. It becomes OK to raise problems. The opposite is a culture where executives consider it betrayal if someone mentions a problem to anyone outside of the department. That control of information and deliberate isolation in the name of maintaining power doesn’t work here. Nobody likes to work in a place like that. Once an organization has started down the road toward openness and no-blame problem solving, it’s hard to turn back without creating backlash of some kind within the ranks.

Creating Disruption

Joe used the term “Disruption” in the title of his presentation. Disruption is really more about emotions than process. There is a crucial period of transition because this new transparency makes people uncomfortable if they come from a long history of trying hard to make sure everything looks great in the eyes of the boss. Even if the top executive wants transparency and wants issues out in the open, that often doesn’t play well with leaders who have been steeped in the opposite.

Thus, this process also gives a CEO and top leaders an opportunity to check, not only the responses of others, but their own responses, to the openness. If there are tensions, that is an opportunity to address them and seek to understand what is driving the fear.

In reality, that is very difficult. In our world of “just the facts, ma’am” we don’t like to talk about emotions, feelings, things that make us uncomfortable. Those things can be perceived as weakness, and in the Old World, weakness could never be shown. Being open about the issues can be a level of vulnerability that many executives haven’t been previously conditioned to handle. Inoculation happens by sticking with the process structure, even in the face of pushback, until people become comfortable with talking to each other openly and honestly. The cross-functional rounding into other departments is a vital part of this process. Backing off is like stopping taking your antibiotics because you feel better. It only emboldens the fear.

These kinds of changes can challenge people’s tacit assumptions about what is right or wrong. Emotions can run high – often without people even being aware of why.

Think Big, Change Small

Anton, my Dutch friend, had a study mission group of Healthcare MBA students from the University of Amsterdam visiting Seattle last week.

Friday morning I spent about four hours with them going through the background and basics of the Improvement Kata and Coaching Kata, and worked to tie that in to what they observed in their visits to local companies. They were a great, engaging group that was fun to work with.

One thing I do to close out every session with a group is ask “What did we learn?” and write down their replies on a flip chart. I find that helps foster some additional discussion and consolidate learning. It also gives me feedback on what “stuck” with them.

Sometimes I get a gem that would make a good title for a blog post. The title of this post is one of those.

Think Big

Alice and the Cat

Alice went on, “Would you tell me, please, which way I ought to go from here?”

“That depends a good deal on where you want to get to,” said the Cat.

“I don’t much care where…” said Alice, “…so long as I get somewhere,” she added in explanation.

“Oh, you’re sure to do that,” said the Cat, “if only you walk long enough.”

The question, of course, is whether or not where Alice ends up is where she intends to go. A lot of continuous improvement activity takes this approach –  Look for waste. Brainstorm ideas. Implement them. Just take steps. And, like Alice, you will surely end up somewhere if you do this enough.

I had a former boss (back in the late 90’s) advocate this approach. “We are painting the wall with tennis balls dipped in paint.” The idea, I think, was that sooner or later all of the splotches would start to connect into a coherent color. Maybe. But, at the same time, he was also very impatient for tangible results. Actually that isn’t true. He was impatient for tangible activity, which is not the same thing at all.

Direction and Challenge Establish Meaning

In her journeys through Wonderland, Alice learns that objective truth has no meaning in a world of random nonsense. The story, of course, is a parody of the culture and times of Victorian England. It does, however, reflect the frustrations many practitioners can feel when they are just trying to “make improvements.”

As one thing is “fixed,” another pops up for any number of reasons:

  • The “new” problem may well have been hidden by the “fixed” one.
  • Leadership may be chasing short-term symptoms and constantly redirecting the effort.

Day to day it just seems like random stuff, and can get pretty demoralizing.

The point of “Think Big” is that being clear about where WE (not just you) are trying to go helps everyone understand the meaning of what they are doing. That is the whole point of “Understand the Direction and Challenge” as the first step of the Improvement Kata. “What is the meaning behind what you are working on?” It is really a verification check that the coach has adequately communicated meaning to the learner.

Establish “Why” not “What”

At the same time, it is important for the organization to be clear on why improvement is necessary. I have discussed this a number of times, but keep referring back to Learning to See in 2013 where I ask “Why are you doing this at all?” as the question everyone skips past.

“Where we are going” should not simply be your model of your [Fill Company Name In Here] Production System.

No matter how well explained or understood, a model does not directly address the “Why are we doing this at all?” question that provides meaning to the effort.

It may well establish a good representation of what you would like your process structure to look like, but it does not give people any skill in actually putting these systems into practice, nor a reason to put in the effort required to learn something completely new.

Change Small

Small changes = fast progress as long as there is a coherent direction.

The classic 5 day kaizen event is often an attempt to make a radical improvement in a short period of time. Things usually look really impressive at the end of the week, and even into the next few weeks. What happens, though, is that the follow-up is usually more about finishing up implementation action items than it is working to stabilize the new process.

The problem comes from the baseline assumption that we already understand all of the problems, and our changes will solve them. We line things up, get 1:1 flow running, and yes, there is a dramatic reduction in the nominal throughput time simply because we have eliminated all of the inventory queues.

There is tons of research that backs up the assertion we can’t expect people to be creative when they are under pressure to perform. They are going to revert to their existing habits. During the event itself,  the short time period and high expectations put pressure on people to just implement stuff. People are likely to defer to the suggestions and lead of the workshop leader and install the standard “lean tools” without full understanding of how they work or what effect they will have on the process and people dynamics.

Come Monday morning, we put all of those changes to the test… at once. The people are working in a different way. The problems that will be surfaced are different. The tighter the flow, the more sensitive the system will be to small problems. It is pretty easy to overwhelm people, especially the supervisors who have to decide right now what to do when things don’t seem to be working.

That same pressure to perform exists, only now it is pressure to produce, and possibly even catch up production from what was lost during the previous week. Once again, we can’t expect people to think creatively when these new issues come up; they are going to revert to what they know.

When we do see successful “big change” it is usually the result of many small changes that have each been tested and anchored.

So why is the “blitz” approach so appealing? I think I got some insight into the reason in a conversation with a continuous improvement director in a large corporation. He had so little opportunity to actually engage and break things loose that, when he did, he felt the need to push in everything he could.

My interpretation of this goes back to the first line above: Small changes = fast progress as long as there is a coherent direction. In his case, there wasn’t coherent direction. He had a week, maybe two, to push as hard as he could in the direction he felt things should go. The rest of the time, things were business as usual.

This is why “think big” is important. It provides organizational alignment, and reduces the pressure to seize a limited opportunity and, frankly, inject chaos.

Small, Quick Changes

Because we often don’t see just how long it takes to stabilize a “quick, big change,” we tend to think that quick small changes are slower. I disagree. In my experience the opposite is true.

When there is a clear Challenge and Direction, and frequent check-ins via coaching cycles (or less formal means) on what changes are being made, no time is wasted working on the wrong things.

When small changes are made and tested as part of experiments vs. just being implemented, then there is less chance of erosion later. Rather than overwhelming people with all of the problems at once from a bunch of changes, one-by-one lets them learn what problems must be dealt with. They have an opportunity to always take the next step from a working process rather than struggling to get something that is totally unfamiliar to work at all.

That, in turn, builds confidence and capability.

In a mature organization that has practiced this for years, an outside observer might well see “big changes” being made. But that organization is operating from a base of learning and experience, and what might look big to you might not be big to them. It is all a matter of perspective.

What Do You Think?

I’m throwing this out there, hoping to hear from practitioners. What have you struggled with in getting changes made that actually shift people’s behavior (vs. just implementing tools and techniques)? What has worked? What hasn’t worked? I’d love to hear in the comments.

Push Improvement vs. Pull Improvement

I’m writing up a proposal for a benchmarking and study week, and just typed the term “push improvement” as a contrast to a true “continuous improvement culture.” I wanted to explore that a bit here, with you, and perhaps I’ll fill in the idea in my own mind.

Push Improvement

A few years ago I was working with a few sites of a multi-national corporation. In one of their divisions, their corporate lean office was pushing various programs into place. Each site was required to implement, and was graded on:

  • 5S
  • Jidoka
  • Heijunka
  • Toyota Kata

plus a few other things. Those are the ones I really remember. Each of these programs was separate and distinct; they had been deployed on some kind of phased timetable over a few years. They also had requirements to maintain some number of Six-Sigma Black Belts and Green Belts in their facilities, with the requirement to report on a number of projects.

They brought me in to teach the Toyota Kata stuff.

In reality, while people taking the class generally thought it was worthwhile, the leadership teams were also a little resentful that all of this stuff was being, in the words of one plant manager, “pushed down our throats.”

Even the Toyota Kata “implementation” had a specific sequence of steps that the site was expected to check off and report their progress on. In addition, there were requirements for how the improvement boards would look, including the position of the corporate logo and the colors used – all in the name of “standardization.”

At the same time, of course, the plants were also measured on their performance – financial, delivery, quality, etc. These metrics were separate and distinct from their audit scores on all of the lean stuff.

On the shop floor, they had the artifacts in place – the boards, the charts, the lines on the floor, but were struggling with making it all work. There was no integration into the management system. Each of these was a program.

I should also point out that, ironically, the plant that was getting the most traction with continuous improvement at the time was the one that was pushing back the hardest against this rote, “standard” approach.

A few years before that, I had been working (as an employee) in another multi-national company. There was a big push from the executives at the very top to develop a set of metrics they could use to determine if a site was “doing lean correctly.” They wanted a set of measurements they could monitor from corporate headquarters that would ensure that any business performance improvement was the result of “doing lean” rather than something else.

Both of these organizations were looking at this as an issue with compliance rather than developing their leaders.

Implementations like these typically involve things like:

  • Developing some kind of “curriculum” and rolling it out.
  • Requiring sites to have a “value stream map” and a “lean plan.”
  • Audits and “lean assessments” against some kind of checklist.
  • Monitoring the level of activity, such as how many kaizen events sites are running.

I have also seen cases of an underlying assumption that “if only we can explain it well enough, then managers will understand and do it.” This assumption drives building models and diagrams that try to explain how everything works, or the relationships between the tools. I’ve seen “pillar models,” puzzles, gears, notional value stream maps, and lots of other diagrams.

In summary, “push improvement” is present when implementing an improvement program is the goal. Symptoms are things such as a project plan with milestones for specific “lean tools” to be in place.

The underlying thinking is that you are doing improvement because you can, not because you must.

This is, I believe, what Jeff Liker and Karyn Ross refer to as mechanistic lean in their book The Toyota Way to Service Excellence.

Relationship to the Status Quo

One of the obstacles that organizations face with this approach is the traditional relationship to the status quo. Most business leaders are trained to evaluate any change against a financial return based on the cost. In other words, we look at the current level of performance as a baseline; evaluate the likely improvement; look at what it would cost to get to that new level and ask “Is it worth doing this?”

If the answer comes up short of some threshold of return, then the answer is “no,” and the status-quo remains as a rational optimum.

Implication: The status-quo is OK unless there is a compelling reason to change it.

For the lean practitioner, this presents a problem. We have to convince management that there is a short-term return on what we are proposing to do in order to justify the effort, time and expense. Six Sigma black belt projects are intently focused on demonstrating very high cost benefit, for example. And, honestly, if you are spending the money to bring in a consultant from Japan and an interpreter, I can see wanting some assurance there is a payback.1

If the discussions are around “What improvements can we make?” and then working up the benefit, you are in this trap. Those benefits, by the way, rarely find their way to the actual P&L unless there is already a business plan to take advantage of them.

The root cause of this thinking may well be disconnection of continuous improvement from whatever challenges the organization is facing; coupled with assumptions that a “continuous improvement program” can be implemented as a project plan on a predictable timeline.

Pull Improvement

Solving Problems

The purpose of any improvement activity is to solve problems. Real problems. In my classes, I often ask for a show of hands from people who have a shortage of problems on a daily basis. I never get any takers. Since there are usually lots of problems – too many to deal with all of them – we need to be careful to work on the right ones. The ROI approach I talked about above is a common way to sort through which ones are worth it. We can also get locked into “Pareto Paralysis.”

So which ones should we work on?

Inverting this question, when someone wants to make a change, put in a “lean tool,” etc., my question is “What problem are you trying to solve?” And “we don’t have standard work” is not a problem; it is the lack of a proposed solution. In these cases I might ask “…. and therefore?” to try to understand the consequence of this “lack of…” The problem might be buried in there somewhere. But if the issue at hand is trying to raise an audit score for its own sake, we are back to “Push Improvement.”

A meeting gets derailed pretty fast when people are debating which solution should be applied without first agreeing on the problem they are solving. The Coaching Kata question “Which one [obstacle] are you addressing now?” is intended to help the learner stay clear and focused on this.

But long before we are talking about problems, we need to know where we are trying to end up. This is why the idea of a clear and compelling challenge is critical.

Challenge First

Organizations that are driven by continuous improvement have a different relationship with the status-quo. Those organizations always have a concrete challenge in front of them. In other words, the status-quo is unacceptable. “Today is the worst we will ever be.”

Cost enters into the discussion when looking at possible ways to reach that destination. But it does not drive the decision about whether or not to try. That decision has already been made. The debate is on how to do it.

The Challenge with Challenges

“Wait a minute… we have objectives to reach too!” Yes, lots of organizations set out objectives for the year or longer. Here are some of the scenarios I have seen, and why I think they are different.

The goal setting is bottom-up. The question “What can we improve?” cascades down the organization, and individual managers set their goals for the year and roll them back up. There may be a little back-and-forth, and discussion of “stretch goals,” but the commitment comes from below. In most of the cases I have seen these goals are carefully worded, the measurements delicately negotiated, with bonuses riding on the level of attainment of these objectives. I have used the word “goal” here because these are rarely challenges or even challenging.

The goal is based strictly on hitting metrics. I have discussed the dangers of “management by measurement” a few times in the past.

A pass is given if there is a strong enough justification made for missing the goal. Thus the incentive to carefully define the metrics, and include caveats and loopholes.

There are measurements but not objectives. Any improvement is OK.

Any true challenge is labeled as a “stretch goal” meaning “I don’t really expect you to be able to reach it.”

And, sometimes the end of the year is an exercise in re-negotiating the definition of “success” to meet whatever has been achieved.

None of this is going to drive continuous improvement, nor align the effort toward achieving something remarkable or “insanely great.”

But here is the biggest difference: FEAR.

Fear of failure. Fear of committing to something I don’t already know how to achieve. Fear of admitting “I don’t know.”

And fear breeds excuses and other victim language that makes sure success or failure was beyond my control.

Fearless Challenges

So maybe that’s it. The key difference is fearless challenge. So what would that look like?

Here I defer to Jeff Liker and Gary Convis’s great book The Toyota Way to Lean Leadership. But I have seen this in action elsewhere as well.

Some key differences here:

  • The challenge comes from above. It isn’t a bottom-up “what can we improve?” It is a top-down “This is what we need to be able to do.”
  • It is an operational need, not a process specification. In other words, it isn’t the “lean plan.” The process specification is what is created to meet the challenge.
  • The challenge is an integral part of developing people’s capability. Perhaps this is the key difference.
  • There is no fear because the challenge comes with active support to meet it. That support, if done well, is both technical and emotional. We’re in this together – because we are.

Improvement = Meeting the Challenge

Given that there is a business or operational imperative established, we are no longer trying to push improvement for its own sake. We have an answer to “Why are we doing this?” beyond “to get a higher 5S score.”

Now there is a pull.

In my Toyota Kata class, I give the teams a challenge that, in the moment, is seemingly impossible. That is intentional. I hear air getting sucked in through teeth. “No way!” is usually the reply when I ask how it feels.

Then, for the next few hours, the teams are methodically guided through:

  • Grasping the current condition – understanding their process at a much deeper level.
  • Breaking down the problem into pieces, taking on one at a time. First a target condition, then specific obstacles.

Depending on how much time they have, 1/4 to 1/3 of the teams crack the problem, a few excel and go beyond, and those that don’t usually acknowledge they are close.

The teams that get there fastest are the ones who get into a quick cadence of documented experiments and learning. They see the problems they must solve much quicker, and figure out what they need to do. I tell them at the start, as I hold up my blank Experiment Records, “The teams that get it are the ones who burn through these the quickest.”

In the process of tackling the challenge, they learn what continuous improvement is really about. I don’t specify their solution. Any hints I give are about what to pay attention to, not what the solution looks like. I don’t deploy tools or give them a template for the solution. At the same time, most teams converge on something similar, which isn’t surprising.

Taking this to the real world – I see similar things. Teams taking on similar challenges on similar processes often arrive at similar looking solutions. But each got there themselves, for their own reasons, often following quite different paths.

While it seems more efficient to just tell them the answers, it is far more effective to teach them how to solve the problem. That is something they can take beyond the immediate issue and into other domains.

Pull Improvement = Meeting a Need

In my post Learning to See in 2013, I posed the question nobody asks: Why are you doing this at all? I point out that, in many cases, value-stream mapping is used as a “what could we improve?” tool, which is backwards from the original intent.

If there is a clear answer to “Why are we doing this?” or, put another way, “What do we need to be able to do that, today, we can’t?” or even “What experience do we aspire to deliver to our customers that, today, we cannot?” then everything else follows. Continuous Improvement becomes a daily discussion about what steps we are taking to get there, how we are doing, what we are learning, and what we need to do next (based on what we learned).

This is pull. The people responsible for getting a higher level of performance are pulling the effort to get things to flow more smoothly. The mantra here is “not good enough,” but that must take the form of a challenge that inspires people to step up, not a punitive one.

Then it’s easy because “they” want to do it.


Epilogue: For the Practitioner

If you are reading this, you are likely a practitioner – someone on staff who is responsible for “continuous improvement” in some form, but not directly responsible for day-to-day operations. I say that because I know, in general, who my subscribers are.

This concept presents a dilemma because while you are challenged with influencing how the organization goes about improving things, the challenge of what improvements must be made (if it exists at all) is disconnected from your efforts.

That leaves you with trying to “drive improvement into the organization” and “be a change agent” and all of those other buzzwords that are probably in your job description.

Here are some things to at least think about that might help.

Let go of dogma. If you think continuous improvement is only valid if a specific set of tools or jargon is used, then you are already creating resistance for your efforts.

Focus on learning rather than doing. You don’t have all of the answers. And even if you did, you aren’t helping anyone else by just telling them what to do. No matter how much sense it makes to you, logical arguments are rarely persuasive, and generally create a false “yes.”

Seek first to understand. Listen. Paraphrase back. Try to get the words “Yeah, that’s right.” to come out of that resistant manager you are dealing with. Remember your purpose here is to help line leadership meet their challenges. Often those challenges are vague, are negative – as in trying to avoid some consequence – or even expressed as implied threats. You don’t have to agree, but you do need to “get it.”

That’s the first step to rapport, which in turn, is necessary to any kind of agreement or real cooperation. As a friend of mine said a long time ago: “You can always get someone’s attention by punching them in the nose, but they likely aren’t going to listen to what you have to say.” Making someone wrong is rarely going to increase their cooperation.

All of this, by the way, is harder than it sounds. I’m still learning these lessons, sometimes multiple times. I’ve been on my own journey of explicit / deliberate learning here for a couple of years.

We have a couple of generations now of improvement practitioners who have been trained with the idea that “lean is good” (or Six Sigma is good, or Theory of Constraints is good, or…). Therefore, it follows, that these things need to be put into place for their own sake – because all of the best companies do them.

This approach, though, reflects (to me) a shallow understanding of what continuous improvement is all about. It skips the “Why?” and goes straight to “How” and “What.” My experience has also been that relatively few of the practitioners steeped in this can articulate how the mechanics of these systems actually drive improvement on a daily basis, once those mechanics are in place, beyond a superficial statement like “people would see and remove waste.”

It seems that implementing the mechanics is equated with improvement, when in reality, those mechanics are simply an engine for starting improvement.

Yes, the mechanics are important, but the mechanics are not the reason. We are leaving out the people when we have these discussions. What are they doing every day (other than “following their standard work”)? How do these mechanics actually help them move, as a team, toward a goal they cannot otherwise attain?

————–

1In its worst manifestation, this thinking can be a cancer on the integrity of the company. For example, GM has had a couple of scandals where it was revealed that they calculated the ROI of fixing a safety defect vs. the cost of paying off wrongful death lawsuits. Don’t even go there!

The Importance of Prediction for Learning


One of the things, perhaps the thing, that distinguishes “scientific thinking” from “just doing stuff” is the idea of prediction: When we take some kind of action and deliberately and consciously predict the outcome, we create an opportunity to override the default narrative in our brain and deliberately examine our results.

The Toyota Kata “Experiment Record” (which also goes by the name “PDCA Cycles Record”) is a simple form that provides structure for turning an “action item” into an experiment.
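As a rough illustration of that structure (not the official form), here is a minimal sketch of an “action item turned experiment” as a record. The field names are my paraphrase of the form’s prompts, and the example content is invented.

```python
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    """One row of an experiment / PDCA cycles record (field names paraphrased)."""
    obstacle: str      # what is in the way of the target condition
    step: str          # the action we are about to take
    prediction: str    # what we expect to happen - written down BEFORE taking the step
    actual: str = ""   # what actually happened
    learned: str = ""  # what we learned from comparing actual vs. prediction

# An "action item" becomes an experiment the moment the prediction is written down.
record = ExperimentRecord(
    obstacle="Operators wait for parts at station 3",
    step="Move the parts rack next to the station",
    prediction="Walk time per cycle drops from about 40 seconds to under 10",
)

# ...take the step, then come back and compare:
record.actual = "Walk time dropped, but operators now block each other at the rack"
record.learned = "Rack location wasn't the only obstacle; congestion is the next one"
print(record)
```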

Why Is It Important to Make a Prediction?

Explicit learning is driven by prediction.

Explicit Learning

“The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ (I found it!) but ‘That’s funny …’ “

— Isaac Asimov

Curiosity is sparked by the unexpected. “I wonder what that is…”

The only way to have “unexpected” is to have “expected.”

When we consciously and deliberately make a prediction, we are setting ourselves up to learn. Why? Because rather than relying on happening to notice things are a little unusual, we are deliberately looking for them.

Deliberate Prediction: The Key to a “Learning Organization?”

Steve Spear, in his book The High Velocity Edge, makes the case that what all high-performance organizations have in common is a culture of explicitly defining their expected result from virtually everything they do.

He studied Toyota extensively for his PhD work, and discovered that rather than exploiting a “lean tool set,” what distinguished Toyota’s culture was deliberately designing prediction mechanisms into all of their processes and activities. This was followed up by an immediate response to investigate anything that doesn’t align with the prediction.

This is the purpose behind standard work, kanban, takt time / cycle time, 1:1 flow, etc. All of those “tools” are mechanisms for driving anomalous outcomes into immediate visibility so they can say… “Huh… that’s funny. I wonder what just happened?”

The High Velocity Edge extends the theory into a more general one, and we see a common mechanism in other high-performance organizations.

OK… that’s one data point on the higher-level continuum.

 

Building 214

Back in 2009 I wrote about a culture change in a post titled A Morning Market. That story actually took place around 2002-2004, and I have just re-verified (Spring 2017) that it still holds.

But it really wasn’t until this afternoon as I was discussing that story with Craig that it finally hit me. The last step in their problem-solving process was “Verification.” To summarize a key point that is actually buried in that post, they could not say a problem was cleared until they had a countermeasure, and had verified that it works.

What is that? It’s a prediction.

Rather than simply putting in a solution and moving on, their process forced them to construct a hypothesis (this countermeasure will make the problem go away), and then experimentally test that hypothesis.

If it worked, great. If it didn’t work then… “Huh, that’s funny. I wonder what just happened?”

This, in turn, not only made them better deliberate problem solvers, it engaged deliberate learning.

What is critically important to understand here is this: That verification step was not included in the problem solving process they trained on. We added it internally as part of our (then kind of rote) understanding of “What would Toyota do?” But it worked, and I believe it added a level of nuance that was instrumental in keeping it going.

 

The Improvement Kata

Mike Rother’s work extends what we learned about Toyota. Going beyond “How do they structure their processes?” he went into “How do they structure their conversations?” (And “How can we learn to structure ours the same way?”)

A hallmark of the Improvement Kata, especially (but not exclusively) the “Starter Kata” around experiments, is a deliberate step to make a prediction, test it, and compare the actual outcome with the prediction.

This, in turn, is backed up in Steve Spear’s HBR articles, especially Learning to Lead at Toyota and Fixing Health Care from the Inside, Today, both of which should be mandatory reading for anyone interested in learning about continuous improvement.

 

You are Always Making a Prediction Anyway

Any action you take, anything you do, is actually a hypothesis. You are intending or expecting some kind of outcome.

What time do you leave for work? Why? Likely because you predict that if you leave at a particular time, and follow a particular route, you will arrive by a specified time. You might not think about it, but you have made a prediction.

If you are running to any kind of plan, the plan itself is a prediction. It is saying that “If these people work on these tasks, starting at this time, they will complete them at this later time.” It is predicting that the assigned tasks are the tasks that are required to get the bigger job done.

A work sequence is a prediction. If these people carry out these tasks in this order, we will get this outcome in this amount of time at this quality level.

A Six Sigma project is a prediction. If we control these variables in this way, we will see this aspect of the variation stay within these limits.

An “action item” is a prediction. If we take this action then that will happen, or this problem will be solved.

In all of these cases you don’t know for sure if it will work until you try it and look for anomalies that don’t fit the model.

But the difference in day-to-day life is that we aren’t explicit about what we expect. We don’t really think it through, and we aren’t particularly aware when an outcome or result differs from what we expected. We just deal with the immediate condition and move on, or worse, assign blame.
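Here is a minimal sketch of what making the prediction explicit could look like if you wrote it down before acting. This is only my own illustration, not anything out of the kata materials, and every name in it (ExperimentRecord, surfaced_anomaly) is hypothetical. The only point is that the expected outcome gets recorded before the result is known, so an anomaly has something to be compared against.

```python
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    """One step: what we plan to do, what we expect, and what actually happened."""
    step: str         # the action we are about to take
    expected: str     # the prediction, written down BEFORE we act
    actual: str = ""  # filled in only after we go and see

    def surfaced_anomaly(self) -> bool:
        """An anomaly is any gap between what we expected and what we got."""
        return bool(self.actual) and self.actual != self.expected

# Example: even the plan for the morning commute is a prediction.
record = ExperimentRecord(
    step="Leave at 7:10 and take the usual route",
    expected="Arrive by 7:45",
)
record.actual = "Arrived at 8:05"  # go and see

if record.surfaced_anomaly():
    print("Huh... that's funny. I wonder what just happened?")
```

The format is irrelevant; what matters is that the expectation exists somewhere other than in hindsight.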

What About Implicit Learning?

The human brain (and all brains, really) is a learning engine. Our experience of learning typically comes from what we perceive as feelings.

Take a look at Destin Sandlin’s classic “Backwards Bicycle” video here, then let’s talk about what was happening.

 

There is nothing special about a “backwards bicycle.” If Destin (or his son) had no prior experience with a regular bicycle, this would simply be “learning to ride a bicycle.” What makes it hard is that, in addition to building new neural pathways for riding a backwards bicycle, he must also extinguish the existing pathways for “riding a bicycle.”

The Neuroscience of Learning (As I understand it.)

Destin has a clear (very clear) objective (Challenge) in his mind: Ride the bicycle without falling down.

As he tries to ride, he knows that if he feels like he is losing his balance, he is about to fall.

He (his brain) doesn’t know how to control the wheel to keep the bike upright as he tries to ride. His arms initially make more or less random movements in an attempt to stay upright. This is instinctive, he isn’t thinking about how to move his arms. (This is what he calls the difference between “knowledge” and “understanding.”)

Whatever neurons were firing to move his arms when he loses his balance are a little less likely to fire again the next time he attempts to ride.

Whatever neurons were firing to move his arms when he stays upright for a little while are a little more likely to fire again the next time he attempts to ride.

This actually starts with increased levels of excitatory or inhibitory neurotransmitters in those neural synapses. No physical change to the brain takes place. But this requires a lot of energy. IF HE PERSISTS, over time (often a long time), the brain grows physical connections in those circuits, making those new pathways more permanent. (It also breaks the connections in the pathways that are being extinguished.)

Destin’s six year old son’s brain is optimized for this kind of learning. He creates those new physical neural connections much faster than an adult does. His brain is set up to learn how to ride a bicycle. His father’s brain is set up to ride a bicycle without thinking too much about it. Thus, Destin has a harder time shifting his performance-optimized brain back into learning mode.

All of this is implicit learning. You have something you want to learn, and you are essentially trying stuff. Initially it is random. But over time, the things that work eventually overpower the things that do not. This is also how machine learning algorithms work (not surprisingly).
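If it helps to see that loop written out, here is a toy sketch in the spirit of a very simple reinforcement-style update. It is deliberately a caricature of my own making (the action names and the numbers are made up): movements that happen to be followed by staying upright become a little more likely to fire again, and movements followed by a fall become a little less likely.

```python
import random

# Hypothetical "movements" the rider's brain can try. The weights are how likely
# each one is to fire on the next attempt -- equal at first, i.e. random flailing.
actions = {"steer_left": 1.0, "steer_right": 1.0, "hold_still": 1.0}

# Stand-in for the real world: how often each movement actually keeps the bike up.
chance_of_staying_upright = {"steer_left": 0.2, "steer_right": 0.7, "hold_still": 0.3}

for trial in range(500):
    # Pick a movement in proportion to its current weight.
    action = random.choices(list(actions), weights=list(actions.values()))[0]
    stayed_upright = random.random() < chance_of_staying_upright[action]

    # The learning: success reinforces a little, failure extinguishes a little.
    actions[action] *= 1.05 if stayed_upright else 0.95

print(actions)  # after many trials, the movement that "works" dominates
```

Notice that nothing here states a hypothesis; the gap between staying upright and falling does all the work, which is exactly why this kind of learning can also reinforce coincidences.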

 

What does this have to do with prediction?

Destin’s brain is running a series of initially random trials and comparing the result of each with the desired result. The line between a “desired result” and a “predicted result” can be kind of blurry in this type of learning. But what is critical here is to understand that learning cannot take place without some baseline to compare the actual result against. There must be a gap of some kind between the outcome we want and what we got. Without that gap, we are simply reinforcing the status quo.

The weakness with implicit learning is that it can reinforce behaviors and beliefs that correlate with a result without actually causing it. We aren’t actually testing whether our actions caused the outcome. We are just repeating the actions that have been followed by the outcome we wanted, whether that is by causation or coincidence.

In the case of something like learning to ride a bicycle, that is generally OK. We may learn things that are unnecessary to stay upright on the bicycle1, but we will learn the things that are required.

In athletics, once the basics are in place, coaches can help shift this learning from implicit to explicit by having you practice specific things with specific objectives.

Moving from Implicit to Explicit

Bluntly, the vast majority of organizations are engaged in implicit, not explicit, learning. They repeat whatever has worked in the past without necessarily examining why it worked, or whether “now” is even similar to “the past.”

These are organizations that operate on “instinct” and “feel.” That actually more-or-less works as long as conditions are relatively stable. They may do things that are unnecessary but are also doing things that are required.

… Until conditions or requirements change.

When the organization has to accomplish something that is outside of their current domain of knowledge – beyond their knowledge threshold – those anecdotes break down. The narrative of cause-and-effect in our minds is no longer accurate.

That is when it is critical to step back, become deliberate, and ask “Where, exactly, are we trying to go?” and “What do we need to learn to get there?”

The alternative is “just trying stuff” and hoping, somewhere along the way, you get the outcome you want. The problem with that? You’re right back where you were – it works, but you don’t know why.

_______________

1Sometimes we develop beliefs that things we do can influence events that, in reality, we have no control over whatsoever. Once we develop those beliefs, we bias heavily to see evidence they are true, and exclude evidence that they are not true.

People to Meet at KataCon

KataCon is a couple of weeks out. If you are considering going, you are probably looking at the keynote speakers and breakout workshops.

The other reason to attend KataCon is to meet other people and share experiences with them. I’d like to introduce you to two of those people.

Hal Frohreich is the Chief Operating Officer of Cascade DAFO in Ferndale, Washington. Their product is custom pediatric foot / ankle orthotics that help kids walk. Yup, custom. Every one is different.

Since taking the position, Hal has been using Toyota Kata as a mechanism to develop the leadership and technical skills of the supervisors and, in doing so, make fundamental shifts to the culture of the organization. For you TWI folks, he has also deployed TWI, especially Job Instruction, alongside Toyota Kata for much more consistency in the way work is performed.

 

 

Hal provides support to his Production Manager, Tim Grigsby. Tim coaches 4-7 kata boards every day and covers diverse areas including people development, I.T. issues, R&D, and production. Tim views his job as seeing that each work team has the time, education, direction, space, tools and help to improve their work. Toyota Kata provides the structure that he uses to help them develop critical thinking and clarity in their target conditions, obstacles, and their PDCA cycles.

Each afternoon the COO and CEO walk the floor and review the target conditions, obstacles and next steps. This helps keep things aligned as well as ensure nobody is “stuck” on a problem that is outside of their scope to fix.

 

I believe, and teach, that Toyota Kata is a mechanism for driving culture change, and this is the philosophy that Hal and Tim have embraced. While the performance of the organization has dramatically improved by every measure you care to ask about, that is not the real result of this work.

The real outcome has been to create a cadre of front-line leaders that are taking initiative and applying creative solutions vs. just getting through the day doing what they are told.

Come to KataCon and find these guys. They are worth talking to.

Learning = Extending the Threshold of Knowledge

“My computer won’t boot.”

Mrs. TheLeanThinker’s computer was hanging on the logo screen, keyboard unresponsive.

I knew already that if the CPU were bad it wouldn’t get this far.

I also knew that the system hadn’t even tried to boot the OS from the hard drive yet, so that likely wasn’t the problem.

Working hypothesis: It’s something on the motherboard.

Start with the simple stuff that challenges the working hypothesis:

  • Hang test a different, known good, power supply. No change.
  • Pull memory cards and reinstall them one by one. No change.
  • Pull the motherboard battery, unplug, wait a few minutes to possibly reset the BIOS. No change.
  • Try holding down the DEL key on power-up to get into BIOS settings. Nope, the system still hangs. It does read that one keystroke, but the keyboard is dead after that.
  • Try Ctrl-Home to reach the BIOS flash process. Nope.


There is no evidence that the motherboard is not dead. Final test:

Get the numbers off the motherboard, find the same model on Amazon, order it for $37.50 to the door. (Intel hasn’t made this processor type since 2011).

New motherboard arrived today. Switch it out, takes about 30 minutes.

Boot up the machine, works OK, set the time in the BIOS, and pretty much good to go.

Convince Windows 10 that I haven’t made a bootleg copy.

Done.
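Written out as the structure I was (if only implicitly) following, each of those checks was a small experiment set up to challenge the working hypothesis: if the motherboard really is the problem, none of these swaps and resets should change the symptom, and any change would have sent me back to rethink. This is just my own sketch of that logic; the test names simply mirror the list above.

```python
# My own sketch of the logic behind the checklist above: each test challenges
# the working hypothesis ("it's something on the motherboard"). If the
# hypothesis is true, none of these should change the symptom.
working_hypothesis = "It's something on the motherboard"

tests = [
    ("Swap in a known good power supply", "no change"),
    ("Reseat the memory, one card at a time", "no change"),
    ("Pull the battery to reset the BIOS", "no change"),
    ("DEL key on power-up to enter BIOS settings", "no change"),
    ("Ctrl-Home to reach the BIOS flash process", "no change"),
]

for test, observed in tests:
    if observed != "no change":
        print(f"{test}: the symptom changed -- rethink the hypothesis")
        break
else:
    # Every attempt to exonerate the motherboard failed.
    print(f"No evidence against: {working_hypothesis}. Final test: replace it.")
```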

The Threshold of Knowledge

I learned to code in 1973 on a PDP-8 driving teletypes. Although my programming skills are largely obsolete these days, I am comfortable poking around inside the box of a PC, and I generally know how they work. Thus, the troubleshooting and component replacement I described above was not a learning experience. Yes, I learned what was wrong with this computer. (The “bad motherboard” was a hypothesis I tested by installing a new one.) But I didn’t learn anything about computers in general.

Rather than working through experiments into new territory, I was troubleshooting. Something that had worked was not working now. My experiments were an effort to confirm the point of failure.

Therefore, as interesting as the diversion was, aside from a little research on some of the more arcane troubleshooting, it was not a learning exercise for me. It was all within my Threshold of Knowledge.

In the Improvement Kata, “threshold of knowledge” refers to the boundary between “We know for sure” and “We don’t know.” Strictly speaking, we only say “We know” when there is specific and relevant evidence to back it up.


In this case, my challenge (fix the wife’s computer) was well inside the red circle.

But this wouldn’t be the case for everyone.

The Threshold of Knowledge is Subjective

Someone else with the same challenge may not see this as a routine troubleshoot-and-repair task. Rather, he has to learn.

I had to learn it at some point as well. The difference is that I had already learned it. I had already made the mistakes, having taken a week, many years ago, to build a PC and get it working. I learned by experimenting and being surprised when something didn’t work, then digging in and understanding why. On occasion, especially in the early days, I consulted experts who coached me, or at least taught me what to do and why.

Coaching To Extend the Threshold of Knowledge

Learning is the whole point of the Improvement Kata. That is why we call the “improver” the “learner.” If someone encounters a problem like my example and I am responsible for developing their skills, I am not serving them if I do something like:

  • Sit down at their machine and troubleshoot it.
  • Tell them what step to take, then ask what happened so I can interpret the outcome.

That second case is deceptive. The question is “Who is doing the thinking?” If the coach is doing the thinking, then the coach isn’t coaching, and the learner isn’t learning.

In this case I would also have to recognize this is going to take longer than it would if I did it myself. That is a trap many leaders fall into. They got where they are because they can arrive at a solution quickly. But the only reason they can do that is because, at some point in the past, they had time to learn.

“My computer doesn’t boot.” If my objective is for this person to learn, then I need to go back to the steps of learning. Given that the challenge is likely “My computer operates normally,” what would be my next question to help this person learn how to troubleshoot a problem like this?

I need to know what they know. “Do you know where in the boot sequence it is hanging up?” If the answer is “No,” or just a repeat of the symptom, then my next target condition is for them to understand the high-level sequence of steps that happens between “ON” and the login screen. That would be easy to depict in a block diagram. It’s just another process. But my learner might have to do a little research, and I can certainly point him in the right direction.
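Just to show that it really is “just another process,” here is a deliberately rough, simplified sketch of those blocks. The wording and the diagnosis hints are my own, not a troubleshooting reference, and a learner would do better to build this picture through their own research.

```python
# A rough, simplified sketch of the "ON to login screen" process, the level of
# detail I would want the learner to discover -- not a definitive reference.
boot_sequence = [
    ("Power supply comes up",                 "hang here points at power or wiring"),
    ("Motherboard firmware runs its POST",    "hang here points at the board, memory, or devices"),
    ("Devices initialize, logo appears",      "hang here points at firmware settings or something being probed"),
    ("Firmware hands off to the boot device", "hang here points at the drive, cable, or boot configuration"),
    ("Operating system loads, login screen",  "hang here points at the OS installation itself"),
]

for stage, hint in boot_sequence:
    print(f"{stage:42} -> {hint}")
```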

I’m not going to get into the details here, because this post isn’t about troubleshooting cranky computers.

General Application

“If somebody comes to me with a problem, I have two problems.”

  • The original issue.
  • The fact that this person didn’t know how to handle it.

You can easily translate my computer example into a production quality example. A defect is produced by a process that normally does not produce them. What is different between “Defect” and “Defect-Free?” Something is. We just don’t know what.

Is it something we need to learn? Something we need to teach? Or something we need to communicate?

If my working challenge for my organization is something like “Everyone knows everything they need to do their jobs perfectly,” then I am confronted every day with evidence that this is not as true as I would like.

If I look at those interventions as “the boss just doing his job,” then I lose the opportunity to teach and to grow the organization. I am showing how much I know, and by doing so I am just extending the dependency. That might feel good in the short term, but it doesn’t do much for the future.

Think about this… in your organization, if the boss were promoted or hired out of the job tomorrow, would you look outside the immediate organization for a replacement? If so, you are not developing your people. When I see senior leaders being hired from outside, all I can do is wonder why they have so little faith in the people they already have.

_________

*I remember when Gateway built their own machines, which I guess shows how long I’ve been playing with PCs. Then again, I remember when the premium brand was Northgate. Of course, I also remember programming on punch cards.

Toyota Kata: Reflection on Coaching Struggling Learners

The “Five Questions” are a very effective way to structure a coaching / learning conversation when all parties are more or less comfortable with the process.

The 5 Questions of the Coaching Kata

Some learners, however, seriously struggle with both the thinking pattern and the process of improvement itself. They can get so focused on answering the 5 questions “correctly” that they lose sight of the objective – to learn.

A coach, in turn, can exacerbate this by focusing too much on the kata and too little on the question: “Is the learner learning?”

I have been on a fairly steep learning curve* in my own journey to discover how to modify my style in a way that is effective. I would like to share some of my experience with you.

I think there are a few different factors that could be in play for a learner who is struggling. For sure, they can overlap, but it has still helped me recently to become more mindful, step back, and understand which factors I am dealing with vs. just boring in.

None of this has anything to do with the learner as a person. Everyone brings the habits and responses they have developed throughout their lives, habits that were necessary for them to survive in their work environments and their lives up to this point.

Sometimes the improvement kata runs totally against the grain of some of these previous experiences. In these cases, the learner is going to struggle because, bluntly, her or his brain is sounding very LOUD warning signals of danger from a very low level. It just feels wrong, and they probably can’t articulate why.

Sometimes the idea of a testable outcome runs against an “I can’t reveal what I don’t know” mindset. In the US at least, we start teaching that mindset in elementary school.

What is the Point of Coaching?

“Start with why” is advice for me, you, the coach.

“What is the purpose of this conversation?” Losing track of the purpose is the first step into the abyss of a failed coaching cycle.

Coach falling over a cliff.

Overall Direction

The learner is here to learn two things:

  • The mindset of improvement and systematic problem solving.
  • A detailed, thorough understanding of the dynamics of the process being addressed.

I want to dive into this a bit, because “ensure the learner precisely follows the Improvement Kata” is not the purpose.

Let me say that again: The learner is not here to “learn the Improvement Kata.”

The learner is here to learn the mindset and thinking pattern that drives solid problem solving, and by applying that mindset, develop deep learning about the process being addressed.

There are some side-benefits as the learner develops good systems thinking.

Learning and following the Improvement Kata is ONE structured approach for learning this mindset.

The Coaching Kata, especially the “Five Questions” is ONE approach for teaching this mindset.

The Current Condition

Obviously there isn’t a single current condition that applies to all learners. But maybe that insight only follows from being clear about the objective.

What we can’t do is assume:

  • Any given learner will pick this up at the same pace.
  • Any given learner will be comfortable with digging into their process.
  • Any given learner will be comfortable sharing what they have discovered, especially if it is “less than ideal.”

In addition:

  • Many learners are totally unused to writing down precisely what they are thinking. They may, indeed, have a lot of problems doing this.
  • Many learners are not used to describing things in detail.
  • Many learners are not used to thinking in terms of logical cause-effect.
  • The idea of actually predicting the result in a tangible / measurable way can be very scary, especially if there is a history of being “made wrong” for being wrong.

Key Point: It doesn’t matter whether you (or I), the coach, have the most noble of intentions. If the learner is uncomfortable with the idea of “being wrong” this is going to be a lot harder.

Summary: The Improvement Kata is a proven, effective mechanism for helping a learner gain these understandings, but it isn’t the only way.

The Coaching Kata is a proven, effective mechanism for helping a coach learn the skills to guide a learner through learning these things.

For the Improvement Kata / Coaching Kata to work effectively, the learner must also learn how to apply the precise structure that is built into them. For a few people learning that can be more difficult than the process improvement itself.

Sometimes We Have To Choose

A quote from a class I took a long time ago is appropriate here:

“Sometimes you have to choose between ‘being right’ or ‘getting what you want.’”

I can “be right” about insisting that the 5 Questions are being answered correctly and precisely.

Sometimes, though, that will prevent my learner from learning.

Countermeasure

When I first read Toyota Kata, my overall impression was “Cool! This codifies what I’ve been doing, but had a hard time explaining.” … meaning I was a decent coach, but couldn’t explain how I thought, or why I said what I did. It was just a conversation.

What the Coaching Kata did was give me a more formal structure for doing the same thing.

But I have also found that sometimes it doesn’t work to insist on following that formal structure. I have been guilty of losing sight of my objective, and pushing on “correctly following the Improvement Kata” rather than ensuring my learner was learning.

Recently I found myself in this situation again. I was asked to coach a learner who has had a hard time with the structure. Rather than trying to double down on the structure, I experimented and took a different approach. I let go of the structure, and reverted to my previous, more conversational, style.

The difference, though, is that now I am holding a mental checklist in my mind. While I am not asking the “Five Questions” explicitly, I am still making sure I have answers to all of them before I am done. I am just not concerned about the way I get the answers.

“What are you working on?” While I am really asking “What is your target condition?”, that formal phrasing has locked up this learner in the past. What I got in reply was mostly a mix of the problems (obstacles) that had been encountered, where things are now (the current condition), some things that had been tried (the last step), what happened, etc.

The response didn’t exactly give a “Target Condition,” but it did give me a decent insight into the learner’s thinking, which is the whole point! (Don’t forget that.)

I asked for some clarifications, and helped him focus his attention back onto the one thing he was trying to work out (his actual target condition), and encouraged him to write it down so he didn’t get distracted with the bigger picture.

Then we went back into what he was working on right now. It turned out that, yes, he was working to solve a specific issue that was in the way of making things work the way he wanted to. There were other problems that came up as well.

We agreed that he needed to keep those other things from hurting output, but he didn’t need to fix them right now. (Which *one* obstacle are you addressing now?). Then I turned my attention back to what he was trying right now, and worked through what he expected to happen as an outcome, and why, and when he would like me to come by so he could show me how it went.

This was an experiment. By removing the pressure of “doing the kata right,” my intent was to let the learner focus on learning about his process. I believe I will get the same outcome, with the learner learning at his own pace.

If that works, then we will work, step by step, to improve the documentation process as he becomes comfortable with it.

Weakness to this Approach

By departing from the Coaching Kata, I am reverting to the way I was originally taught, and the way I learned to do this. It is a lot less structured, and for some, more difficult to learn. Some practitioners get stuck on correct application of the lean tools, and don’t transition to coaching at all. I know I was there for a long time (probably through 2002 or so), and found it frustrating. It was during my time as a Lean Director at Kodak that my style fundamentally shifted from “tools” to “coaching leaders.” (To say that my subsequent transition back into a “tools driven” environment was difficult is an understatement.)

Today, as an outsider being brought into these organizations, my job is to help them establish a level of coaching that is working well enough that they can practice and learn through self-reflection.

We ran into a learner who had a hard time adapting to the highly structured approach of the Improvement Kata / Coaching Kata, so we had to adapt. This required a somewhat more flexible and sophisticated approach to the coaching which, in turn, requires a more experienced coach who can keep “the board” in his head for a while.

Now my challenge is to work with the internal coaches to get them to the next level.

What I Learned

Maybe I should put this at the top.

  • If a learner is struggling with the structured approach, sometimes continuing to emphasize the structure doesn’t work.
  • The level of coaching required in these cases cannot be applied in a few minutes. It takes patience and a fair amount of 1:1 conversation.
  • If the learner is afraid of “getting it wrong,” no learning is going to happen, period.
  • Sometimes I have to have my face slammed into things to see them. (See below.)
  • Learning never stops. The minute you think you’re an expert, you aren’t.

__________________

* “Steep learning curve” in this case means “sometimes learning the hard way” which, in turn, means “I’ve really screwed it up a couple of times.”

They say “experience” is something you gain right after you needed it.