Executive Rounding: Taking the Organization’s Vitals

Background:

image

I wrote an article appearing in the current (October 2017) issue of AME Target Magazine (page 20) that profiles two very different organizations that have both seen really positive shifts in their culture. (And yes, my wife pointed out the misspelling “continous” on the magazine cover.)

The second case study was about Meritus Health in Hagerstown, Maryland, and I want to go into a little more depth here about an element that has, so far, been a keystone to the positive changes they are seeing.

Sara Abshari and Eileen Jaskuta are presenting the Meritus story at the AME conference next week (October 9, 2017).

Sara is a manager (and excellent kata coach) in the Meritus CI office. Eileen is now at Main Line Health System, but was the Chief Quality Officer at Meritus at the time Joe Ross, the CEO, presented at KataCon.

Their presentation is titled Death From Kaizen to Daily Improvement and outlines the journey at Meritus, including the development of executive rounding. If you are attending the conference, I encourage you to seek them out – as well as Craig Stritar – and talk to them about their experiences.

Mark’s Word Quibble

In addition, honestly, the Target Magazine editors made a single-word change in the article that I feel substantially changed the contextual meaning of the paragraph, and I am using this forum to explain the significance.

Here is the paragraph from the draft as originally submitted. (Highlighting added to point out the difference):

[…][Meritus][…] executives follow a similar structure as they round several times a week to check-in with the front line and ensure there are no obstacles to making progress. Like the Managing Daily improvement meetings at Idex, the executive rounding at Meritus has evolved as they have learned how to connect the front-line improvements to the strategic priorities.

This is what appears in print in the magazine:

[…][Meritus][…] executives follow a similar structure as they visit several times a week to check in with the frontline and ensure there are no obstacles to making progress. Like the MDI meetings at Idex, the executive visiting at Meritus has evolved as they have learned how to connect the front-line improvements to the strategic priorities.

While this editing quibble can easily be dismissed as a pedantic author (me), the positive here is it gives me an opportunity to highlight different meanings in context, go into more depth on the back-story than I could in the magazine article, and invite those of you who will be attending the upcoming AME conference to talk to some of the key people who will be presenting their story there.

Rounding vs. Visiting

In the world of healthcare, “rounding” is the standard work performed by nurses and physicians as they check on the status of each patient. During rounds, they should be deliberately comparing key metrics and indicators of the patient’s health (vital signs, etc.) against what is expected. If something is out of the expected range, that becomes a signal for further investigation or intervention.

“Visiting” is what the patient’s family and friends do. They stop by, and engage socially.

In industry, we talk about “gemba walks,” and if they are done well, they serve the same purpose as “rounding” on patients in healthcare. A gemba walk should be standard work that determines if things are operating normally, and if they are not, investigating further or intervening in some way.

I am speculating that if I had used the term “structured leader standard work” rather than “rounding” it would not have been changed to “visiting.”

Executive Rounding

Joe Ross, the CEO at Meritus Health, presented a keynote at the Kata Summit last February (2017). You can actually download a copy of his presentation here: http://katasummit.com/2017presentations/. The title of his presentation was “Creating Healthy Disruption with Kata.” More about that in a bit.

The keystone of his presentation was the executives doing structured rounding on various departments several times a week. These are the C-level executives and senior vice presidents. They round in teams and change their rounding routes every couple of weeks. Thus, the entire executive team gets a sense of what is going on in the entire hospital, not just in their own departments.

Rather than just “visiting,” they have a formal structure of questions, built from the Coaching Kata questions + some additional information. Since everyone is asking the same basic questions, the teams can be well prepared and the actual time spent in a particular department is programmed to be about 5 minutes. The schedule is tight, so there isn’t time to linger. This is deliberate.

After the teams round, the executives meet to share what they have seen, identify system-wide issues that need their attention, and reflect on what they have learned.

In this case, rather than rounding on patients, the executives are rounding to check the operational health of the hospital. They are checking the vital signs and making sure nothing is impeding people from doing the right thing – do people know the right thing to do? If not, then the executives know they need to provide clarity. Do people know how to do the right thing? If not, then the executives need to work on building capability and competence.

In both cases, executives are getting information they need so they can ensure that routine things happen routinely, and the right people are working to improve the right things, the right way. In the long term, spending this time building those capabilities and mechanisms for alignment deep into the operational hierarchy gives those executives more time to deal with real strategic issues. Simply put, they are investing time now to build a far more robust organization that can take on bigger and bigger challenges with less and less drama.

Results

Though they were only a little more than a year in when Joe presented at KataCon, he reported some pretty interesting results. I’ll let you look at the presentation to see the statistically significant positive changes in employee surveys, patient safety and patient satisfaction scores. What I want to bring attention to are the cultural changes that he reported:

image

Leadership Development

Actually, points 1 and 2 above are both about leadership development. The executives are far more in touch with what is happening, not only in their own departments, but in others. Even if they don’t round on their own departments, they hear from executives who did, and get valuable perspectives and questions from outsiders. This helps break down silo walls, build more robust horizontal linkages, and give their people a stage to show what they are working on.

Since executives can’t be the ones with all of the solutions, they are (or should be) mostly concerned with developing the problem-solving capabilities in their departments. At the same time, rounding gives them perspective on problems that only executive action can fix. In many organizations, a mid-level manager facing these systemic obstacles would try to work around them, ignore them, or just accept “that’s the way it is,” and nothing gets done about these things. That breeds helplessness rather than empowerment.

On the other hand, if a manager should be able to solve the problem, then there is a leader development opportunity. That is the point when the executive should double down on ensuring the directors and upper managers are coaching well, have target conditions for developing their staff, and are aware of who is struggling and who is not. You can’t delegate knowing what is actually going on. Relying on reports from subordinates without ever checking in a couple of levels down invites well-meaning people to gloss over issues they don’t want to bother anyone about.

Breaking Down Silos by Providing Transparency

The side-benefit of this type of process is that the old cultures of “stay out of my area” silos get broken down. It becomes OK to raise problems. The opposite is a culture where executives consider it betrayal if someone mentions a problem to anyone outside of the department. That control of information and deliberate isolation in the name of maintaining power doesn’t work here. Nobody likes to work in a place like that. Once an organization has started down the road toward openness and no-blame problem solving, it’s hard to turn back without creating a backlash of some kind within the ranks.

Creating Disruption

Joe used the term “Disruption” in the title of his presentation. Disruption is really more about emotions than process. There is a crucial period of transition because this new transparency makes people uncomfortable if they come from a long history of trying hard to make sure everything looks great in the eyes of the boss. Even if the top executive wants transparency and wants issues out in the open, that often doesn’t play well with leaders who have been steeped in the opposite.

Thus, this process also gives a CEO and top leaders an opportunity to check, not only the responses of others, but their own responses, to the openness. If there are tensions, that is an opportunity to address them and seek to understand what is driving the fear.

In reality, that is very difficult. In our world of “just the facts, ma’am” we don’t like to talk about emotions, feelings, things that make us uncomfortable. Those things can be perceived as weakness, and in the Old World, weakness could never be shown. Being open about the issues can be a level of vulnerability that many executives haven’t been previously conditioned to handle. Inoculation happens by sticking with the process structure, even in the face of pushback, until people become comfortable with talking to each other openly and honestly. The cross-functional rounding into other departments is a vital part of this process. Backing off is like stopping taking your antibiotics because you feel better. It only emboldens the fear.

These kinds of changes can challenge people’s tacit assumptions about what is right or wrong. Emotions can run high – often without people even being aware of why.

Obstacles: Right Now vs. Longer Term

A couple of weeks ago Gemba Academy filmed my Toyota Kata class and some shop floor work with a live audience at one of their customers’ sites. One of the participants asked a really good question. Upon reflection, I think I can answer it better here than I did “live,” so I’m going to take a do-over.

image

Background

The team had analyzed their current condition, had established a pretty good target condition, and was working through obstacles.

One of the obstacles was around the fact that the written procedures had not kept up with the way the work was really being performed. This is actually pretty common in industry. The people doing the work know how to do it, and get it done in ways that are better than what is in the documents.

Nevertheless, they needed to update those procedures. If they did not, then new people, or workers that might be rotated into the area temporarily for some reason, would struggle to perform the work in the best way.

The Question

This obstacle was not in the way of reaching their target condition. However, they knew the target process would not be stable in the face of people rotation or turnover. The question was along the lines of:

Is it an obstacle if we can’t sustain the target condition unless we address it?

Answer: It Depends

At the risk of bringing up some really old U.S. political humor, “It depends on what your definition of ‘target condition’ is.”

Here is what I am thinking now.

The first step is to get the target process, as they defined it, to work at all. To do this, I would work to control variables, including trying hard to avoid rotating people through there while I am getting it dialed in.

Once we have established that the target process can work with experienced people, then the next target condition might well be to get this process anchored well enough that it will sustain over time without tons of intervention.

Maybe my next target condition is to be able to sustain the target process no matter who is doing it (assuming they have the basic qualification to do that kind of work).

One of the obstacles in the way of that target condition could well be “Our documentation is obsolete.”

Most documentation I have encountered in any industry is actually pretty poor. So this represents an opportunity to experiment your way into developing process documentation that (1) can actually be followed as written and (2) might even be useful for training someone. I’ve never seen that work without a process of iterative trials.

So in this case, I would say “Get it to work the way you intend it to first.” Make that your target condition. THEN start looking at what erodes it if new people step in. In this case, especially, that is going to involve much more than simply updating documentation. How can you set up the work area so that anyone knows what must be done next? What do you need to teach? What do you need to communicate?

I’m Still Thinking About This

Finally, I think this is one of those real-world cases where there isn’t a hard right or wrong answer. There wouldn’t be any harm in updating the process documentation early – except that I expect they will have to do it over once they learn more.

And – not all “obstacles” are actually problems to solve. Sometimes (though less often than we think), there is just something that has to be done that we already know how to do – we just haven’t done it. In those cases, just do it and move on – EXCEPT: Make sure you predict the result of your “just do it,” and CHECK to make sure it worked the way you thought it would. I’ll lay even money it doesn’t, but you won’t know unless you construct it as an experiment. “Just do it-s” usually turn into “Oh… that didn’t work quite like we thought it would.”

Just make sure you are deliberately learning rather than doing things by rote.

Experimenting at the Threshold of Knowledge

The title of this post was a repeating theme from KataCon 3. It is also heavily emphasized in Mike Rother’s forthcoming book The Toyota Kata Practitioner’s Guide (Due for publication in October 2017).

What is the Threshold of Knowledge?

“The root cause of all problems is ignorance.”

– Steven Spear

September 1901, Dayton, Ohio: Wilbur was frustrated. The previous year, 1900, he, with his brother’s help[1], had built and tested their first full-size glider. It was designed using the most up-to-date information about wing design available. His plan had been to “kite” the glider with him as a pilot. He wanted to test his roll-control mechanism, and build practice hours “flying” and maintaining control of an aircraft.

But things had not gone as he expected. The Wrights were the first ones to actually measure the lift and drag[2] forces generated by their wings, and in 1900 they were seeing only about 1/3 of the lift predicted by the equations they were using.

The picture below shows the 1900 glider being “kited.” Notice the angle of the line and the steep angle of attack required to fly, even in a stiff 20-knot breeze. Although they could get some basic tests done, it was clear that this glider would not suit their purpose.

image

In 1901 they had returned with a new glider, essentially the same design only about 50% bigger. They predicted they would get enough lift to sustain flight with a human pilot. They did succeed in making glides and testing the principle of turning the aircraft by rolling the wing. But although it could lift more weight, the lift / drag ratio was no better.

image

What They Thought They Knew

Wilbur’s original assumptions are well summarized in a talk he gave later that month at the invitation of his mentor and coach, Octave Chanute.

Excerpted from the published transcript of Some Aeronautical Experiments presented by Wilbur Wright on Sept 18, 1901 to the Western Society of Engineers in Chicago (Please give Wilbur a pass for using the word “Men.” He was living in a different era):

The difficulties which obstruct the pathway to success in flying-machine construction are of three general classes:

  1. Those which relate to the construction of the sustaining wings;
  2. those which relate to the generation and application of the power required to drive the machine through the air;
  3. those relating to the balancing and steering of the machine after it is actually in flight.

Of these difficulties two are already to a certain extent solved. Men already know how to construct wings or aeroplanes which, when driven through the air at sufficient speed, will not only sustain the weight of the wings themselves, but also that of the engine and of the engineer as well. Men also know how to build engines and screws of sufficient lightness and power to drive these planes at sustaining speed. As long ago as 1884 a machine[3] weighing 8,000 pounds demonstrated its power both to lift itself from the ground and to maintain a speed of from 30 to 40 miles per hour, but failed of success owing to the inability to balance and steer it properly. This inability to balance and steer still confronts students of the flying problem, although nearly eight years have passed. When this one feature has been worked out, the age of flying machines will have arrived, for all other difficulties are of minor importance.

What we have here is Wilbur’s high-level assessment of the current condition – what is known, and what is not known, about the problem of “powered, controlled flight.”

Summarized, he believed there were three problems to solve for powered, controlled flight:

  1. Building a wing that can lift the weight of the aircraft and a pilot.
  2. Building a propulsion system to move it through the air.
  3. Controlling the flight – going where you want to.

Based on their research, and the published experience of other experimenters, Wilbur had every reason to believe that problems (1) and (2) were solved, or easy to solve. He perceived that the gap was control and focused his attention there.

His first target condition had been to validate his concept of roll control based on “warping” (bending) the wings. In 1899 he built a kite and was able to roll, and thus turn, it at will.

At this point, he believed the current condition was that lift was understood, and that the basic concept of changing the direction by rolling the wing was valid. Thus, his next target condition was to scale his concept to full size and test it.

What Happened

Wilbur had predicted that their wing would perform with the calculated amount of lift.

When they first tested it at Kitty Hawk in 1900, it didn’t.

However, at this point, Wilbur was not willing to challenge what was “known” about flight.

Instead the 1901 glider was a larger version of the 1900 one with one major exception: It was built so they could reconfigure the airfoil easily.

Impatient, Wilbur insisted on just trying it. But, to quote from Harry Combs’ excellent history, Kill Devil Hill:

“The Wrights in their new design had also committed what to modern engineers would be an unforgivable sin. […] they made two wing design changes simultaneously and without test.”[4]

Without going into the details (get the book if you are interested) they did manage to get some glides, but were really no closer to understanding lift than they had been the previous year.

They had run past their threshold of knowledge and had assumed (with good reason) that they understood something that, in fact, they did not (nobody did).

They almost gave up.

Deliberate Learning

Being invited to speak in September actually gave Wilbur a chance to reflect, and renewed his spirits. That fall and winter, he and his brother conducted empirical wind tunnel experiments on 200 airfoil designs to learn what made a difference and what did not. In the process, as an “oh by the way,” they invented the “Wright Balance” which was the gold standard for measuring lift and drag in wind tunnel testing until electronics took over.

They went back to what was known, and experimented from there. They made no assumptions. Everything was tested so they could see for themselves and better understand.

The result of their experiments was the 1902 Wright Glider. You can see a full size replica in the ticketing area of the Charlotte, NC airport.

I’ll skip to the results:

image

Notice that the line is now nearly vertical, and the wing pointed nearly straight forward rather than steeply tipped back.

What Do We Need to Learn?

Making process improvements is a process of research and development, just like Wilbur and Orville were going through. In 1901 they fell into the trap of “What do we need to do?” After they got back to Dayton, they recovered and asked “What do we need to learn?” “What do we not understand?”

The Coaching Kata

What I have come to understand is the main purpose of coaching is to help the learner (and the coach) find that boundary between what we know (and can confirm) and what we need to learn. Once that boundary is clear, then the next experiment is equally clear: What are we going to do in order to learn? Learning is the objective of any task, experiment, or action item, because they are all built on a prediction even if you don’t think they are.

By helping the learner make the learning task explicit, rather than implicit, the coach advances learning and understanding – not only for the learner, but for the entire organization.

Where is your threshold of knowledge? How do you know?


________________________________

[1] We refer to “the Wright Brothers” when talking about this team. It was Wilbur who, in 1899, became interested in flight. Through 1900 it was largely Wilbur with his brother helping him. After 1901, though, his letters and diary entries start referring to “we” rather than “I” as the project moved into being a full partnership with Orville.

[2] The Wright Brothers used the term “drift” to refer to what, today, we call “drag.”

[3] Wilbur is referring to a “flying machine” built by Hiram Maxim.

[4] I’m not so sure that this is regarded as an “unforgivable sin” in a lot of the engineering environments I have seen, though the outcomes are similar.

Learning = Extending the Threshold of Knowledge

“My computer won’t boot.”

Mrs. TheLeanThinker’s computer was hanging on the logo screen, keyboard unresponsive.

I knew already that if the CPU were bad it wouldn’t get this far.

I also knew that the system hadn’t even tried to boot the OS from the hard drive yet, so that likely wasn’t the problem.

Working hypothesis: It’s something on the motherboard.

Start with the simple stuff that challenges the working hypothesis:

  • Hang test a different, known good, power supply. No change.
  • Pull memory cards and reinstall them one by one. No change.
  • Pull the motherboard battery, unplug, wait a few minutes to possibly reset the BIOS. No change.
  • Try holding down the DEL key on power-up to get into BIOS settings. Nope, the system still hangs; it does read that one keystroke, but the keyboard is dead after that.
  • Try Ctrl-Home to reach the BIOS flash process. Nope.

image

There is no evidence that the motherboard is not dead. Final test:

Get the numbers off the motherboard, find the same model on Amazon, order it for $37.50 to the door. (Intel hasn’t made this processor type since 2011).

New motherboard arrived today. Switch it out, takes about 30 minutes.

Boot up the machine, works OK, set the time in the BIOS, and pretty much good to go.

Convince Windows 10 that I haven’t made a bootleg copy.

Done.

The Threshold of Knowledge

I learned to code in 1973 on a PDP-8 driving teletypes. Although my programming skills are largely obsolete these days, I am comfortable poking around inside the box of a PC, and I generally know how they work. Thus, the troubleshooting and component replacement I described above was not a learning experience. Yes, I learned what was wrong with this computer. (The “bad motherboard” was a hypothesis I tested by installing a new one.) But I didn’t learn anything about computers in general.

Rather than working through experiments into new territory, I was troubleshooting. Something that had worked was not working now. My experiments were an effort to confirm the point of failure.

Therefore, as interesting as the diversion was, aside from a little research on some of the more arcane troubleshooting, it was not a learning exercise for me. It was all within my Threshold of Knowledge.

In the Improvement Kata, “threshold of knowledge” refers to the boundary between “We know for sure” and “We don’t know.” Strictly speaking, we only say “We know” when there is specific and relevant evidence to back it up.

image

In this case, my challenge (fix the wife’s computer) was well inside the red circle.

But this wouldn’t be the case for everyone.

The Threshold of Knowledge is Subjective

Someone else with the same challenge may not see this as a routine troubleshoot-and-repair task. Rather, he has to learn.

I had to learn it at some point as well. The difference is that I had already learned it. I had already made mistakes, taken a week to build a PC and get it working many years ago. I learned by experimenting and being surprised when something didn’t work, then digging in and understanding why. On occasion, especially in the early days, I consulted experts who coached me, or at least taught me what to do and why.

Coaching To Extend the Threshold of Knowledge

Learning is the whole point of the Improvement Kata. That is why we call the “improver” the “learner.” If someone encounters a problem like my example and I am responsible for developing their skills, I am not serving them if I do something like:

  • Sit down at their machine and troubleshoot it.
  • Tell them what step to take, and ask what happened so I can interpret the outcome.

That second case is deceptive. The question is “Who is doing the thinking?” If the coach is doing the thinking, then the coach isn’t coaching, and the learner isn’t learning.

In this case I would also have to recognize this is going to take longer than it would if I did it myself. That is a trap many leaders fall into. They got where they are because they can arrive at a solution quickly. But the only reason they can do that is because, at some point in the past, they had time to learn.

“My computer doesn’t boot.” If my objective is for this person to learn, then I need to go back to the steps of learning. Given that the challenge is likely “My computer operates normally,” what would be my next question to help this person learn how to troubleshoot a problem like this?

I need to know what they know. “Do you know where in the boot sequence it is hanging up?” If the answer is “No,” or just a repeat of the symptom, then my next target condition is for them to understand the high-level sequence of steps that happens between “ON” and the login screen. That would be easy to depict in a block diagram. It’s just another process. But my learner might have to do a little research, and I can certainly point him in the right direction.

I’m not going to get into the details here, because this post isn’t about troubleshooting cranky computers.

General Application

“If somebody comes to me with a problem, I have two problems.”

  • The original issue.
  • The fact that this person didn’t know how to handle it.

You can easily translate my computer example into a production quality example. A defect is produced by a process that normally does not produce them. What is different between “Defect” and “Defect-Free?” Something is. We just don’t know what.

Is it something we need to learn? Something we need to teach? Or something we need to communicate?

If my working challenge for my organization is something like “Everyone knows everything they need to do their jobs perfectly,” then I am confronted every day with evidence that this is not as true as I would like.

If I look at those interventions as “the boss just doing his job” then I lose the opportunity to teach and to grow the organization. I am showing how much I know, and by doing so just extending the dependency. That might feel good in the short term, but it doesn’t do much for the future.

Think about this… in your organization, if the boss were promoted or hired out of the job tomorrow, would you look outside the immediate organization for a replacement? If so, you are not developing your people. When I see senior leaders being hired from outside, all I can do is wonder why they have so little faith in the people they already have.

_________

*I remember when Gateway built their own machines, which I guess shows how long I’ve been playing with PCs. Then again, I remember when the premium brand was Northgate. Of course, I also remember programming on punch cards.

Coaching Kata: Walking Through an Improvement Board

Improver's Storyboard

The Coaching Kata is much more than just asking the 5 questions. The coach needs to pay attention to the answers and make sure the thinking flows.

Although I have alluded to pieces in prior posts, today I want to go over how I try to connect the dots during a coaching cycle.

Does the learner understand the challenge she is striving for?

The “5 Questions” of the Coaching Kata do not explicitly ask about the challenge the learner is striving for. This makes sense because the challenge generally doesn’t change over the course of a week or two.

But I often see challenges that are vague, defined only by a general direction like “reduce.” The question I ask at that point is “How will you know when you have achieved the challenge?”

If there isn’t a measurable outcome (and sometimes there isn’t), I am probing to see if the learner really understands what he must achieve to meet the challenge.

This usually comes up when I am 2nd coaching and the learner and regular coach haven’t really reached a meeting of the minds on what the challenge is.

Is the target condition a logical step in the direction of the challenge?

And is the target condition based on a thorough grasp of the current condition?

I’m going to start with this secondary question since I run into this issue more often, especially in organizations with novice coaches. (And, by definition, that is most of the organizations where I spend time.)

It is quite common for the learner to first try to establish a target condition, and then grasp the current condition. Not surprisingly, they struggle with that approach. It sometimes helps to have the four steps of the Improvement Kata up near the board, and even go as far as to have a “You are here” arrow.

Four steps of the Improvement Kata
(c) Mike Rother

Another question I ask myself is Can I directly compare the target condition and the current condition? Can I see the gap, can I see the same indicators and measurements used for each so I can compare “apples vs apples”?

Along with this is the same question I ask for the challenge, only more so for the target condition:

How will the learner be able to tell when the target is met? Since this has a short-term deadline, I am really looking for a crisp, black-and-white line here. The target condition is either met or not met on the date.

Is there a short-term date that is in the future?

It is pretty common for a novice learner to set a target condition equal to the challenge. If they are over-reaching, I’ll impose a date, usually no more than two weeks out. “Where will you be in two weeks?” Another way to ask is “What will the current condition be in two weeks?”

Sometimes the learner has slid up to the date and past it. Watch for this! If the date comes up without hitting the target, then it is time to reflect and establish a new target condition in the future.

Is the target condition a step in the direction of the challenge?

Usually the link between the target condition and the overall challenge is pretty obvious. Sometimes, though, it isn’t clear to the coach, even if it is clear in the mind of the learner. In these cases, it is important for the coach to ask.

Key Point: The coach isn’t rigidly locked into the script of the 5 questions. The purpose of follow-up questions is to (1) actually get an answer to the Coaching Kata questions and (2) make sure the coach understands how the learner is thinking. Remember coach: It is the learner’s thinking that you are working to improve, so you have to understand it!

(And occasionally the learner will try to establish a target condition that really isn’t related to the challenge.)

Does the “obstacle being addressed” actually relate to the target condition?

(Always keep your marshmallow on top!)

The question is “What obstacles do you think are preventing you from reaching the target condition?” That question should be answered with a reading of all of the obstacles. (Again, the coach is trying to understand what the learner is thinking.) Then “Which one (obstacle) are you addressing now?”

Generally I give a pretty broad (though not infinitely broad) pass to the obstacles on the list. They are, after all, the learner’s opinion (“…do you think are…”). But when it comes to the “obstacle being addressed now” I apply a little more scrutiny.

I have addressed this with a tip in a previous post: TOYOTA KATA: IS THAT REALLY AN OBSTACLE?

It is perfectly legitimate, especially early on, for an obstacle to be something we need to learn more about. The boundary between “Grasp the current condition” and “Establish the next target condition” can be blurry. As the focus is narrowed, the learner may well have to go back and dig into some more detail about the current condition. If that is impeding getting to the target, then just write it down, and be clear what information is needed. Then establish a step that will get that information.

Sometimes the learner will write down every obstacle they perceive to reaching the challenge. The whole point of establishing a Target Condition is to narrow the scope of what needs to be worked on down to something easier to deal with. When I focus them on only the obstacles that relate directly to their Target Condition, many are understandably reluctant to simply cross other (legitimate, just not “right now”) issues off the list.

In this case it can be helpful to establish a second Obstacle Parking Lot off to the side that has these longer-term obstacles and problems on it. That does a couple of things. It can remind the coach (who is often the boss) that, yes, we know those are issues, but we aren’t working on those right now. Other team members who contribute their thinking can also know they were heard, and those issues will be addressed when they are actually impeding progress.

Does the “Next step or experiment” lead to learning about the obstacle being addressed?

Sometimes it helps to have the learner first list what they need to learn, and then fill in what they are going to do. See this previous post for the details: IMPROVEMENT KATA: NEXT STEP AND EXPECTED RESULT.

In any case, I am looking to see an “Expected result” that at least implies learning.

In “When can we go and see what we have learned from taking that step?” I am also looking for a fast turn-around. It is common for the next step to be bigger than it needs to be. “What can you do today that will help you learn?” can sometimes help clarify that the learner doesn’t always need to try a full-up fix. It may be more productive to test the idea in a limited way just to make sure it will work the way she thinks it will. That is faster than a big project that ends up not working.

The Root (Cause) Of All Problems

One of my clients has been working with Steve Spear. They shared a great point he made with me, and I want to share it with you:

“The root cause of all problems is ignorance.”

— Steven Spear

I can’t argue with that. If you don’t understand the root cause, you need to learn more about the problem. And to be clear, “problem solving” and “improvement” are learning processes. If you didn’t learn, you didn’t solve the problem, and you didn’t improve anything. At best you suppressed the symptoms.

So… next time you encounter a problem (which is likely as soon as you put your head up from reading this), instead of asking “What should we do?” ask “What do we need to learn about this?” It will set people off in an entirely different (and far more robust) direction.

Averages, Percentages and Math

As a general rule I strongly discourage the use of averages and “percent improvement” (or reduction) type metrics for process improvement.

The Problem with Averages

Averages can be very useful when used as part of a rigorous statistical analysis. Most people don’t do that. They simply dump all of their data into a simple arithmetic mean, and determine a score of sorts for how well the process is doing.

The Average Trap

There is a target value. Let’s say it is 15. Units could be anything you want. In this example, if we are at or above 15, we’re good. Under 15, not good.

“Our goal is 15, and our average performance is 20.”

Awesome, right?

Take a look at those two run charts below*. They both have an average of 20.

On the first one, 100% of the data points meet or exceed the goal of 15.

Run chart with average of 20, all points higher than 15.

On the one below, 11 points miss the goal.

Run chart with average of 20, 11 points below the goal of 15.

But they both have an average 5 points over the goal.

In this case, the “average” really gives you almost no information. I would have them measure hits and misses, not averages. The data here is contrived, but the example I am citing is something I have seen multiple times.
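As an illustration (my own contrived numbers, not data from any client), here is a minimal Python sketch of the trap: two processes with averages in the same comfortable-looking range, where only a hit/miss count against the goal of 15 reveals the difference.

```python
import random

GOAL = 15

def hits_and_misses(data, goal=GOAL):
    """Count points that meet the goal vs. points that miss it."""
    hits = sum(1 for x in data if x >= goal)
    return hits, len(data) - hits

random.seed(1)

# Process A: bounded random values that never drop below the goal.
process_a = [random.uniform(16, 24) for _ in range(25)]

# Process B: same long-run average, much wider variation, frequent misses.
process_b = [random.uniform(5, 35) for _ in range(25)]

for name, data in (("A", process_a), ("B", process_b)):
    average = sum(data) / len(data)
    hits, misses = hits_and_misses(data)
    print(f"Process {name}: average = {average:.1f}, "
          f"hits = {hits}, misses = {misses}")

# Both processes report an average comfortably above the goal, but only
# the hit/miss count shows that Process B regularly fails to meet 15.
```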

Why? Most people learned how to calculate an arithmetic mean in junior high school. It’s easy. It’s easier to put down a single number than to build a run chart and look at every data point. And once that single number is calculated, the data are often thrown away.

Be suspicious when you hear “averages” used as a performance measurement.

Using Averages Correctly

(If you understand elementary statistical testing you can skip this part… except I’ve seen experts who should have known better fall into the trap I am about to describe, so maybe you shouldn’t skip it after all.)

In spite of what I said above, there are occasions when using an average as a goal or as part of a target condition makes sense.

A process running over time produces a range of values that fall into a distribution of some kind.

Any sample you take is just that – a sample. Take a big enough sample, and you can become reasonably confident that the average of your sample represents (meaning is close to) the average of everything.

The more variation there is, the bigger sample you need to gain the same level of certainty (which is really expressed as the probability you are wrong).

The more certain you want to be, the bigger sample you need.
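For readers who want to see that relationship as arithmetic, here is a small sketch using the standard normal-approximation sample-size formula, n ≈ (z·s / E)². This is my own illustration (the numbers and the formula choice are mine): s is the standard deviation of the process, E is how close the sample average needs to be, and z reflects how certain you want to be.

```python
import math

def sample_size(std_dev, margin_of_error, z=1.96):
    """Approximate sample size needed to estimate a mean within
    +/- margin_of_error; z = 1.96 corresponds to ~95% confidence."""
    return math.ceil((z * std_dev / margin_of_error) ** 2)

# Same margin, same confidence: doubling the variation roughly
# quadruples the sample you need.
print(sample_size(std_dev=2.0, margin_of_error=1.0))          # 16
print(sample_size(std_dev=4.0, margin_of_error=1.0))          # 62

# Same variation, but ~99% confidence (z = 2.58): bigger still.
print(sample_size(std_dev=4.0, margin_of_error=1.0, z=2.58))  # 107
```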

Let’s say you’ve done that. So now you have an average (usually a mean) value.

Since you are (presumably) trying to improve the performance, you are trying to shift that mean – to change the average to a higher or lower value.

BUT remember there is variation. If you take a second sample of data from an unchanged process and calculate that sample’s average, YOU WILL GET A DIFFERENT AVERAGE. It might be higher than the first sample, it might be lower, but the likelihood that it will be exactly the same is very, very small.

The averages will be different even if you haven’t changed anything.

You can’t just look at the two numbers and say “It’s better.” If you try, the NEXT sample you take might look worse. Or it might not. Or it might look better, and you will congratulate yourself.

If you start turning knobs in response, you are chasing your tail and making things worse because you are introducing even more variables and increasing the variation. Deming called this “Tampering” and people do it all of the time.

Before you can say “This is better” you have to calculate, based on the amount of variation in the data, how much better the average needs to be before you can say, with some certainty, that this new sample is from a process that is different than the first one.

The more variation there is, the more difference you need to see. The more certainty you want, the more difference you need to see. This is called “statistical significance” and is why you will see reports that seem to show something is better, but seem to be dismissed as a “statistically insignificant difference” between, for example, the trial medication and the placebo.

Unless you are applying statistical tests to the samples, don’t say “the average is better, so the process is better.” The only exception would be if the difference is overwhelmingly obvious. Even then, do the math just to be sure.
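Here is one minimal example of what “applying a statistical test” can look like. The numbers are made up, and Welch’s two-sample t-test is just one common choice on my part, not a prescription from anything above:

```python
from scipy import stats

# Hypothetical "before" and "after" samples of the same process metric.
before = [18, 22, 19, 25, 17, 21, 23, 20, 24, 19]
after  = [21, 24, 20, 26, 22, 23, 25, 21, 27, 22]

print(f"average before = {sum(before) / len(before):.1f}")
print(f"average after  = {sum(after) / len(after):.1f}")

# Welch's t-test asks: is the difference between the two averages bigger
# than ordinary sample-to-sample variation would produce on its own?
t_stat, p_value = stats.ttest_ind(after, before, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

if p_value < 0.05:
    print("The shift is statistically significant (at the 95% level).")
else:
    print("The averages differ, but not by more than normal variation explains.")
```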

I have personally seen a Six Sigma Black Belt(!!) fall into this trap – saying that a process had “improved” based on a shift in the mean of a short sample without applying any kind of statistical test.

As I said, averages have a valuable purpose – when used as part of a robust statistical analysis. But usually that analysis isn’t there, so unless it is, I always want to see the underlying numbers.

Sometimes I hear “We only have the averages.” Sorry, you can’t calculate an average without the individual data points, so maybe we should go dig them out of the spreadsheet or database. They might tell us something.

The Problem with Percentages

Once again, percentages are valuable analysis tools, so long as the underlying information isn’t lost in the process. But there are a couple of scenarios where I always ask people not to use them.

Don’t Make Me Do Math

“We predict this will give us a 23% increase in output.”

That doesn’t tell me a thing about your goal. It’s like saying “Our goal is better output.”

Here is my question:

“How will you know if you have achieved it?”

For me to answer that question for myself, I have to do math. I have to take your current performance and multiply it by 1.23 to calculate what your goal is.

If that number is your goal, then just use the number. Don’t make me do math to figure out what your target is.

Same thing for “We expect 4 more units per hour.”

“How many units do you expect per hour?” “How many are you producing now?” (compared to what?)
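The math I end up doing in my head is trivial, which is exactly why the person making the claim should do it instead. A sketch, with hypothetical numbers:

```python
current_output = 100        # units per hour today (hypothetical baseline)
claimed_gain = 0.23         # "a 23% increase in output"

target_output = current_output * (1 + claimed_gain)
print(f"State the goal as a number: {target_output:.0f} units per hour")

# Later, compare the actual result to that explicit number and name the gap,
# rather than quoting another percentage of an unstated baseline.
actual_output = 112         # hypothetical measured result
print(f"Gap remaining: {target_output - actual_output:.0f} units per hour")
```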

Indicators of a W.A.G.

How often do you hear something like  “x happens 90 percent of the time”?

I am always suspicious of round numbers because they typically have no analysis behind them. When I hear “75%” or “90%” I am pretty sure it’s just speculation with no data.

These things sound very authoritative and it is easy for the uncertainty to get lost in re-statement. What was a rough estimate ends up being presented as a fact-based prediction.

At Boeing someone once defined numbers like this as “atmospheric extractions.”

If the numbers are important, get real measurements. If they aren’t important, don’t use them.

Bottom Line Advice:

Avoid averages unless they are part of a larger statistical testing process.

Don’t set goals as “percent improvement.” Do the math yourself and calculate the actual value you are shooting for. Compare your actual results against that value and define the gap.

When there is a lot of variation in the number of opportunities for success (or not) during a day or a week, think about something that conveys “x of X opportunities” in addition to a percent. When you have that much variation in your volume, fluctuations in percent of success from one day to the next likely don’t mean very much anyway.

Look at the populations – what was different about the ones that worked vs. the ones that didn’t — rather than just aggregating everything into a percentage.

Be suspicious of round numbers that sound authoritative.

_______________

*These charts are simply independent random numbers with upper and lower bounds on the range. Real data is likely to have something other than a flat distribution, but these make the point.

Toyota Kata: Is That Really an Obstacle?

“What obstacles do you think are preventing you from reaching the target condition?”

When the coach asks that question, she is curious about what the learner / improver believes are the unresolved issues, sources of variation, problems, etc. that are preventing the process from operating routinely the way it should (as defined by the target condition).

I often see things like “training” or, worse, a statement that simply says we aren’t operating the way the target says.

Here is a test I have started applying.

Complete this sentence:

“We can’t (describe the target process) because ________.”

Following the word “because,” read the obstacle verbatim. Read exactly what it says on the obstacle parking lot. Word for word.

If that does not make a grammatically coherent statement that makes sense, then the obstacle probably needs to be more specific.


Prediction Doesn’t Equal Understanding

Lunar Eclipse over Everett, WA. Photo by Mark Rosenthal, © 2015

Sometimes people fall into a trap of believing they understand a process if they can successfully predict its outcome. We see this in meetings. A problem or performance gap will be discussed, and an action item will be assigned to implement a solution.

Tonight those of us in the western USA saw the moon rise in partial eclipse.

We knew this would happen because our understanding of orbital mechanics allows us to predict these events… right?

Well, sort of. Except we have been predicting astronomical events like this for thousands of years, long before Newton, or even Copernicus.

The photo below is of a sophisticated computer that predicted lunar eclipses, solar eclipses, and other astronomical events in 1600BC (and earlier). Click through the photo for an explanation of how Stonehenge works:

Photo of Stonehenge
Creative Commons flickr user garethwiscombe

Stonehenge represented a powerful descriptive theory. That is, a sufficient level of understanding to describe the phenomena the builders were observing. But they didn’t know why those phenomena occurred.

Let’s go to our understanding of processes.

The ability to predict the level of quality fallout does not indicate understanding of why it occurs. All it tells you is that you have made enough observations that you can conclude the process is stable, and will likely keep operating that way unless something materially changes. That is all statistical process control tells you.

Likewise, the ability to predict how long something takes does not indicate understanding of why. Obviously I could continue on this theme.

A lot of management processes, though, are quite content with the ability to predict. We create workforce plans based on past experience, without ever challenging the baseline. We create financial models and develop “required” levels of inventory based on past experience. And all of these models are useful for their intended purpose: Creating estimates of the future based on the past.

But they are inadequate for improvement or problem solving.

Let’s say your car has traditionally gotten 26 miles-per-gallon of fuel. That’s not bad. (For my non-US readers, that’s about 9 liters / 100 km.) You can use that information to predict how far a tank of fuel will get you, even if you have no idea how the car works.

If your tank holds 15 gallons of fuel, you’ll be looking to fill after driving about 300 miles.

But what if you need to get 30 miles-per-gallon?

Or what if all of a sudden you are only getting 20 miles-per-gallon?

If you are measuring, you will know the gap you need to close. In one case you will need to improve the operation of the vehicle in some way. In the other case, you will need to determine what has changed and restore the operation to the prior conditions.

In both of those cases, if you don’t know how the car operated to deliver 26 miles-per-gallon, it is going to be pretty tough. (It is a lot harder to figure out how something is supposed to work if it is broken before you start troubleshooting it.)

Here’s an even more frustrating scenario: On the last tank of fuel, you measured 30 miles per gallon, but have no idea why things improved! This kind of thing actually happens all of the time. We have a record month or quarter, it is clearly beyond random fluctuation, but we don’t know what happened.

The Message for Management:

If you are managing to KPIs only, and can’t explain the process mechanics behind the measurements you are getting, you are operating in the same neolithic process used by the builders of Stonehenge. No matter how thoroughly they understood what would happen, they did not understand why.

If your shipments are late, if your design process takes too long, if your quality or customer service is marginal, if the product doesn’t meet customers’ expectations, and you can’t explain the mechanisms that are causing these things (or the mechanisms of a process that operates reliably and acceptably), then you aren’t managing; you are simply directing people to make the eclipse happen on a different day.

“Seek first to understand.”

Dig in, go see for yourself. Let yourself be surprised by just how hard it is to get stuff done.


How Do We Deal With Multiple Shifts?

This is a pretty common question.

Today I was talking to a department director in a major regional hospital that is learning Toyota Kata. She picked it up very quickly, and wants to take the learning to the off-shifts.

She (rightly) doesn’t want the night shift to just be deploying what day shift develops, she wants night shift totally involved in making improvements as well. Awesome.

Her question was along the lines of “How do I maintain continuity of the effort across both shifts?” She was jumping into asking how to provide good coaching support, whether there were separate boards, or a single board etc. and playing out the problems with each scenario.

My reply was pretty simple. “I don’t know.”

“What do you want to see your learners doing if they are working the way you envision?”

In other words, “What is your target condition?”

But… how do we coach them, and so on?

I don’t know. But until we understand what we want the improvement process to look like, especially across the shift boundaries, we can’t say. Different target conditions will have different obstacles.

And what worked at Boeing, or Genie, or Kodak, or even another hospital I’ve worked with likely won’t work here in your hospital. The conditions are different. The conditions are different in different departments in the same hospital!

She admitted that she was having a hard time thinking about a target without dealing with all of the potential obstacles first. My suggestion was that this challenge belongs on her own improvement board, and the best way to work out a solution was to actually follow the Improvement Kata (which she has been doing such a great job of coaching for the last month).

Trust the process. Once there is a clear target condition for the people doing the work (in this case, the learners / improvers), then we’ll better understand the obstacles we actually have to deal with. That will likely be fewer than every possible problem we can think of right now.

Establish your target condition, then list your obstacles, then start working on them one by one.

The Improvement Kata is exactly the tool to apply when you know you want to do something, but can’t figure out exactly how to do it.

Step by step.

Keep it up, Susan. :-)