Toyota Kata: What is the Learner Learning?

In the language of Toyota Kata we have a “coach” and a “learner.” Some organizations use the word “improver” instead of “learner.” I have used those terms more or less interchangeably. Now I am getting more insight into what the “learner” is learning.

The obvious answer is that, by practicing the Improvement Kata, the learner is learning the thinking pattern that is behind solid problem solving and continuous improvement.

But now I am reading more into the role. The “learner” is also the one who is learning about the process, the problems, and the solutions.

Steve Spear has a mantra of “See a problem, solve a problem, teach somebody.” This is, I think, the role of the learner.

What about the coach?

The coach is using the Coaching Kata to learn how to ask questions that drive learning. He may also be un-learning the habit of simply having all of the answers.

Even as the coach develops skill, I advise sticking to the Coaching Kata structure for the benefit of beginning learners. It is easier for them to be prepared if they understand the questions and how to answer them. That, in turn, teaches them the thinking required to develop those answers.

Everybody is a Learner

The final question in the “5 Questions” is “When can we go and see what we have learned from taking that step?” It isn’t “When can I see what you have learned?” It is a “we” question because nobody knows the answers yet.

The Root (Cause) Of All Problems

One of my clients has been working with Steve Spear. They shared a great point he made with me, and I want to share it with you:

“The root cause of all problems is ignorance.”

— Steven Spear

I can’t argue with that. If you don’t understand the root cause, you need to learn more about the problem. And to be clear, “problem solving” and “improvement” are learning processes. If you didn’t learn, you didn’t solve the problem, and you didn’t improve anything. At best you suppressed the symptoms.

So… next time you encounter a problem (which is likely as soon as you put your head up from reading this), instead of asking “What should we do?” ask “What do we need to learn about this?” It will set people off in an entirely different (and far more robust) direction.

Averages, Percentages and Math

As a general rule I strongly discourage the use of averages and “percent improvement” (or reduction) type metrics for process improvement.

The Problem with Averages

Averages can be very useful when used as part of a rigorous statistical analysis. Most people don’t do that. They simply dump all of their data into an arithmetic mean and treat it as a score of sorts for how well the process is doing.

The Average Trap

There is a target value. Let’s say it is 15. Units could be anything you want. In this example, if we exceed 15, we’re good. Under 15, not good.

“Our goal is 15, and our average performance is 20.”

Awesome, right?

Take a look at the two run charts below.* They both have an average of 20.

On the first one, 100% of the data points meet or exceed the goal of 15.

[Run chart with average of 20, all points higher than 15]

On the one below, 11 points miss the goal.

[Run chart with average of 20, 11 points below the goal of 15]

But they both have an average 5 points over the goal.

In this case, the “average” really gives you almost no information. I would have them measure hits and misses, not averages. The data here is contrived, but the example I am citing is something I have seen multiple times.
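
To make that concrete, here is a minimal sketch in Python, with made-up numbers, of two data sets that share a mean of 20 but tell very different stories against a goal of 15:

```python
# Two contrived data sets, both with a mean of 20, measured against a goal of 15.
# Illustrates why "our average is 20" alone says nothing about goal attainment.

goal = 15

steady  = [18, 21, 19, 22, 20, 20, 19, 21, 20, 20]   # never misses the goal
erratic = [5, 35, 10, 30, 8, 32, 12, 28, 14, 26]     # misses it half the time

for name, data in (("steady", steady), ("erratic", erratic)):
    mean = sum(data) / len(data)
    misses = sum(1 for x in data if x < goal)
    print(f"{name}: mean = {mean:.1f}, missed the goal {misses} of {len(data)} times")
```

Both data sets report exactly the same average; only counting the hits and misses reveals the difference.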

Why? Most people learned how to calculate an arithmetic mean in junior high school. It’s easy. It’s easier to put down a single number than to build a run chart and look at every data point. And once that single number is calculated, the data are often thrown away.

Be suspicious when you hear “averages” used as a performance measurement.

Using Averages Correctly

(If you understand elementary statistical testing you can skip this part… except I’ve seen experts who should have known better fall into the trap I am about to describe, so maybe you shouldn’t skip it after all.)

In spite of what I said above, there are occasions when using an average as a goal or as part of a target condition makes sense.

A process running over time produces a range of values that fall into a distribution of some kind.

Any sample you take is just that – a sample. Take a big enough sample, and you can become reasonably confident that the average of your sample represents (meaning is close to) the average of everything.

The more variation there is, the bigger the sample you need to gain the same level of certainty (which is really expressed as the probability you are wrong).

The more certain you want to be, the bigger sample you need.
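
As a rough illustration of both statements, here is a small Python sketch using the textbook normal-approximation formula for sample size, n = (z·σ / E)², where E is the margin of error you will tolerate on the estimated mean. The σ and margin values are invented for the example:

```python
# Rough sample-size sketch using the normal approximation n = (z * sigma / E)^2.
# sigma = process standard deviation, E = tolerable margin of error on the mean,
# z = standard normal value for the confidence level you want.
# The sigma and margin values below are made up for illustration.

import math

Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}   # two-sided z values

def sample_size(sigma, margin, confidence):
    return math.ceil((Z[confidence] * sigma / margin) ** 2)

for sigma in (2.0, 4.0):                 # more variation...
    for conf in (0.90, 0.95, 0.99):      # ...or more certainty...
        n = sample_size(sigma, margin=1.0, confidence=conf)
        print(f"sigma = {sigma}, confidence = {conf:.0%}: n = {n}")
# ...and the required sample size goes up.
```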

Let’s say you’ve done that. So now you have an average (usually a mean) value.

Since you are (presumably) trying to improve the performance, you are trying to shift that mean – to change the average to a higher or lower value.

BUT remember there is variation. If you take a second sample of data from an unchanged process and calculate that sample’s average, YOU WILL GET A DIFFERENT AVERAGE. It might be higher than the first sample, it might be lower, but the likelihood that it will be exactly the same is very, very small.

The averages will be different even if you haven’t changed anything.

You can’t just look at the two numbers and say “It’s better.” If you try, the NEXT sample you take might look worse. Or it might not. Or it might look better, and you will congratulate yourself.

If you start turning knobs in response, you are chasing your tail and making things worse because you are introducing even more variables and increasing the variation. Deming called this “Tampering” and people do it all of the time.

Before you can say “This is better” you have to calculate, based on the amount of variation in the data, how much better the average needs to be before you can say, with some certainty, that this new sample is from a process that is different than the first one.

The more variation there is, the more difference you need to see. The more certainty you want, the more difference you need to see. This is called “statistical significance,” and it is why you will see reports that seem to show something is better, only to be dismissed because there is a “statistically insignificant difference” between, for example, the trial medication and the placebo.

Unless you are applying statistical tests to the samples, don’t say “the average is better, so the process is better.” The only exception would be if the difference is overwhelmingly obvious. Even then, do the math just to be sure.
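
If you want a concrete picture of what “doing the math” can look like, here is a minimal sketch using Welch’s two-sample t-test from scipy; the before/after data are invented:

```python
# A minimal "do the math" sketch: test whether a shift in the sample mean is
# statistically significant before claiming the process has changed.
# The before/after data are made up. Requires scipy (pip install scipy).

from scipy import stats

before = [20, 22, 19, 21, 23, 18, 20, 22, 21, 19]
after  = [21, 24, 20, 22, 23, 21, 22, 25, 23, 22]

# Welch's t-test does not assume the two samples have equal variance.
t_stat, p_value = stats.ttest_ind(after, before, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Only if p falls below your chosen threshold (0.05 is conventional) can you
# say, with that level of confidence, that the difference in averages reflects
# a real change rather than ordinary sample-to-sample variation.
```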

I have personally seen a Six Sigma Black Belt(!!) fall into this trap – saying that a process had “improved” based on a shift in the mean of a short sample without applying any kind of statistical test.

As I said, averages have a valuable purpose when used as part of a robust statistical analysis. But usually that analysis isn’t there, so I always want to see the underlying numbers.

Sometimes I hear “We only have the averages.” Sorry, you can’t calculate an average without the individual data points, so maybe we should go dig them out of the spreadsheet or database. They might tell us something.

The Problem with Percentages

Once again, percentages are valuable analysis tools, so long as the underlying information isn’t lost in the process. But there are a couple of scenarios where I always ask people not to use them.

Don’t Make Me Do Math

“We predict this will give us a 23% increase in output.”

That doesn’t tell me a thing about your goal. It’s like saying “Our goal is better output.”

Here is my question:

“How will you know if you have achieved it?”

For me to answer that question for myself, I have to do math. I have to take your current performance and multiply it by 1.23 to calculate what your goal is.

If that number is your goal, then just use the number. Don’t make me do math to figure out what your target is.

Same thing for “We expect 4 more units per hour.”

“How many units do you expect per hour?” “How many are you producing now?” Four more, compared to what?
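
A trivial sketch, with a made-up baseline, of what I mean by “just use the number”:

```python
# Turn a "23% increase" goal into the actual number people can check against.
# The baseline of 640 units is made up for illustration.

baseline = 640                      # current output
goal = baseline * 1.23              # the "23% increase," stated as a target
print(f"Goal: {goal:.0f} units")    # -> Goal: 787 units; state this, not "23%"
```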

Indicators of a W.A.G.

How often do you hear something like “x happens 90 percent of the time”?

I am always suspicious of round numbers because they typically have no analysis behind them. When I hear “75%” or “90%” I am pretty sure it’s just speculation with no data.

These things sound very authoritative and it is easy for the uncertainty to get lost in re-statement. What was a rough estimate ends up being presented as a fact-based prediction.

At Boeing someone once defined numbers like this as “atmospheric extractions.”

If the numbers are important, get real measurements. If they aren’t important, don’t use them.

Bottom Line Advice:

  • Avoid averages unless they are part of a larger statistical testing process.

  • Don’t set goals as “percent improvement.” Do the math yourself and calculate the actual value you are shooting for. Compare your actual results against that value and define the gap.

  • When there is a lot of variation in the number of opportunities for success (or not) during a day or a week, consider something that conveys “x of X opportunities” in addition to a percent. With that much variation in your volume, fluctuations in the percent of success from one day to the next likely don’t mean very much anyway.

  • Look at the populations – what was different about the ones that worked vs. the ones that didn’t – rather than just aggregating everything into a percentage.

  • Be suspicious of round numbers that sound authoritative.

_______________

*These charts are simply independent random numbers with upper and lower bounds on the range. Real data is likely to have something other than a flat distribution, but these make the point.

The Improvement Kata: Next Step and Expected Result

In the Improvement Kata sometimes it helps to think about the outcome desired and then the step required to accomplish it.

A couple of months ago, I gave a tip I’ve learned for helping a coach vet an obstacle.

Another issue I come across frequently is a weak link between the “Next Step” and the “Expected Outcome.”

In the “Five Questions” of the Coaching Kata we have:

“What is your next step or experiment?” Here we expect the learner / improver to describe something he is going to do. I’m looking for a coherent statement here that includes a subject, a verb, and an object.

Then we ask “What do you expect?” meaning “What do you expect to happen?” or “What do you expect to learn?” from taking that step?

I want to see that the “Expected Result” is a clear and direct consequence of taking the “Next Step.”

Often, though, the learner struggles a bit with being clear about the expected outcome, or just re-states the next step in the past tense.

While this is the order we ask the questions, sometimes it helps to think about them in reverse.

Reverse the Order

Have the learner first think about (and then describe) what she is trying to accomplish with this step. Look at the obstacle being addressed, and what was learned from the last step.

Based on those things, ask “what do you want to accomplish with your next step?”

The goal here is to get the learner to think about the desired result. Don’t be surprised if that is still stated as something to do, because we are all conditioned to think in terms of action items, not outcomes.

“What do you need to learn?” sometimes helps.

“I need to learn if ______ will eliminate the problem.” might be a reply.

Even a proposed change to the process usually has “to learn if” as an expected outcome, because we generally don’t know for certain what the outcome will be until we try it.

Have the learner fill in the “Expected Outcome” block.

NOW ask “OK, what do you have to do to ______ (read what is in the expected outcome)?”

[Diagram: PDCA outcome-activity]

That should get your learner thinking about the actions that will lead to that outcome.

A Verbal Test

A verbal test can be to say “In order to ______ (read the expected outcome), I intend to _____ (read the next step).”

If that makes sense grammatically and logically, it is probably well thought out.

The Destructiveness of “What Can You Improve?”


Leaders often ask “What can you improve?” as an empowerment question. In reality, it may have the opposite effect.

I am coming to the belief that “What can you improve?” (about your job, about your process) is possibly one of the most demotivating, disempowering, destructive questions that can be asked.

“What can you work on?” is one of the many forms this question takes. “How could you improve this process?” is another. What they all have in common is the psychological trap they set.

Now this really isn’t that much of a problem in a company that has a history of transparency in leadership, comfort with discussing the truth, and no need for excuses or justifications. Then again, those companies tend not to ask these questions straight-on.

But the vast majority of organizations aren’t like that. That doesn’t mean they are unkind. Rather, they operate in an environment where truthfully answering this question is difficult at best.

The Psychological Trap

To answer that question with anything other than a guess at what you want implies that I have:

  • Thoroughly examined my results and the underlying processes.
  • Identified gaps in performance.
  • Determined what to do about those gaps.
  • And haven’t done anything about any of it until you asked.

This puts me in the position of either defending the status quo or saying that I need to improve something that is out of my control – someone else’s process needs improvement so I can do better.

Hint: If you are a leader, and you ask a “What can you improve?” question and get an answer like the above – defending the status quo or pointing to an outside problem – there is fear in your organization. Justified or not, the person answering is struggling to maintain the impression that everything they can do is being done. Why do they feel the need to do this? Think about it.

This is especially pervasive in support / staff departments with a charter of influencing how other organizations perform, or in those who must work together with line organizations to succeed in their tasks. In industry this might be maintenance, HR, industrial engineering, or even the “improvement office” (which is often not a beacon of internal efficiency or effectiveness).

A Bit of Background

When I start working with an organization, we usually start with practicing the basic mechanics of the Improvement Kata in a classroom setting. We then follow up immediately with kick-starting some live improvement cycles so we can begin practical application. Classroom learning really doesn’t do much good unless it is applied immediately.

Applying the Improvement Kata is a lot harder in the real world than it is in the classroom. I could go into a tangential rant on why I think our primary and secondary education system makes it harder, but I’ll save that for another day.

Even though I am as adamant as I can be on the importance of the organization identifying challenges for the new improvers / learners, the reality is that most organizations don’t know how to do this, or at least aren’t comfortable with it.*

As a result, the new improvers often struggle to define a “challenge” for themselves.

They guess – because they haven’t yet studied their process (which is the next step once context is established), they haven’t yet established a target condition (which is the step after that), and therefore they haven’t identified what improvements they must make to get to the challenge state.

And if that guess is something in someone else’s domain, or worse, if the “coach” has something else in mind, they are told “That’s not it.” They guess again, and eventually get defensive or give up.

Now – to be clear – this doesn’t happen every time. But I have seen it enough times, across multiple organizations in very different domains, to know that it’s a problem. And it is frustrating for everyone when it happens.

I indirectly addressed this topic a long time ago in “How the Sensei Sees.” Now, though, I am talking about my own direct observation of the effect. And I am still learning how to deal with the fallout without becoming part of the problem.

It’s not the learner’s problem. It is a leadership problem.

________

*Dave Kilgore at Continental Automotive had the additional insight that it is important for beginners that this challenge should be something important but not urgent so they don’t feel pressured to jump to an immediate solution. This is a good example of “constancy of purpose” – his priority is developing the skill level for improvement first.