Leadership: Deal With The True Constraint

I am starting to read a review copy (courtesy of McGraw-Hill) of Jeff Liker and Gary Convis’ new book, The Toyota Way to Lean Leadership. (The hot link goes to my Amazon page.)

In the spirit of one-piece-flow, I am going to share key thoughts as I go rather than save everything for a thousand-word review at the end.

One of the first points that comes out – in the prologue no less – is the acknowledgement that people development is a constraint to growth that you ignore at your peril.

One of the results of Toyota’s breakneck pace of growth in the first half of the last decade was that they were still making North American decisions in Japan.

They were doing this because, in the authors’ words, “…Toyota did not develop enough leaders, or did not develop leaders that it trusted sufficiently, in the North American operation to allow decision making and problem solving to be as close to the gemba as they should have been.”

But rather than say “we grew too fast,” the President, Akio Toyoda, sees the limits and the relationship:

“The problem was that the pace of growth was faster than the pace of human resource development… It is not the growth pace itself, but it is the relationship between the pace of growth and the pace of [people development].”

When traditionally trained managers think about constraints to growth, they typically think about things they can buy. “People” as a constraint comes in only as a hiring problem.

But it takes time to develop “people” into a team that thinks and moves in unison. Today’s leaders, up to this point Toyota included, underestimate both the time and the effort it takes to do that.

Any good sports team knows what it takes to build a team. So does the military. We understand the science, the psychology. But perhaps because it is difficult and sometimes messy to deal with people (and it is certainly impossible to reduce the effect of good teamwork to a stoplight report and a spreadsheet), “people development” gets delegated to HR, or people are sent to classroom training and given “certifications.” Doesn’t work, never has.

Akio Toyoda was acknowledging an uncomfortable truth – that they had fallen behind on people development and they had continued anyway, without pulling the metaphorical andon and addressing the issue as soon as it came up.

This simple insight hits at the very core of what we, as a community, need to address, and what the flag-bearing institutions in our community still need to fully embrace.

“Continuous Improvement” means “continuously improving people.”

While just about every “lean overview” I have ever seen pays some form of lip service to the concept of a “people-based system,” everything then goes straight into describing the technical characteristics of everything but how people are developed.

What I like is that in the last couple of years the mainstream books are starting to address this topic in a meaningful way. This, of course, isn’t the first of Jeff Liker’s books to touch on it. And Toyota Kata is really the first to address the mechanics of people development as thoroughly as we have addressed the mechanics of kanban.

I am liking what I am reading in this book so far, and I’ll be working to correlate what I read with other works out there plus my own experiences. This should also tie in nicely with points I want to continue to make on Bill Costantino’s presentation.

Stay tuned.

Automating the Coaching Questions

Hopefully that title got some attention.

In Toyota Kata, Mike Rother frames a PDCA coaching process around five questions.

The first three questions are:

  1. What is the target condition?
  2. What is the current condition?
  3. What problems or obstacles are preventing you from reaching the target?

Wouldn’t it be wonderful if we could build a machine that asked and answered those questions for us?

Of course automated processes do not improve themselves (yet). But they can be made to compare current operation against a standard.

When Sakichi Toyoda was working on automated weaving looms, he was actually striving to reduce the need to have an operator overseeing each and every machine. That was the point of automating the equipment. One of the problems he encountered was that threads break. When that happened, the machine would continue to run, producing defective material.

So in order to reach his goal, he needed to eliminate the need for a human operator to ask these questions, and give that ability to the process itself.

What is the target condition?

The loom continues to run and produces defect free material. For this to occur, the threads must remain intact.

What is the current condition?

The threads are either intact, or they are broken.

But if the machine cannot continuously ask, and answer, that second question, then a human must do it. Otherwise, nobody gets to the third question, “What is stopping us?”, unless they happen to notice the machine is smoothly producing defective material.

Since his goal was to reduce the need for human oversight, he had to solve this problem.

Toyoda’s (now classic, and still used) response was to put thin metal floaters on each thread. If a thread broke, the floater dropped, triggering an automatic machine shutdown.

The machine was now asking the second coaching question with each and every cycle, comparing the actual situation with the target situation.

The event of the machine shutting down triggered the attention of a human operator with the answer to the third question.

What problem or obstacle is preventing you from reaching the target?

Right now, there is a broken thread. I cannot produce defect-free material until this situation is corrected. Please assist me.

The process was named jidoka, and in that moment the foundation for what grew into the Toyota Production System was set.
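If you abstract the loom away, the logic is simple enough to sketch in a few lines of code. Everything below – the thread sensors, the messages – is an illustration of the principle, not a model of any real machine controller:

```python
# A minimal sketch of building the second coaching question into the process
# itself. The loom, its thread sensors, and the andon message are hypothetical
# illustrations of the principle.

TARGET_CONDITION = "all threads intact"

def current_condition(thread_sensors):
    """Question 2, asked on every cycle: what is the current condition?"""
    return "all threads intact" if all(thread_sensors) else "a thread is broken"

def run_cycle(thread_sensors):
    condition = current_condition(thread_sensors)
    if condition != TARGET_CONDITION:
        # The gap between target and current condition is the answer to question 3.
        print("Machine stopped.")
        print(f"Andon: cannot reach the target condition. Obstacle: {condition}. Please assist.")
        return False
    return True  # keep running, keep producing defect-free material

# One floater has dropped, so this cycle stops instead of smoothly producing
# defective material.
run_cycle(thread_sensors=[True, True, False, True])
```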

Without reliable and consistent production, one-by-one flow and just-in-time are impossible. The options are to either work on the problem, or stop improving.

It is the leader’s responsibility to ensure that there are processes in place to do these things. Sitting still is not an option; there is nothing in these techniques that is a secret. Your competitors are doing it. It is only a matter of who can solve problems faster and better.

 

Knowing vs. Knowing How To Learn

On the way to the airport a few days ago a couple of thoughts occurred to me that I wanted to toss out there and see how you all responded. This is one of them.

What separates an expert from a master? Actually, I need to ask in more prejudicial terms. Some people who are truly experts are also “stuck” in that they try to fit new things they encounter into an analogy within their (vast) experience. When they find it, they apply the analogy and often come up with a pretty good solution. But they can have problems relating when they encounter something that doesn’t compare with anything they have seen before.

As an example, the classic elements of standard work are described as

  • a repeating work sequence
  • balanced to a takt time
  • standard in-process stock

And, indeed, these things are the elements of standard work when there is a repeating work sequence and when there is a takt time.

Some experts at applying standard work, however, have a hard time seeing application outside of this scope. They know work needs to be standardized, but they continue to try to shoehorn what they see into this model.

Another, more general, example is the “lean manufacturing” model itself – the notion that this is only about manufacturing, or that it “doesn’t apply” to true job shops or non-repetitive environments. But this, too, is just a limitation of an “analogy” model. It is the analogy that breaks down, not the concept.

In the analogy model, we try to educate by providing more analogies, more examples of different applications in order to expand the base for comparison.

And, to be honest, this works to a degree. Some people get it; others simply don’t want to expand their analogy base. They are the ones who say “This (model) does not apply to (whatever their current paradigm is).”

Indeed, people who are tightly holding the view that kaizen events led by trained specialists are the only way to drive improvement can easily be blind to the possibility that an organization that is successfully running daily kaizen is operating at a fundamentally different level. I have seen that as well. And I have seen the same excuses made to explain away the difference in performance. “It isn’t different.” “It doesn’t scale.” “It isn’t repeatable.” All of this is defending a mental model – a paradigm – an analogy.

On the other hand, I want to contend here that a true master is not one who has mastered a process, but rather one who has mastered the process of learning about a process.

At an organizational level, true continuous improvement starts to engage when “process” and “standard” become baselines for gaining higher understanding. Rather than trying to audit and enforce compliance, leaders are genuinely curious about the reason why a process is not being carried out as it should be. This thinking requires far more work because it is empowering – it simply does not allow playing victim to “they won’t.” It puts the spotlight right back on what can (or should) be learned from the experience.

Put another way, the “expert” knows.

The “master” knows that there is much to learn.

Continuous Erosion

“Sustaining the gains” is a frequent topic of discussion in the continuous improvement world. Often the discussion degenerates into a rant about “management commitment.”

But in the real world, people generally don’t sabotage improvements on purpose. (I have seen it happen, but only once.) The mechanism is far more subtle.

Before we get into what happens after improvements are made, let’s look at a common improvement process itself.

In many companies, the primary method for making improvements is through special events or projects. These are usually planned, organized and led by a staff specialist.

Although the exact methods and words vary, the general process usually looks something like this:

  • Identify an opportunity, select an area for improvement.
  • Analyze the current state.
  • Select an improvement team.
  • Teach the team members how to apply the improvement tools.
  • Facilitate the development of improvement ideas.
  • Work with the team members to implement them.
  • Wrap up with a report or presentation, including remaining action items for management.

A variation on this is where the team is chartered, and it is up to them to identify an opportunity. This approach was more common in the late 1980’s than it is today.

This process actually works. It is capable of making pretty dramatic changes over a short period of time, often only a few days.

So why do the results erode? Or put another way, what is the problem?

Take a look at the routine decisions that are made during normal work, especially the ones that result in some change to the process. Those decisions must be made, because people have to get something done. The question comes down to whether those decisions result in improving the new process, or eroding it somehow.

Once things are in operation, some kind of unforeseen event always happens. Guaranteed. It can be something that the improvement team didn’t think of. It can be a piece of malfunctioning equipment. Maybe there is a material shortage or a defective part is delivered. The same things happen in administrative processes, only the words are different. Incomplete information arrives.

It could even be a deliberate decision. A production rate change. A software upgrade. Making room for some other activity.

All of these things, no matter how small or inconsequential, force decisions to be made. “How do we deal with this and get production going again?”

The person making that decision is either:

  1. Fully capable of applying kaizen principles, and applies them in a solution to the problem.
  2. Not fully capable of applying kaizen principles, but knows that, and seeks assistance in finding a solution to the problem.
  3. Not fully capable, may or may not know it, and does the best he can to get production back on track.

Both (1) and (2) result in making the system better, more robust and more responsive.

(3) usually results in a little bit of erosion. Variation is accommodated, things are made a bit more complex, the layout is now less than optimal, the old process does not work as designed anymore so the team member must improvise a bit. That, in turn, introduces more variation into the process, and usually drives a cascade of these little decisions.

If there is no mechanism for problem escalation (and if there was, we would likely be in (1) or (2) in the first place), then this becomes the new way, and things are steadily creeping closer to where they were before the event happened.

Given enough time (which can be amazingly brief), the process reverts back to where it was, or morphs into something else entirely – but equally wasteful.

Meanwhile the improvement specialists have moved on to the next project. Even if the local leader did ask for assistance, they might not be available, or worse, they tell him to figure it out.

Follow this with an “audit” that dings the local leader for “not supporting the changes” and wonder why he is less than enthusiastic about this kind of help.

Here is the question I want to leave hanging out there:

What was the intent of the kaizen event? (and was that intent accomplished?)

Audits vs. Leader Standard Work

5S audits, standard work audits, and for that matter ISO-900x audits, are a frequent source of questions in various online discussion forums. At the same time, the topic of “leader standard work” comes up frequently, as it did in a recent question / comment on “Walking the Gemba.”

I think the topic is worth exploring a bit.

Let’s start with audits.

Typically the purpose of an audit is to check compliance with a standard. The auditor has a checklist of some kind that defines various levels of compliance. He evaluates the current situation against the checklist, and produces a score, a report of discrepancies, a pass/fail evaluation of some kind.

So, for example, a typical 5S audit defines various criteria for each of the 5 ‘S’ words, and scores each of them on a 1-5 scale. Periodically, the person responsible for 5S will come into the work area, do an audit, and post the score. Often there is a campaign to “get to level 3” or something.

Although there are fewer boilerplate checklists out there, “standard work audits” tend to be pretty similar, at least the ones I have seen.

Further up the scale is something like an ISO 900x audit, a “Class-A MRP II” audit, or a corporate “lean assessment.” These are often done by outside agencies to certify the organization. There is a lot of work up front to pass the audit, a plaque goes on the wall, and everybody is happy.

So what’s the problem? (this is turning into one of my favorite questions)

The key is in the difference between a “check” and a “countermeasure.”

A countermeasure is a change or adjustment to the system itself so that the root cause of a problem, or at least its effect, is eliminated.

Audits, on the other hand, actually change nothing about the underlying system. All they do is assess the current state against some (presumed) standard.

Yet so many organizations try to use “audits” as a means to alter the system.

What an audit is good for (if it is planned and performed well – a big assumption) is to CHECK to see if the other things you are doing are working. But, by itself, it is “management by measurement.” People will do what they must to pass an audit (if it even matters that much to them), then go back to what they were doing before.

Leader Standard Work operates at a much lower level of granularity, and looks for different things. Think of the analogy in a previous post about cost accounting:

When dieting, standard cost accounting would advise you to weigh yourself once a week to see if you’re losing weight. Lean accounting would measure your calorie intake and your exercise and then attempt to adjust them until you achieve the desired outcome.

So, to paraphrase, audits are weighing yourself once a week (or once a quarter!) to see if you are losing weight. Leader standard work, on the other hand, is a process to continuously verify that the calorie intake is as specified, and the exercise is as specified, while those things are being done.

That, in turn, implies that there is a daily plan for calorie intake, and a daily plan for exercise. Without those specifications, there is nothing to check.

Leader standard work defines what the leader will check, when it will be checked, and how it will be checked. It also defines how the leader will respond if there is a problem.

He is looking for solid evidence of control.

Are things going as planned?

Is anything disrupting the work cycles or flow of material?

Are quality checks being made as specified?

And, in my opinion, the most important: Are problems being handled correctly, or worked around?

This is important because a culture of working around problems is one in which problems are routinely hidden, often without malice and with the best of intentions. But hidden problems remain, come up again tomorrow, and become part of the routine, adding a little waste, a little friction, making the system a little worse every day.

The typical effort to “pass an audit” reinforces this – it actually hides problems, and the auditor’s job is to ferret them out. This is the exact opposite of the kind of problem transparency we need.

It is human nature to work around problems, and it is the default behavior, everywhere. It takes constant leader vigilance, coaching, and response to prevent it.
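To make the contrast with an audit concrete, here is a rough sketch of a few leader standard work entries expressed as data. The specific checks, frequencies, and responses are invented for illustration; the real ones have to come from your own process.

```python
# A rough sketch of leader standard work as "what to check, when, how, and how
# to respond." Every specific check below is a hypothetical illustration.

leader_standard_work = [
    {"what": "Work cycles running to the planned times",
     "when": "each hour",
     "how": "observe one full cycle at the line",
     "if_problem": "find out what disrupted the cycle before it becomes the new normal"},
    {"what": "Quality checks being made as specified",
     "when": "twice per shift",
     "how": "watch a check actually being performed",
     "if_problem": "ask why the check was skipped or changed; escalate if needed"},
    {"what": "Problems handled correctly, not worked around",
     "when": "every help call",
     "how": "go see the response at the line",
     "if_problem": "coach the team leader through the problem-solving process"},
]

# Unlike a periodic audit score, each check happens while the work is being
# done, and each check defines a response, not just a number on a report.
for check in leader_standard_work:
    print(f"{check['when']}: check '{check['what']}' by '{check['how']}'")
    print(f"  if there is a problem: {check['if_problem']}")
```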

If you want to go faster, stop.

Mark’s post on The Whiteboard tells a pretty common story. The good news is that this company has more business than they can handle. Pretty good results in these times. The bad news is that they are having problems ramping up production to meet the demand. In Mark’s words:

I’m working for a company that is very, very busy. They developed a new process that is the first of it’s kind and have taken the market share away from their competition. But they have not spent enough time making the process robust enough to handle the increase demand and the scrap costs are going out of the roof. Currently about 65K a day. Any suggestions? Our number 1 scrap producer is a machine that can not perform at the same capability as when Engineering did their run off…

At the risk of coming across as flip, the very first thing to do if a machine starts producing scrap material is to shut it down. It is better to make nothing, because making nothing is cheaper than making stuff you can’t use.

However, it goes deeper than that.

Engineering had done a “run off” (which I presume was a test on theoretical speeds). Now actual performance isn’t meeting expectation. This is a problem.

But let’s rewind a bit and talk about how to manage a production ramp-up. Hopefully it is a problem more people will be having as the economy begins to recover.

Although this is in the context of the machine, exactly the same principles apply to any type of production. Only the context and the constraint changes.

Presumably there was some speed for this machine where it didn’t produce scrap, or the scrap was minimal. Going back to that time, here is what should have happened.

Promise production at the rate the machine is known to support.

Now crank up the speed a bit and see what happens. In the best case, you are overproducing a bit, but you are learning what the machine is actually capable of doing.

Crank it up a little more. Oops, scrap.

STOP!

Because you have been running a little faster than required, you have bought a little time. Understand why that scrap happened. Try to replicate it. Dig into the problem solving. Try to replicate the problem under controlled conditions. LEARN.

Hopefully you can find the cause and fix it.

Try it. Run the machine again, at the faster speed. Scrap? Back around to the “problem solving” cycle. Repeat until you can reliably run at the faster speed without scrap.

Then, and only then, promise the higher rate, because now you can reliably deliver it.

Then notch it up a bit until you encounter the next problem.

This cycle of promising only what you can actually deliver protects the customer while you are pushing the envelope internally to discover the next problem.
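Sketched as a loop, the cycle looks something like this. The rates and the stand-in functions are made up for illustration; the point is the structure – never promise a rate that has not been proven:

```python
# A sketch of the ramp-up cycle described above. The rates, the scrap check, and
# the problem-solving step are placeholders; the real work is the problem
# solving, which no loop can do for you.

def ramp_up(proven_rate, target_rate, step, run_and_check, solve_problem):
    """Promise only what is proven; push a little faster to find the next problem."""
    promised_rate = proven_rate            # promise production at the known-good rate
    trial_rate = proven_rate
    while promised_rate < target_rate:
        trial_rate = min(trial_rate + step, target_rate)
        while not run_and_check(trial_rate):   # scrap? STOP and learn.
            solve_problem(trial_rate)          # replicate it, understand it, fix it
        promised_rate = trial_rate             # then, and only then, promise the higher rate
    return promised_rate

# Hypothetical usage: a machine proven at 50/hr, target 80/hr, stepping up 10/hr.
attempts = {}
def run_and_check(rate):
    """Stand-in for running the machine at 'rate' and checking for scrap."""
    attempts[rate] = attempts.get(rate, 0) + 1
    return attempts[rate] > 1        # pretend each new rate fails once, then runs clean
def solve_problem(rate):
    print(f"Scrap at {rate}/hr: stop, replicate the problem, learn, fix.")

print("Reliable rate:", ramp_up(50, 80, 10, run_and_check, solve_problem))
```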

The alternative? Make a promise knowing you actually have no clue whether or not you can meet it.

But that’s what they did. So now they are burning a lot of money every day making scrap material.

The same principles apply, however. They are already not delivering what they promised. So throttle things back to the point where they can predict the results, and go from there. Pretending they can run faster than they can is not accomplishing anything other than burning money. Deal with facts, no matter how uncomfortable.

If you make a schedule based on what you wish you could do, you will have a schedule you wish you could meet.

No matter what, each time scrap is produced, the fact must be acknowledged. That allows the immediate response that is framed around a simple question:

“How the hell did this happen?”

Put another way, “What have we just learned about the limits of this process?”

It is only within that framework that you actually get any better. Anything else is relying on luck, and in this case at least, that didn’t work.

Andon Leadership

On a world-class automobile assembly line, the actual work is continuously being compared to the planned work. In each work zone, there is a planned sequence of tasks which are expected to produce a specified output.

If there is any departure at all from the planned sequence, if things get behind the planned timeline, if the necessary conditions are not there, if any process step does not complete as required then either the Team Member or an automated system turns on a help call – an andon.

The response is immediate. The first response is within seconds with a priority of clearing the problem. The line itself is still moving, but if the problem is not cleared before the end of the takt time the line automatically stops – things go from “yellow” to “red.” When that happens, the responsibility for the problem also shifts up a level in the response chain.

The priority is still to clear the problem quickly – and clearing the problem means to restore conditions required for safety and quality without compromise.

Once the problem is cleared, and the line re-started, the rest of the problem solving process engages to find the cause of the problem and address it in the system itself. If the problem is outside of the bounds, or outside of the capability, of the original Team Leader to solve, then his chain-of-support will engage to help him solve it so that his skills can be improved.

In summary – we have a sequence of tasks to be accomplished over about six hours that, in the end, will result in a car. That sequence of tasks is further broken down into sub-intervals where progress against the plan is checked. If problems develop, the system responds: it swarms the problem to clear it, understands it, and adjusts the process if necessary.
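The escalation logic itself fits in a few lines. The roles and the “next level up” below are illustrative assumptions, not a description of any particular line:

```python
# A rough sketch of the yellow/red escalation described above. The roles and
# timing are assumptions for illustration, not a real line control system.

def respond_to_andon(cleared_before_fixed_stop: bool) -> dict:
    """First response is immediate and the line keeps moving (yellow).
    If the problem is not cleared before the fixed stop position, the line
    stops automatically (red) and ownership shifts up the chain of support."""
    if cleared_before_fixed_stop:
        return {"status": "yellow, cleared", "line": "moving", "owner": "team leader"}
    return {"status": "red", "line": "stopped", "owner": "next level up the response chain"}

print(respond_to_andon(cleared_before_fixed_stop=False))
# -> {'status': 'red', 'line': 'stopped', 'owner': 'next level up the response chain'}
```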

None of the above should be a surprise.

But what happens in your management monthly reviews?

Huh? How are management monthly reviews related to an assembly line?

Let’s see. In a reasonably functional organization the business plan is (or should be!) a series of tasks to be accomplished over a period of time that, in the end, will deliver specified results. That sequence of tasks is further broken down into sub-intervals where progress of the plan is checked.

But in a typical monthly review, the analogy ends there. When problems develop, the reasons are discussed, mentioned, and worked around. In dysfunctional organizations only the objectives are discussed, and in truly dysfunctional organizations the objectives themselves are adjusted to match what is accomplished.

Shift your thinking a bit, and apply the assembly line analogy to its end-state.

The monthly review is the fixed-stop position. Up to this point, the person responsible for the task “owns” it. He should be checking progress and ensuring that things are getting done as specified, and that the results being achieved are as anticipated (predicted). He should be monitoring conditions and making sure that the operating assumptions are holding. Ideally those checks are built into the process and planning so that they are at least semi-automatic.

If all is “green” going into the monthly review, then things are great.

But if something is “off” – yellow – that is equivalent to an andon call. The responsible person is that first-responder. He has until the next review to clear the problem. Think about the ramifications here. This means that if the reviews with the boss are every month, the responsible manager better not wait until the week before to find out what is happening so he can report on it. He has to stay in touch with what is happening.

If the problem is not cleared by the monthly review – the fixed-stop position – the “andon” goes red.

The reviewing manager now “owns” the problem. His job is now to ensure that there are effective countermeasures in place to get things back on track. That does not absolve the responsible manager, but rather, this becomes a “check” and an “act” for his professional development by the more senior leader.

Again, think of the ramifications. The responsible manager must not only be in touch with what is happening, he also needs to make sure his people are being developed and pushed to fully understand the problems as they occur.

The job of these leaders, at each level, is not only to keep intimately in touch with what is going on, but also be fully aware of the skills, gaps and development of their people. If someone should be able to handle an issue, but can’t, that is a skill gap that, like any other gap, must be addressed (in this case by developing the person).

In mediocre organizations, professional development is owned and overseen by H.R. and may be tied to the annual review process. In organizations that “get it” professional development is a natural part of the leadership process, and happens at all levels in the natural course of getting things done.

Hidden Negative Consequences

“Stop the line if there is a problem” is a common mantra of lean manufacturing. But it is harder than first imagined to actually implement.

The management mindset that “production must continue, no matter what” is usually the first obstacle. But even when that is overcome, I have seen two independent cases where peer social pressure among the workers discouraged anyone from signaling a problem. The result was that only problems that could not be ignored would come to anyone’s attention – exactly the opposite of the intention.

Both of these operations had calculated takt time using every available minute in the day. (Shift Length – Breaks etc.) Further, they had a fairly primitive implementation of andon and escalation: The line stop time would usually be tacked onto the end of the shift as overtime. (The alternative, in these instances, is to fall behind on production, and it has to be made up sometime.)
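To see why that arithmetic leaves no slack, here is an illustrative calculation (the numbers are hypothetical, not from either operation):

```python
# Illustrative takt-time arithmetic with hypothetical numbers.

shift_minutes = 480                  # 8-hour shift
break_minutes = 30                   # breaks, meetings, cleanup
available_minutes = shift_minutes - break_minutes     # 450 minutes of planned run time

daily_demand = 450                   # units required per shift
takt_minutes = available_minutes / daily_demand       # 1.0 minute per unit

# With takt set against every available minute, there is zero buffer: every
# minute the line is stopped is a minute of production that can only be
# recovered as overtime at the end of the shift.
line_stop_minutes = 12
overtime_minutes = line_stop_minutes
print(f"Takt = {takt_minutes:.2f} min/unit; overtime today = {overtime_minutes} minutes")
```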

So why were people reluctant to call for help? Peer pressure. Anyone engaging the system was forcing overtime for everyone else.

Countermeasure?

On an automobile line, the initial help call does not stop the line immediately. The Team Leader has a limited time to clear the problem and keep the line from stopping. If the problem is not cleared in time, the problem stops the line, not the person who called attention to it. This is subtle, but important. And it gives everyone an incentive to work fast to understand and clear the problem – especially if they know it takes longer to clear the problem if it escapes down-line and more parts get added on top.

There are a number of ways to organize your problem escalation process, but try to remember that the primary issue is human psychology, not technology.