Managing to Learn (the book) – my first impressions

Managing to Learn by John Shook is the latest in the classic series of books published by the Lean Enterprise Institute.

It is subtitled “Using the A3 management process to solve problems, gain agreement, mentor, and lead” and that pretty well sums it up.

Like many of the previous LEI books, it is built around a straightforward working example as a vehicle to demonstrate the basic principles. As such, it shares all of the strengths and shortcomings of its predecessors.

In my admittedly limited experience, I have seen a couple of companies try to embrace similar processes without real success. The main issue has been that the process quickly became “filling out a form.” Soon the form itself became (if you will pardon the term) a pro-forma exercise. Leadership did not directly engage in the process; no one challenged a shortcut or a jump to a pet solution, and no one verified actual results (much less predicted them). Frankly, in these cases, it was either taught badly from the beginning, or (probably worse) the fad spread more quickly than the organization could learn how to do it well.

I must admit that, with the recent flood of books and articles about the “A3” as the management and problem solving tool of lean, I am afraid we will see a macro-scale case of the same thing throughout industry.

Managing to Learn tries to address this by devoting most of its attention to how the leader teaches by guiding and mentoring a team member through the problem solving process. The reader learns the process by following along with this experience, rather than just being told what to put in each block of the paper.

To illustrate this in action, the narrative is written in two tracks. One column describes the story from the problem solver’s viewpoint as he is guided through the process. He jumps to a conclusion, is pulled away from it, and is gently but firmly directed through the process of truly understanding the situation and its underlying causes, then developing possible countermeasures and implementing them.

Running parallel to that is another column which describes the thoughts of his teacher / manager.

Personally, I found this difficult to follow and would have preferred a linear narrative. I am perfectly fine with text that intersperses the thoughts and viewpoints of the characters with the shared actions. But given this two-column format, I would have found it a lot easier to track if there had been clear synchronization points between the two story lines. I found myself flipping back and forth between pages, where sentences break from one page to the next in one column but not the other, and it was difficult to tell at what point I was losing pace with the other column.

To be clear, Shook says in the introduction that the reader can read both at once (as I attempted to do), read one, then the other, or any other way that works. I tried, without personal success, to turn it into a linear narrative by reading a bit of one, then the other.

The presentation format notwithstanding, Shook drives home a few crucially important points about this process.

  • It is not about filling out the form. It is a thought process. Trying to impose a structured form inevitably drives people’s focus toward filling out the form “correctly” rather than solving the problem with good thinking.
  • He emphasizes the leadership aspect of true problem solving. The leader / teacher may even have a good idea what the problem and solution are, but does not simply direct actions. Instead, the leader guides his team member through the process of discovering the situation for himself. This gets the problem solved, but also develops the people in the organization.

There are also a few real gems buried in the text – a few words, or a phrase in a paragraph on a page – that deserve to have attention called to them. I am going to address those in separate posts over the next few days.

In the end this book can provide a glimpse of a future state for leadership in a true learning and problem solving culture. But if I were asked whether the message is driven home in a way that passes the “sticky test,” I would have to say no. I wish it did; we need this kind of thinking throughout industry and the public and government sectors.

Thus, while this book is very worthwhile reading for the engaged practitioner to add to his insight and skill set, it is not a book I would give someone who was not otherwise enlightened and expect that he would have a major shift in his approach as a result of reading it.

The bottom line: If you are reading this on my site, you will probably find this book worthwhile, but don’t expect that having everyone read it will cause a change in the way your organization thinks and learns.

Computers and Muda

In a short but interesting thread on lean.org, Emma asks: “Can an IT system support Lean?” She goes on to point out a general trend she has seen where the “kaizen guys” offer a lot of resistance to the introduction of sophisticated I.T. systems.

“Lean” stuff aside, I offer this recent post on Tech Republic, “The Seven Habits of Wildly Unsuccessful CIOs.”

Though the article is written about individual CIOs, it applies to the sub-culture of information technology in general. However, there are a few points where I think the scope should be expanded.

#1, “Acquire technology simply because it’s new” talks mainly about upgrades. I would expand this to discuss “Acquire technology simply because it’s cool.” I.T. folks are biased toward computerized solutions. Now that’s great, and it is what we pay them for. But often there is a simpler way, perhaps with the information system as an enabler rather than the centerpiece. One of the fundamental principles we taught at a previous company was “Simple is best.”

In #3, “Create solutions in search of a problem,” the article emphasizes off-the-shelf solutions vs. an in-house solution to everything. Again, though, I want to take the title of the paragraph and expand the scope. Just because it is slick and “does this” and “does that” does not mean it is useful. More features, and for that matter more information, are not necessarily better. I want just what is necessary for the next step in the process. Any more provides a distraction, an opportunity for error.

#4 “Reach beyond the competency level” is classic. If you don’t know how to do it, please don’t say you do. It is OK to take on an experiment, but please control it and understand it before it is imposed on everyone. And make sure the “solution” solves a “problem.”

#6, “Failing to understand the relationship between technology and business” – read the technology chapters of “Good to Great” and “The Toyota Way.” Both offer great insights into how great companies use technology as a multiplier without making it a boat anchor.

If you are an I.T. person, and are encountering the same push-back that Emma is from your kaizen people (or for that matter, push-back from anyone), I would recommend you read the article. Then do the following:

For each key point in the article, write down how an I.T. organization that does exactly the opposite would perform and behave with its customers. This creates a picture of the ideal supplier.

Then take a hard look in the mirror. Compare your actual attitudes, beliefs, behaviors with that list. Talk to your customers, and ask them how they perceive your organization.

You are likely to find gaps between the ideal state you defined and your actual performance. If so, develop a plan to close those gaps.

Jim Collins: “Good to Great” Website

Jim Collins’ book “Good to Great” has been a best-selling business book for several years. But I am not so sure everyone knows about Jim Collins’ web site. It has online mini-lectures and much more material that reinforces the concepts outlined in the book.

As for how the concepts in the book relate to “lean thinking” – I believe they are 100% congruent. Examining Toyota in the context of the model outlined in the book shows that Toyota has everything Collins calls out as the crucial factors separating sustainable improvement from the flash-in-the-pan unsustainable variety.

The only difference I can see between Toyota and the companies that were profiled is that Toyota has had these ingredients pretty much from the beginning, and Collins’ research was looking at companies that acquired them well into their existence.

Lean in Health Care

A little over a month ago I had an opportunity to spend about 4 hours in a small-group session with Steven Spear. For those readers who don’t already know, Steve is a researcher and practitioner who has made his name in understanding the Toyota Production System as Toyota actually does it.

He first came to the attention of the “lean” community with the 1999 publication of Decoding the DNA of the Toyota Production System, a paper summarizing his PhD research and dissertation.

Since then he has become active in improving the health care system. I’ll leave a full bio to your Google skills.

While health care certainly has a lot of room for improvement, I think the rest of us have a lot to learn from experience and insights being gained in efforts to apply the TPS to health care.

I plan to put together the next few posts intertwining these topics. But first, I wanted to share some interesting performance numbers we heard from Steven Spear.

  • During a hospital stay in the U.S. health care system, the patient’s risk of being injured or killed by the efforts to deliver treatment is roughly on par with the risk of BASE jumping. For those of you who don’t know, BASE jumping is jumping off ‘B’uildings, ‘A’erials, ‘S’pans and ‘E’arth and then (hopefully) parachuting to a safe landing without hitting the thing you are jumping from.
  • Hour for hour, you would be safer on an infantry patrol in Baghdad than staying in a hospital.

Now, before you start shaking your head and pointing out just how terrible this is, please consider:

  • As a product moves through your own system, what are the odds it makes it through absolutely unscathed and in perfect condition? Do you do any better than this?
  • If you were to study your own processes do you honestly think they are any more effective at delivering a defect-free outcome at each and every step?
  • Do you even know where, when, and why, those processes fail to deliver a defect-free outcome?

I would contend that most of us have no idea.

Systematic Problem Solving

Looking at the experience of the organization profiled in the last three posts, “A Systematic Approach to Part Shortages,” I believe their biggest breakthrough was cultural. By applying the “morning market” as a process of managing problems, they began a shift from a reactive organization to a problem solving culture.

I can cite two other data points which suggest that when an organization starts managing problem solving in a systematic way, its performance begins to improve steadily. Even managing problem solving a little bit better results in much more consistent improvement and less backsliding. Of course my personal experience is only anecdotal – and it is certainly filtered by the time you read it here. But consider this: the key difference in Toyota’s approach that Steven Spear pointed out in “Decoding the DNA of the Toyota Production System” (as well as in his PhD dissertation) was that applying the rules of flow triggered problem solving activity whenever there was a gap between the expected and actual process, or the expected and actual outcome.

Does this mean you should go out and implement morning markets everywhere? Again, based on my single-company data point, no. That doesn’t work any more than attaching kanban cards to all of your parts and calling it a pull system. It is not about the white boards; it is not even about reviewing the problems every day. Reactive organizations do that too. In most cases in the Big Company that implemented morning markets everywhere, that is what happened – they morphed into another format for the Same Old Stuff.

Here are some of the things my example organization did that I think contributed to success in their cultural shift.

  • They treated “containment” and “countermeasure” as two separate and distinct responses. This was to make sure that they all understood “containment” is what is done immediately to re-start production while preserving safety and quality. “Containment” was not a countermeasure, as it rarely (if ever) actually addressed the root cause. It only isolated the effect of the problem as far upstream as possible. The rule of thumb was simple:
    • Containment nearly always adds time, cost, resources, etc.
    • A true countermeasure nearly always removes not only the containment, but reduces time, cost, resources.
  • They didn’t use the meetings to discuss solutions. They only addressed two things:
    • A quick update on the status of ongoing problem solving.
    • A quick overview of new problems from yesterday.

I think this is an important point because too many meetings get bogged down with people talking about problems, and speculating what the causes are. That is completely non-productive.

  • The actual people working on problems attended the meeting. I cannot over-emphasize how important this is. They did not send a single representative. Each person with expected activity reported his or her progress over the last 24 hours. It is difficult to stand in front of a group and say “I didn’t do anything.”
  • They blocked out time to work on problems. I probably should have put this one first. The manufacturing engineers and other professional problem solvers agreed not to schedule anything else for at least two hours every morning. This time was dedicated to working on the shop floor to understand the problem, and physically experiment with solutions. There was a lot of resistance to this. But over a couple of months it became close to the norm. It helped a lot because it started to drive the group to consider where they really spent their time vs. what they needed to get done. There was no doubt in the past that solving these problems was important, but it was never urgent. Nobody was ever asked why they weren’t working on a shop floor issue. That had been a “when I am done with everything else” activity. Gradually the group developed a stronger sense of the shop floor as their customer.
  • They didn’t assign problems until someone was available to work on them. This came a little later, when the problem-solvers were missing deadlines. The practice had been to assign a responsible person in the morning when the problem was first reviewed. Realistically a person can work on one problem at a time, and perhaps work on another when waiting for something. They established a priority list. The priority was set primarily by manufacturing. When a problem-solver became available, the next item on the list was assigned. Once a problem was assigned, nothing would override that assignment except a safety issue or a defect that had actually escaped the plant and reached a customer.
  • They got everyone formal training on problem solving with heavy emphasis on true root cause. People were expected to follow the method.
  • A problem was not cleared from the board until a long term countermeasure had been implemented and verified as working.

By blocking out time, they were able to establish some kind of expectation for productivity. After that, if problems were accumulating faster than they were being cleared, they knew they had a methods or resource issue. The same was true for their other tasks which were worked on during the rest of the day.

This was the start of establishing a form of standard work for the problem solvers.
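For the software-minded, the assignment discipline above boils down to a simple priority queue: problems wait in a backlog, and nothing is assigned until a problem-solver is actually free. Here is a minimal sketch in Python – the names and structure are my own invention for illustration, not anything this organization actually used:

```python
# A minimal sketch (my own invention, not this organization's actual tool)
# of the assignment discipline: problems wait in a priority backlog, and
# nothing is assigned until a problem-solver is actually free.
import heapq
from dataclasses import dataclass, field
from typing import Optional

@dataclass(order=True)
class Problem:
    priority: int                                   # set by manufacturing; lower = more urgent
    description: str = field(compare=False)
    assigned_to: Optional[str] = field(default=None, compare=False)

class ProblemBoard:
    def __init__(self) -> None:
        self._backlog: list[Problem] = []           # heap of unassigned problems

    def add_problem(self, problem: Problem) -> None:
        heapq.heappush(self._backlog, problem)

    def solver_available(self, solver: str) -> Optional[Problem]:
        """Assign the highest-priority problem only when a solver is free."""
        if not self._backlog:
            return None
        problem = heapq.heappop(self._backlog)
        problem.assigned_to = solver
        return problem

# Only a safety issue or a defect that escaped to a customer pre-empts an
# existing assignment; everything else waits its turn in the backlog.
```

The whiteboard enforced exactly the same rules; the sketch just makes the discipline explicit.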

21 Oct 08 – There is more on the subject here and here

A Systematic Approach to Part Shortages – Part 3

The third element of this organization’s successful drive to eliminate part shortages was a systematic approach to problem solving. They made it a process, managed just like any other process, rather than something people did when they had time. Even though this is “Part 3” of this series, in reality they put this into place at the same time as, and actually a little ahead of, kanban and leveling.

The Morning Market

The idea of the “morning market” came from a chapter in Imai’s book “Gemba Kaizen.” He describes a process where the previous day’s defects are physically set out on a table and reviewed first thing in the morning – “while they are fresh” hence the analogy to the morning markets.

This organization had been trying to practice the concept of a morning market for a few weeks, and was beginning to get it into an actual process. Because supplier problems constituted a major cause of disruption, they set up a separate morning market for defective purchased parts.

That process branched yet again into a morning market for part shortages. And this evolved into a bit of a mental breakthrough.

They started looking at shortages as process defects.

Every shortage, every day, was recorded on the board.

Each morning the previous day’s shortages were reviewed. They were grouped into three categories based on knowledge of the cause – just as outlined in the book (a small sketch of this sorting follows the list):

  • “A” problems – they knew the cause, knew the countermeasure, but had some reason (or excuse) why it could not be implemented right away.
  • “B” problems – they knew the cause, but did not have a good countermeasure yet.
  • “C” problems – knew the symptom (parts weren’t there) but didn’t know why.
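Here is that sketch – purely illustrative, since the real “board” was a whiteboard and the field names are mine. The sorting rule, though, is straight from the A/B/C categories above:

```python
# Purely illustrative -- the real "board" was a whiteboard. The field names
# here are mine, but the sorting rule follows the A/B/C categories above.
from dataclasses import dataclass

@dataclass
class Shortage:
    part_number: str
    cause_known: bool
    countermeasure_known: bool

def classify(shortage: Shortage) -> str:
    """Return the morning-market category for one recorded shortage."""
    if shortage.cause_known and shortage.countermeasure_known:
        return "A"   # cause and countermeasure known, implementation pending
    if shortage.cause_known:
        return "B"   # cause known, no good countermeasure yet
    return "C"       # only the symptom is known: the parts weren't there

# A shortage with no known cause lands in "C" until someone investigates.
print(classify(Shortage("example-part", cause_known=False, countermeasure_known=False)))  # C
```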

The mental breakthrough was systematically investigating the reason each and every shortage occurred. What they found was that in the vast majority of cases it was an internal process breakdown, rather than some problem at the supplier, that caused the shortage. This was a bit of a revelation.

They began systematically fixing their processes, one problem at a time.

Over time things got better. Simultaneously they were implementing the kanban system. Kanban comes with its own set of possible problems, like cards getting lost. Once again, when they found problems, those went into the morning market and were systematically addressed.

A few months into their kanban implementation, for example, card audits were turning up far fewer than 2% irregularities, and before long it was not unusual for an audit to find no problems at all. Why? They had addressed the reasons why cards end up somewhere other than where they should be. Instead of blaming people, they looked for why people acting in good faith would not follow the process.

This was also an attitude shift – assume a flaw in the process itself, or in communication, before looking for “who did it.”

Eventually the warehouse team had their own morning market. As did the receiving team. As did the parts picking team. As did assembly. Each looked at any case where they were not able to deliver exactly what their downstream customer needed.

About 8 months into this, another group in an adjacent building was trying to work through their own issues. They came over for a tour. One of the supervisors, visibly shaken, came to me and said:

“Now I get it. These people work together in a fundamentally different way.”

And they did. They worked as a team, focusing on the problems, not on each other.

And that, readers, is the goal of “lean manufacturing.” If you aren’t working toward that, then you aren’t really implementing anything.

What Nukes? – a little more clearly

I re-read my “What Nukes?” post and realized I was really rambling. I want to reiterate a key point more clearly because I think it is important.

In the “Bad Apple” theory there is an implied assumption that the cause of an accident or other problem was one person who, at that moment in time, was not following the documented rules or procedures.

Except in the most egregious cases, such as deliberate misconduct, that is likely not the case. Most organizations have a set of “norms” that operate at some level of violation of the written or established procedures. The reasons for this are many, but usually it is because good people are doing the best they can, in the conditions they are given, to get the job done.

Failure to follow the rules does not, by itself, result in an accident or incident.

Have you ever run a red light or a stop sign? It happens thousands of times every day. It almost never results in an accident. Only when other contributing conditions are ripe will an accident result: running a stop sign AND a car coming through the intersection.

The same goes for quality checks, and the more reliable an “almost 100%” process becomes, the more vulnerable you are. If a defect is only rarely produced, it is unlikely that any kind of human-based inspection will catch it. The faster the work cycle, the more this is true. The mind numbs, it is impossible to always pay attention to the detail, and the mind sees what it expects. “Failure to pay attention” is never an adequate root cause. It is blaming an unlucky Team Member for an omission that everyone makes every day just going through life. It is just, in this case, “there was a car coming through the intersection.” It is bad luck. It is being blamed for red beads in Deming’s paddle experiment.

So pinning the failure on an individual, while it is easy, avoids the core issue:

People’s failure in critical processes is a SYSTEM PROBLEM. You must investigate from the viewpoint of the person at the pointy end. What did he see? What did he perceive? What did he believe was happening, and why was that belief reasonable given his interpretation of the circumstances at the time?

The post about “sticky visual controls” got at this: your mistake-alerts or problem signals must penetrate consciousness and demand attention if they do not actually shut down the process.

The Seventh Flow

Those of you who are familiar with Shingijutsu’s materials and teaching (or at least with Nakao-san’s version of things) have heard of “The Seven Flows.” As a brief overview for everyone else, here is the original version, along with my interpretations:

  1. The flow of people.
  2. The flow of information.
  3. The flow of raw materials (incoming materials).
  4. The flow of sub-assemblies (work-in-process).
  5. The flow of finished goods (outgoing materials).
  6. The flow of machines.
  7. The flow of engineering. (The subject of this post.)

A common explanation of “the flow of engineering” is “the footprints of the engineer on the shop floor.” I suppose that is nice-sounding at a philosophical level, but it doesn’t do anything for me because I still don’t get what it looks like (unless we make engineers walk through wet paint before going to the work area).

Common interpretations are to point to all of the great gadgets, gizmos and devices that it does take an engineer (or at least someone with an engineer’s mindset, if not the formal training) to design and produce.

I think that misses the point.

All of those gizmos and gadgets should be there as countermeasures to real, actual problems that have either been encountered or were anticipated and prevented. But that is not a “flow.” It is a result.

My “put” here is that “The Flow Of Engineering” is better expressed as “The Flow of Problem Solving.”

When a problem is encountered in the work flow, what is the process to:

  • Detect that there even is a problem. (“A deviation from the standard”)
  • Stop blindly executing the same process as though there were no problem.
  • Fix or correct the problem to restore (at a minimum) safety and protect downstream from any quality issues.
  • Determine why it happened in the first place, and apply an effective countermeasure against the root cause.

If you do not see plain, clear, and convincing evidence that this is happening as you walk through or observe your work areas, then frankly, it probably isn’t happening.

Other evidence that it isn’t happening:

At the cultural and human-interaction level:

  • Leaders saying things like “Don’t just bring me the problem, bring a solution!” or belittling people for bringing up “small problems” instead of just handling them.
  • People who bring up problems being branded as “complainers.”
  • A system where any line stop results in overtime.
  • No simple, on/off signal to call for assistance. No immediate response.
    • If initially getting help requires knowing who to phone, and making a long explanation before anyone else shows up, that ain’t it.
  • “Escalation” as something the customer (or customer process) does when the supplying organization doesn’t respond. Escalation must be automatic and based on elapsed-time-without-resolution.
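To make that last point concrete: “automatic” means the clock drives escalation, not the person complaining. Here is a minimal sketch in Python of what I mean – the thresholds and role names are my own assumptions for illustration, not a prescription for your organization:

```python
# A sketch of time-based escalation. The thresholds and roles below are
# assumptions for illustration, not a prescription for your organization.
import time
from typing import Optional

ESCALATION_CHAIN = [
    (0,   "team leader"),    # immediate response to the call for help
    (300, "group leader"),   # still unresolved after 5 minutes
    (900, "area manager"),   # still unresolved after 15 minutes
]

def who_should_be_responding(call_time: float, now: Optional[float] = None) -> str:
    """Escalation is driven by elapsed time, not by whether anyone complains."""
    now = time.time() if now is None else now
    elapsed = now - call_time
    responder = ESCALATION_CHAIN[0][1]
    for threshold, role in ESCALATION_CHAIN:
        if elapsed >= threshold:
            responder = role
    return responder

# Ten minutes after the call, the group leader owns the response -- automatically.
print(who_should_be_responding(call_time=0, now=600))  # group leader
```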

Go look. How is your “Flow of Problem Solving”?

What Nukes?


Warning to Reader: This piece has a lot of free-association flow to it!

Oops. A few weeks ago a story emerged in the press that a B-52 had flown from North Dakota to Louisiana with half a dozen nuclear-armed missiles under its wing. The aircrew thought they were transporting disarmed missiles. This is a rather major uh-oh for the USAF, as in general they are supposed to keep track of nuclear warheads. (Yeah, I am understating this. I can, by the way, speak from a small amount of experience, as I once held a certification to deal with these things, so I have some idea how rigorous the procedures are.)

Normally the military deals with nuclear weapons issues with a simple “We do not confirm or deny…” but in this case they have released an unprecedented amount of information, including a confirmation that nukes were on a particular plane in a particular location at a particular time.

The news story about the report described a culture of casual disregard for the procedures – the standard work – for handling nukes. I quote the gist of it here:

A main reason for the error was that crews had decided not to follow a complex schedule under which the status of the missiles is tracked while they are disarmed, loaded, moved and so on, one official said on condition of anonymity because he was not authorized to speak on the record.

The airmen replaced the schedule with their own “informal” system, he said, though he didn’t say why they did that nor how long they had been doing it their own way.

“This was an unacceptable mistake and a clear deviation from our exacting standards,” Air Force Secretary Michael W. Wynne said at a Pentagon press conference with Newton. “We hold ourselves accountable to the American people and want to ensure proper corrective action has been taken.”

So what’s the point, and what has this got to do with lean manufacturing?

The right process produces the right result.

As true as this is, it isn’t the point. The point is that the Airmen didn’t follow the procedures. And now the Air Force will apply the “Bad Apple” theory, weed out the people who are to blame, re-emphasize the correct procedures everywhere else, and call it good.

How often do you do this when there is a quality problem, an accident, or a near miss? How often do you cite “Human Error” or “not following procedures” or “didn’t follow standard work” as a so-called root cause?

You need to keep asking “why” some more, probably three or four more times.


To this end, I believe Sidney Dekker’s book “The Field Guide to Understanding Human Error” should be mandatory reading for all safety and quality professionals.

Dekker has done most of his research in the aviation industry, and mostly around accidents and incidents, but his work applies anywhere that people’s mistakes can result in problems.

In the USAF case cited above, there was (according to the reports in the open press) a culture of casual disregard for the established procedures. This probably worked for months or years because there wasn’t a problem. The “norms” of the organization differed from “the rules” and I would speculate there was considerable peer pressure, and possibly even supervisory pressure, to stick with the “norms” as they seemed to be adequate.

Admittedly, in this case, things went further than they normally do, but let’s take it away from nuclear weapons and into an industrial work environment.

Look at your fork truck drivers. Assuming they got the same training I did, they were taught a set of “rules” regarding always fastening seat belts, managing the weight of the load, keeping speed down and under control, and checking what is behind and to the sides before starting a turn (the rear end swings out – the opposite of a car). All of these things are necessary to ensure safe operation.

Now go to the shop floor. Things are late. The place is crowded. The drivers are under time pressure, real or perceived. They have to continuously mount and dismount. The seatbelt is a pain. They get to work, have the meeting, then are expected to be driving, so there is no real time for the “required” mechanical checks. They start taking little shortcuts in order to get the job done the way they believe they are expected to do it. The “rules” become supplemented by “the norms.” This works because The Rules apply an extra margin of safety that is well above the other random things that just happen around us every day. The Norms – the way things are actually done – erode that safety margin a little bit, but normally nothing happens.

Murphy’s Law is wrong. Things that could go wrong usually don’t.

The “Bad Apple” theory suggests that accidents (and defects) are the fault of a few people who refuse to follow the correct procedures. “If only ‘they’ followed ‘the rules’ then this would not have happened.” But that does not ask why they didn’t do it that way.

Recall another couple of catastrophes: we have lost two Space Shuttle crews to the same underlying problem. In both the Challenger and Columbia accident reports, the investigators cite a culture where a problem which could have caused an airframe loss happened frequently. Eventually concern about it became routine. Then, one time, other factors came into play, what usually happens didn’t happen, and we were left wringing our hands about what went wrong this time. The truth is it nearly happened every time. But we don’t see that, because we assume that every bad incident is an exception, the result of something different this time. In reality, it is usually just bad luck in a system which eroded to the point where luck was relied upon to ensure a safe, quality outcome. In this case they didn’t single out “bad apples” because the investigations were actually done pretty well. Unfortunately the culture at NASA didn’t adjust accordingly. (Plus space flight involves the management of unimaginable amounts of energy, and sometimes that energy goes where we don’t want it to.)

So – those quality checks in your standard work. Do you have explicit time built in to the work cycle to do them? Are your team members under pressure, real or perceived, to go faster?

What happens if there is an accident or a defect? Does the single team member who, today, was doing the same thing that everyone does every day get called out and blamed? Just look at your accident reports to find out. If the countermeasure is “Team Member trained” or “Team Member told to pay more attention” or just about anything else that calls out action on a single Team Member then… guilty.

What about everybody else? Following an incident or accident, the organization emphasizes following The Rules. They put up banners, have all-hands meetings, maybe even tape signs up in the work place as reminders and call them “visual controls.” And everything goes great for a few weeks, but then the inevitable pressure returns and The Norms are re-asserted.

Another example: Steve and I were watching an inspection process. The product was small and composed of layers of material assembled by machine. Sometimes the machine screwed up and left one out. More rarely, it screwed up and doubled something up. As a countermeasure, the Team Member was to take each item and place it on a precise scale, note the weight, and compare the weight to a chart of the normal ranges for the various products.

There were a couple of problems with this. First, the human factors were terrible. The scale had a digital readout. The chart was printed and taped to the table. The Team Member had to know what product it was, reference the correct line on the chart, and compare a displayed number with a set of displayed numbers expressed to two decimal places. So the scale might say “5.42” and she had to verify whether that was in or out of the range of “5.38 – 5.45.”

Human nature, when reading numbers, is that you will see what you expect to see. You might only realize a number was different five or six reads later. So telling the Team Member to “pay more attention” if she made a mistake was unreasonable. Remember, she is doing this for a 12-hour shift. There is no way anyone could pay attention continuously in this kind of work. If a defective item got through, though, there would be a root cause of “Team Member didn’t pay attention.” She is set up to fail.

But wait, there’s more!

She was weighing the items two at a time. Then she was mentally dividing the weight by two, and then looking it up. Even if she was very good at the mental math and had the acceptable range memorized, that isn’t going to work. Plus, and this is the key point, in the unlikely but possible scenario where the machine left out a layer in one item, then doubled up the next, the net weight of the two defective items together would be just fine.
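A little arithmetic makes the point obvious. The numbers below are invented (I don’t have the real spec, so the item weight, layer weight, and tolerance band are my own assumptions), but they show how one light item and one heavy item can average out to a “good” reading:

```python
# The weights and tolerance band here are invented for illustration;
# only the arithmetic matters.
GOOD_WEIGHT  = 5.42            # nominal weight of one good item
LAYER_WEIGHT = 0.40            # weight contributed by one layer (assumed)
LOW, HIGH    = 5.38, 5.45      # acceptance range from the (hypothetical) chart

def in_range(weight: float) -> bool:
    return LOW <= weight <= HIGH

missing_layer = GOOD_WEIGHT - LAYER_WEIGHT   # 5.02 -- fails if weighed alone
doubled_layer = GOOD_WEIGHT + LAYER_WEIGHT   # 5.82 -- fails if weighed alone

print(in_range(missing_layer))                         # False
print(in_range(doubled_layer))                         # False
print(in_range((missing_layer + doubled_layer) / 2))   # True: two defects pass together
```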

“Why do you weigh two at a time?” Answer: “It’s faster.” This is true, but:

  • It doesn’t work.
  • She doesn’t need to go faster.

Her cycle time for weighing single items was well within the required work pace. But the supervisor was under pressure for more output because of problems elsewhere, and had translated that pressure to the Team Member in a vague “work faster if you can” way. It was the norm in that area, which was different from the rules.

Where is all of this going?

The Air Force has ruined 70 careers as a result of the cruise missile incident. They may have been right to do so; I wasn’t there, and this was a pretty serious case. But the fact that it got to this point is a process and system breakdown, and it goes way beyond the base involved.

Go to your own shop floor. Stand in the chalk circle. Watch, in detail, what is actually happening. Compare it with what you believe should be happening. Then start asking “Why?” and include:

“Why do people believe they have to take this shortcut?”

Invert the Problem

One very good idea-creation tool is “inverting the problem” – developing ideas on how to cause the effect you are trying to prevent. This is a common approach for developing mistake-proofing, but I just saw a great use of the idea for general teaching.

Ask “How could we make this operation take as long as possible?” Then collect ideas from the team. Everything on the flip chart will be some form of waste that you are trying to avoid. In many cases, I think, even the most resistant minds would concede that nothing on this list is something we would do on purpose.

It follows, then, that if we see we are doing it that we ought to try to stop doing it. And that is what kaizen is all about.