There Are (almost) No Big Problems

In a “management by metrics” world, problems are detected when performance indicators are off track. Perhaps inventory is too high, or first-pass quality is a problem. Maybe operational availability is tanking.

Once the problems are abstracted into numbers, the numbers become the problem. The solution, then, is usually a directive to reverse the trend, to improve this measurement or that measurement, to get things back on track.

Here is the rub: The numbers aren’t the problem. They don’t even tell you what the problem is without a whole lot of investigation, digging, and stratification. And why was this investigation and stratification necessary in the first place? Because now you are trying to sort out a batch of problems that weren’t dealt with, one-by-one, at the moment they occurred.

The further you go up in the organization, the more things are aggregated and summarized. And the more they are aggregated and summarized, the more things look like big problems, even though they are composed of many little problems, sometimes hundreds of them.

Let me cite an example. I was in a factory where the site manager was proudly showing off his real-time display of Overall Equipment Effectiveness (OEE). Each time the equipment slowed, for any reason, the display would immediately change. He could see, in an instant, how the day’s OEE compared with his target, and, he said, “take action.”

But exactly what action is he going to take?
Even the real-time display lags the actual problem.
Two weeks ago a lubrication point was missed. Today something is overheating, and we need to slow or stop the machine to deal with it.

A bracket securing a roller is loose, so today the machine is stopping all the time as they clear jams.

A hole in a screen let chips and debris clog a filter, and the coolant pumps are overheating.

This list can go on. The point is that the things that actually slow down or stop the machinery are the second, third, or fourth links in chains of events. The original problem had no immediate effect on the machinery.

In this system, the very best they will ever do is fix it fast and hold the number at some level. They will never be able to actually improve it, because they aren’t dealing with the underlying systemic issue: They aren’t finding and dealing with the actual root causes.
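
Part of the reason the number can’t tell you is that OEE is itself an aggregate. It is conventionally calculated as availability × performance × quality, so one figure rolls up three very different families of little problems. A minimal sketch of the arithmetic, with invented numbers (nothing here is from the factory in the story):

```python
# Conventional OEE arithmetic. All figures are invented for illustration.
planned_time_min = 480        # planned production time for the shift
downtime_min = 45             # stops: jams, overheating, missed lubrication...
ideal_cycle_time_min = 1.0    # ideal time to make one unit
units_produced = 400
good_units = 388

run_time_min = planned_time_min - downtime_min
availability = run_time_min / planned_time_min
performance = (ideal_cycle_time_min * units_produced) / run_time_min
quality = good_units / units_produced

oee = availability * performance * quality
print(f"Availability {availability:.2f} x Performance {performance:.2f} "
      f"x Quality {quality:.2f} = OEE {oee:.2f}")
# The display shows one number; it cannot tell you which of the dozens of
# little upstream events moved it, or why.
```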

When you get to financial measures, things are even more abstract.

Inventory level (or its inverse, inventory turns) is a great example. “We have too much inventory!” “Our inventory levels are above the industry norm!”

In the boardroom, or in the Chief of Operations’ office, this is all they can see. Because most leaders in that position really like direct action, they.. um.. “direct” that some “action” be taken.

So the organization goes to war against inventory.

Two levels down, “We have too much inventory” is still characterized as the problem because that is what they have to report every month.

Two more levels down it is often still attacking the symptom – too much inventory.

But inventory isn’t the problem.

The problem is an engineering change that missed one part on the bill of material update, delaying production for a day while all of the other parts continued to roll in.

The problem is a paint system that is unreliable, so the factory obsesses on maintaining four hours of buffer on either side of it.

The problem is pressure to keep the line moving, so people work ahead, and push incomplete or problem units into the “hospital” for later repair.

The problem is that the upstream processes have to be ready to respond to massive fluctuations in assembly’s demand.

The problem is a broken jig, and the local initiative to make something else instead, resulting in too much of what nobody needs and none of what they do – because they don’t want to waste time doing nothing.

An additional problem is that the typical organizational response to these problems is to add more inventory, because each local unit needs to protect itself from the dysfunctions of its suppliers and customers.

Worse, just “taking out inventory” is going to tank everything else because the underlying problems are still there.

Inventory isn’t the problem.

The problem is that each of these small problems seems too small to bother with, yet together they add up to the systemic breakdown that shows up as “too much inventory.” While the high-level leaders are looking for a big problem, they are missing their real responsibility.

It isn’t to “do something about inventory.”
It is to ensure that the small problems that occur every day, the ones which cause all of that inventory to accumulate in the first place, actually get addressed. Actually, no. Their responsibility is to ensure there are systems in place which catch those problems and address them. They do this by teaching, setting the example, then checking.

Leaders – if you want to “do something about inventory” you have to do it by setting an expectation that:

  • Processes are well defined.
  • Those processes are followed.
  • When there is a problem following the process, it is raised immediately.
  • When a problem is raised, it is swarmed, fixed, understood and prevented.

You handle big problems by making sure the small ones are dealt with, every day, all day.

Then your OEE will go up quickly. OEE is an indicator of how well you respond to problems, not how well your machines run.

One more thing – for my Health Care readers.
You can substitute “medication errors” for any of this. But until there is a system in place that alerts anytime someone spots the possibility of confusion, medication errors will occur at pretty much the same rate they always have.

Attack on Ambiguity

When real effort is spent getting to the cause of problems (vs. a reflex to find someone to blame), ambiguity often enters into the picture.

Problem solving is a process of asking questions and clarification.

Is a “defect-free” outcome of the process specified? Does the Team Member know what “success” is?
Is there a way for the Team Member to actually verify the result?
Does that check give the Team Member a clear Yes/No; Met/Not Met; Pass/Fail response? Or is there interpretation and judgment involved?

If there is good specification for “defect-free” – is there a specification for how to achieve it? Do you know what must be done to assure the result that you want? Does the Team Member know? Is the Team Member guided through the process? Are there verifications (poka-yoke, etc) at critical points?

If all of the above is in place, do you know what conditions must be in place for success? What is the minimum pressure for your air tools? Is there a pressure gauge? Does it cut off the tool if the pressure is too low? Is there some visual check that all of the required parts, pieces, tools are there before work starts? Think about the things you assume are there when the Team Member gets started.
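
To make that concrete: the “conditions for success” check can be boiled down to a binary go/no-go. Here is a minimal sketch of the idea; every name and threshold is invented for illustration, not taken from any real system:

```python
# A hypothetical pre-start check in the spirit of the questions above.
# Every name and limit here is invented for illustration.

MIN_AIR_PRESSURE_PSI = 90.0
REQUIRED_ITEMS = {"bracket", "bolt kit", "torque wrench"}

def alert_chain_of_support(problems):
    # Stand-in for whatever signal the real line uses: andon, light, hard stop.
    print("DO NOT START:", "; ".join(problems))

def pre_start_check(air_pressure_psi, items_present):
    """Binary go/no-go: every required condition met, or a clear alert."""
    problems = []
    if air_pressure_psi < MIN_AIR_PRESSURE_PSI:
        problems.append(
            f"air pressure {air_pressure_psi} psi is below the "
            f"{MIN_AIR_PRESSURE_PSI} psi minimum")
    missing = REQUIRED_ITEMS - set(items_present)
    if missing:
        problems.append("missing at start of work: " + ", ".join(sorted(missing)))
    if problems:
        alert_chain_of_support(problems)
        return False          # no judgment, no interpretation: do not start
    return True

pre_start_check(84.0, ["bracket"])   # -> DO NOT START: air pressure ...; missing ...
```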

For ALL OF THE ABOVE, if something isn’t right, is there a clear, unambiguous way to alert the Team Member immediately?
Is there a clear way for the Team Member to alert the chain-of-support that something isn’t right?

If there is a defined result, a defined process to achieve it, and all of the conditions required for success are present –
Is the Team Member alerted if he deviates from the specified process, or if a critical intermediate result is not right?
More poka-yoke.

Now you have the very basics for consistent execution.

Is the process carried out as you expect?
Is the result what you expected?

If not, then the process may be clear, but it clearly does not work. Stop, investigate. Fix it.

All of this is about getting more and more clarity about what is SUPPOSED to happen and comparing it to what is REALLY happening… continuously. I say continuously because “continuous improvement” does not happen unless there is continuous checking and continuous correction and problem solving. If you want “continuous improvement” you cannot rely on special “events” to get it. It has to be embedded into the work that is done every day.

Ask “Why?” – but How?

Get to the root cause by “Asking Why?” five times.
We have all heard it, read it. Our senseis have pounded it into us. It is a cliché, obviously, since getting to the root cause of a problem is (most of the time) a touch more complicated than just repeatedly asking “Why?”

Isn’t it?

Maybe not. Maybe it is a matter of skill.

Some people are really good at it. They seem to instinctively get to the core issue, and they are usually right. Others take the “Problem Solving” class and still seem to struggle. So what is it that the “naturals” do unconsciously?

Let me introduce another piece of data here. “Problem Solving” is taught to be an application of the scientific method. The scientific method, in turn, is hypothesis testing. How does that relate to “ask why five times”?

Each iteration of asking “why?” is an iteration of hypothesis testing.

How do you “ask why?”

Observe and gather information.
Formulate possible hypotheses.
For each reasonable possibility, determine what information would confirm or refute it. (Devise experiments, which really means “Decide what questions to ask next and figure out a way to answer them.”)
Observe, gather information, experiment. Get answers to those questions.
Confirm or refute possible causes.

At each level, a confirmed cause is the result of “observe and gather information” so the process iterates back to the top.

Eventually, though, a point is reached where going further either obviously makes no sense, or is of no additional help. If you are now looking at something you can fix, you are at the “root cause” for the purpose of the exercise. Yes, you can probably keep going, but part of this is knowing how far is far enough.

Is this something I can fix easily?
Does it make sense to go further?

Otherwise, iterate again.
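
Written out, the loop looks something like this. It is only a sketch of the thought process; the cause-and-effect chain below is a toy that echoes the overheating example earlier, and in real life each step is an observation or an experiment at the process, not a lookup:

```python
# A toy version of the iteration above. The cause-and-effect chain is invented;
# in real life each step is observation and experimentation at the process.

toy_cause_of = {
    "machine overheating": "coolant flow is low",
    "coolant flow is low": "filter is clogged with chips",
    "filter is clogged with chips": "hole in the chip screen",
}

def is_far_enough(cause):
    # Stand-in for the judgment call: "Is this something I can fix easily?
    # Does it make sense to go further?"
    return cause not in toy_cause_of

def find_root_cause(problem):
    chain = [problem]
    while not is_far_enough(chain[-1]):
        # Each pass = observe, hypothesize, experiment, confirm;
        # then iterate on whatever was confirmed.
        chain.append(toy_cause_of[chain[-1]])
    return chain

print(" -> ".join(find_root_cause("machine overheating")))
# machine overheating -> coolant flow is low -> filter is clogged with chips
#   -> hole in the chip screen
```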

Now: Devise a countermeasure.

A countermeasure, itself, is a hypothesis. You are saying “If I take this action, I should get this result” i.e. the problem goes away.

Put the countermeasure into place. Does it work? That is yet another experiment only now you are (hopefully) confirming or refuting your fix on every production cycle. The andon will tell you if you are right.

We tell people “Ask why five times” but we really don’t teach them how to “ask why.”

The book examples usually show this neat chain of cause/effect/cause/effect, but the real world isn’t that tidy. When the problem is first being investigated, each level often has many possibilities. Once the chain is built, it can be used as a check.

But that isn’t how you GET there.

Why don’t the books do a good job teaching this?

“They” say that critical thinking is difficult to teach. I disagree. If the people who do it unconsciously can step back and become consciously competent, and know how they do it, then it breaks down into a skill, and a skill can be taught.

A Real World Example
My computer works, but its network connection to the outside world doesn’t.
OK. What could be wrong?
It could be software in the computer.
It could be a problem with the hardware.

Look at where the cable plugs into the back of the computer. Are the little lights flashing? No? Then there is no data going through that connection.

How could that be?

Well.. it could be a problem in the computer or operating system.
It could be a problem with the hardware in the computer.
It could be a bad cable.
It could be a problem behind the network jack on the wall.

The QUICKEST thing to do is unplug the cable and plug my co-worker’s cable into the computer. (Please make sure he isn’t busy with email before you do this!). Do the lights come on? Yes? Does your network stuff work now? Yes? Then it isn’t anything in the computer. You have just done a hypothesis test – conducted an experiment.

Take his KNOWN GOOD cable out of the wall and plug it into your jack. Does your network work now? Yes? You have a bad cable. No? It is a problem behind the jack.. call I.T. and tell them. (unless you are at home, then head to the little blue box in the basement and start looking at flashing lights down there. But same process.. as you systematically eliminate internal causes, you are left with an external one.)
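
If you wrote that flow down, it would look something like the sketch below. The observed values are made up; the point is only that each branch is a hypothesis that the checks either confirm or eliminate:

```python
# The same flow, written out as explicit hypothesis tests. The observed values
# are invented; in real life each one is you looking at link lights or
# swapping a cable, not reading a variable.

link_lights_flashing = False              # observation at the back of the computer
works_with_coworkers_cable = True         # experiment 1: known-good cable into my computer
coworkers_cable_works_in_my_jack = False  # experiment 2: that same cable into my wall jack

if link_lights_flashing:
    suspect = "software or configuration inside the computer"
elif not works_with_coworkers_cable:
    suspect = "something in the computer itself (hardware or software)"
elif coworkers_cable_works_in_my_jack:
    suspect = "my cable is bad"
else:
    suspect = "something behind the wall jack -- call I.T."

print("Most likely:", suspect)   # each branch eliminates the others
```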

This is a natural flow, but most people wouldn’t describe it as “asking why?” or “hypothesis testing” – and the big words scare them off.

Still – when you (the lean guru) are teaching others, it is important for them to understand that HOW TO ASK WHY is just a process of learning by systematic elimination of the impossible. (Whatever remains, however improbable, must be the truth – Arthur Conan Doyle through Sherlock Holmes)

How The Sensei Sees

Steven Spear told an interesting story in our session with him.

A Toyota sensei, very senior, was looking at a process unlike anything in his previous experience base. The researchers watching expected him to do “analysis by analogy” – to take what he observed, find a matching analogy in his deep experience, and then draw conclusions about the current situation.

This model, by the way, is a commonly accepted one for how “experts” work with new situations.

But that isn’t what happened. The insights were very fundamental, and quite specific to the process he was seeing for the first time.

The way he worked was revealed in the way he described the issues.

“Ideally,” he would say, “it should be ___________ . But the problem is __________ .”

In describing the “problem” he would describe the departure from the ideal situation. In doing so, he was seeing problems not as “waste” but as “departure from the ideal.”

This was, at least for me, a fairly significant ah-ha. I realized two things immediately.

  1. If I may be so bold, I got some insight into what I did in the same situation. At the risk of over-stating myself, I have found I am fairly good at getting to the core issues when looking at a process. Becoming a little more conscious about it will, first, let me be better at it and, more importantly, will allow me to be much better at teaching others to do it.
  2. Tying back to #1, we teach this wrong. We teach people to look for “waste.” We teach them to look for ways to “make the process better.” We are always measuring “what could be” from a baseline of “what is.” This is totally backwards.

What we should be doing is measuring “what is” from a baseline of “what is perfect?”

What is the difference? I think it is important on a couple of levels. First is simple engagement of the workforce.

Ask someone “How can we make your work better than it is?” and the question carries all kinds of baggage. It says “Obviously you don’t do it as well as you could.” Whether or not it is meant this way is irrelevant. That is how it, all too often, comes across. The common symptom of this thinking is when you hear “This is as good as we can make it.”

Ask, instead, “Where is this process imperfect?” or “What gets in your way of doing this perfectly?” and you disarm the above objection. Anyone who works in the midst of complexity encounters dozens or hundreds of things every day that must be worked around or somehow coped with. All of those things take time, effort, energy, and each decision about how to handle something unforeseen brings in the possibility of getting it wrong – making a mistake.

Think about it – how many mistakes result from someone just trying to figure out what should be done to correct some kind of anomaly, and making the wrong judgment?

Over the next few posts I am going to continue to beat these concepts to death from different angles. Forgive me in advance – it is my way of exploring it in my mind, and I am using you, the reader, as a sounding board. Writing things down forces me to think about them in more detail.

What Nukes – a little more clear.

I re-read my “What Nukes?” post and realized I was really rambling. I want to reiterate a key point more clearly because I think it is important.

In the “Bad Apple” theory there is an implied assumption that the cause of an accident or other problem was one person who, at that moment in time, was not following the documented rules or procedures.

Except in the most egregious cases, such as deliberate misconduct, that is likely not the case. Most organizations have a set of “norms” that operate at some level of violation of the written or established procedures. The reasons for this are many, but usually it is because good people are doing the best they can, in the conditions they are given, to get the job done.

Failure to follow the rules does not, by itself, result in an accident or incident.

Have you ever run a red light or a stop sign? It happens thousands of times every day. It almost never results in an accident. Only when other contributing conditions are ripe will an accident result. Running a stop sign AND a car coming through the intersection.

The same goes for quality checks, and the more reliable an “almost 100%” process becomes, the more vulnerable you are. If a defect is only rarely produced, it is unlikely that any kind of human-based inspection will catch it. The faster the work cycle, the more this is true. The mind numbs, it is impossible to always pay attention to the detail, and the mind sees what it expects. “Failure to pay attention” is never an adequate root cause. It is blaming an unlucky Team Member for an omission that everyone makes every day just going through life. It is just, in this case, “there was a car coming through the intersection.” It is bad luck. It is being blamed for red beads in Deming’s paddle experiment.
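
For anyone who has not seen the red bead experiment: Deming had “willing workers” scoop beads from a box with a paddle. A fixed fraction of the beads were red (“defects”), so each worker’s red count was pure system variation, yet the workers were praised and blamed for it anyway. A toy simulation (the paddle size and red fraction below are just the way the exercise is typically described):

```python
import random

# A toy version of Deming's red bead experiment: the "defects" each worker
# produces are determined entirely by the system (about 20% red beads in the
# box), yet someone gets blamed for the worst draw every single day.
random.seed(42)        # reproducible illustration

RED_FRACTION = 0.20    # fraction of red beads in the box
PADDLE_SIZE = 50       # beads drawn per worker per day

workers = ["Ann", "Bob", "Carla", "Dave", "Ed", "Flo"]
for day in range(1, 4):
    draws = {w: sum(random.random() < RED_FRACTION for _ in range(PADDLE_SIZE))
             for w in workers}
    worst = max(draws, key=draws.get)
    print(f"Day {day}: {draws}  <- '{worst}' gets blamed")
```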

So attaching the failure of an individual, while it is easy, avoids the core issue:

People’s failure in critical processes is a SYSTEM PROBLEM. You must investigate from the viewpoint of the person at the pointy end. What did he see? What did he perceive? What did he believe was happening, and why was that belief reasonable given his interpretation of the circumstances at the time?

The post about “sticky visual controls” got to this. Your mistake-alerts or problem signals must penetrate consciousness and demand attention if they do not actually shut down the process.

What Nukes?

[Photo: cruise missiles]

Warning to Reader: This piece has a lot of free-association flow to it!

Oops. A few weeks ago a story emerged in the press that a B-52 had flown from North Dakota to Louisiana with half a dozen nuclear-armed missiles under its wing. The aircrew thought they were transporting disarmed missiles. This is a rather major uh-oh for the USAF, as, in general, they are supposed to keep track of nuclear warheads. (Yeah, I am understating this. I, by the way, can speak from a small amount of experience, as I once held a certification to deal with these things, so I have some idea how rigorous the procedures are.)

Normally the military deals with nuclear weapons issues with a simple “We do not confirm or deny…” but in this case they have released an unprecedented amount of information, including a confirmation that nukes were on a particular plane in a particular location at a particular time.

The news story of the report summarized a culture of casual disregard for the procedures – the standard work – for handling nukes. I quote the gist of it here:

A main reason for the error was that crews had decided not to follow a complex schedule under which the status of the missiles is tracked while they are disarmed, loaded, moved and so on, one official said on condition of anonymity because he was not authorized to speak on the record.

The airmen replaced the schedule with their own “informal” system, he said, though he didn’t say why they did that nor how long they had been doing it their own way.

“This was an unacceptable mistake and a clear deviation from our exacting standards,” Air Force Secretary Michael W. Wynne said at a Pentagon press conference with Newton. “We hold ourselves accountable to the American people and want to ensure proper corrective action has been taken.”

So what’s the point, and what has this got to do with lean manufacturing?

The right process produces the right result.

As true as this is, it isn’t the point. The point is that the Airmen didn’t follow the procedures. And now the Air Force will apply the “Bad Apple” theory, weed out the people who are to blame, re-emphasize the correct procedures everywhere else, and call it good.

How often do you do this when there is a quality problem, an accident or a near miss? How often do you cite “Human Error” or “not following procedures” or “didn’t follow standard work” as a so-called root cause?

You need to keep asking “why” some more, probably three or four more times.


To this end, I believe Sidney Dekker’s book “The Field Guide to Understanding Human Error” should be mandatory reading for all safety and quality professionals.

Dekker has done most of his research in the aviation industry, and mostly around accidents and incidents, but his work applies anywhere that people’s mistakes can result in problems.

In the USAF case cited above, there was (according to the reports in the open press) a culture of casual disregard for the established procedures. This probably worked for months or years because there wasn’t a problem. The “norms” of the organization differed from “the rules” and I would speculate there was considerable peer pressure, and possibly even supervisory pressure, to stick with the “norms” as they seemed to be adequate.

Admittedly, in this case, things went further than they normally do, but let’s take it away from nuclear weapons and into an industrial work environment.

Look at your fork truck drivers. Assuming they got the same training I did, they were taught a set of “rules” regarding always fastening seat belts, managing the weight of the load, keeping speed down and under control, checking what is behind and to the sides before starting a turn (as the rear-end swings out.. the opposite of a car). All of these things are necessary to ensure safe operation.

Now go to the shop floor. Things are late. The place is crowded. The drivers are under time pressure, real or perceived. They have to continuously mount and dismount. The seatbelt is a pain. They get to work, have the meeting, then are expected to be driving, so there is no real time for the “required” mechanical checks. They start taking little shortcuts in order to get the job done the way they believe they are expected to do it. The “rules” become supplemented by “the norms.” This works because The Rules apply an extra margin of safety that is well above the other random things that just happen around us every day. The Norms – the way things are actually done – erode that safety margin a little bit, but normally nothing happens.

Murphy’s Law is wrong. Things that could go wrong usually don’t.

The “Bad Apple” theory suggests that accidents (and defects) are the fault of a few people who refuse to follow the correct procedures. “If only ‘they’ followed ‘the rules’ then this would not have happened.” But that does not ask why they didn’t do it that way.

Recall another couple of catastrophes: We have lost two Space Shuttle crews to the same underlying problem. In both the Challenger and Columbia accident reports, the investigators cite a culture where a problem which could have caused an airframe loss happened frequently. Eventually concern about it became routine. Then, one time, other factors came into play, what usually happens didn’t, and we were left wringing our hands about what happened this time. Truth is, it nearly happened every time. But we don’t see that because we assume that every bad incident is an exception, the result of something different this time. In reality, it is usually just bad luck in a system which eroded to the point where luck was relied upon to ensure a safe, quality outcome. In this case they didn’t single out “bad apples” because the investigations were actually done pretty well. Unfortunately the culture at NASA didn’t adjust accordingly. (Plus space flight involves the management of unimaginable amounts of energy, and sometimes that energy goes where we don’t want it to.)

So – those quality checks in your standard work. Do you have explicit time built in to the work cycle to do them? Are your team members under pressure, real or perceived, to go faster?

What happens if there is an accident or a defect? Does the single team member who, today, was doing the same thing that everyone does every day get called out and blamed? Just look at your accident reports to find out. If the countermeasure is “Team Member trained” or “Team Member told to pay more attention” or just about anything else that calls out action on a single Team Member then… guilty.

What about everybody else? Following an incident or accident, the organization emphasizes following The Rules. They put up banners, have all-hands meetings, maybe even tape signs up in the work place as reminders and call them “visual controls.” And everything goes great for a few weeks, but then the inevitable pressure returns and The Norms are re-asserted.

Another example: Steve and I were watching an inspection process. The product was small and composed of layers of material assembled by machine. Sometimes the machine screwed up and left one out. More rarely, it screwed up and doubled something up. As a countermeasure, the Team Member was to take each item and place it on a precise scale, note the weight, and compare the weight to a chart of the normal ranges for the various products.

There were a couple of problems with this. First, the human factors were terrible. The scale had a digital readout. The chart was printed and taped to the table. The Team Member had to know what product it was, reference the correct line on the chart, and compare a displayed number with a set of displayed numbers which were expressed to two decimal places. So the scale might say “5.42” and she had to verify whether that was in or out of the range of “5.38 – 5.45”

Human nature, when reading numbers, is that you will see what you expect to see. You might realize a number was different only after five or six more reads. So telling the Team Member to “pay more attention” if she made a mistake was unreasonable. Remember, she is doing this for a 12 hour shift. There is no way anyone could pay attention continuously in this kind of work. If a defective item got through, though, there would be a root cause of “Team Member didn’t pay attention.” She is set up to fail.

But wait, there’s more!

She was weighing the items two at a time. Then she was mentally dividing the weight by two, and then looking it up. Even if she was very good at the mental math and had the acceptable range memorized, that isn’t going to work. Plus, and this is the key point, in the unlikely but possible scenario where the machine left out a layer in one item, then doubled up the next, the net weight of the two defective items together would be just fine.
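
Run the numbers to see why. The weights below are invented, picked only to sit near the “5.38 – 5.45” range quoted above:

```python
# Invented weights to illustrate the two-at-a-time failure mode.
GOOD_RANGE = (5.38, 5.45)     # acceptable single-item weight from the chart
missing_layer_item = 5.20     # machine left a layer out (defect)
doubled_layer_item = 5.62     # machine doubled a layer on the next item (defect)

average = (missing_layer_item + doubled_layer_item) / 2
in_spec = GOOD_RANGE[0] <= average <= GOOD_RANGE[1]
print(f"average of two defective items = {average:.2f}, passes check: {in_spec}")
# average of two defective items = 5.41, passes check: True
# Two defects go straight through, and the check "worked" exactly as designed.
```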

“Why do you weigh two at a time?” Answer: “It’s faster.” This is true, but:

  • It doesn’t work.
  • She doesn’t need to go faster.

Her cycle time for weighing single items was well within the required work pace. But the supervisor was under pressure for more output because of problems elsewhere, and had translated that pressure to the Team Member in a vague “work faster if you can” way. It was the norm in that area, which was different from the rules.

Where is all of this going?

The Air Force has ruined 70 careers as a result of the cruise missile incident. They may have been right to do so, I wasn’t there, and this was a pretty serious case. But the fact that it got to this point is a process and system breakdown, and it goes way beyond the base involved.

Go to your own shop floor. Stand in the chalk circle. Watch, in detail, what is actually happening. Compare it with what you believe should be happening. Then start asking “Why?” and include:

“Why do people believe they have to take this shortcut?”

Getting A Plant Tour

A couple of days ago I wrote about how to host a tour. Here are some thoughts on how to get one. As always, I’d love to hear your comments and experiences.

Don’t expect your hosts to change your “cement heads.” I have had requests from groups who wanted to send their “resistant managers” to our factory so we can show them things that will change their minds. Doesn’t work. Sorry, that is your job. My experience is that people who don’t want to see the benefits will always find all of the things that are “unique” about their circumstance, and special case reasons why the other place is doing so much better.

Go to learn, not to look. In my last post I made reference to “industrial tourists.” Those are groups that are more interested in the layout and clever gizmos than in the thinking behind them. They are, at best, looking for ideas and technical solutions to their problems. Copying others’ solutions is not thinking.

Going to learn is a different attitude. When you look at a layout, or other technical solution, ask yourself this: “What problem does that solve?” How does it save time? How does it remove variation from the process? What did the operation look like before they did that? Force yourself to think in four dimensions. Not just what you see now, but what it would have looked like in the past. WHY did they do this?

Although many people think lean manufacturing is counter-intuitive, I think that with this line of thinking you will find it actually is just common-sense solutions to the problems that everyone has, every day.

Nobody is perfect. Even a Toyota plant has obvious issues. If you end up fault-finding, you will miss the good stuff. I was touring a Toyota plant with a group a couple of years ago and it had obviously slipped. This is old news, and one of the reasons for their internal back-to-basics approach. But two things came to light: The rich visual controls made it easy for total strangers on the one-hour tour to SEE the difference between “what should be” and “what is.” Wow. Try that in YOUR factory. And, reading the news stories, it was a problem they were taking very seriously and doing something about, rather than not noticing the deterioration and just letting things go.

Every plant has issues. Some have great material flow and pull systems, but only average problem solving. Others have a great technical base for home-grown tools, fixtures and machines. A few have great problem solving (They seem to be doing better than others.) Take in what is working, and what is holding them back. What would be the next problem they are working on?

Pay attention to the people. People are the system. How do they interact with the physical artifacts (layout, machines, etc.)? An operation that has their stuff together will have people who are obviously comfortable with the pace of work. It will be obvious they get support when there are problems.

Don’t ask too many questions. What? Aren’t you there to learn? Yes. But try to learn with your eyes first. Even if you are moving, “stand in the chalk circle” and see the problems and the solutions. Sharpen your observation skills before you take the tour. Practice in your own plant. When I am hosting visitors and we have the time, my response to a question is to show them where to look for their answer, then ask them what they saw.

If allowed, make sketches. Most operations will have a prohibition against photographs. Even if they allow photos, however, you will capture much more if you stand and sketch what you see. You don’t have to produce a work of art. The purpose is to force your eye to pay attention to the small details. You will see much more through the eyes of the artist than you will through a camera.

Remember they are in the business of production, not consulting.
“Be a good guest” and remember that everybody there has a real job.

Edit 5 Sept: And Jon Miller correctly pointed out something I missed:

Give Back. You will bring “fresh eyes” to their environment and see things they do not. Everyone suffers from a degree of blindness to the familiar. If you are really going to see and learn, you will gain insights that can help your hosts in their own improvements. Ask them the questions that will help them see what you see.

“Sticky” Visual Controls

The textbook purpose of visual controls is “to make abnormal conditions obvious to anyone.” But do your visual controls pass the Sticky test, and compel action?

Simple: Does your control convey a single, simple message? Or does it “bury the lead story” in an overwhelming display of interesting, but irrelevant, information? According to Spear and Bowen (“Decoding the DNA of the Toyota Production System”), information connecting one process to another is “binary and direct.” The signal is either “On” – something is required – or “Off” – nothing is required. There is no ambiguity.

Take a look at some of your visual controls. Do they pass the test? Do they clearly convey that something needs attention, or is that fact subject to interpretation?

Unexpected: Why would a visual control need to be “unexpected?” Consider the opposite. Who pays attention to car alarms these days? Yes, they are annoying, but because they so often mean nothing, nobody pays attention to them. We expect car alarms to be false alarms. If your visual control is to mean something, you must respond each time it tells you to. If it is a false alarm, you have detected a problem. Congratulations, your system is working. But it will only continue to work if you follow through: STOP your routine; FIX or correct the condition; INVESTIGATE the root cause and apply a countermeasure. All of this jargon really means you must adjust your system to prevent the false alarm. Failure to do so will render the real alarm meaningless. It will “Cry Wolf” and no one will take it seriously.

Concreteness: Is it very clear? Do people relate to what your visual control is telling them? Does the Team Leader know that the worker in zone 4 needs help, and that the line will stop in a few minutes if he doesn’t get it?

Credibility: If the condition is worsening, does your visual control show it? Does it warn of increased risk? A typical example would be an inventory control rack with a yellow and red control point on it. Yellow means “Do something.” Red means “You better start expediting or making alternate plans because you are going to run out.” Setting the red limit too far up, though, sends out false alarms (see Unexpected), and eventually everyone “knows” the process can eat a little into the red with no problem. Why have yellow? What visual control can you put at the yellow line that tells you someone has seen it and is responding to the problem? (Left as an exercise for the reader.)

Emotions: How does your visual control compel action? Does it penetrate consciousness? A few words of warning on an obscure LCD panel aren’t going to mean very much unless someone reads them. How do you get the attention of the person who is supposed to respond? “He should have paid more attention” is the totally wrong way to approach missed information.

Stories: I really connected with this one. Stories are a great way to teach. Simulations are interactive stories. When teaching the andon / escalation process in a couple of different plants we divided the group into small teams, gave them a real-life defect or problem scenario and had them construct a stick-figure comic book that told the story of what would happen. That has proven a great way to reinforce and personalize the theoretical learning.

I will admit that these analogies can be a bit of a stretch, but the real issue is there. Visual controls are critical to your operation because they highlight things that must compel a response.

Your system is not static, or even really stable. It is either improving continuously through your continuous intervention, correction and improvement based on the problems you discover; or it is continuously deteriorating because those little problems are slowly eroding the process with more and more work-arounds and accommodations.

Go to your work area and watch. What happens when there is a problem or break in the standard? What do people do? Can they tell right away that something is out of the ordinary? How can they tell? For that matter, how can you tell by watching? If you are not sure, then first work to clarify the situation and put in more visuals. That will force you to consider what your standard expectations are, and think about responding when things are different than your standard.

5S – Learning To Ask “Why?”

[Photo: shadow board]
This photo could have been taken anywhere, in any factory I have ever seen. The fact that I do not have to describe what is out of place is a credit to the visual control. It is obvious. But one of my Japanese senseis once said “A visual control that does not trigger action is just a decoration.”

What action should be triggered? What would the lean thinker do?

The easy thing is to put the tape where it belongs.
But there is some more thinking to do here. Ask “Why?”

Why is the tape out of place? Is this part of the normal process? Is the tape even necessary? If the Team Member feels the need to have the tape, what is it used for? If the Team Member needs the tape there, has the process changed? Or did we just design a poor shadow board?

That last question is important because when you first get started, it is usually the case. We make great looking shadow boards, but the tools and hardware end up somewhere else when they are actually being used.

Why? Where is the natural flow of the process?

Before locking down “point of use” for things, you need to really understand the POINT where things are actually USED. If the location for things like this does not support the actual flow of the normal process, then you will have no way to tell “the way things are” from “the way they should be.”

The purpose of 5S is not to clean up the shop. The purpose is to make it easier to stand in your chalk circle and see what is really happening. The purpose is to begin to ask “Why?”

By the way – if you see an office chair or a trash can being used as an assembly bench, you need to spend a little more time in your chalk circle. 🙂

5 Seconds Matter

I was with the factory’s kaizen leader, and we were watching an operation toward the end of the assembly line. The takt time of this particular line was on the order of 400 minutes, about one unit a day. The exact takt really doesn’t matter, it was long compared to most.

One of the Team Members needed to pump some grease into a fitting on the vehicle he was building. But his grease bucket was broken. We watched as he wandered up the line until he found a good grease bucket, retrieved it, went back to his own position and continued his work. The entire delay was much less than a minute. No big deal when you compare it to 400+ minutes, right?

Let’s do some math.

There are six positions on this particular line. Each one has two workers, a few have three, for a total of 14.

What if, every day, each worker finds three improvements that each save about 5 seconds. That is a total of 15 seconds per worker, per day. Getting a working grease bucket would certainly be one (maybe two) of those improvements. (Consider that the worker he took it from now doesn’t have to come and get it back!)

That is 14 workers x 15 seconds = 210 seconds a day.
210 seconds x 200 days / year = 42,000 seconds / year.
42,000 seconds / 60 = 700 minutes
700 minutes / the 400 minute takt time = we are close to having a line that works with 12 instead of 14 workers.
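
The key to that last step is that each improvement permanently removes work from the daily cycle, so the seconds accumulate: by the end of the year, roughly 700 minutes of work content have come out of every day, and 700 minutes against a 400-minute takt is close to two workers’ worth of work. A quick check of the same arithmetic:

```python
# Same assumptions as above: 14 workers, 3 five-second improvements per worker
# per day, 200 working days, 400-minute takt.
workers = 14
improvements_per_day = 3
seconds_saved_per_improvement = 5
working_days = 200
takt_minutes = 400

# Each day's improvements permanently shrink every later day's work content.
daily_new_savings_sec = workers * improvements_per_day * seconds_saved_per_improvement  # 210
work_removed_by_year_end_min = daily_new_savings_sec * working_days / 60                # 700

workers_worth_of_work = work_removed_by_year_end_min / takt_minutes
print(f"{work_removed_by_year_end_min:.0f} minutes of daily work content removed "
      f"= {workers_worth_of_work:.2f} workers' worth")   # about 1.75, close to two
```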

What is that grease bucket really worth?

Of course your mileage may vary.

But how often do you pay attention to 5 second delays?

Of course getting the grease bucket is really just simple 5S — making sure the Team Member has the things he needs, where and when he needs them.

So how would 5S apply in this case?
Mainly a good visual control would alert the Team Leader, or any other alert leader, to the fact that the grease bucket is out of place. A good leader will see that and ask a simple question:

Why?

And from that simple question comes the whole story, and an improvement opportunity.
But in order to ask “Why?” there must first be recognition that something isn’t right. And this is the power of a standard.