Recovering the Reasons for 5S

5S has become an (almost) unchallenged starting point for converting to lean production. Although the basics are quite simple, it is often a difficult and challenging process.

After the initial push to sort stuff out and organize what remains, sustaining almost always becomes an issue.

Again, as a legacy of early practice, the most common response is what I call the 5×5 audit. This is a 5×5 grid, assigning up to 5 points to each of the “S” categories. It carries an assumption that managers will strive for a high audit score, and thus, work to sustain and even improve the level of organization.

Just today I overheard a manager trying to make the case that an audit score in his area ought to be higher. It was obvious that the objective, at least in the mind of the manager, was the audit score rather than solving problems.

The target condition had become abstract, and 5S had become a “program” with no evident or obvious purpose other than the general goodness that we talk about upon its introduction.

If the audit score is not the most important thing, then why do we emphasize it so much? What is our fascination with assigning points to results vs. looking at the actual results we are striving to achieve?

To digress a bit, many will say at this point that this is an example of too much emphasis on audits. And I agree. But this is more common than not, so I think of this as an instance of a general problem rather than a one-off exception.

Our target condition is a stable process with reduced, more consistent cycle times as less time is spent hunting for things. Though we may see a correlation between 5S audit scores and stability, it is all too easy to focus on the score and forget the reason.

Shop floor people tend to be intelligent and pragmatic types. They do not deal in a world of abstraction. While the correlation might make sense to a manager used to dealing in an abstract world of measurements and financials, that is often not the case where the work is actually done.

The challenge is: How do we make this pragmatic so it makes sense to pragmatic people?

Let’s start by returning the focus to pragmatic problems. Instead of citing general stories where people waste time looking for things so we can present a general solution of 5S, let’s keep the focus on specifics.

What if (as a purely untried hypothetical) we asked a team member to put a simple tick mark //// on a whiteboard when he has to stop and hunt for something, or even dig through a pile to get something he knows is in there? If you multiply that simple exercise by all of the people in the work area, add up the tick marks every day, and then track the trend, you may just get more valuable information than you would with the 5×5 audit done once a month.
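To make the hypothetical concrete, here is a toy sketch of what that tally might look like as numbers. The counts are invented, and the whole exercise is, as stated above, untried:

```python
# Hypothetical tick-mark exercise: each entry is one day's tally of
# "had to stop and hunt for something" marks, summed across the area.
# All numbers below are made up for illustration.
daily_ticks = {"Mon": 14, "Tue": 11, "Wed": 12, "Thu": 8, "Fri": 6}

total = sum(daily_ticks.values())
average = total / len(daily_ticks)
print(f"Total interruptions this week: {total}")
print(f"Average per day: {average:.1f}")

# A falling trend in these counts is direct evidence that the workplace
# organization is working -- no audit score required.
trend = daily_ticks["Fri"] - daily_ticks["Mon"]
print("Trend:", "improving" if trend < 0 else "not improving")
```

The point is not the arithmetic; it is that the team is counting actual interruptions to the work rather than scoring an abstraction.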

What if we actually tracked stability and cycle times? Avoiding these wastes is the case we make for 5S in the first place, so perhaps we should track actual results to see if our understanding is correct, or if it has gaps (which it does, always).

What if we taught area leaders to see instability and off-task motions, and to see those things as problems? Let them understand what workplace disorganization causes.

How about tracking individual problems solved rather than a general class of blanket countermeasure?

How many sources of work instability did we address today? I’d like to see what you learned in the process. What sources of instability did you uncover as you fixed those? What is your plan to deal with them? Great!

No problems today? OK – let’s watch and see if we missed anything. OH! What happened there? Why did we miss that before? Could we have spotted that problem sooner? What do we need to change so we can see it, and fix it, before it is an issue with the work?

These are all questions that naturally follow a thorough understanding of what 5S actually means.

But we have had 5S freeze dried and vacuum packed for easy distribution and consumption. At some point along the way, we seem to have forgotten its organic state.

Coffee + Electrical Panels = 7500

A reader, Josh, sent me this link.

Spilled coffee in 777 cockpit leads to inadvertent hijack warning, FAA-mandated sippy cups look likely

The more complete, technical version is here.

The short version is:

  1. Airline pilot spills coffee on cockpit panel.
  2. Coffee (or scalded pilot) causes airplane to send out the HIJACK transponder code.
  3. Many people become involved quickly.
  4. The Frankfurt-bound plane diverts to Toronto, and the passengers are returned (it doesn’t say how) to Chicago, which the author considers a bad thing.


Even highly improbable events become nearly certain given enough opportunities, so I am actually rather surprised this hasn’t happened already. Consider the sheer number of opportunities: (#flights x #pilots x #cups of coffee) over, say, a ten year period.
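The back-of-the-envelope reasoning can be made concrete. Every number below is an assumption for illustration (the per-cup spill probability, the flights per day), not data; the point is only the shape of the math:

```python
# "Improbable events become near-certain given enough opportunities":
# with a tiny per-opportunity probability p and n opportunities, the
# chance of at least one incident is 1 - (1 - p)**n.
p = 1e-7                      # assumed probability of a spill-on-panel per cup
n = 10 * 365 * 2000 * 2 * 2   # ten years x flights/day x pilots x cups (all assumed)

p_at_least_one = 1 - (1 - p) ** n
print(f"Opportunities: {n:,}")
print(f"P(at least one incident): {p_at_least_one:.2f}")
```

Even with a one-in-ten-million chance per cup, tens of millions of cups make an incident more likely than not.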

So, given that we have an undesirable outcome (though perhaps not quite as embarrassing as the .45 hole in the cockpit floor from a few years ago), what is the root cause, and what is the countermeasure?

Whatever we do, we should probably have it cost somewhat less than scrambling F-15’s to go ask what the problem is.

(Yes, I am being somewhat serious here, but also struggling just a little to keep a straight face.)

Many companies respond to a similar problem opportunity by banning drinks altogether in the work areas. My guess is that if we wanted to continue to have pilots operate the airplanes, we might consider something else.

I will leave my readers to ponder the thought, and remind you that there are worse outcomes than returning to Chicago via Toronto.

 

Forcing Compliance or Leader Development?

“Are we trying to force compliance or develop leaders?”

The answer to this question is going to set your direction, and (in my opinion) ultimately your success.

It comes down to your strategy for “change.”

When people talk about “change” they are usually talking about “changing the culture.” Digging down another level, “changing the culture” really means altering the methods, norms and rituals that people (including leaders) use to interact with one another.

In a “traditional” organization, top level leaders seek reports and metrics. Based on those reports and metrics, they ask questions, and issue guidance and direction.

The reports and metrics tend to fall into two categories.

  • Financial metrics that reflect the health of the business.
  • Indicators of “progress” toward some kind of objective or goal – like “are they doing lean?”

Floating that out there, I want to ask a couple of key questions around purpose.

There are two fundamental approaches to “change” within the organization.

You can work to drive compliance; or you can work to develop your leaders.

Both approaches are going to drive changes in behavior.

What are the tools of driving compliance? What assumptions do those tools make about how people are motivated and what they respond to?

What are the tools of leader development? What assumptions do those tools make about how people are motivated and what they respond to?

Which set of tools are you using?

We all say “respect for people.”

Which set of assumptions is respectful?

Just some questions to think about.

Biggest ERP Failures of 2010

pc pointed out a great little article in a post on the discussion forum.

The article touches lightly on why ERP implementations are so hazard prone, and then lists the “Biggest Failures” of 2010.

Of note is that the majority of the listed failures are governments. I can see why. Governments, by their nature, have a harder time concealing the budget overruns, process breakdowns, and other failures that are endemic to these implementations.

A corporation can have the same, or even a worse, experience, but we are unlikely to know. They are going to make the best of it, work around it, and make benign sounding declarations such as “the ERP implementation is six months behind schedule” if for no other reason than to protect themselves from shareholders questioning their competence.

Does anybody have any of their own stories to share?

Keep Visual Controls Simple

In this world of laser beams and ultrasonic transducers, we sometimes lose sight of simplicity.

Remember: the simplest solution that works is probably the best. A good visual control should tell the operator, immediately, if a process is going beyond the specified parameters.

Ideally the process would be stopped automatically, however a clear signal to stop, given in time to avoid a more serious problem, is adequate.

So, in that spirit I give you (from Gizmodo) the following example:

Warning Sign

Motivation, Bonuses and Key Performance Indicators

I have posted a few times about the “management by measurement” culture and how destructive it can be. This TED video by Daniel Pink adds some color to the conversation.

Simply put, while traditional “incentives” tend to work well when the task is rote and the solution is well understood, applying those same incentives to situations where creativity is required will reduce people’s performance.

We saw this in Tom Wujec’s Marshmallow Challenge video as well, where an incentive reduced the success rate of the teams to zero.

This time of year companies are typically reviewing their performance and setting goals and targets for next year.

It is important to keep in mind that there is overwhelming evidence that tying bonuses to key performance indicators is a reliable way to reduce the performance of the company.

Teaching the Scientific Method on TV

So the Mythbusters are teaching the scientific method as entertainment, and somehow industry is not making the leap that the same thinking applies to management.

Do the financial management methods developed by Alfred P. Sloan have such mass and momentum that there is no way to overcome them?

All of the discussions about “change” in the organization really come down to trying to overpower the way business leaders have been taught to think about decision making.

He Should Have Seen It

In many processes, we ask people to notice things. Often we do this implicitly by blaming people when something is missed. This is easy to do in hindsight, when we are investigating and already know what to look for. But in the real world, a lot of important information gets lost in the clutter.

We talk about 5S, separating the necessary from the unnecessary, a lot, but usually apply it to things.

What about information?

How is critical information presented?

How easy is it for people to see, quickly, what they must?

This is a huge field of study in aviation safety, where people get hyper-focused on something in an emergency, and totally miss the bigger picture.

This site has a really interesting example of how subtle changes in the way information is presented can make a huge difference for someone trying to pull out what is important. The context is totally different, so our challenge is to think about what is revealed here, and see if we can see the same things in the clutter of information we are presenting to our people.

The purpose of good visual controls is to tell us, immediately, what we must pay attention to. Too many of them, or too much detail – trying to present everything to everyone – has the opposite effect.

Evidence of Success with MRP?

An old, very esoteric, post got a four-word comment today that set me thinking. And because the topic is esoteric, this post is as well – my apologies.

The post, Is the MRP Algorithm Fatally Flawed, gets a lot of search hits because of the title. The post discusses an obscure PhD dissertation that asserts that the underlying logic of MRP systems shares defining characteristics with a debunked model for computational intelligence. The researcher makes a compelling case.

The comment, from Indonesia, said “please send for example”

Assuming I did not misinterpret the comment, I believe the writer was asking for examples of what does not work.

Here is what got me thinking.

In order to refute Dr. Johnston’s thesis, we have to find a non-trivial case where an unaltered application of the MRP algorithm works as intended. Just one. Then we would have to carefully understand that instance to determine if it was truly a case where MRP is working as intended, or something else.

Ironically, the working examples I have seen have gotten there by combining work centers into value streams with pull and systematically turning off the inventory netting and detailed scheduling functions of their MRP. In other words, they are migrating the system toward something that directly connects supplying and consuming processes with each other. These systems are far more able to respond to the small fluctuations that trip up the MRP logic. Those examples, however, confirm, rather than refute, what Dr. Johnston is saying.
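For readers who have never looked under the hood, the netting logic being discussed can be sketched as a simple loop. This is a toy illustration of the general idea, not any vendor’s implementation; the function name and the numbers are made up:

```python
# A minimal sketch of the time-phased netting step at the heart of the MRP
# logic (illustrative only; real systems layer lot sizing, lead-time
# offsets, and safety stock on top of this).
def net_requirements(gross, on_hand, scheduled_receipts):
    """Net projected inventory against gross requirements, period by period."""
    available = on_hand
    net = []
    for demand, receipt in zip(gross, scheduled_receipts):
        available += receipt
        shortfall = max(0, demand - available)   # what must be newly planned
        available = max(0, available - demand)   # carry any surplus forward
        net.append(shortfall)
    return net

baseline = net_requirements(gross=[10, 10, 10], on_hand=15, scheduled_receipts=[0, 5, 0])
bumped = net_requirements(gross=[16, 10, 10], on_hand=15, scheduled_receipts=[0, 5, 0])
print(baseline)  # plan with the original demand
print(bumped)    # period 2's plan changed even though its own demand did not
```

Even in this toy, a small fluctuation in one period’s demand reschedules a later period, which hints at why real implementations lean so heavily on hot lists and other outside-the-system intervention.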

Considering that the vast majority of factories are still trying to make the MRP algorithm work, does anyone have an example of where discrete manufacturing order scheduling of each operation actually gives a workable production plan that can be followed without hot lists and other forms of outside-the-system intervention? Just curious now.

 

How Do You Deal With Marshmallows?

Yesterday, Kris left a great comment with a compelling link to a TED presentation by Tom Wujec, a fellow at Autodesk.

Back in June, I commented on Steve Spear’s article “Why C-Level Executives Don’t Engage in Lean Initiatives.” In that commentary, Spear contends that business leaders are simply not taught the skills and mindset that drives continuous improvement in an organization. They are taught to decide rather than how to experiment and learn. Indeed, they are taught to analyze and minimize risk to arrive at the one best solution.

Tom Wujec observes exactly the same thing. As various groups are trying to build the tallest structure to support their marshmallow, they consistently get different results:

So there are a number of people who have a lot more “uh-oh” moments than others, and among the worst are recent graduates of business school.

[…]

And of course there are teams that have a lot more “ta-da” structures, and, among the best, are recent graduates of kindergarten.[…] And it’s pretty amazing.

[…] not only do they produce the tallest structures, but they’re the most interesting structures of them all.

What is really interesting (to me) are the skills and mindsets that are behind each of these groups’ performance.

First, the architects and engineers. Of course they build the tallest structures. That is their profession. They know how to do this, they have done it many thousands of times in their careers. They have practiced. Their success is not because they are discovering anything, rather, they are applying what they already know.

In your kaizen efforts, if you already know the solution, then just implement it! You are an architect or engineer.

BUT in more cases than we care to admit, we actually do not know the solution. We only know our opinion about what the solution should be. So, eliminating the architects and engineers – the people who already know the solution – we are left with populations of people who do not know the solution to the problem already. This means they can’t just decide and execute, they have to figure out the solution.

But decide and execute is what they are trained to do. So the CEOs and business school graduates take a single iteration. They make a plan, execute it, and fully expect it to work. They actually test the design as the last step, just as the deadline runs out.

The little kids, though, don’t do that.

First, they keep their eye on the target objective from the very beginning.

Think about the difference between these two problem statements:

  • Build the tallest tower you can, and put a marshmallow on top.

and

  • Support the marshmallow as far off the table as you can.

In the first statement, you start with the tower – as the adults do. They are focused on the solution, the countermeasure.

But the kids start with the marshmallow. The objective is to hold the marshmallow off the table. So get it off the table as quick as you can, and try to hold it up there. See the difference?

More importantly, though, is that the kids know they do not know what the answer is. So they try something fast. And fail. And try something else. And fail. Or maybe they don’t fail… then they try something better, moving from a working solution and trying to improve it. And step by step they learn how to design a tower that will solve the problem.

Why? Simply because, at that age, we adults have not yet taught the kids that they are supposed to know, and that they should be ashamed if they do not. Kids learn that later.

Where the adults are focused on finding the right answer, the kids are focused on holding up a marshmallow.

Where the adults are trying to show how smart they are, the kids are working hard to learn something they do not know.

Third – look what happened when Wujec raised the stakes and attached a “big bonus” to winning.

The success rate went to zero. Why? He introduced intramural competition and people were now trying to build the best tower in one try rather than one which simply solved the problem.

Now – in the end, who has advanced their learning the most?

The teams that make one big attempt that either works, or doesn’t work?

Or the team that makes a dozen attempts that work, or don’t work?

When we set up kaizen events, how do we organize them?

One big attempt, or dozens of small ones?

Which one is more conducive to learning? Answer: Which one has more opportunities for failure?

Keep your eye on the marshmallow – your target objective.

Last thought… If you think you know, you likely don’t. Learning comes from consciously applied ignorance.


Edited 2 August 2016 to fix dead link. Thanks Craig.