“Are we trying to force compliance or develop leaders?”
The answer to this question is going to set your direction and, in my opinion, ultimately determine your success.
It comes down to your strategy for “change.”
When people talk about “change” they are usually talking about “changing the culture.” Digging down another level, “changing the culture” really means altering the methods, norms and rituals that people (including leaders) use to interact with one another.
In a “traditional” organization, top level leaders seek reports and metrics. Based on those reports and metrics, they ask questions, and issue guidance and direction.
The reports and metrics tend to fall into two categories.
Financial metrics that reflect the health of the business.
Indicators of “progress” toward some kind of objective or goal – like “are they doing lean?”
Floating that out there, I want to ask a couple of key questions around purpose.
There are two fundamental approaches to “change” within the organization.
You can work to drive compliance; or you can work to develop your leaders.
Both approaches are going to drive changes in behavior.
What are the tools of driving compliance? What assumptions do those tools make about how people are motivated and what they respond to?
What are the tools of leader development? What assumptions do those tools make about how people are motivated and what they respond to?
The article touches lightly on why ERP implementations are so hazard-prone, and then lists the “Biggest Failures” of 2010.
Of note is that the majority of the listed failures are governments. I can see why. Governments, by their nature, have a harder time concealing the budget overruns, process breakdowns, and other failures that are endemic to these implementations.
A corporation can have the same, or even a worse, experience, but we are unlikely to know. They are going to make the best of it, work around it, and make benign sounding declarations such as “the ERP implementation is six months behind schedule” if for no other reason than to protect themselves from shareholders questioning their competence.
Does anybody have any of their own stories to share?
In this world of laser beams and ultrasonic transducers, we sometimes lose sight of simplicity.
Remember: the simplest solution that works is probably the best. A good visual control should tell the operator, immediately, if a process is going beyond the specified parameters.
Ideally the process would be stopped automatically; however, a clear signal to stop, given in time to avoid a more serious problem, is adequate.
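That logic is simple enough to sketch in a few lines of code. This is purely illustrative: the function name, the readings, and the limits are all invented here, not taken from any real control system. The point is only that a good control compares each reading against the specified parameters and signals a stop the moment one falls outside them.

```python
# Illustrative sketch of a "signal to stop" check.
# The limits and readings are hypothetical, invented for this example.

def check_process(measurement, lower=9.5, upper=10.5):
    """Return an immediate, unambiguous signal for the operator."""
    if measurement < lower or measurement > upper:
        return "STOP"  # outside the specified parameters: signal now
    return "OK"

# The signal fires on the first out-of-limit reading, in time to
# act before a more serious problem accumulates downstream.
readings = [10.1, 10.0, 10.6, 9.9]
signals = [check_process(r) for r in readings]
```

Nothing clever happens here, and that is the point: the simplest check that reliably produces a clear, timely stop signal is usually the best one.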
So, in that spirit I give you (from Gizmodo) the following example:
I have posted a few times about the “management by measurement” culture and how destructive it can be. This TED video by Daniel Pink adds some color to the conversation.
Simply put, while traditional “incentives” tend to work well when the task is rote and the solution is well understood, applying those same incentives to situations where creativity is required will reduce people’s performance.
We saw this in Tom Wujec’s Marshmallow Challenge video as well, where an incentive reduced the success rate of the teams to zero.
This time of year companies are typically reviewing their performance and setting goals and targets for next year.
It is important to keep in mind that there is overwhelming evidence that tying bonuses to key performance indicators is a reliable way to reduce the performance of the company.
All of the discussions about “change” in the organization really come down to trying to overpower the way business leaders have been taught to think about decision making.
In many processes, we ask people to notice things. Often we do this implicitly by blaming people when something is missed. This is easy to do in hindsight, and easy to do when we are investigating and know what to look for. But in the real world, a lot of important information gets lost in the clutter.
We talk about 5S, separating the necessary from the unnecessary, a lot, but usually apply it to things.
What about information?
How is critical information presented?
How easy is it for people to quickly see what they must see?
This is a huge field of study in aviation safety, where people get hyper-focused on something in an emergency and totally miss the bigger picture.
This site has a really interesting example of how subtle changes in the way information is presented can make a huge difference for someone trying to pull out what is important. The context is totally different, so our challenge is to think about what is revealed here, and see if we can see the same things in the clutter of information we are presenting to our people.
The purpose of good visual controls is to tell us, immediately, what we must pay attention to. Too many of them, or too much detail – trying to present everything to everyone – has the opposite effect.
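A software analogy for this “separate the necessary from the unnecessary” idea is an exception filter: show only the items that actually demand attention, and keep everything that is on target out of the way. The metric names, numbers, and tolerance below are invented for illustration only.

```python
# Hypothetical example: a report that shows only exceptions.
# Metric names, values, and the 5% tolerance are invented.
metrics = {
    "line_1_output": (480, 500),  # (actual, target): within tolerance
    "line_2_output": (505, 500),  # within tolerance
    "line_3_output": (350, 500),  # well outside tolerance
}

def exceptions(metrics, tolerance=0.05):
    """Keep only the items that deviate from target by more than tolerance."""
    return {
        name: (actual, target)
        for name, (actual, target) in metrics.items()
        if abs(actual - target) / target > tolerance
    }

needs_attention = exceptions(metrics)  # only line_3 remains
```

The design choice mirrors the visual-control principle: presenting everything to everyone buries the signal, while filtering to exceptions tells people immediately where to look.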
An old, very esoteric, post got a four-word comment today that set me thinking. And because the topic is esoteric, this post is as well – my apologies.
The post, Is the MRP Algorithm Fatally Flawed, gets a lot of search hits because of the title. The post discusses an obscure PhD dissertation that asserts that the underlying logic of MRP systems shares defining characteristics with a debunked model for computational intelligence. The researcher makes a compelling case.
The comment, from Indonesia, said “please send for example”
Assuming I did not misinterpret the comment, I believe the writer was asking for examples of what does not work.
Here is what got me thinking.
In order to refute Dr. Johnston’s thesis, we have to find a non-trivial case where an unaltered application of the MRP algorithm works as intended. Just one. Then we would have to carefully understand that instance to determine if it was truly a case where MRP is working as intended, or something else.
Ironically, the working examples I have seen have gotten there by combining work centers into value streams with pull and systematically turning off the inventory netting and detailed scheduling functions of their MRP. In other words, they are migrating the system toward something that directly connects supplying and consuming processes with each other. These systems are far more able to respond to the small fluctuations that trip up the MRP logic. Those examples, however, confirm, rather than refute, what Dr. Johnston is saying.
Considering that the vast majority of factories are still trying to make the MRP algorithm work, does anyone have an example of where discrete manufacturing order scheduling of each operation actually gives a workable production plan that can be followed without hot lists and other forms of outside-the-system intervention? Just curious now.
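To make the contrast concrete, here is a deliberately crude toy model. Everything in it is invented for illustration: the lot-sizing rule, the numbers, and the “MRP” here is just an open-loop plan computed once against a fixed forecast, not a claim about any real MRP package. The sketch shows why a schedule that cannot see actual consumption accumulates every small forecast miss, while a pull loop that replenishes only what was consumed stays within one day’s fluctuation.

```python
# Toy model only: an open-loop, lot-batched plan vs. a simple pull loop.
# All numbers and rules are invented for illustration.
import random

random.seed(42)
DAYS = 30
FORECAST = [10] * DAYS
# Actual consumption fluctuates slightly around the forecast.
actual = [10 + random.choice([-2, -1, 0, 1, 2]) for _ in range(DAYS)]

def planned_releases(forecast, lot_size=50):
    """Open-loop: releases computed once, against the forecast only."""
    releases, net = [], 0
    for f in forecast:
        net += f
        lot = (net // lot_size) * lot_size  # batch requirements into lots
        releases.append(lot)
        net -= lot
    return releases

def simulate(supply, demand, start=20):
    """Track inventory day by day for a given supply stream."""
    inv, trace = start, []
    for s, d in zip(supply, demand):
        inv += s - d
        trace.append(inv)
    return trace

# Pull: each day, replenish exactly what was consumed the day before.
pull_supply = [10] + actual[:-1]

mrp_trace = simulate(planned_releases(FORECAST), actual)
pull_trace = simulate(pull_supply, actual)

# The pull loop's inventory error stays within one day's fluctuation;
# the open-loop plan swings with lot batching plus the accumulated
# forecast miss, and goes negative (a stockout, i.e. a hot list).
```

In this toy, `min(mrp_trace)` goes below zero: the plan needs outside-the-system intervention to keep running, while the pull loop’s inventory never moves more than a few units. That is a cartoon of the fluctuation-sensitivity argument, not proof of it.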
Yesterday, Kris left a great comment with a compelling link to a TED presentation by Tom Wujec, a fellow at Autodesk.
Back in June, I commented on Steve Spear’s article “Why C-Level Executives Don’t Engage in Lean Initiatives.” In that article, Spear contends that business leaders are simply not taught the skills and mindset that drive continuous improvement in an organization. They are taught how to decide rather than how to experiment and learn. Indeed, they are taught to analyze and minimize risk to arrive at the one best solution.
Tom Wujec observes exactly the same thing. As various groups are trying to build the tallest structure to support their marshmallow, they consistently get different results:
So there are a number of people who have a lot more “uh-oh” moments than others, and among the worst are recent graduates of business school.
[…]
And of course there are teams that have a lot more “ta-da” structures, and, among the best, are recent graduates of kindergarten. […] And it’s pretty amazing.
[…] not only do they produce the tallest structures, but they’re the most interesting structures of them all.
What is really interesting (to me) are the skills and mindsets that are behind each of these groups’ performance.
First, the architects and engineers. Of course they build the tallest structures. That is their profession. They know how to do this, they have done it many thousands of times in their careers. They have practiced. Their success is not because they are discovering anything, rather, they are applying what they already know.
In your kaizen efforts, if you already know the solution, then just implement it! You are an architect or engineer.
BUT in more cases than we care to admit, we actually do not know the solution. We only know our opinion about what the solution should be. So, eliminating the architects and engineers – the people who already know the solution – we are left with populations of people who do not know the solution to the problem already. This means they can’t just decide and execute, they have to figure out the solution.
But decide and execute is what they are trained to do. So the CEOs and business school graduates take a single iteration. They make a plan, execute it, and fully expect it to work. They actually test the design as the last step, just as the deadline runs out.
The little kids, though, don’t do that.
First, they keep their eye on the target objective from the very beginning.
Think about the difference between these two problem statements:
Build the tallest tower you can, and put a marshmallow on top.
and
Support the marshmallow as far off the table as you can.
In the first statement, you start with the tower – as the adults do. They are focused on the solution, the countermeasure.
But the kids start with the marshmallow. The objective is to hold the marshmallow off the table. So get it off the table as quickly as you can, and try to hold it up there. See the difference?
More importantly, though, is that the kids know they do not know what the answer is. So they try something fast. And fail. And try something else. And fail. Or maybe they don’t fail… then they try something better, moving from a working solution and trying to improve it. And step by step they learn how to design a tower that will solve the problem.
Why? Simply because, at that age, we adults have not yet taught the kids that they are supposed to know, and that they should be ashamed if they do not. Kids learn that later.
Where the adults are focused on finding the right answer, the kids are focused on holding up a marshmallow.
Where the adults are trying to show how smart they are, the kids are working hard to learn something they do not know.
Third – look what happened when Wujec raised the stakes and attached a “big bonus” to winning.
The success rate went to zero. Why? He introduced intramural competition and people were now trying to build the best tower in one try rather than one which simply solved the problem.
Now – in the end, who has advanced their learning the most?
The teams that make one big attempt that either works, or doesn’t work?
Or the team that makes a dozen attempts that work, or don’t work?
When we set up kaizen events, how do we organize them?
One big attempt, or dozens of small ones?
Which one is more conducive to learning? Answer: Which one has more opportunities for failure?
Keep your eye on the marshmallow – your target objective.
Last thought… If you think you know, you likely don’t. Learning comes from consciously applied ignorance.
Edited 2 August 2016 to fix dead link. Thanks Craig.
“What have you learned?” It is a question I hear often at the end of kaizen events and other improvement activity. The key points of a typical report-out, though, seem to focus on how much was accomplished, with what was learned coming as an afterthought.
A typical week-long kaizen event is organized like this:
Monday: There may be some classroom-type training followed by studying the process. This study often consists of collecting cycle times and building spaghetti charts.
Tuesday: The team develops their vision or target state – they decide what they are going to do.
Wednesday / Thursday: Make some pretty dramatic changes to the process.
Friday: Report the results followed by pizza for everybody.
The pre-planning often includes some targets for cycle time or inventory, and sometimes even qualifies as a “target condition” that is focused on larger level objectives. But equally often, it doesn’t. The target results are vague or simply “Look at this process and improve it.”
Here is a question – how many coaching cycles – instances where a situation was understood, a target was established, an attempt was made to hit the target, and learning was assessed – actually happen in the course of this week?
In the worst case, zero. Those are the instances where the Friday report-out is followed on Monday by leaving the work team to bask in their newly improved process. There is no attempt at all to see what they are struggling with, or if there is, it is an “audit” with the idea of “ensuring compliance” with the new process. No learning at all takes place in this situation. There is only blame shifting.
Nearly as bad is when the answer is “one.” That is, there is some attempt prior to Friday to see if the target condition is achieved, and to understand what new issues have emerged. The problem here is that it is usually too late to do anything. The improvement experts are moving on to planning another event, and whatever is left behind is usually an action item list of incompletes from the week.
That doesn’t work very well either.
Neither does trying to capture “learnings” on a flip chart at the end of the event. That is a nice feel-good exercise, but it rarely translates into improving the process of process improvement.
So if the above is a “current condition” then what is the target?
How can improvement itself be organized so that we learn how to do it better?
One thing would be to structure kaizen events to cycle through the coaching process many more times during the week. Take the five-day agenda and carry it through EVERY day. That is a start. How would your kaizen events be different if you ran five one-day events in a row rather than one five-day event? Isn’t that close to one-piece-flow?
Now and then, usually when coaching or teaching someone, I get what I think is a flash of insight. Then I realize that, no, there is nothing new here, it is just a different way to say the same thing. Still, sometimes finding a different way of expressing a concept helps people grasp it, so here is one I jotted down while I was working with a plant.
One of the myths of “lean production” is the idea that, at some point, you achieve stability in all of your processes.
Nothing could be further from the truth.
Failure is a normal condition.
The question is not whether or not you have process breakdowns.
The question is how you respond to them. Actually, a more fundamental question is whether you even recognize “process failure” that doesn’t knock you over. Our reflex is to try to build failure modes that allow things to continue without intervention. In other words, we inject the process with Novocain so we don’t feel the pain. That doesn’t stop us from hitting our thumb with the hammer, it just doesn’t hurt so much.
But think about it a different way.
“What failed today?”
Followed by
“How do we fix that?”
Now you are on the continuous improvement journey. You are using the inevitable process failure as a valuable source of information, because it tells you something you didn’t know.
There is a huge, well-established body of theory in psychology and neuroscience that says we learn best when three things happen:
We have predicted, or at least visualized, some kind of result.
We try to achieve that result, and are surprised by something unexpected.
We struggle to understand what it is that we didn’t know.
In other words, when we (as humans) are confronted with an unexpected event, we are flooded with an emotional response that we would rather avoid. In simple terms, this translates to “we like to be right.” The easiest way to “be right” is to anticipate nothing.
This takes a lot of forms, usually sounding like excuses that explain why stability is impossible, so why bother trying?
Why indeed? Simple – if you ever want to get out of perpetual chaos, you first have to embrace the idea that you must try, and fail, before you even know what the real issues are.