Biggest ERP Failures of 2010

pc pointed out a great little article in a post on the discussion forum.

The article touches lightly on why ERP implementations are so hazard prone, and then lists the “Biggest Failures” of 2010.

Of note is that the majority of the listed failures are governments. I can see why. Governments, by their nature, have a harder time concealing the budget overruns, process breakdowns, and other failures that are endemic to these implementations.

A corporation can have the same, or even worse, experience, but we are unlikely to hear about it. They are going to make the best of it, work around it, and issue benign-sounding declarations such as “the ERP implementation is six months behind schedule,” if for no other reason than to protect themselves from shareholders questioning their competence.

Does anybody have any of their own stories to share?

Keep Visual Controls Simple

In this world of laser beams and ultrasonic transducers, we sometimes lose sight of simplicity.

Remember: the simplest solution that works is probably the best. A good visual control should tell the operator, immediately, if a process is going beyond its specified parameters.

Ideally, the process would be stopped automatically; however, a clear signal to stop, given in time to avoid a more serious problem, is adequate.

So, in that spirit, I give you the following example (from Gizmodo):

Warning Sign

Motivation, Bonuses and Key Performance Indicators

I have posted a few times about the “management by measurement” culture and how destructive it can be. This TED video by Daniel Pink adds some color to the conversation.

Simply put, while traditional “incentives” tend to work well when the task is rote and the solution is well understood, applying those same incentives to situations where creativity is required will reduce people’s performance.

We saw this in Tom Wujec’s Marshmallow Challenge video as well, where an incentive reduced the success rate of the teams to zero.

This time of year companies are typically reviewing their performance and setting goals and targets for next year.

It is important to keep in mind that there is overwhelming evidence that tying bonuses to key performance indicators is a reliable way to reduce the performance of the company.

Teaching the Scientific Method on TV

So the MythBusters are teaching the scientific method as entertainment, and somehow industry is not making the leap that the same thinking applies to management.

Do the financial management methods developed by Alfred P. Sloan have such mass and momentum that there is no way to overcome them?

All of the discussions about “change” in the organization really come down to trying to overpower the way business leaders have been taught to think about decision making.

He Should Have Seen It

In many processes, we ask people to notice things. Often we do this implicitly, by blaming people when something is missed. This is easy to do in hindsight, and easy to do when we are investigating and know what to look for. But in the real world, a lot of important information gets lost in the clutter.

We talk a lot about 5S, separating the necessary from the unnecessary, but we usually apply it to things.

What about information?

How is critical information presented?

How easy is it for people to see, quickly, what they must?

This is a huge field of study in aviation safety, where people get hyper-focused on one thing in an emergency and totally miss the bigger picture.

This site has a really interesting example of how subtle changes in the way information is presented can make a huge difference for someone trying to pull out what is important. The context is totally different, so our challenge is to think about what is revealed here, and ask whether we can find the same things in the clutter of information we present to our people.

The purpose of good visual controls is to tell us, immediately, what we must pay attention to. Too many of them, or too much detail – trying to present everything to everyone – has the opposite effect.

Evidence of Success with MRP?

An old, very esoteric, post got a four-word comment today that set my mind thinking. And because the topic is esoteric, this post is as well – my apologies.

The post, Is the MRP Algorithm Fatally Flawed, gets a lot of search hits because of the title. It discusses an obscure PhD dissertation that asserts that the underlying logic of MRP systems shares defining characteristics with a debunked model of computational intelligence. The researcher makes a compelling case.

The comment, from Indonesia, said, “please send for example.”

Assuming I did not misinterpret the comment, I believe the writer was asking for examples of what does not work.

Here is what got me thinking.

In order to refute Dr. Johnston’s thesis, we have to find a non-trivial case where an unaltered application of the MRP algorithm works as intended. Just one. Then we would have to carefully study that instance to determine whether it was truly a case of MRP working as intended, or something else.

Ironically, the working examples I have seen have gotten there by combining work centers into value streams with pull and systematically turning off the inventory netting and detailed scheduling functions of their MRP. In other words, they are migrating the system toward something that directly connects supplying and consuming processes with each other. These systems are far more able to respond to the small fluctuations that trip up the MRP logic. Those examples, however, confirm, rather than refute, what Dr. Johnston is saying.
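
For readers who have not dug into the mechanics, here is a minimal sketch of the textbook netting and lot-sizing logic for a single part. This is my own illustration, not an example from Dr. Johnston’s dissertation, and the demand figures, lot size, and lead time are all hypothetical. What it shows is the well-documented “nervousness” of the algorithm: a tiny change in demand can shift an entire planned order to a different period.

    # A deliberately simplified single-part MRP calculation -- hypothetical
    # numbers, for illustration only. Real systems add scheduled receipts,
    # safety stock, multi-level BOM explosion, and much more.
    def mrp_plan(gross_requirements, on_hand, lot_size, lead_time):
        """Return planned order releases, one quantity per period."""
        planned_releases = [0] * len(gross_requirements)
        projected_on_hand = on_hand
        for t, demand in enumerate(gross_requirements):
            net = demand - projected_on_hand      # the netting step
            if net > 0:
                # The order must be released lead_time periods earlier.
                # The algorithm simply assumes the lead time is fixed and
                # reliable -- one of the assumptions in question.
                release_period = t - lead_time
                if release_period >= 0:
                    planned_releases[release_period] = lot_size
                projected_on_hand = lot_size - net
            else:
                projected_on_hand = -net
        return planned_releases

    print(mrp_plan([10, 10, 10, 10, 10], on_hand=30, lot_size=50, lead_time=2))
    # -> [0, 50, 0, 0, 0]
    print(mrp_plan([11, 10, 10, 10, 10], on_hand=30, lot_size=50, lead_time=2))
    # -> [50, 0, 0, 0, 0]  (a one-unit change in demand moves the whole order)

Multiply that sensitivity across thousands of part numbers and several levels of bill of material, and the hot lists start to make sense.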

Considering that the vast majority of factories are still trying to make the MRP algorithm work, does anyone have an example where discrete manufacturing-order scheduling of each operation actually gives a workable production plan that can be followed without hot lists and other forms of outside-the-system intervention? Just curious now.

 

How Do You Deal With Marshmallows?

Yesterday, Kris left a great comment with a compelling link to a TED presentation by Tom Wujec, a fellow at Autodesk.

Back in June, I commented on Steve Spear’s article “Why C-Level Executives Don’t Engage in Lean Initiatives.” In that article, Spear contends that business leaders are simply not taught the skills and mindset that drive continuous improvement in an organization. They are taught to decide, rather than to experiment and learn. Indeed, they are taught to analyze and minimize risk in order to arrive at the one best solution.

Tom Wujec observes exactly the same thing. As various groups are trying to build the tallest structure to support their marshmallow, they consistently get different results:

So there are a number of people who have a lot more “uh-oh” moments than others, and among the worst are recent graduates of business school.

[…]

And of course there are teams that have a lot more “ta-da” structures, and, among the best, are recent graduates of kindergarten. […] And it’s pretty amazing.

[…] not only do they produce the tallest structures, but they’re the most interesting structures of them all.

What I find really interesting are the skills and mindsets behind each of these groups’ performance.

First, the architects and engineers. Of course they build the tallest structures. That is their profession. They know how to do this; they have done it many thousands of times in their careers. They have practiced. Their success is not because they are discovering anything; rather, they are applying what they already know.

In your kaizen efforts, if you already know the solution, then just implement it! You are an architect or engineer.

BUT in more cases than we care to admit, we actually do not know the solution. We only know our opinion about what the solution should be. So, eliminating the architects and engineers – the people who already know the solution – we are left with populations of people who do not already know the solution to the problem. This means they can’t just decide and execute; they have to figure out the solution.

But decide and execute is what they are trained to do. So the CEOs and business school graduates take a single iteration. They make a plan, execute it, and fully expect it to work. They actually test the design as the last step, just as the deadline runs out.

The little kids, though, don’t do that.

First, they keep their eye on the target objective from the very beginning.

Think about the difference between these two problem statements:

  • Build the tallest tower you can, and put a marshmallow on top.

and

  • Support the marshmallow as far off the table as you can.

In the first statement, you start with the tower – as the adults do. They are focused on the solution, the countermeasure.

But the kids start with the marshmallow. The objective is to hold the marshmallow off the table. So get it off the table as quickly as you can, and try to hold it up there. See the difference?

Second, and more importantly, the kids know they do not know what the answer is. So they try something fast. And fail. And try something else. And fail. Or maybe they don’t fail… then they try something better, starting from a working solution and trying to improve it. And step by step, they learn how to design a tower that will solve the problem.

Why? Simply because, at that age, we adults have not yet taught the kids that they are supposed to know, and that they should be ashamed if they do not. Kids learn that later.

Where the adults are focused on finding the right answer, the kids are focused on holding up a marshmallow.

Where the adults are trying to show how smart they are, the kids are working hard to learn something they do not know.

Third – look at what happened when Wujec raised the stakes and attached a “big bonus” to winning.

The success rate went to zero. Why? He introduced intramural competition, and people were now trying to build the best tower in one try, rather than one that simply solved the problem.

Now – in the end, who has advanced their learning the most?

The teams that make one big attempt that either works or doesn’t?

Or the teams that make a dozen attempts, each of which works or doesn’t?

When we set up kaizen events, how do we organize them?

One big attempt, or dozens of small ones?

Which one is more conducive to learning? Answer: the one with more opportunities for failure.

Keep your eye on the marshmallow – your target objective.

Last thought… If you think you know, you likely don’t. Learning comes from consciously applied ignorance.


Edited 2 August 2016 to fix dead link. Thanks Craig.

What Have You Learned?

“What have you learned?” It is a question I hear often at the end of kaizen events and other improvement activities. The key point of a typical report-out, though, seems to be how much was accomplished; what was learned comes as an afterthought.

A typical week-long kaizen event is organized like this:

Monday: There may be some classroom-type training, followed by studying the process. This study often consists of collecting cycle times and building spaghetti charts.

Tuesday: The team develops their vision or target state – they decide what they are going to do.

Wednesday / Thursday: The team makes some pretty dramatic changes to the process.

Friday: The team reports the results, followed by pizza for everybody.

The pre-planning often includes some targets for cycle time or inventory, and sometimes those even qualify as a “target condition” tied to higher-level objectives. But equally often, they don’t. The target results are vague, or simply “Look at this process and improve it.”

Here is a question: how many coaching cycles – instances where a situation was understood, a target was established, an attempt was made to hit the target, and the learning was assessed – actually happen in the course of this week?

In the worst case, zero. Those are the instances where the Friday report-out is followed on Monday by leaving the work team to bask in their newly improved process. There is no attempt at all to see what they are struggling with, or if there is, it is an “audit” with the idea of “ensuring compliance” with the new process. No learning at all takes place in this situation. There is only blame shifting.

Nearly as bad is when the answer is “one.” That is, there is some attempt prior to Friday to see whether the target condition has been achieved, and to understand what new issues have emerged. The problem here is that it is usually too late to do anything about them. The improvement experts are moving on to planning another event, and whatever is left behind is usually an action-item list of incompletes from the week.

That doesn’t work very well either.

Neither does trying to capture “learnings” on a flip chart at the end of the event. That is a nice feel-good exercise, but it rarely translates into improving the process of process improvement.

So if the above is a “current condition,” then what is the target?

How can improvement itself be organized so that we learn how to do it better?

One thing would be to structure kaizen events to cycle through the coaching process many more times during the week. Take the five-day agenda, and carry it through EVERY day. That is a start. How would your kaizen events be different if you ran five one-day events in a row rather than one five-day event? Isn’t that close to one-piece-flow?

What would be the next step after that?

What Failed Today?

Now and then, usually when coaching or teaching someone, I get what I think is a flash of insight. Then I realize that, no, there is nothing new here; it is just a different way to say the same thing. Still, sometimes finding a different way of expressing a concept helps people grasp it, so here is one I jotted down while I was working with a plant.

One of the myths of “lean production” is the idea that, at some point, you achieve stability in all of your processes.

Nothing could be further from the truth.

Failure is a normal condition.

The question is not whether you have process breakdowns.

The question is how you respond to them. Actually, a more fundamental question is whether you even recognize a “process failure” that doesn’t knock you over. Our reflex is to build in failure modes that allow things to continue without intervention. In other words, we inject the process with Novocain so we don’t feel the pain. That doesn’t stop us from hitting our thumb with the hammer; it just doesn’t hurt as much.

But think about it a different way.

“What failed today?”

Followed by

“How do we fix that?”

Now you are on the continuous improvement journey. You are using the inevitable process failure as a valuable source of information, because it tells you something you didn’t know.

There is a huge, well-established body of theory in psychology and neuroscience that says we learn best when three things happen:

  1. We have predicted, or at least visualized, some kind of result.
  2. We try to achieve that result, and are surprised by something unexpected.
  3. We struggle to understand what it is that we didn’t know.

In other words, when we (as humans) are confronted with an unexpected event, we are flooded with an emotional response that we would rather avoid. In simple terms, this translates to “we like to be right.” The easiest way to “be right” is to anticipate nothing.

This takes a lot of forms, usually sounding like excuses that explain why stability is impossible, so why bother trying?

Why indeed? Simple – if you ever want to get out of perpetual chaos, you first have to embrace the idea that you must try, and fail, before you even know what the real issues are.

British NHS Executive Talks About Lean

Lesley Doherty, the Chief Executive of NHS Bolton in the U.K., was recently interviewed by IQPC ahead of her keynote at a conference IQPC is sponsoring in Zurich in December. In the spirit of full disclosure: IQPC had invited me to participate in a “blogger’s panel discussion” (along with Karen Wilhelm, author of Lean Reflections) earlier this year in Chicago.

The Chicago conference turned out to be very Six Sigma centric – in spite of having Mike Rother as a keynote speaker. But that is history.

I want to reflect a bit on this podcast. I invite you to listen yourself; it is an interesting perspective from a senior executive discussing her own learning and discovery. I will warn you that you may have to “register” on the web site, though you can uncheck the “send marketing stuff” box. I will also say that the interview’s sound quality is pretty bad, so it is hard to hear the questions, but I was able to reconstruct most of them from context.

What is interesting, to me at least, is that the methods and experiences are pretty standard stuff – common to nearly all organizations undertaking this kind of transformation.

A summary of the notes I took:

They have to deliver hard, budget-level savings on the order of 5% a year for the next several years. That is new to them as a government organization.

They started out with an education campaign across the organization.

Initial efforts were aimed at increasing capacity, but those efforts didn’t result in budget savings. In one case, costs actually increased. They don’t need more capacity; they need to deliver the same with less.

They have identified process streams (value streams), and run “rapid improvement events.”

Senior people have been on benchmarking or study trips to other organizations, both within and outside of the health care arena.

They are struggling to sustain the momentum in the few months after an “event,” and are seeing the “standard” erode a bit – interpreting this as a need to increase accountability and say, “This is how we do things here.”

“Sustaining, getting accountability at the lowest level is the biggest challenge.”

In addition, now that they are under budget pressure, they are starting to look at how to link their improvements to the bottom line, but there isn’t a standardized way to do this.

They believe they are at a “tipping point” now.

There is more, having to do with Ms. Doherty’s personal journey and learning, and with knowledge sharing across organizations that are working on the same things, but the key points I want to address are above.

Please don’t think that this interview is as cold as I have depicted it. It is about 20 minutes long, and Ms. Doherty is very open and candid about what is working and what is not. It is not a “rah-rah, see what we have done?” session.

As I listened, I was intently trying to parse and pull out a few key points. I would have really liked it if these kinds of questions had been asked.

What is their overall long-term vision, beyond meeting budgetary pressure, “radically reviewing” processes, and “transformation”? What is the “true north,” the guide point on the horizon they are steering for?

What is the leadership doing to focus the improvement effort on the things that are important to the organization? What does the process have to look like to deliver the same level and quality of care at 5% lower cost? What kinds of things are in the way of doing that today? Which of those problems are you focused on right now? How is that going? What are you learning?

What did they try that didn’t work, and what did they learn from that experience?

When you say “local accountability” is needed to prevent process erosion, what would that look like? What are you learning about the process when it begins to erode?

The “tipping point” is a great analogy. What behaviors are you looking for to tell you that a fundamental shift is taking place?

As you listen, see if you can parse out what NHS Bolton is actually doing.

Is their approach going to sustain, or are they about to hit the “lean plateau?”

What would the “tipping point” look like to you in this organization?

What advice would you give them, based on what you hear in this interview?