The Ecosystem of Culture

An organization’s culture and mindset evolve over time. When confronted with a problem or challenge, the organization (or, more accurately, the people in the organization) views it through a filter of their experiences. Ideas that they believe have worked for them under similar conditions in the past are more likely to be applied again. Ideas that have seemed less successful, or more difficult, in the past are less likely to be applied again.

Over time, this collective experience determines how they respond to the day-to-day rough spots as well as to more serious challenges. Those unconscious biases drive the responses and, in turn, shape how their processes are structured.

Different Cultures = Different Ecosystems

The process mechanics in a company like Toyota evolved over decades in a very specific organizational culture ecosystem, with specific values and beliefs shaped by their historic experiences.

When we are looking at the current processes in a different company, we are seeing the process mechanics that evolved in their management culture. Those process mechanics are optimized by the pressures exerted by the way THAT company is managed. Since Toyota is managed differently, its processes are optimized by different pressures, so they will look different.

If we take Toyota’s process mechanics and shift them into a different ecosystem, they will have different pressures exerted upon them. Different default decisions will be made. These alien process mechanics will likely begin to resemble the legacy processes rather quickly, if they survive at all.

This is why the promise of a rapid and dramatic change in operational results is frequently unfulfilled. The process mechanics are imported from a tropical rain forest, and installed in an alpine meadow. As beautiful as it looks in one environment, it won’t stand for long in the other.

Adjusting the Culture vs. Adjusting the Process Mechanics

If we want this transplant to work, we have to pay careful attention to those evolutionary pressures. In practical terms, this means that as we try the new mechanics, we must watch carefully to learn what problems they reveal. We also need to observe the decisions that are made when these problems come up.

What adjustments need to be made in the way people interact, and to the immediate response to problems or surprises if this new process is to thrive?

Having a formal structure for this deliberate self-reflection is critical.

The Improvement Kata is engineered to specifically drive this kind of reflection by making changes as experiments, then deliberately reflecting with the question “What have we learned?”

For this to work, of course, we must be honest with ourselves and not just issue a flip answer like “It doesn’t work.”

Because we are asking people to adjust their responses, we are asking them to do things which are unfamiliar and may well run opposite from what they have experienced as successful for them in the past. If we try to move too fast, we are asking them to trust an alien process which is, in their experience, unproven in their environment. We might be asking them to reveal their own limits of knowledge – which is very scary for most of us.

That, in turn, asks for reflection on why “I don’t know…” is so scary to admit in the organization’s culture.

We have sold “lean” as a deceptively simple set of common-sense process mechanics with the idea that if we just implement them, we’ll get incredibly great results. As true as that is, “just implement them” is a lot harder than most of the “rapid improvement” models imply.

There is a lot going on behind what appears to be well understood and simple on the surface.

Executive Rounding: Taking the Organization’s Vitals

Background:

I wrote an article appearing in the current (October 2017) issue of AME Target Magazine (page 20) that profiles two very different organizations that have both seen really positive shifts in their culture. (And yes, my wife pointed out the misspelling “continous” on the magazine cover.)

The second case study was about Meritus Health in Hagerstown, Maryland, and I want to go into a little more depth here about an element that has, so far, been a keystone to the positive changes they are seeing.

Sara Abshari and Eileen Jaskuta are presenting the Meritus story at the AME conference next week (October 9, 2017).

Sara is a manager (and excellent kata coach) in the Meritus CI office. Eileen is now at Main Line Health System, but was the Chief Quality Officer at Meritus at the time Joe was presenting at KataCon.

Their presentation is titled Death From Kaizen to Daily Improvement and outlines the journey at Meritus, including the development of executive rounding. If you are attending the conference, I encourage you to seek them out – as well as Craig Stritar – and talk to them about their experiences.

Mark’s Word Quibble

In addition, honestly, the Target Magazine editors made a single-word change in the article that I feel substantially changed the contextual meaning of the paragraph, and I am using this forum to explain the significance.

Here is the paragraph from the draft as originally submitted. (Highlighting added to point out the difference):

[…][Meritus][…] executives follow a similar structure as they round several times a week to check-in with the front line and ensure there are no obstacles to making progress. Like the Managing Daily improvement meetings at Idex, the executive rounding at Meritus has evolved as they have learned how to connect the front-line improvements to the strategic priorities.

This is what appears in print in the magazine:

[…][Meritus][…] executives follow a similar structure as they visit several times a week to check in with the frontline and ensure there are no obstacles to making progress. Like the MDI meetings at Idex, the executive visiting at Meritus has evolved as they have learned how to connect the front-line improvements to the strategic priorities.

While this editing quibble can easily be dismissed as a pedantic author (me), the positive here is it gives me an opportunity to highlight different meanings in context, go into more depth on the back-story than I could in the magazine article, and invite those of you who will be attending the upcoming AME conference to talk to some of the key people who will be presenting their story there.

Rounding vs. Visiting

In the world of healthcare, “rounding” is the standard work performed by nurses and physicians as they check on the status of each patient. During rounds, they should be deliberately comparing key metrics and indicators of the patient’s health (vital signs, etc.) against what is expected. If something is out of the expected range, that becomes a signal for further investigation or intervention.

“Visiting” is what the patient’s family and friends do. They stop by, and engage socially.

In industry, we talk about “gemba walks,” and if they are done well, they serve the same purpose as “rounding” on patients in healthcare. A gemba walk should be standard work that determines if things are operating normally, and if they are not, investigating further or intervening in some way.

I am speculating that if I had used the term “structured leader standard work” rather than “rounding” it would not have been changed to “visiting.”

Executive Rounding

Joe Ross, the CEO at Meritus Health, presented a keynote at the Kata Summit last February (2017). You can actually download a copy of his presentation here: http://katasummit.com/2017presentations/. The title of his presentation was “Creating Healthy Disruption with Kata.” More about that in a bit.

The keystone of his presentation was about the executives doing structured rounding on various departments several times a week. These are the C-Level executives, and senior Vice Presidents. They round in teams, and change the routes they are rounding on every couple of weeks. Thus, the entire executive team is getting a sense of what is going on in the entire hospital, not just in their departments.

Rather than just “visiting,” they have a formal structure of questions, built from the Coaching Kata questions + some additional information. Since everyone is asking the same basic questions, the teams can be well prepared and the actual time spent in a particular department is programmed to be about 5 minutes. The schedule is tight, so there isn’t time to linger. This is deliberate.

After the teams round, the executives meet to share what they have seen, identify system-wide issues that need their attention, and reflect on what they have learned.

In this case, rather than rounding on patients, the executives are rounding to check the operational health of the hospital. They are checking the vital signs and making sure nothing is impeding people from doing the right thing – do people know the right thing to do? If not, then the executives know they need to provide clarity. Do people know how to do the right thing? If not, then the executives need to work on building capability and competence.

In both cases, executives are getting information they need so they can ensure that routine things happen routinely, and the right people are working to improve the right things, the right way. In the long-term, spending this time building those capabilities and mechanisms for alignment deep into the operational hierarchy gives those executives more time to deal with real strategic issues. Simply put, they are investing time now to build a far more robust organization that can take on bigger and bigger challenges with less and less drama.

Results

Though they were only a little more than a year in when Joe presented at KataCon, he reported some pretty interesting results. I’ll let you look at the presentation to see the statistically significant positive changes in employee surveys, patient safety and patient satisfaction scores. What I want to bring attention to are the cultural changes that he reported:

[Image: slide from Joe’s presentation listing the cultural changes he reported]

Leadership Development

Actually, points 1 and 2 above are both about leadership development. The executives are far more in touch with what is happening, not only in their own departments, but in others. Even if they don’t round on their own departments, they hear from executives who did, and get valuable perspectives and questions from outsiders. This helps break down silo walls, build more robust horizontal linkages, and give their people a stage to show what they are working on.

Since executives can’t be the ones with all of the solutions, they are (or should be) mostly concerned with developing the problem-solving capabilities in their departments. At the same time, rounding gives them perspective on problems that only executive action can fix. In many organizations, a mid-level manager facing these systemic obstacles would try to work around them, ignore them, or just accept “that’s the way it is,” and nothing gets done about these things. That breeds helplessness rather than empowerment.

On the other hand, if a manager should be able to solve the problem, then there is a leader development opportunity. That is the point when the executive should double down on ensuring the directors and upper managers are coaching well, have target conditions for developing their staff, and are aware of who is struggling and who is not. You can’t delegate knowing what is actually going on. Relying on reports from subordinates without ever checking in a couple of levels down invites well-meaning people to gloss over issues they don’t want to bother anyone about.

Breaking Down Silos by Providing Transparency

The side-benefit of this type of process is that the old cultures of “stay out of my area” silos get broken down. It becomes OK to raise problems. The opposite is a culture where executives consider it betrayal if someone mentions a problem to anyone outside of the department. That control of information and deliberate isolation in the name of maintaining power doesn’t work here. Nobody likes to work in a place like that. Once an organization has started down the road toward openness and no-blame problem solving, it’s hard to turn back without creating backlash of some kind within the ranks.

Creating Disruption

Joe used the term “Disruption” in the title of his presentation. Disruption is really more about emotions than process. There is a crucial period of transition because this new transparency makes people uncomfortable if they come from a long history of trying hard to make sure everything looks great in the eyes of the boss. Even if the top executive wants transparency and getting things out in the open, that often doesn’t play well with leaders who have been steeped in the opposite.

Thus, this process also gives a CEO and top leaders an opportunity to check, not only the responses of others, but their own responses, to the openness. If there are tensions, that is an opportunity to address them and seek to understand what is driving the fear.

In reality, that is very difficult. In our world of “just the facts, ma’am” we don’t like to talk about emotions, feelings, things that make us uncomfortable. Those things can be perceived as weakness, and in the Old World, weakness could never be shown. Being open about the issues can be a level of vulnerability that many executives haven’t been previously conditioned to handle. Inoculation happens by sticking with the process structure, even in the face of pushback, until people become comfortable with talking to each other openly and honestly. The cross-functional rounding into other departments is a vital part of this process. Backing off is like stopping taking your antibiotics because you feel better. It only emboldens the fear.

These kinds of changes can challenge people’s tacit assumptions about what is right or wrong. Emotions can run high – often without people even being aware of why.

Make Sure Failure = Learning

Take a look at this cool video from Space-X that highlights all of the failures that preceded their successful (and now more or less routine) landing of a recoverable orbital booster rocket. Then let’s discuss it a bit.

(Here is the direct link if you don’t get the embed in your feed: https://www.youtube.com/watch?v=bvim4rsNHkQ)

When we see failure, or even failure after failure, it is easy to forget that learning is rarely linear.

A Culture of Learning

Organizations like Space-X (and their counterparts such as Blue Origin) are in the business of learning. They are pushing the edges of what is known and moving into new territory. For organizations that understand that setbacks, mistakes, failures and the like are an inevitable part of learning, these things – while costly and unpleasant – are regarded as part of the process.

We have seen the same mechanisms in play – a process of experimentation through progressive target conditions toward a visionary challenge – behind pretty much every breakthrough achievement throughout history.

No Mistakes = No Learning

At the opposite end of the spectrum are organizations with no tolerance for mistakes. They expect everything (and everyone) to get everything right every time. They dismiss as incompetent any notion of failure, and attack as weakness any admission of “I don’t know” or “I don’t know how.”

A few years ago, as I was teaching Toyota Kata coaching with a client, a middle manager approached me during a break and said – point blank – that it was not his responsibility to develop his people. “Our policy is to hire competent people, and we expect them to be able to do the job.” He wasn’t the only one to say that, so I formed the impression that this belief was, indeed, part of their culture. Needless to say, they struggle a bit with getting innovation to happen because they try to mechanize the process.

Mistakes = Tuition

Here’s how I look at it. When a mistake happens – especially one that is expensive – you have paid considerable tuition. Your choice now is to either extract as much learning as you can from the event, or to try to ignore it and move on. The latter choice is like paying your tuition up front, then skipping all of your classes and wondering why you aren’t getting it.

Learning = Adapting to Change

Organizations that manage in ways that regard learning as part of their everyday experience are much more adaptive to changes and surprises than those who just execute their routines every day. The paradox here is that organizations who value learning are generally the most disciplined at following their routines. This discipline makes execution a hypothesis test, and they can quickly see when their process isn’t appropriate and adapt and learn quickly as an organization. They strengthen their routines, and through those routines, embed what they have learned in the organization’s DNA for future generations.

Organizations that figure it out as they go, on the other hand, tend to rely on individuals to adapt, but there is no mechanism to capture that learning beyond the individual or small group. Sometimes there is a “lessons learned” document, but that’s it. Those reports rarely result in the changes in organizational behavior that reflect learning. I suppose the most egregious case would be the loss of the space shuttle Columbia upon re-entry for exactly the same organizational failures that resulted in the loss of Challenger.

Technical vs. Cultural Learning

Space-X is solving a technical problem with science and engineering. I hope (and expect) that as they become more successful they will always be striving for something really hard that will drive them to the next level. Based on what I see publicly, I think that is embedded into their culture by Elon Musk. (But I don’t really know. If anyone from Space-X is reading this, how about getting in touch? I’d love to learn more.)

I expect this works for Space-X because they have a culture of learning.

What doesn’t work, though, is to try to apply technical solutions to transition a rote-execution culture into a learning culture. Changing the culture – the default behaviors and responses of people as they interact – isn’t about improving the mechanics of the work process. You certainly can work on the work processes, but the starting condition is what evolved in the context of the organization’s culture. The mechanics of the “improved” process that we try to duplicate evolved in the context of a learning culture. The ecosystems are different. It is difficult for a lean process to survive in a culture that expects everything to run perfectly and doesn’t have robust mechanisms to turn problems into improvements.

Creative Safety Supply: Kaizen Training and Research Page

Normally when I get an email from a company pointing me to the great lean resource on their web page, I find very little worth discussing. But Creative Safety Supply in Beaverton, Oregon has some interesting material that I think is worth taking a look at.

First, to be absolutely clear, I have not done business with them, nor do I have any business relationship. I can’t speak, one way or the other, about their products, customer service, etc.

With that out of the way, I found their Kaizen Training and Research Page interesting enough to go through it here and comment on what I see.

What, exactly, is “PDCA?”

The section titled Kaizen History goes through one of the most thorough discussions of the evolution of what we call “PDCA” I have ever read, tracing back to Walter Shewhart. This is the only summary I have ever seen that addresses the parallel but divergent histories of PDCA through W. Edwards Deming on the one hand and Japanese management on the other. There has been a lot of confusion over the years about what “PDCA” actually is. It may well be that that confusion originates from the same term having similar but different definitions depending on the context. This section is summed up well here:

The Deming Circle VS. PDCA

In August of 1980, Deming was involved in a Roundtable Discussion on Product Quality–Japan vs. the United States. During the roundtable discussion, Deming said the following about his Deming Circle/PDSA and the Japanese PDCA Cycle, “They bear no relation to each other. The Deming circle is a quality control program. It is a plan for management. Four steps: Design it, make it, sell it, then test it in service. Repeat the four steps, over and over, redesign it, make it, etc. Maybe you could say that the Deming circle is for management, and the QC circle is for a group of people that work on faults encountered at the local level.”

So… I learned something! Way cool.

Rapid Change vs. Incremental Improvement

A little further down the page is a section titled Kaizen Philosophy. This section leans heavily on the thoughts / opinions of Masaaki Imai through his books and interviews. Today there is an ongoing debate within the lean community about the relative merits of making rapid, radical change, vs. the traditional Japanese approach of steady incremental improvement over the long-haul.

In my opinion, there is nothing inherently wrong with making quick, rapid changes IF they are treated as an experiment in the weeks following. You are running to an untested target condition. You will likely surface many problems and issues that were previously hidden. If you abandon the operators and supervisors to deal with those issues on their own, it is likely they simply won’t have the time, skill, or clarity of purpose required to work through those obstacles and stabilize the new process.

You will quickly learn what the knowledge and skill gaps are, and need to be prepared to coach and mentor people through closing those gaps. This brings us to the section that I think should be at the very top of the web page:

Respect for People

Almost every discussion about kaizen and continuous improvement mentions that it is about people, and this page is no different. In truth, however, the improvement culture we usually describe is process focused rather than people focused, and other than emphasizing the importance of getting ideas from the team, “employee engagement” is often lip service. There is, I think, a big difference between “employee engagement” and “engaging employees.” One is passive, waiting for people to say something. The other is active development of leaders.

Management and Standards

When we get into the role of management, the discussion turns somewhat traditional. Part of this, I think, is a common western interpretation of the word “standards” as things that are created and enforced by management.

According to Steve Spear (and other researchers), Toyota’s definition of “standard” is quite different. It is a process specification designed as a prediction. It is intended to provide a point of reference for the team so they can quickly see when circumstances force them to diverge from that baseline, revealing a previously unknown problem in the process.

Standards in this world are not something static that “management should make everyone aware of” when they change. Rather, standards are established by the team, for the team, so the team can use them as a target condition to drive their own work toward the next level.

This doesn’t mean that the work team is free to set any standard they like in a vacuum. This is the whole point of the daily interaction between leaders at all levels. The status-quo is always subjected to a challenge to move to a higher level. The process itself is predicted, and tested, to produce the intended quality at the predicted cost, in the predicted time, with the predicted resources. Because actual process and outcomes are continuously compared to the predicted process and outcomes, the whole system is designed to surface “unknowns” very quickly.

This, in turn, provides opportunities to develop people’s skills at dealing with these issues in near-real time. The whole point is to continuously develop the improvement skills at the work team level so we can see who the next generation of leaders are. (Ref: Liker and Convis, “The Toyota Way to Lean Leadership”)
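
To make the “standard as a prediction” idea concrete, here is a minimal sketch in Python. The numbers and field names are my own illustration, not anything from Toyota or Spear; the point is only that every divergence from the predicted process is a signal of a previously unknown problem.

    # Illustrative only: a "standard" expressed as a prediction about the process,
    # with the actual process continuously checked against it.
    STANDARD = {
        "cycle_time_sec": 62,                                  # predicted cycle time
        "sequence": ["pick", "assemble", "inspect", "place"],  # predicted work sequence
    }

    def check_against_standard(actual_cycle_sec, actual_sequence):
        """Return the divergences from the predicted (standard) process."""
        divergences = []
        if actual_cycle_sec > STANDARD["cycle_time_sec"]:
            over = actual_cycle_sec - STANDARD["cycle_time_sec"]
            divergences.append(f"Cycle ran {over:.0f}s over the predicted time")
        if actual_sequence != STANDARD["sequence"]:
            divergences.append("Work sequence diverged from the standard")
        return divergences  # each divergence points at a previously unknown problem

    print(check_against_standard(71, ["pick", "assemble", "place", "inspect"]))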

Staging improvement as a special event, “limited time only” during which we ask people for input does not demonstrate respect, nor does it teach them to see and solve those small issues on a daily basis.

There’s more, but I’m going to stop here for now.

Summary

Creative Safety Supply clearly “gets it.” I think this page is well worth your time to read, but (and this is important), read it critically. There are actually elements of conflicting information on the page, which is awesome because it gives you (the reader) an opportunity to pause and think.

From that, I think this one-page summary really reflects the state of “lean” today: There IS NO CANONICAL DEFINITION. Anyone who asserts there is has, by definition, closed their mind to the alternatives.

We can look at “What Would Toyota Do?” as somewhat of a baseline, but ultimately we are talking about an organizational culture. Toyota does what they do because of the ways they structure how people interact with one another. Other companies may well achieve the same outcomes with different cultural mechanisms. But the interactions between people will override process mechanics every time.

Hopefully I created a lot of controversy here.  🙂

There Are No Silver Bullets

There are no quick, simple solutions

Occasionally I get an email from someone who asks a question like “How can I improve cycle time in the [fill in the blank here] industry?” Generally my reply is along the lines of “I don’t know, but I can help you figure it out.” I’ll give them some homework, often pointing them at Mike Rother’s Toyota Kata Practice Guide online, and asking them to do the Process Analysis step and get back to me with what they have seen and learned.

[Image: the Lone Ranger with a silver bullet. “Who was that masked man?”]

This is usually followed by silence (cue the crickets here). Perhaps they think there is an easy answer and a single email can just tell them what to do to get that 20% performance improvement.

Unfortunately it doesn’t work like that. Process improvement involves work. There aren’t easy fixes (that last). There isn’t any solution anyone can give you that can just be implemented, nor can anyone learn it for you.

The real work is adjusting your culture

Digging a little deeper, if you want that productivity improvement to reach even a fraction of your full potential, or to sustain for any length of time, you have to go beyond technical solutions. When I said process improvement involves work, I wasn’t talking about the technical mechanics; those are the easy part. The real work is understanding what social and cultural norms in your organization are holding you back, and dealing with those.

Fortunately, we have learned a lot more about the influence of an organization’s culture, and about how to influence that culture. But influencing the culture doesn’t happen by accident. And you can’t outsource your own thinking, reflection and learning.

Learning Starts With “I Don’t Know”

If an organization wants to encourage learning, they have to get comfortable with not having all of the answers. Learning only happens when we discover something we don’t know, and then actively pursue understanding it. Many organizations, though, equate “having the answers” or “already knowing” with “competence.” Thus, if I say “I don’t know” then I am setting myself up for being regarded as incompetent.

What I see in these organizations is that people take great pains to hide problems. They try very hard to figure things out, but do so in the background, always reporting that everything is going fine. They live in the hope that someone else’s problem will emerge as the show-stopper before theirs does, and give them the extra time to sort out their issue.

Meanwhile, the bosses are frustrated because people aren’t being truthful with them. But what should they expect if “truth” attracts accusations of being incompetent?

But… there is hope.

I was talking to a friend last week who works in a huge company that seems to be making an earnest effort to shift their culture. There is nearly unanimous agreement that the existing culture isn’t working for them. On the other hand, actually changing culture is really, really hard because it involves changing people’s immediate, habitual responses to things.

Nevertheless, I was encouraged when my friend recounted a recent meeting where someone admitted two things:

  1. There was an unexpected problem that came out in a recent test.
  2. They, right now, don’t know how to fix it.

Just to be clear, these two things coming out in this meeting is a big deal. This has been a culture where unexpected problems have not been warmly received. Bringing them up without a confident assessment about a prospective solution was inviting the kind of intervention that is rarely helpful.

This time, though, was a little different.

The leaders started going down the path of the expected responses, such as “What do you mean we don’t know what to do?”… then stopped short. They paused, and realized this was not in line with their newly stated values of creating trust and accepting failure as an inherent part of learning.

And they changed their tone. They shifted the conversation from trying to assign blame for the test failure toward asking what we, the organization, needed to learn to better understand what happened.

My thoughts are:

Kudos to the person who was brave enough to test the waters and admit “I don’t know.”

Obstacles: Right Now vs. Longer Term

A couple of weeks ago Gemba Academy filmed my Toyota Kata class and some shop floor work with a live audience at one of their customer’s sites. One of the participants asked a really good question. Upon reflection, I think I can answer it better here than I did “live,” so I’m going to take a do-over.

Background

The team had analyzed their current condition, had established a pretty good target condition, and was working through obstacles.

One of the obstacles was around the fact that the written procedures had not kept up with the way the work was really being performed. This is actually pretty common in industry. The people doing the work know how to do it, and get it done in ways that are better than what is in the documents.

Nevertheless, they needed to update those procedures. If they did not, then new people, or workers that might be rotated into the area temporarily for some reason, would struggle to perform the work in the best way.

The Question

This obstacle was not directly in the way of reaching their target condition. However, they knew the target process would not be stable in the face of people rotation or turnover. The question was along the lines of:

Is it an obstacle if we can’t sustain the target condition unless we address it?

Answer: It Depends

At the risk of bringing up some really old U.S. political humor, “It depends on what your definition of ‘target condition’ is.”

Here is what I am thinking now.

The first step is to get the target process, as they defined it, to work at all. To do this, I would work to control variables, including trying hard to avoid rotating people through there while I am getting it dialed in.

Once we have established that the target process can work with experienced people, then the next target condition might well be to get this process anchored well enough that it will sustain over time without tons of intervention.

Maybe my next target condition is to be able to sustain the target process no matter who is doing it (assuming they have the basic qualification to do that kind of work).

One of the obstacles in the way of that target condition could well be “Our documentation is obsolete.”

Most documentation I have encountered in any industry is actually pretty poor. So this represents an opportunity to experiment your way into developing process documentation that (1) can actually be followed as written and (2) might even be useful for training someone. I’ve never seen that work without a process of iterative trials.

So in this case, I would say “Get it to work the way you intend it to first.” Make that your target condition. THEN start looking at what erodes it if new people step in. In this case, especially, that is going to involve much more than simply updating documentation. How can you set up the work area so that anyone knows what must be done next? What do you need to teach? What do you need to communicate?

I’m Still Thinking About This

Finally, I think this is one of those real-world cases where there isn’t a hard right or wrong answer. There wouldn’t be any harm in updating the process documentation early – except that I expect they will have to do it over once they learn more.

And – not all “obstacles” are actually problems to solve. Sometimes (though less often than we think), there is just something that has to be done that we already know how to do – we just haven’t done it. In those cases, just do it and move on – EXCEPT: Make sure you predict the result of your “just do it,” and CHECK to make sure it worked the way you thought it would. I’ll lay even money it doesn’t, but you won’t know unless you construct it as an experiment. “Just do its” usually turn into “Oh… that didn’t work quite like we thought it would.”
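
If it helps to make the “predict, then check” habit concrete, here is a minimal sketch of an experiment record in Python. The field names are my own illustration, not a prescribed Toyota Kata format:

    from dataclasses import dataclass

    @dataclass
    class ExperimentRecord:
        """One 'just do it' treated as an experiment (illustrative fields only)."""
        step: str            # what we are going to do
        expectation: str     # what we predict will happen
        observed: str = ""   # what actually happened (filled in at the check)
        learned: str = ""    # what the difference taught us

        def prediction_held(self) -> bool:
            return self.observed == self.expectation

    # Usage: write the prediction down BEFORE taking the step, then go and see.
    record = ExperimentRecord(
        step="Post the updated setup procedure at the machine",
        expectation="Next operator completes setup in under 10 minutes without help",
    )
    record.observed = "Operator needed help interpreting step 4"
    record.learned = "The written step assumes knowledge only the regular operator has"
    print("Prediction held?", record.prediction_held())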

Just make sure you are deliberately learning rather than doing things by rote.

A Machine Productivity Search Question

Here is another test or quiz question that showed up in my search logs:

a machine tool is producing 90 pcs per day .using improved cutting tools,the output is raised to 120 pcs per day. what is the increase in productivity of the machine?

I guess the answer seems pretty obvious…

90 x 1.33, rounded a bit, is 120, which gives us a 33% productivity improvement, right?

Not so fast…

(pun intended)

How fast does the machine need to run?

Is there demand for the additional 30 pieces per day, or are they just being put into inventory with the hope they will sell at some point in the future?

This is actually pretty common where cost accounting systems allocate overhead against production output rather than actual sales.

But what if you are only selling 90 pieces per day?

After three days you will have a day’s worth in inventory. You are running the machine more than you have to, adding wear and tear. You are consuming material to make parts you aren’t selling. At some point you are going to have to shut down the machine – idle it. What is your productivity then?
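
Here is a quick back-of-the-envelope sketch in Python, using the numbers from the quiz question, showing how the “extra” output just piles up when only 90 pieces a day are actually selling:

    # Illustrative arithmetic only: 120 pcs/day produced vs. 90 pcs/day actually sold.
    daily_output = 120
    daily_demand = 90

    inventory = 0
    for day in range(1, 8):
        inventory += daily_output - daily_demand
        print(f"Day {day}: {inventory} pcs in inventory "
              f"({inventory / daily_demand:.1f} days of demand)")

    # After three days a full day's demand is sitting in inventory. Sooner or
    # later the machine must idle, so the "33% improvement" never shows up as
    # productivity for the system as a whole.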

What Problem Are You Trying to Solve?

It always comes down to this question. Is there a real-world, customer-impacting reason you need the additional output? If so, then yes, this is a valid countermeasure, similar to one I have overseen myself. If the machine is too slow, what do we have to do to run it faster (while maintaining quality and not breaking anything)?

But if the machine is fast enough, then why are you trying to make it run faster?

And what will happen if we do? Use real numbers. You don’t have revenue until a customer with real money (not transfer pricing) actually pays for your product. Pretending otherwise looks great on the balance sheet for a while, but the paper profits aren’t tangible: you can’t use that “money” to buy anything else or distribute it to shareholders. In fact, it is just money you have spent, not money you have earned.

Machine Utilization at Home

At least here in the USA, a typical home washing machine will run a cycle in about 25 minutes. The dryer takes about 40 minutes to complete a cycle. If you wanted “maximum efficiency” from the washing machine, all you would get is a big pile of wet laundry. There is no point in running the washer any more often than once every 40 minutes. The dryer is pacing the system.

If I could modify the washing machine to run in 15 minutes instead of 25, how much more productivity do I have? The question is nonsense.
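
The same arithmetic, sketched in Python with the rough cycle times above: the system’s output is paced by the slowest step, so speeding up the washer changes nothing.

    def loads_per_hour(cycle_minutes):
        return 60 / cycle_minutes

    washer, dryer = 25, 40   # minutes per cycle, rough household figures from above
    baseline = min(loads_per_hour(washer), loads_per_hour(dryer))
    print(f"System output: {baseline:.2f} loads/hour (paced by the dryer)")

    faster_washer = 15       # the "improved" washer
    after = min(loads_per_hour(faster_washer), loads_per_hour(dryer))
    print(f"After speeding up the washer: {after:.2f} loads/hour - no change")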

This example makes perfect sense to people. Yet I often get arguments about how the factory floor is somehow different.

It’s The System, not the Machine

Key Point: You can’t look at one machine in isolation and calculate how “efficient” or “productive” it is unless it is pacing your system. In this case, we don’t have enough information.

Now, I know this example was just a made up case. But I have seen well-meaning production people fall into this trap all of the time. You have to look at the system, not individual machines.

Think Big, Change Small

Anton, my Dutch friend, had a study mission group of Healthcare MBA students from the University of Amsterdam visiting Seattle last week.

Friday morning I spent about four hours with them going through the background and basics of the Improvement Kata and Coaching Kata, and worked to tie that in to what they observed in their visits to local companies. They were a great, engaging group that was fun to work with.

One thing I do to close out every session I do with a group is ask “What did we learn?” and write down their replies on a flip chart. I find that helps foster some additional discussion and consolidate learning. It also gives me feedback on what “stuck” with them.

Sometimes I get a gem that would make a good title for a blog post. The title of this post is one of those.

Think Big

Alice and the Cat

Alice went on, “Would you tell me, please, which way I ought to go from here?”

“That depends a good deal on where you want to get to,” said the Cat.

“I don’t much care where…“ said Alice, “…so long as I get somewhere,” Alice added in explanation.

“Oh, you’re sure to do that,” said the Cat, “if only you walk long enough.”

The question, of course, is whether or not where Alice ends up is where she intends to go. A lot of continuous improvement activity takes this approach –  Look for waste. Brainstorm ideas. Implement them. Just take steps. And, like Alice, you will surely end up somewhere if you do this enough.

I had a former boss (back in the late 90s) advocate this approach. “We are painting the wall with tennis balls dipped in paint.” The idea, I think, was that sooner or later all of the splotches would start to connect into a coherent color. Maybe. But, at the same time, he was also very impatient for tangible results. Actually, that isn’t true. He was impatient for tangible activity, which is not the same thing at all.

Direction and Challenge Establish Meaning

In her journeys through Wonderland, Alice learns that objective truth has no meaning in a world of random nonsense. The story, of course, is a parody of the culture and times of Victorian England. It does, however, reflect the frustrations many practitioners can feel when they are just trying to “make improvements.”

As one thing is “fixed,” another pops up for any number of reasons:

  • The “new” problem may well have been hidden by the “fixed” one.
  • Leadership may be chasing short-term symptoms and constantly redirecting the effort.

Day to day it just seems like random stuff, and can get pretty demoralizing.

The point of “Think Big” is that being clear about where WE (not just you) are trying to go helps everyone understand the meaning of what they are doing. That is the whole point of “Understand the Direction and Challenge” as the first step of the Improvement Kata. “What is the meaning behind what you are working on?” It is really a verification check by the coach that the coach has adequately communicated meaning to the learner.

Establish “Why” not “What”

At the same time, it is important for the organization to be clear on why improvement is necessary. I have discussed this a number of times, but keep referring back to Learning to See in 2013 where I ask “Why are you doing this at all?” as the question everyone skips past.

“Where we are going” should not simply be your model of your [Fill Company Name In Here] Production System.

No matter how well explained or understood, a model does not directly address the “Why are we doing this at all?” question that provides meaning to the effort.

It may well establish a good representation of what you would like your process structure to look like, but it does not give people any skill in actually putting these systems into practice, nor a reason to put in the effort required to learn something completely new.

Change Small

Small changes = fast progress as long as there is a coherent direction.

The classic 5-day kaizen event is often an attempt to make a radical improvement in a short period of time. Things usually look really impressive at the end of the week, and even into the next few weeks. What happens, though, is that the follow-up is usually more about finishing up implementation action items than it is about working to stabilize the new process.

The problem comes from the baseline assumption that we already understand all of the problems, and our changes will solve them. We line things up, get 1:1 flow running, and yes, there is a dramatic reduction in the nominal throughput time simply because we have eliminated all of the inventory queues.

There is tons of research that backs up the assertion we can’t expect people to be creative when they are under pressure to perform. They are going to revert to their existing habits. During the event itself,  the short time period and high expectations put pressure on people to just implement stuff. People are likely to defer to the suggestions and lead of the workshop leader and install the standard “lean tools” without full understanding of how they work or what effect they will have on the process and people dynamics.

Come Monday morning, we put all of those changes to the test… at once. The people are working in a different way. The problems that will be surfaced are different. The tighter the flow, the more sensitive the system will be to small problems. It is pretty easy to overwhelm people, especially the supervisors who have to decide right now what to do when things don’t seem to be working.

That same pressure to perform exists, only now it is pressure to produce, and possibly even to catch up production lost during the previous week. Once again, we can’t expect people to think creatively when these new issues come up; they are going to revert to what they know.

When we do see successful “big change” it is usually the result of many small changes that have each been tested and anchored.

So why is the “blitz” approach so appealing? I think I got some insight into the reason in a conversation with a continuous improvement director in a large corporation. He had so little opportunity to actually engage and break things loose that, when he did, he felt the need to push in everything he could.

My interpretation of this goes back to the first line above: Small changes = fast progress as long as there is a coherent direction. In his case, there wasn’t coherent direction. He had a week, maybe two, to push as hard as he could in the direction he felt things should go. The rest of the time, things were business as usual.

This is why “think big” is important. It provides organizational alignment, and reduces the pressure to seize a limited opportunity and, frankly, inject chaos.

Small, Quick Changes

Because we often don’t see just how long it takes to stabilize a “quick, big change,” we tend to think that quick small changes are slower. I disagree. In my experience the opposite is true.

When there is a clear Challenge and Direction, and frequent check-ins via coaching cycles (or less formal means) on what changes are being made, no time is wasted working on the wrong things.

When small changes are made and tested as part of experiments vs. just being implemented, then there is less chance of erosion later. Rather than overwhelming people with all of the problems at once from a bunch of changes, one-by-one lets them learn what problems must be dealt with. They have an opportunity to always take the next step from a working process rather than struggling to get something that is totally unfamiliar to work at all.

That, in turn, builds confidence and capability.

In a mature organization that has practiced this for years, an outside observer might well see “big changes” being made. But that organization is operating from a base of learning and experience, and what might look big to you might not be big to them. It is all a matter of perspective.

What Do You Think?

I’m throwing this out there, hoping to hear from practitioners. What have you struggled with in getting changes made that actually shift people’s behavior (vs. just implementing tools and techniques)? What has worked? What hasn’t? I’d love to hear from you in the comments.

Push Improvement vs. Pull Improvement

I’m writing up a proposal for a benchmarking and study week, and just typed the term “push improvement” as a contrast to a true “continuous improvement culture.” I wanted to explore that a bit here, with you, and perhaps I’ll fill in the idea in my own mind.

Push Improvement

A few years ago I was working with a few sites of a multi-national corporation. In one of their divisions, their corporate lean office was pushing various programs into place. Each site was required to implement, and was graded on:

  • 5S
  • Jidoka
  • Heijunka
  • Toyota Kata

plus a few other things. Those are the ones I really remember. Each of these programs was separate and distinct; they had been deployed on some kind of phased timetable over a few years. They also had requirements to maintain some number of Six Sigma Black Belts and Green Belts in their facilities, with the requirement to report on a number of projects.

They brought me in to teach the Toyota Kata stuff.

In reality, while people taking the class generally thought it was worthwhile, the leadership teams were also a little resentful that all of this stuff was being, in the words of one plant manager, “pushed down our throats.”

Even the Toyota Kata “implementation” had a specific sequence of steps that the site was expected to check off and report their progress on. In addition, there were requirements for how the improvement boards would look, including the position of the corporate logo and the colors used – all in the name of “standardization.”

At the same time, of course, the plants were also measured on their performance – financial, delivery, quality, etc. These metrics were separate and distinct from their audit scores on all of the lean stuff.

On the shop floor, they had the artifacts in place – the boards, the charts, the lines on the floor, but were struggling with making it all work. There was no integration into the management system. Each of these was a program.

I should also point out that, ironically, the plant that was getting the most traction with continuous improvement at the time was the one that was pushing back the hardest against this rote, “standard” approach.

A few years before that, I had been working (as an employee) in another multi-national company. There was a big push from the executives at the very top to develop a set of metrics they could use to determine if a site was “doing lean correctly.” They wanted a set of measurements they could monitor from corporate headquarters that would ensure that any business performance improvement was the result of “doing lean” rather than something else.

Both of these organizations were looking at this as an issue with compliance rather than developing their leaders.

Implementations like these typically involve things like:

  • Developing some kind of “curriculum” and rolling it out.
  • Requiring sites to have a “value stream map” and a “lean plan.”
  • Audits and “lean assessments” against some kind of checklist.
  • Monitoring the level of activity, such as how many kaizen events sites are running.

I have also seen cases of an underlying assumption that “if only we can explain it well enough, then managers will understand and do it.” This assumption drives building models and diagrams that try to explain how everything works, or the relationships between the tools. I’ve seen “pillar models,” puzzles, gears, notional value stream maps, and lots of other diagrams.

In summary, “push improvement” is present when implementing an improvement program is the goal. Symptoms are things such as a project plan with milestones for specific “lean tools” to be in place.

The underlying thinking is that you are doing improvement because you can, not because you must.

This is, I believe, what Jeff Liker and Karyn Ross refer to as mechanistic lean in their book The Toyota Way to Service Excellence.

Relationship to the Status Quo

One of the obstacles that organizations face with this approach is the traditional relationship to the status quo. Most business leaders are trained to evaluate any change against a financial return based on the cost. In other words, we look at the current level of performance as a baseline; evaluate the likely improvement; look at what it would cost to get to that new level and ask “Is it worth doing this?”

If the answer comes up short of some threshold of return, then the answer is “no,” and the status-quo remains as a rational optimum.

Implication: The status-quo is OK unless there is a compelling reason to change it.

For the lean practitioner, this presents a problem. We have to convince management that there is a short-term return on what we are proposing to do in order to justify the effort, time and expense. Six Sigma black belt projects are intently focused on demonstrating very high cost benefit, for example. And, honestly, if you are spending the money to bring in a consultant from Japan and an interpreter, I can see wanting some assurance there is a payback.1

If the discussions are around “What improvements can we make?” and then working up the benefit, you are in this trap. Those benefits, by the way, rarely find their way to the actual P&L unless there is already a business plan to take advantage of them.

The root cause of this thinking may well be disconnection of continuous improvement from whatever challenges the organization is facing; coupled with assumptions that a “continuous improvement program” can be implemented as a project plan on a predictable timeline.

Pull Improvement

Solving Problems

The purpose of any improvement activity is to solve problems. Real problems. In my classes, I often ask for a show of hands from people who have a shortage of problems on a daily basis. I never get any takers. Since there are usually lots of problems – too many to deal with all of them – we need to be careful to work on the right ones. The ROI approach I talked about above is a common way to sort through which ones are worth it. We can also get locked into “Pareto Paralysis.”

So which ones should we work on?

Inverting this question, when someone wants to make a change, put in a “lean tool,” etc., my question is “What problem are you trying to solve?” And “we don’t have standard work” is not a problem; it is the lack of a proposed solution. In these cases I might ask “…and therefore?” to try to understand the consequence of this “lack of…” The problem might be buried in there somewhere. But if the issue at hand is trying to raise an audit score for its own sake, we are back to “Push Improvement.”

A meeting gets derailed pretty fast when people are debating which solution should be applied without first agreeing on the problem they are solving. The Coaching Kata question “Which one [obstacle] are you addressing now?” is intended to help the learner stay clear and focused on this.

But long before we are talking about problems, we need to know where we are trying to end up. This is why the idea of a clear and compelling challenge is critical.

Challenge First

Organizations that are driven by continuous improvement have a different relationship with the status-quo. Those organizations always have a concrete challenge in front of them. In other words, the status-quo is unacceptable. “Today is the worst we will ever be.”

Cost enters into the discussion when looking at possible ways to reach that destination. But it does not drive the decision about whether or not to try. That decision has already been made. The debate is on how to do it.

The Challenge with Challenges

“Wait a minute… we have objectives to reach too!” Yes, lots of organizations set out objectives for the year or longer. Here are some of the scenarios I have seen, and why I think they are different.

  • The goal setting is bottom-up. The question “What can we improve?” cascades down the organization, and individual managers set their goals for the year and roll them back up. There may be a little back-and-forth, and discussion of “stretch goals,” but the commitment comes from below. In most of the cases I have seen, these goals are carefully worded, the measurements delicately negotiated, with bonuses riding on the level of attainment of these objectives. I have used the word “goal” here because these are rarely challenges or even challenging.
  • The goal is based strictly on hitting metrics. I have discussed the dangers of “management by measurement” a few times in the past.
  • A pass is given if there is a strong enough justification made for missing the goal. Thus the incentive to carefully define the metrics, and include caveats and loopholes.
  • There are measurements but no objectives. Any improvement is OK.
  • Any true challenge is labeled a “stretch goal,” meaning “I don’t really expect you to be able to reach it.”
  • And sometimes the end of the year is an exercise in re-negotiating the definition of “success” to meet whatever has been achieved.

None of this is going to drive continuous improvement, nor align the effort toward achieving something remarkable or “insanely great.”

But here is the biggest difference: FEAR.

Fear of failure. Fear of committing to something I don’t already know how to achieve. Fear of admitting “I don’t know.”

And fear breeds excuses and other victim language that makes sure success or failure was beyond my control.

Fearless Challenges

So maybe that’s it. The key difference is fearless challenge. So what would that look like?

Here I defer to Jeff Liker and Gary Convis’s great book The Toyota Way to Lean Leadership. But I have seen this in action elsewhere as well.

Some key differences here:

  • The challenge comes from above. It isn’t a bottom-up “what can we improve?” It is a top-down “This is what we need to be able to do.”
  • It is an operational need, not a process specification. In other words, it isn’t the “lean plan.” The process specification is what is created to meet the challenge.
  • The challenge is an integral part of developing people’s capability. Perhaps this is the key difference.
  • There is no fear because the challenge comes with active support to meet it. That support, if done well, is both technical and emotional. We’re in this together – because we are.

Improvement = Meeting the Challenge

Given that there is a business or operational imperative established, we are no longer trying to push improvement for its own sake. We have an answer to “Why are we doing this?” beyond “to get a higher 5S score.”

Now there is a pull.

In my Toyota Kata class, I give the teams a challenge that, in the moment, is seemingly impossible. That is intentional. I hear air getting sucked in through teeth. “No way!” is usually the reply when I ask how it feels.

Then, for the next few hours, the teams are methodically guided through:

  • Grasping the current condition – understanding their process at a much deeper level.
  • Breaking down the problem into pieces, taking on one at a time. First a target condition, then specific obstacles.

Depending on how much time they have, 1/4 to 1/3 of the teams crack the problem, a few excel and go beyond, and those that don’t usually acknowledge they are close.

The teams that get there fastest are the ones who get into a quick cadence of documented experiments and learning. They see the problems they must solve much quicker, and figure out what they need to do. I tell them at the start, as I hold up my blank Experiment Records, “The teams that get it are the ones who burn through these the quickest.”

In the process of tackling the challenge, they learn what continuous improvement is really about. I don’t specify their solution. Any hints I give are about what to pay attention to, not what the solution looks like. I don’t deploy tools or give them a template for the solution. At the same time, most teams converge on something similar, which isn’t surprising.

Taking this to the real world – I see similar things. Teams taking on similar challenges on similar processes often arrive at similar looking solutions. But each got there themselves, for their own reasons, often following quite different paths.

While it seems more efficient to just tell them the answers, it is far more effective to teach them how to solve the problem. That is something they can take beyond the immediate issue and into other domains.

Pull Improvement = Meeting a Need

In my post Learning to See in 2013, I posed the question nobody asks: Why are you doing this at all? I point out that, in many cases, value-stream mapping is used as a “what could we improve?” tool, which is backwards from the original intent.

If there is a clear answer to “Why are we doing this?” or, put another way, “What do we need to be able to do that, today, we can’t?” or even “What experience do we aspire to deliver to our customers that, today, we cannot?” then everything else follows. Continuous Improvement becomes a daily discussion about what steps we are taking to get there, how we are doing, what we are learning, and what we need to do next (based on what we learned).

This is pull. The people responsible for getting a higher level of performance are pulling the effort to get things to flow more smoothly. The mantra here is “not good enough,” but that must take the form of a challenge that inspires people to step up, not a punitive one.

Then it’s easy because “they” want to do it.

Epilogue: For the Practitioner

If you are reading this, you are likely a practitioner – someone on staff who is responsible for “continuous improvement” in some form, but not directly responsible for day-to-day operations. I say that because I know, in general, who my subscribers are.

This concept presents a dilemma because while you are challenged with influencing how the organization goes about improving things, the challenge of what improvements must be made (if it exists at all) is disconnected from your efforts.

That leaves you with trying to “drive improvement into the organization” and “be a change agent” and all of those other buzzwords that are probably in your job description.

Here are some things to at least think about that might help.

Let go of dogma. If you think continuous improvement is only valid if a specific set of tools or jargon are used, then you are already creating resistance for your efforts.

Focus on learning rather than doing. You don’t have all of the answers. And even if you did, you aren’t helping anyone else by just telling them what to do. No matter how much sense it makes to you, logical arguments are rarely persuasive, and generally create a false “yes.”

Seek first to understand. Listen. Paraphrase back. Try to get the words “Yeah, that’s right.” to come out of that resistant manager you are dealing with. Remember your purpose here is to help line leadership meet their challenges. Often those challenges are vague, are negative – as in trying to avoid some consequence – or even expressed as implied threats. You don’t have to agree, but you do need to “get it.”

That’s the first step to rapport, which in turn, is necessary to any kind of agreement or real cooperation. As a friend of mine said a long time ago: “You can always get someone’s attention by punching them in the nose, but they likely aren’t going to listen to what you have to say.” Making someone wrong is rarely going to increase their cooperation.

All of this, by the way, is harder than it sounds. I’m still learning these lessons, sometimes multiple times. I’ve been on my own journey of explicit / deliberate learning here for a couple of years.

We have a couple of generations now of improvement practitioners who have been trained with the idea that “lean is good” (or Six Sigma is good, or Theory of Constraints is good, or…). Therefore, it follows, that these things need to be put into place for their own sake – because all of the best companies do them.

This approach, though, reflects (to me) a shallow understanding of what continuous improvement is all about. It skips the “Why?” and goes straight to “How” and “What.” My experience has also been that relatively few of the practitioners steeped in this approach can articulate how the mechanics of these systems actually drive improvement on a daily basis once they are in place, beyond a superficial statement like “people would see and remove waste.”

It seems that implementing the mechanics is equated with improvement, when in reality, those mechanics are simply an engine for starting improvement.

Yes, the mechanics are important, but the mechanics are not the reason. We are leaving out the people when we have these discussions. What are they doing every day (other than “following their standard work”)? How do these mechanics actually help them move, as a team, toward a goal they cannot otherwise attain?

————–

1 In its worst manifestation, this thinking can be a cancer on the integrity of the company. For example, GM has had a couple of scandals where it was revealed that they calculated the ROI of fixing a safety defect vs. the cost of paying off wrongful death lawsuits. Don’t even go there!