A Machine Productivity Search Question

Here is another test or quiz question that showed up in my search logs:

a machine tool is producing 90 pcs per day .using improved cutting tools,the output is raised to 120 pcs per day. what is the increase in productivity of the machine?

I guess the answer seems pretty obvious…

90 x 1.33, rounded a bit, is 120, which gives us a 33% productivity improvement, right?

Not so fast…

(pun intended)

How fast does the machine need to run?

Is there demand for the additional 30 pieces per day, or are they just being put into inventory with the hope they will sell at some point in the future?

This is actually pretty common in organizations whose cost accounting systems allocate overhead against production output rather than actual sales.

But what if you are only selling 90 pieces per day?

After three days you will have a day’s worth in inventory. You are running the machine more than you have to, adding wear and tear. You are consuming material to make parts you aren’t selling. At some point you are going to have to shut down the machine – idle it. What is your productivity then?
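The arithmetic is easy to sketch. Here is a minimal illustration, using the 120 and 90 pcs/day figures from the question (everything else is just illustrative):

```python
# Figures from the quiz question: 120 pcs/day produced, 90 pcs/day sold.
production_per_day = 120
sales_per_day = 90

# The "obvious" answer:
gain = (production_per_day - sales_per_day) / sales_per_day
print(f"Nominal productivity gain: {gain:.0%}")  # 33%

# What actually happens if demand stays at 90 pcs/day:
inventory = 0
for day in range(1, 4):
    inventory += production_per_day - sales_per_day
    print(f"Day {day}: {inventory} pcs in inventory")

# After three days the pile equals a full day of demand.
days_of_demand = inventory / sales_per_day
```

The "33% gain" is real only if every one of those extra pieces is sold.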

What Problem Are You Trying to Solve?

It always comes down to this question. Is there a real-world, customer-impacting reason you need the additional output? If so, then yes, this is a valid countermeasure, similar to one I have overseen myself. If the machine is too slow, what do we have to do to run it faster (while maintaining quality and not breaking anything)?

But if the machine is fast enough, then why are you trying to make it run faster?

And what will happen if we do? Use real numbers. You don’t have revenue until a customer with real money (not transfer pricing) actually pays for your product. Pretending otherwise looks great on the balance sheet for a while, but the paper profits aren’t tangible: you can’t use that “money” to buy anything else or distribute it to shareholders. In fact, it is just money you have spent, not money you have earned.

Machine Utilization at Home

At least here in the USA, a typical home washing machine will run a cycle in about 25 minutes. The dryer takes about 40 minutes to complete a cycle. If you wanted “maximum efficiency” from the washing machine, all you will get is a big pile of wet laundry. There is no point in running the washer any more often than once every 40 minutes. The dryer is pacing the system.

If I could modify the washing machine to run in 15 minutes instead of 25, how much more productivity do I have? The question is nonsense.
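The pacing argument can be sketched in a couple of lines (cycle times are the rough figures above; this simplification ignores loading and unloading time):

```python
def loads_per_hour(stage_minutes):
    # Throughput of a serial process is set by its slowest (bottleneck) stage.
    return 60 / max(stage_minutes)

before = loads_per_hour([25, 40])  # washer 25 min, dryer 40 min
after = loads_per_hour([15, 40])   # washer "improved" to 15 min

print(before, after)  # 1.5 1.5 -- the dryer still paces the system
```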

This example makes perfect sense to people. Then I often get arguments about how the factory floor is somehow different.

It’s The System, not the Machine

Key Point: You can’t look at one machine in isolation and calculate how “efficient” or “productive” it is unless it is pacing your system. In this case, we don’t have enough information.

Now, I know this example was just a made up case. But I have seen well-meaning production people fall into this trap all of the time. You have to look at the system, not individual machines.

Think Big, Change Small

Anton, my Dutch friend, had a study mission group of Healthcare MBA students from the University of Amsterdam visiting Seattle last week.

Friday morning I spent about four hours with them going through the background and basics of the Improvement Kata and Coaching Kata, and worked to tie that in to what they observed in their visits to local companies. They were a great, engaging group that was fun to work with.

One thing I do to close out every session I do with a group is ask “What did we learn?” and write down their replies on a flip chart. I find that helps foster some additional discussion and consolidate learning. It also gives me feedback on what “stuck” with them.

Sometimes I get a gem that would make a good title for a blog post. The title of this post is one of those.

Think Big

Alice and the Cat

Alice went on, “Would you tell me, please, which way I ought to go from here?”

“That depends a good deal on where you want to get to,” said the Cat.

“I don’t much care where…” said Alice, “…so long as I get somewhere,” she added in explanation.

“Oh, you’re sure to do that,” said the Cat, “if only you walk long enough.”

The question, of course, is whether or not where Alice ends up is where she intends to go. A lot of continuous improvement activity takes this approach: look for waste, brainstorm ideas, implement them. Just take steps. And, like Alice, you will surely end up somewhere if you do this enough.

I had a former boss (back in the late ’90s) advocate this approach. “We are painting the wall with tennis balls dipped in paint.” The idea, I think, was that sooner or later all of the splotches would start to connect into a coherent color. Maybe. But, at the same time, he was also very impatient for tangible results. Actually that isn’t true. He was impatient for tangible activity, which is not the same thing at all.

Direction and Challenge Establish Meaning

In her journeys through Wonderland, Alice learns that objective truth has no meaning in a world of random nonsense. The story, of course, is a parody of the culture and times of Victorian England. It does, however, reflect the frustrations many practitioners can feel when they are just trying to “make improvements.”

As one thing is “fixed,” another pops up for any number of reasons:

  • The “new” problem may well have been hidden by the “fixed” one.
  • Leadership may be chasing short-term symptoms and constantly redirecting the effort.

Day to day it just seems like random stuff, and can get pretty demoralizing.

The point of “Think Big” is that being clear about where WE (not just you) are trying to go helps everyone understand the meaning of what they are doing. That is the whole point of “Understand the Direction and Challenge” as the first step of the Improvement Kata. “What is the meaning behind what you are working on?” It is really a verification check that the coach has adequately communicated meaning to the learner.

Establish “Why” not “What”

At the same time, it is important for the organization to be clear on why improvement is necessary. I have discussed this a number of times, but keep referring back to Learning to See in 2013 where I ask “Why are you doing this at all?” as the question everyone skips past.

“Where we are going” should not simply be your model of your [Fill Company Name In Here] Production System.

No matter how well explained or understood, a model does not directly address the “Why are we doing this at all?” question that provides meaning to the effort.

It may well establish a good representation of what you would like your process structure to look like, but it does not give people any skill in actually putting these systems into practice, nor a reason to put in the effort required to learn something completely new.

Change Small

Small changes = fast progress as long as there is a coherent direction.

The classic 5 day kaizen event is often an attempt to make a radical improvement in a short period of time. Things usually look really impressive at the end of the week, and even into the next few weeks. What happens, though, is that the follow-up is usually more about finishing up implementation action items than it is working to stabilize the new process.

The problem comes from the baseline assumption that we already understand all of the problems, and our changes will solve them. We line things up, get 1:1 flow running, and yes, there is a dramatic reduction in the nominal throughput time simply because we have eliminated all of the inventory queues.

There is tons of research that backs up the assertion we can’t expect people to be creative when they are under pressure to perform. They are going to revert to their existing habits. During the event itself,  the short time period and high expectations put pressure on people to just implement stuff. People are likely to defer to the suggestions and lead of the workshop leader and install the standard “lean tools” without full understanding of how they work or what effect they will have on the process and people dynamics.

Come Monday morning, we put all of those changes to the test… at once. The people are working in a different way. The problems that will be surfaced are different. The tighter the flow, the more sensitive the system will be to small problems. It is pretty easy to overwhelm people, especially the supervisors who have to decide right now what to do when things don’t seem to be working.

That same pressure to perform exists, only now it is pressure to produce, and possibly even to catch up production lost during the previous week. Once again, we can’t expect people to think creatively when these new issues come up; they are going to revert to what they know.

When we do see successful “big change” it is usually the result of many small changes that have each been tested and anchored.

So why is the “blitz” approach so appealing? I think I got some insight into the reason in a conversation with a continuous improvement director in a large corporation. He had so little opportunity to actually engage and break things loose that, when he did, he felt the need to push in everything he could.

My interpretation of this goes back to the first line above: Small changes = fast progress as long as there is a coherent direction. In his case, there wasn’t coherent direction. He had a week, maybe two, to push as hard as he could in the direction he felt things should go. The rest of the time, things were business as usual.

This is why “think big” is important. It provides organizational alignment, and reduces the pressure to seize a limited opportunity and, frankly, inject chaos.

Small, Quick Changes

Because we often don’t see just how long it takes to stabilize a “quick, big change,” we tend to think that quick small changes are slower. I disagree. In my experience the opposite is true.

When there is a clear Challenge and Direction, and frequent check-ins via coaching cycles (or less formal means) on what changes are being made, no time is wasted working on the wrong things.

When small changes are made and tested as part of experiments vs. just being implemented, then there is less chance of erosion later. Rather than overwhelming people with all of the problems at once from a bunch of changes, one-by-one lets them learn what problems must be dealt with. They have an opportunity to always take the next step from a working process rather than struggling to get something that is totally unfamiliar to work at all.

That, in turn, builds confidence and capability.

In a mature organization that has practiced this for years, an outside observer might well see “big changes” being made. But that organization is operating from a base of learning and experience, and what might look big to you might not be big to them. It is all a matter of perspective.

What Do You Think?

I’m throwing this out there, hoping to hear from practitioners. What have you struggled with in getting changes made that actually shift people’s behavior (vs. just implementing tools and techniques)? What has worked? What hasn’t worked? I’d love to hear in the comments.

Push Improvement vs. Pull Improvement

I’m writing up a proposal for a benchmarking and study week, and just typed the term “push improvement” as a contrast to a true “continuous improvement culture.” I wanted to explore that a bit here, with you, and perhaps I’ll fill in the idea in my own mind.

Push Improvement

A few years ago I was working with a few sites of a multi-national corporation. In one of their divisions, their corporate lean office was pushing various programs into place. Each site was required to implement, and was graded on:

  • 5S
  • Jidoka
  • Heijunka
  • Toyota Kata

plus a few other things. Those are the ones I really remember. Each of these programs was separate and distinct; they had been deployed on some kind of phased timetable over a few years. They also had requirements to maintain some number of Six Sigma Black Belts and Green Belts in their facilities, with the requirement to report on a number of projects.

They brought me in to teach the Toyota Kata stuff.

In reality, while people taking the class generally thought it was worthwhile, the leadership teams were also a little resentful that all of this stuff was being, in the words of one plant manager, “pushed down our throats.”

Even the Toyota Kata “implementation” had a specific sequence of steps that the site was expected to check off and report their progress on. In addition, there were requirements for how the improvement boards would look, including the position of the corporate logo and the colors used – all in the name of “standardization.”

At the same time, of course, the plants were also measured on their performance – financial, delivery, quality, etc. These metrics were separate and distinct from their audit scores on all of the lean stuff.

On the shop floor, they had the artifacts in place – the boards, the charts, the lines on the floor, but were struggling with making it all work. There was no integration into the management system. Each of these was a program.

I should also point out that, ironically, the plant that was getting the most traction with continuous improvement at the time was the one that was pushing back the hardest against this rote, “standard” approach.

A few years before that, I had been working (as an employee) in another multi-national company. There was a big push from the executives at the very top to develop a set of metrics they could use to determine if a site was “doing lean correctly.” They wanted a set of measurements they could monitor from corporate headquarters that would ensure that any business performance improvement was the result of “doing lean” rather than something else.

Both of these organizations were looking at this as an issue with compliance rather than developing their leaders.

Implementations like these typically involve things like:

  • Developing some kind of “curriculum” and rolling it out.
  • Requiring sites to have a “value stream map” and a “lean plan.”
  • Audits and “lean assessments” against some kind of checklist.
  • Monitoring the level of activity, such as how many kaizen events sites are running.

I have also seen cases of an underlying assumption that “if only we can explain it well enough, then managers will understand and do it.” This assumption drives building models and diagrams that try to explain how everything works, or the relationships between the tools. I’ve seen “pillar models,” puzzles, gears, notional value stream maps, and lots of other diagrams.

In summary, “push improvement” is present when implementing an improvement program is the goal. Symptoms are things such as a project plan with milestones for specific “lean tools” to be in place.

The underlying thinking is that you are doing improvement because you can, not because you must.

This is, I believe, what Jeff Liker and Karyn Ross refer to as mechanistic lean in their book The Toyota Way to Service Excellence.

Relationship to the Status Quo

One of the obstacles that organizations face with this approach is the traditional relationship to the status quo. Most business leaders are trained to evaluate any change against a financial return based on the cost. In other words, we look at the current level of performance as a baseline; evaluate the likely improvement; look at what it would cost to get to that new level and ask “Is it worth doing this?”

If the answer comes up short of some threshold of return, then the answer is “no,” and the status-quo remains as a rational optimum.

Implication: The status-quo is OK unless there is a compelling reason to change it.
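That decision rule can be sketched in a few lines. The hurdle rate and the dollar figures below are entirely hypothetical:

```python
def approve_change(annual_benefit, cost, hurdle_rate=0.25):
    # The traditional test: the change must clear a return threshold;
    # otherwise the status quo stands as the "rational optimum."
    roi = (annual_benefit - cost) / cost
    return roi >= hurdle_rate

print(approve_change(annual_benefit=60_000, cost=50_000))   # ROI 20% -> False
print(approve_change(annual_benefit=100_000, cost=50_000))  # ROI 100% -> True
```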

For the lean practitioner, this presents a problem. We have to convince management that there is a short-term return on what we are proposing to do in order to justify the effort, time and expense. Six Sigma Black Belt projects are intensely focused on demonstrating very high cost benefit, for example. And, honestly, if you are spending the money to bring in a consultant from Japan and an interpreter, I can see wanting some assurance there is a payback.1

If the discussions are around “What improvements can we make?” and then working up the benefit, you are in this trap. Those benefits, by the way, rarely find their way to the actual P&L unless there is already a business plan to take advantage of them.

The root cause of this thinking may well be disconnection of continuous improvement from whatever challenges the organization is facing; coupled with assumptions that a “continuous improvement program” can be implemented as a project plan on a predictable timeline.

Pull Improvement

Solving Problems

The purpose of any improvement activity is to solve problems. Real problems. In my classes, I often ask for a show of hands from people who have a shortage of problems on a daily basis. I never get any takers. Since there are usually lots of problems – too many to deal with all of them – we need to be careful to work on the right ones. The ROI approach I talked about above is a common way to sort through which ones are worth it. We can also get locked into “Pareto Paralysis.”

So which ones should we work on?

Inverting this question, when someone wants to make a change, put in a “lean tool,” etc., my question is “What problem are you trying to solve?” And “we don’t have standard work” is not a problem; it is the lack of a proposed solution. In these cases I might ask “…and therefore?” to try to understand the consequence of this “lack of…” The problem might be buried in there somewhere. But if the issue at hand is trying to raise an audit score for its own sake, we are back to “Push Improvement.”

A meeting gets derailed pretty fast when people are debating which solution should be applied without first agreeing on the problem they are solving. The Coaching Kata question “Which one [obstacle] are you addressing now?” is intended to help the learner stay clear and focused on this.

But long before we are talking about problems, we need to know where we are trying to end up. This is why the idea of a clear and compelling challenge is critical.

Challenge First

Organizations that are driven by continuous improvement have a different relationship with the status-quo. Those organizations always have a concrete challenge in front of them. In other words, the status-quo is unacceptable. “Today is the worst we will ever be.”

Cost enters into the discussion when looking at possible ways to reach that destination. But it does not drive the decision about whether or not to try. That decision has already been made. The debate is on how to do it.

The Challenge with Challenges

“Wait a minute… we have objectives to reach too!” Yes, lots of organizations set out objectives for the year or longer. Here are some of the scenarios I have seen, and why I think they are different.

The goal setting is bottom-up. The question “What can we improve?” cascades down the organization, and individual managers set their goals for the year and roll them back up. There may be a little back-and-forth, and discussion of “stretch goals,” but the commitment comes from below. In most of the cases I have seen these goals are carefully worded, the measurements delicately negotiated, with bonuses riding on the level of attainment of these objectives. I have used the word “goal” here because these are rarely challenges or even challenging.

The goal is based strictly on hitting metrics. I have discussed the dangers of “management by measurement” a few times in the past.

A pass is given if there is a strong enough justification made for missing the goal. Thus the incentive to carefully define the metrics, and include caveats and loopholes.

There are measurements but not objectives. Any improvement is OK.

Any true challenge is labeled as a “stretch goal” meaning “I don’t really expect you to be able to reach it.”

And, sometimes the end of the year is an exercise in re-negotiating the definition of “success” to meet whatever has been achieved.

None of this is going to drive continuous improvement, nor align the effort toward achieving something remarkable or “insanely great.”

But here is the biggest difference: FEAR.

Fear of failure. Fear of committing to something I don’t already know how to achieve. Fear of admitting “I don’t know.”

And fear breeds excuses and other victim language that frames success or failure as beyond my control.

Fearless Challenges

So maybe that’s it. The key difference is fearless challenge. So what would that look like?

Here I defer to Jeff Liker and Gary Convis’s great book The Toyota Way to Lean Leadership. But I have seen this in action elsewhere as well.

Some key differences here:

  • The challenge comes from above. It isn’t a bottom-up “what can we improve?” It is a top-down “This is what we need to be able to do.”
  • It is an operational need, not a process specification. In other words, it isn’t the “lean plan.” The process specification is what is created to meet the challenge.
  • The challenge is an integral part of developing people’s capability. Perhaps this is the key difference.
  • There is no fear because the challenge comes with active support to meet it. That support, if done well, is both technical and emotional. We’re in this together – because we are.

Improvement = Meeting the Challenge

Given that there is a business or operational imperative established, we are no longer trying to push improvement for its own sake. We have an answer to “Why are we doing this?” beyond “to get a higher 5S score.”

Now there is a pull.

In my Toyota Kata class, I give the teams a challenge that, in the moment, is seemingly impossible. That is intentional. I hear air getting sucked in through teeth. “No way!” is usually the reply when I ask how it feels.

Then, for the next few hours, the teams are methodically guided through:

  • Grasping the current condition – understanding their process at a much deeper level.
  • Breaking down the problem into pieces, taking on one at a time. First a target condition, then specific obstacles.

Depending on how much time they have, 1/4 to 1/3 of the teams crack the problem, a few excel and go beyond, and those that don’t usually acknowledge they are close.

The teams that get there fastest are the ones who get into a quick cadence of documented experiments and learning. They see the problems they must solve much quicker, and figure out what they need to do. I tell them at the start, as I hold up my blank Experiment Records, “The teams that get it are the ones who burn through these the quickest.”

In the process of tackling the challenge, they learn what continuous improvement is really about. I don’t specify their solution. Any hints I give are about what to pay attention to, not what the solution looks like. I don’t deploy tools or give them a template for the solution. At the same time, most teams converge on something similar, which isn’t surprising.

Taking this to the real world – I see similar things. Teams taking on similar challenges on similar processes often arrive at similar looking solutions. But each got there themselves, for their own reasons, often following quite different paths.

While it seems more efficient to just tell them the answers, it is far more effective to teach them how to solve the problem. That is something they can take beyond the immediate issue and into other domains.

Pull Improvement = Meeting a Need

In my post Learning to See in 2013, I posed the question nobody asks: Why are you doing this at all? I point out that, in many cases, value-stream mapping is used as a “what could we improve?” tool, which is backwards from the original intent.

If there is a clear answer to “Why are we doing this?” or, put another way, “What do we need to be able to do that, today, we can’t?” or even “What experience do we aspire to deliver to our customers that, today, we cannot?” then everything else follows. Continuous Improvement becomes a daily discussion about what steps are we taking to get there, how are we doing, what are we learning, what do we need to do next (based on what we learned)?

This is pull. The people responsible for getting a higher level of performance are pulling the effort to get things to flow more smoothly. The mantra here is “not good enough,” but that must take the form of a challenge that inspires people to step up, not one that punishes them.

Then it’s easy because “they” want to do it.


Epilogue: For the Practitioner

If you are reading this, you are likely a practitioner – someone on staff who is responsible for “continuous improvement” in some form, but not directly responsible for day-to-day operations. I say that because I know, in general, who my subscribers are.

This concept presents a dilemma because while you are challenged with influencing how the organization goes about improving things, the challenge of what improvements must be made (if it exists at all) is disconnected from your efforts.

That leaves you with trying to “drive improvement into the organization” and “be a change agent” and all of those other buzzwords that are probably in your job description.

Here are some things to at least think about that might help.

Let go of dogma. If you think continuous improvement is only valid if a specific set of tools or jargon are used, then you are already creating resistance for your efforts.

Focus on learning rather than doing. You don’t have all of the answers. And even if you did, you aren’t helping anyone else by just telling them what to do. No matter how much sense it makes to you, logical arguments are rarely persuasive, and generally create a false “yes.”

Seek first to understand. Listen. Paraphrase back. Try to get the words “Yeah, that’s right.” to come out of that resistant manager you are dealing with. Remember your purpose here is to help line leadership meet their challenges. Often those challenges are vague, are negative – as in trying to avoid some consequence – or even expressed as implied threats. You don’t have to agree, but you do need to “get it.”

That’s the first step to rapport, which in turn, is necessary to any kind of agreement or real cooperation. As a friend of mine said a long time ago: “You can always get someone’s attention by punching them in the nose, but they likely aren’t going to listen to what you have to say.” Making someone wrong is rarely going to increase their cooperation.

All of this, by the way, is harder than it sounds. I’m still learning these lessons, sometimes multiple times. I’ve been on my own journey of explicit / deliberate learning here for a couple of years.

We have a couple of generations now of improvement practitioners who have been trained with the idea that “lean is good” (or Six Sigma is good, or Theory of Constraints is good, or…). Therefore, it follows, that these things need to be put into place for their own sake – because all of the best companies do them.

This approach, though, reflects (to me) a shallow understanding of what continuous improvement is all about. It skips the “Why?” and goes straight to “How” and “What.” My experience has also been that relatively few of the practitioners steeped in this can actually articulate how the mechanics of these systems drive improvement on a daily basis once everything is in place, beyond a superficial statement like “people would see and remove waste.”

It seems that implementing the mechanics is equated with improvement, when in reality, those mechanics are simply an engine for starting improvement.

Yes, the mechanics are important, but the mechanics are not the reason. We are leaving out the people when we have these discussions. What are they doing every day (other than “following their standard work”)? How do these mechanics actually help them move, as a team, toward a goal they cannot otherwise attain?

————–

1In its worst manifestation, this thinking can be a cancer on the integrity of the company. For example, GM has had a couple of scandals where it was revealed that they calculated the ROI of fixing a safety defect vs. the cost of paying off wrongful death lawsuits. Don’t even go there!

Experimenting at the Threshold of Knowledge

The title of this post was a repeating theme from KataCon 3. It is also heavily emphasized in Mike Rother’s forthcoming book The Toyota Kata Practitioner’s Guide (Due for publication in October 2017).

What is the Threshold of Knowledge?

“The root cause of all problems is ignorance.”

– Steven Spear

September 1901, Dayton, Ohio: Wilbur was frustrated. The previous year, 1900, he, with his brother’s help1, had built and tested their first full-size glider. It was designed using the most up-to-date information about wing design available. His plan had been to “kite” the glider with him as a pilot. He wanted to test his roll-control mechanism, and build practice hours “flying” and maintaining control of an aircraft.

But things had not gone as he expected. The Wrights were the first ones to actually measure the lift and drag2 forces generated by their wings, and in 1900 they were seeing only about 1/3 of the lift predicted by the equations they were using.

The picture below shows the 1900 glider being “kited.” Notice the angle of the line and the steep angle of attack required to fly, even in a stiff 20 knot breeze. Although they could get some basic tests done, it was clear that this glider would not suit their purpose.

[Photo: the 1900 glider being kited]

In 1901 they had returned with a new glider, essentially the same design, only about 50% bigger. They predicted they would get enough lift to sustain flight with a human pilot. They did succeed in making glides and testing the principle of turning the aircraft by rolling the wing. But although it could lift more weight, the lift/drag ratio was no better.
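For context, the lift equation of the era was L = k·S·V²·C_L, with S the wing area in square feet, V the airspeed in mph, C_L the lift coefficient (taken from Lilienthal’s published tables), and k the Smeaton coefficient. The sketch below uses the commonly cited historical values of k (0.005 as accepted in 1900, roughly 0.0033 from the Wrights’ later measurements); the area, speed, and lift coefficient are purely illustrative:

```python
def lift(k, area_sqft, v_mph, cl):
    # Period lift equation: L = k * S * V^2 * CL (result in pounds of force)
    return k * area_sqft * v_mph**2 * cl

S, V, CL = 165, 20, 0.5  # illustrative wing area, wind speed, lift coefficient

predicted = lift(0.005, S, V, CL)       # with the accepted Smeaton coefficient
with_better_k = lift(0.0033, S, V, CL)  # with the value their data implied

ratio = with_better_k / predicted
print(f"{ratio:.2f}")  # 0.66 -- the coefficient alone explains part of the
# shortfall; errors in the published lift-coefficient tables did the rest
```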

[Photo: the 1901 glider]

What They Thought They Knew

Wilbur’s original assumptions are well summarized in a talk he gave later that month at the invitation of his mentor and coach, Octave Chanute.

Excerpted from the published transcript of Some Aeronautical Experiments, presented by Wilbur Wright on Sept 18, 1901 to the Western Society of Engineers in Chicago (please give Wilbur a pass for using the word “Men” – he was living in a different era):

The difficulties which obstruct the pathway to success in flying-machine construction are of three general classes:

  1. Those which relate to the construction of the sustaining wings;
  2. those which relate to the generation and application of the power required to drive the machine through the air;
  3. those relating to the balancing and steering of the machine after it is actually in flight.

Of these difficulties two are already to a certain extent solved. Men already know how to construct wings or aeroplanes which, when driven through the air at sufficient speed, will not only sustain the weight of the wings themselves, but also that of the engine and of the engineer as well. Men also know how to build engines and screws of sufficient lightness and power to drive these planes at sustaining speed. As long ago as 1884 a machine3 weighing 8,000 pounds demonstrated its power both to lift itself from the ground and to maintain a speed of from 30 to 40 miles per hour, but failed of success owing to the inability to balance and steer it properly. This inability to balance and steer still confronts students of the flying problem, although nearly eight years have passed. When this one feature has been worked out, the age of flying machines will have arrived, for all other difficulties are of minor importance.

What we have here is Wilbur’s high-level assessment of the current condition – what is known, and what is not known, about the problem of “powered, controlled flight.”

Summarized, he believed there were three problems to solve for powered, controlled flight:

  1. Building a wing that can lift the weight of the aircraft and a pilot.
  2. Building a propulsion system to move it through the air.
  3. Controlling the flight – going where you want to.

Based on their research, and the published experience of other experimenters, Wilbur had every reason to believe that problems (1) and (2) were solved, or easy to solve. He perceived that the gap was control and focused his attention there.

His first target condition had been to validate his concept of roll control based on “warping” (bending) the wings. In 1899 he built a kite and was able to roll, and thus turn, it at will.

At this point, he believed the current condition was that lift was understood, and that the basic concept of changing the direction by rolling the wing was valid. Thus, his next target condition was to scale his concept to full size and test it.

What Happened

Wilbur had predicted that their wing would perform with the calculated amount of lift.

When they first tested it at Kitty Hawk in 1900, it didn’t.

However, at this point, Wilbur was not willing to challenge what was “known” about flight.

Instead the 1901 glider was a larger version of the 1900 one with one major exception: It was built so they could reconfigure the airfoil easily.

Impatient, Wilbur insisted on just trying it. But, to quote from Harry Combs’ excellent history, Kill Devil Hill:

“The Wrights in their new design had also committed what to modern engineers would be an unforgivable sin. […] they made two wing design changes simultaneously and without test.”4

Without going into the details (get the book if you are interested) they did manage to get some glides, but were really no closer to understanding lift than they had been the previous year.

They had run past their threshold of knowledge and had assumed (with good reason) that they understood something that, in fact, they did not (nobody did).

They almost gave up.

Deliberate Learning

Being invited to speak in September actually gave Wilbur a chance to reflect, and renewed his spirits. That fall and winter, he and his brother conducted empirical wind tunnel experiments on 200 airfoil designs to learn what made a difference and what did not. In the process, as an “oh by the way,” they invented the “Wright Balance” which was the gold standard for measuring lift and drag in wind tunnel testing until electronics took over.

They went back to what was known, and experimented from there. They made no assumptions. Everything was tested so they could see for themselves and better understand.

The result of their experiments was the 1902 Wright Glider. You can see a full size replica in the ticketing area of the Charlotte, NC airport.

I’ll skip to the results:

image

Notice that the line is now nearly vertical, and the wing pointed nearly straight forward rather than steeply tipped back.

What Do We Need to Learn?

Making process improvements is a process of research and development, just like Wilbur and Orville were going through. In 1901 they fell into the trap of “What do we need to do?” After they got back to Dayton, they recovered and asked “What do we need to learn?” “What do we not understand?”

The Coaching Kata

What I have come to understand is the main purpose of coaching is to help the learner (and the coach) find that boundary between what we know (and can confirm) and what we need to learn. Once that boundary is clear, then the next experiment is equally clear: What are we going to do in order to learn? Learning is the objective of any task, experiment, or action item, because they are all built on a prediction even if you don’t think they are.

By helping the learner make the learning task explicit, rather than implicit, the coach advances learning and understanding – not only for the learner, but for the entire organization.

Where is your threshold of knowledge? How do you know?

 

________________________________

1We refer to “the Wright Brothers” when talking about this team. It was Wilbur who, in 1899, became interested in flight. Through 1900 it was largely Wilbur with his brother helping him. After 1901, though, his letters and diary entries start referring to “we” rather than “I” as the project moved into being a full partnership with Orville.

2The Wright Brothers used the term “drift” to refer to what, today, we call “drag.”

3Wilbur is referring to a “flying machine” built by Hiram Maxim.

4I’m not so sure that this is regarded as an “unforgivable sin” in a lot of the engineering environments I have seen, though the outcomes are similar.

The Importance of Prediction for Learning

image

One of the things, perhaps the thing, that distinguishes “scientific thinking” from “just doing stuff” is the idea of prediction: When we take some kind of action, and deliberately and consciously predict the outcome we create an opportunity to override the default narrative in our brain and deliberately examine our results.

The Toyota Kata “Experiment Record” (which also goes by the name “PDCA Cycles Record”) is a simple form that provides structure for turning an “action item” into an experiment.
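As a sketch of the idea (the field names below are my paraphrase of the form’s typical columns – a plan, a prediction, the actual outcome, and the learning – not an official schema), turning an action item into an experiment might look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExperimentRecord:
    """One row of a PDCA-style experiment record (illustrative field names)."""
    step: str                       # what we plan to do
    prediction: str                 # what we expect to happen
    actual: Optional[str] = None    # what actually happened
    learning: Optional[str] = None  # what we learned from any gap

    def surprised(self) -> bool:
        """We have something to learn when the outcome differs from the prediction."""
        return self.actual is not None and self.actual != self.prediction

# Hypothetical example: the action item becomes an experiment the moment
# we write down what we expect BEFORE we run it.
record = ExperimentRecord(
    step="Move inspection to the end of station 2",
    prediction="Defects caught before assembly",
)
record.actual = "Defects caught, but cycle time grew 10%"
record.learning = "Inspection placement trades defect escape for cycle time"

print(record.surprised())  # True: the outcome differs from the prediction
```

The point of the structure is the `prediction` field: without it, `surprised()` has nothing to compare against, which mirrors the argument of this section.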

Why Is It Important to Make a Prediction?

Explicit learning is driven by prediction.

Explicit Learning

“The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ (I found it!) but ‘That’s funny …’ “

— Isaac Asimov

Curiosity is sparked by the unexpected. “I wonder what that is…”

The only way to have “unexpected” is to have “expected.”

When we consciously and deliberately make a prediction, we are setting ourselves up to learn. Why? Because rather than relying on happening to notice things are a little unusual, we are deliberately looking for them.

Deliberate Prediction: The Key to a “Learning Organization?”

Steve Spear, in his book The High Velocity Edge, makes the case that what all high-performance organizations have in common is a culture of explicitly defining their expected result from virtually everything they do.

He studied Toyota extensively for his PhD work, and discovered that rather than exploiting a “lean tool set,” what distinguished Toyota’s culture was deliberately designing prediction mechanisms into all of their processes and activities. This was followed up by an immediate response to investigate anything that doesn’t align with the prediction.

This is the purpose behind standard work, kanban, takt time / cycle time, 1:1 flow, etc. All of those “tools” are mechanisms for driving anomalous outcomes into immediate visibility so they can say… “Huh… that’s funny. I wonder what just happened?”

The High Velocity Edge extends the theory into a more general one, and we see a common mechanism in other high-performance organizations.

OK… that’s one data point on the higher-level continuum.

 

Building 214

Back in 2009 I wrote about a culture change in a post titled A Morning Market. That story actually took place around 2002-2004, and I have just re-verified (Spring 2017) that it still holds.

But it really wasn’t until this afternoon as I was discussing that story with Craig that it finally hit me. The last step in their problem-solving process was “Verification.” To summarize a key point that is actually buried in that post, they could not say a problem was cleared until they had a countermeasure, and had verified that it works.

What is that? It’s a prediction.

Rather than simply putting in a solution and moving on, their process forced them to construct a hypothesis (this countermeasure will make the problem go away), and then experimentally test that hypothesis.

If it worked, great. If it didn’t work then… “Huh, that’s funny. I wonder what just happened?”

This, in turn, not only made them better deliberate problem solvers, it engaged deliberate learning.

What is critically important to understand here is this: That verification step was not included in the problem solving process they trained on. We added it internally as part of our (then kind of rote) understanding of “What would Toyota do?” But it worked, and I believe added a level of nuance that was instrumental in keeping it going.

 

The Improvement Kata

Mike Rother’s work extends what we learned about Toyota. Going beyond “How do they structure their processes?” he went into “How do they structure their conversations?” (And “How can we learn to structure ours the same way?”)

A hallmark of the Improvement Kata, especially (but not exclusively) the “Starter Kata” around experiments, is a deliberate step to make a prediction, test it, and compare the actual outcome with the prediction.

This, in turn, is backed up in Steve Spear’s HBR articles, especially Learning to Lead at Toyota and Fixing Health Care from the Inside, Today, both of which should be mandatory reading for anyone interested in learning about continuous improvement.

 

You are Always Making a Prediction Anyway

Any action you take, anything you do, is actually a hypothesis. You are intending or expecting some kind of outcome.

What time do you leave for work? Why? Likely because you predict that if you leave at a particular time, and follow a particular route, you will arrive by a specified time. You might not think about it, but you have made a prediction.

If you are running to any kind of plan, the plan itself is a prediction. It is saying that “If these people work on these tasks, starting at this time, they will complete them at this later time.” It is predicting that the assigned tasks are the tasks that are required to get the bigger job done.

A work sequence is a prediction. If these people carry out these tasks in this order, we will get this outcome in this amount of time at this quality level.

A Six Sigma project is a prediction. If we control these variables in this way, we will see this aspect of the variation stay within these limits.

An “action item” is a prediction. If we take this action then that will happen, or this problem will be solved.

In all of these cases you don’t know, for sure if it will work until you try it and look for anomalies that don’t fit the model.

But the difference in day-to-day life is that we aren’t explicit about what we expect. We don’t really think it through, and we aren’t particularly aware when an outcome or result differs from what we expected. We just deal with the immediate condition and move on, or worse, assign blame.

What About Implicit Learning?

The human brain (and all brains, really) is a learning engine. Our experience of learning typically comes from what we perceive as feelings.

Take a look at Destin Sandlin’s classic “Backwards Bicycle” video here, then let’s talk about what was happening.

 

There is nothing special about a “backwards bicycle.” If Destin (or his son) had no prior experience with a regular bicycle, this would simply be “learning to ride a bicycle.” What makes it hard is that, in addition to building new neural pathways for riding a backwards bicycle, he must also extinguish the existing pathways for “riding a bicycle.”

The Neuroscience of Learning (As I understand it.)

Destin has a clear (very clear) objective (Challenge) in his mind: Ride the bicycle without falling down.

As he tries to ride, he knows if he feels like he is losing his balance then he is about to fall.

He (his brain) doesn’t know how to control the wheel to keep the bike upright as he tries to ride. His arms initially make more or less random movements in an attempt to stay upright. This is instinctive; he isn’t thinking about how to move his arms. (This is what he calls the difference between “knowledge” and “understanding.”)

Whatever neurons were firing to move his arms when he loses his balance are a little less likely to fire again the next time he attempts to ride.

Whatever neurons were firing to move his arms when he stays upright for a little while are a little more likely to fire again the next time he attempts to ride.

This actually starts with increased levels of excitatory or inhibitory neurotransmitters in those neural synapses. No physical change to the brain takes place. But this requires a lot of energy. IF HE PERSISTS, over time (often a long time), the brain grows physical connections in those circuits, making those new pathways more permanent. (It also breaks the connections in the pathways that are being extinguished.)

Destin’s six-year-old son’s brain is optimized for this kind of learning. He creates those new physical neural connections much faster than an adult does. His brain is set up to learn how to ride a bicycle. His father’s brain is set up to ride a bicycle without thinking too much about it. Thus, Destin has a harder time shifting his performance-optimized brain back into learning mode.

All of this is implicit learning. You have something you want to learn, and you are essentially trying stuff. Initially it is random. But over time, the things that work eventually overpower the things that do not. This is also how machine learning algorithms work (not surprisingly).
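The trial-and-error loop described above can be sketched in code (the actions, the environment, and the update factors here are all hypothetical, purely to illustrate the mechanism of reinforcing what works and extinguishing what doesn’t):

```python
import random

random.seed(0)

actions = ["steer_left", "steer_right", "hold_still"]
weights = {a: 1.0 for a in actions}   # start: all actions equally likely

def stays_upright(action: str) -> bool:
    # Hypothetical environment: only one action actually keeps the bike up.
    return action == "hold_still"

for _ in range(500):
    # Choose an action in proportion to its current weight (random at first).
    action = random.choices(actions, weights=[weights[a] for a in actions])[0]
    if stays_upright(action):
        weights[action] *= 1.05   # reinforce: a little more likely next time
    else:
        weights[action] *= 0.95   # extinguish: a little less likely next time

best = max(weights, key=weights.get)
print(best)  # hold_still: the successful action dominates after enough trials
```

Note what the loop does not do: it never asks *why* `hold_still` works. It just repeats whatever was followed by success, which is exactly the strength and the weakness of implicit learning discussed below.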

 

What does this have to do with prediction?

Destin’s brain is running a series of initially random trials and comparing the result of each with the desired result. The line between a “desired result” and a “predicted result” can be kind of blurry in this type of learning. But what is critical here is to understand that learning cannot take place without some baseline to compare the actual result against. There must be a gap of some kind between the outcome we want and what we got. Without that gap, we are simply reinforcing the status-quo.

The weakness with implicit learning is it can reinforce behaviors and beliefs that correlate with a result without actually causing it. We aren’t actually testing whether our actions caused the outcome. We are just repeating those actions that have been followed by the outcome we wanted whether that is by causation or coincidence.

In the case of something like learning to ride a bicycle, that is generally OK. We may learn things that are unnecessary to stay upright on the bicycle1, but we will learn the things that are required.

In athletics, once the basics are in place, coaches can help shift this learning from implicit to explicit by having you practice specific things with specific objectives.

Moving from Implicit to Explicit

Bluntly, the vast majority of organizations are engaged in implicit, not explicit, learning. They repeat whatever has worked in the past without necessarily examining why it worked, or whether “now” is even similar to “the past.”

These are organizations that operate on “instinct” and “feel.” That actually more-or-less works as long as conditions are relatively stable. They may do things that are unnecessary but are also doing things that are required.

… Until conditions or requirements change.

When the organization has to accomplish something that is outside of their current domain of knowledge – beyond their knowledge threshold – those anecdotes break down. The narrative of cause-and-effect in our minds is no longer accurate.

That is when it is critical to step back, become deliberate, and ask “Where, exactly, are we trying to go?” and “What do we need to learn to get there?”

The alternative is “just trying stuff” and hoping, somewhere along the way, you get the outcome you want. The problem with that? You’re right back where you were – it works, but you don’t know why.

_______________

1Sometimes we develop beliefs that things we do can influence events that, in reality, we have no control over whatsoever. Once we develop those beliefs, we bias heavily to see evidence they are true, and exclude evidence that they are not true.

Looking at some old notes…

I am cleaning out some old notepads. One page has the notes I took during a 4pm production status meeting during my first “get to know you” visit to this company.

In a box on my notes page it says:

image

“More information is not going to help until you begin to define how you expect things to run.”

What I had observed was their focus on the previous 24 hours’ metrics, with lots of speculation about “what happened” to account for the lost production. They were focused on incidents and, honestly, assigning those incidents as causes. Where they came up short, they were seeking more information about lost production incidents.

But everything I have written is just to give you a little context for the scribbled box on my notes a few years ago.

Today? They fixed it. This was one of the most dramatic culture shifts – from “find who to blame” to “let’s solve the problem” – I have ever witnessed.

Reflection:

What are your daily meetings like? Are they focused on the stuff that went wrong yesterday? Or are they focused on where you are trying to go in the near future?

KataCon 2017: Day 2

Today was the final day of KataCon3. As I have done the last couple of days, what follows is a more or less raw stream of my notes and thoughts from the day’s events and sessions.

My notes from Day 0 (Pre-Event) are here.

My notes from Day 1 are here.

Overall Thoughts

Though I have been to a few, I rarely attend conferences. KataCon has been an exception, as I have attended all three that have been held. I find it interesting that each developed a unique vibe, culture, and unwritten theme.

KataCon1 in 2015 was a community coming together for the first time. “Toyota Kata” is (still) pretty new but by late 2014 there were enough people working on it to invite them together. The feeling was one of groups of people who had previously been working in isolation realizing that there are lots of others, all over the world (there was a very large European contingent) working on the same things. The ending felt like a launching ramp.

KataCon2 in 2016 had an emerging theme of “leadership development.” Overall the emotions seemed more subdued. This was a group of people coming to learn about what others were doing. The general feeling was more “technical” than “emotional.”

This year (KataCon3, 2017) the message is that “Toyota Kata” is “out in the wild.” It is evolving and adapting to different environments where it is being practiced. Overall the emerging theme seemed to be about organizations approaching the critical mass of a shift in default behavior – the ever elusive “culture change.”

At the same time, the discussions were more nuanced. Perhaps we are moving Toyota Kata beyond the “tool age.” A realization is emerging that this is about shifting the default thinking patterns of people in an organization, and that the kata are simply teaching structure and tools to help this.

At the same time, though, I get a little concerned that organizations will believe they can let go of the structure too soon.

I am going to expand on that more in a future post about Critical Mass.

Adam Light

Adam discussed the translation between the language of Toyota Kata / “Lean” and the language of Scrum / Agile software development as well as some of the similarities and differences between the process based systems such as production, and design activities such as software development.

He pointed out that, just as “lean” has organizations doing rote deployment of the tools – without the underlying thinking and continuous problem solving – so does software development.

Though his emphasis seemed to be about encouraging kata / “lean” based customers to adopt the language of their software developers, and understand how they work, I’m inclined to say it is the responsibility of the supplier to understand the language and working patterns of their customers. Just my initial thought.

Kennametal

“You have to be a learner before you can be a coach.”

Yet another organization has made this point. They tried to skip straight to teaching others, and ended up having to back up and allow the early adopters to take the role of the learner on live processes to build basic competency (and credibility).

1300 Certified Green Belts. 100’s of lean events. Same “certified” tool applied to every problem.

Continuous Improvement was PUSHED in. Measured by:

  • # of Green Belts certified
  • # of events
  • $ documented savings

No evidence of a lean culture.

This echoes the opening slide of my management training material (and my Toyota Kata training material).

They used Steven Spear’s HBR article Learning to Lead at Toyota to describe how they wanted to operate, then found Toyota Kata as a mechanism for learning to operate that way. (And, again, this echoes my own training material).

They noticed they had made dramatic improvements in their safety culture through implementing routines of behavior. This gave them a baseline that they could emulate to improve quality and delivery.

It’s not about the storyboard. – It is about the thinking behind what is written on it.

Current condition is critical before jumping to target condition. This is another very common mistake. Some teams even start with a list of obstacles – the action item mentality. This reinforces the value of having an experienced master coach to help get you started.

Optum

Mike Blaha and Dave Snider of Optum described some of the challenges in building stakeholder consensus for adopting a proposed solution that doesn’t fit their preconceived ideas.

Kata = “Show Your Work”

When a stakeholder or leader can see, or be led through, how the team arrived at a solution, they are more likely to understand. The kata experiment cycles support this process very well.

NOTE: Here is yet another reason why it is critical to be detailed when filling out your PDCA / Experiment Records. You need to be able to go back and understand, or reconstruct, your logic chain that got you to the final solution. Someone who wasn’t involved in your planning needs to be able to read the words and understand what you did and why.

Here is the coolest part. Their story actually described their process of making sure each stakeholder (who could say “no”) understood and agreed with the logical validity of their counter-intuitive proposal. During the break, I asked them if they had come across the term “nemawashi” in their original readings about “lean.” They had not.

Using the kata to solve a problem, they invented nemawashi. I have also seen teams invent SMED without any previous understanding of the process. But then somebody invented all of the so-called “lean tools” at some point. In each case it was done as a countermeasure to a specific problem or obstacle they were encountering in their effort to achieve better flow toward their customers.

Q&A Session

Mike Rother: “We have found no way to jump directly from ‘Aware of it’ to ‘Able to teach it’ without first passing through ‘Able to do it.’”

  • All sorts of obstacles in the way of getting leaders to practice.
  • Bosch viewed the constraint on leaders’ time as an obstacle and developed a countermeasure to let leaders practice when they visited the shop floor.

The question of “How to get leaders to practice” and “What about leaders that just delegate lean?” came up throughout the conference.

Many suggestions were offered, however I would add that none of them are guaranteed to work. In fact that is the case for ANY solution for ANY problem that is not EXACTLY your problem.

Something I have seen work sometimes (which is better than something that never works) is to reverse-coach upward. By this I mean using the format of the learner’s answers to the 5 Questions as a way to summarize status on any project. If the leader getting the report likes the format, there is a possibility that leader will ask others to report in the same way. Apply the thinking to your answers when talking to the boss.

Question: How will you know this is successful? When asking about whatever change initiative is on the table, this could help clarify what the leader(s) think.

Question: Have you seen your results translate to top line growth? It was a great question, but I’m not sure anyone actually answered it. A lot of discussion about “savings” which isn’t the same.

Mike Rother: Clarification. A target condition has a hard achieve-by-date. It is experimentation with a hard deadline. It isn’t random, and it isn’t “we’ll just try stuff until we get there.” Teams don’t always hit the achieve-by-date, but they try very, very hard to do so.

Skip Stewart: When you are vague about everything, you are OK with whatever you get.

Kata in Healthcare Panel

At the end of yesterday I had planned to attend the Kata in Software Development panel discussion on the premise that it was the topic I knew least about. I changed my mind today for a number of reasons. Based on the feedback I heard from others, I’m glad I did.

In the last couple of years I have probably logged more hours in healthcare than any other environment. From that experience I predicted that although the context would be healthcare, the conversation would be about sharpening application and coaching. My hypothesis was confirmed.

Beth Carrington – Kata as achieving vs. doing. People tend to get a goal and make a list of stuff to do. Question is: What are we striving to achieve?

Challenge: Don’t use “doing” language. i.e. avoid a challenge that states what will be achieved followed by the word “by” and then how we are going to do it. Just stop at what will be achieved. That opens up many more possibilities for getting there.

For example, this challenge statement contains the “doing” clause to drop: “By ____(date a year from now)____ we will retain 90% of our freshmen by implementing a student mentoring program.”

“A challenge is a hypothesis that we will be tangibly closer to our vision when we get there.”

Train –> Observe –> Feedback

The method of “See one, do one, teach one” (common in healthcare) makes critical assumptions:

  • There exists a common standard approach to doing it. (usually FALSE)
  • That this approach works when correctly applied. (usually UNTESTED)

Obstacles are what propel you.

– Marci McCoy, Baptist Memorial Health, Oxford, Mississippi

Deployment of a new approach –> Use a real issue as a vehicle.

Experiment with something meaningful vs. a thought experiment.

Concept of non-movable obstacles. These are obstacles that are truly built into the environment or the constraints. My note: While I recognize there are truly obstacles that cannot be addressed, I would be concerned that a team may liberally use this label and constrain themselves out of success.

Deployment: Don’t push the “go” button until you are ready.

TWI is about standard behaviors (vs. standard processes). My interpretation: TWI is about what people do. It is about how they move their hands, feet, and eyes, and about what they say. All of these things involve muscle movement regulated by an active brain. Those things are, by definition, behaviors – behavior is what the brain does. (Note – this doesn’t mean we are in conscious control of all of these things.) I found this an interesting way to think about it.

Use TWI to drive toward a target condition.

Kata = mental model. Storyboard = teaching structure. No matter what the teaching structure, the target mental model is always the same.

Process Metric: Measures “How close am I to my desired pattern of work?” NOT “How close am I to the goal?”

We don’t PDCA to achieve the target condition. We PDCA against obstacles. This was a huge take-away insight for me. In retrospect, doh!

(Look for a post about obstacles once I get the rest of my thoughts together)

Closing Comments

“What am I doing that impedes my people from doing this?”

– Michael Lombard, CEO

About the thinking.

  • Not the tools.
  • Not the forms.

KataCon 2017: Day 1

Today was Day 1 of KataCon in San Diego. Click here for my notes from yesterday, the pre-events.

Like yesterday, this is a mix of things I heard and things I thought of as a result of hearing them (or writing about them here). I’ll try to make a distinction between them, but no guarantees – I just write down what I think.

Be a coach. Have a coach. – Seek out someone to coach you, go find someone to coach.

Mike Rother:

A plan is really just a hypothesis. It is a prediction of what you currently believe will happen, based on the evidence you have. It isn’t, nor can it be, a definitive statement about how things will actually go. If you treat it as a hypothesis instead of a “this will happen” definitive statement, you will be in a much better position to respond smoothly to the inevitable disruptions and discontinuities.

A model alone is not enough. No matter how well you explain a concept, the challenge is (and always has been) how to actually transition it into reality. What I have seen in the past is a struggle to develop the perfect model so “they will get it” when it is explained. “They” understand it. That isn’t the problem. The problem is it takes a lot of work, experimentation and discovery to figure out how to make things actually work like that.

The Improvement Kata:

  • A practical scientific thinking pattern (the model) PLUS
  • Daily routines for deliberate practice.

The model, by itself, only gives you the framework for what to practice, not how to practice it.

Mike showed a really interesting, quick little exercise.

“What is the next number in this sequence:

2, 4, 6, 8, 10, 12, _____ ?”

Of course most people will say “14.” But that is a hypothesis. A prediction. You actually have no information about what the next number is.

Let’s say the next number is 2.

What have we learned?

  • It isn’t 14. The theory that the numbers are a series incrementing by two has been refuted by the evidence.
  • We have a new theory, perhaps the pattern repeats.

How would we test it? What prediction would we make?

Note this is a much, much different response than “I was wrong.” Instead, evidence that does not fit the theory simply redirects the investigation.

What if it was 14? What if we ran it 100 times and it was 14? Could we definitely say that on the next iteration it would be 14? No. All we can say is there is no evidence that the theory is false.
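The exercise above can be sketched as a small falsification loop (the function names and return strings are mine, purely illustrative – this is not from Mike’s talk):

```python
def hypothesis(observed):
    """Theory: the sequence increments by two. Predict the next number."""
    return observed[-1] + 2

def run_experiment(observed, next_value):
    """Compare the prediction against the new evidence."""
    prediction = hypothesis(observed)
    if next_value == prediction:
        # No evidence the theory is false -- but it is NOT proven true.
        return "theory survives"
    # The prediction failed: the theory is refuted; time for a new theory.
    return "theory refuted"

observed = [2, 4, 6, 8, 10, 12]
print(run_experiment(observed, 14))  # theory survives
print(run_experiment(observed, 2))   # theory refuted
```

Note the asymmetry: a matching observation can only return “theory survives,” never “theory proven” – no number of confirming runs lets us drop the hypothesis label, which is exactly the point of the “what if we ran it 100 times” question.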

I’m just throwing this in because I remembered it when I was writing this. From xkcd.com by Randall Munroe:

The Difference between Scientists and The Rest Of Us

Small experiments.

  • Learn quickly.
  • Limited “blast radius” –> Small controlled experiments are far less likely to have far reaching negative effects if things do not go as predicted.

You can’t say “This will happen” with this mindset.

My thought: Any statement about the future you make ALWAYS GETS TESTED, whether you do so deliberately or not. The future arrives. You have a choice. You can EITHER:

  • Be wrong –OR–
  • Learn

Which would you choose? Make the choice deliberately.

This feels awkward. That is how it is supposed to feel. If you don’t feel awkward with something new, you are not learning.

Karyn Ross:

Karyn is the co-author, along with Jeff Liker, of The Toyota Way to Service Excellence. I have a copy of the book and will be writing up a review for a later post.

People who provide service are motivated by saying “I can.” Customer facing people want to help the customer.

The creativity zone is between “I can’t” and “I can.” What targets must you set, what experiments must you run, to cross that gap between “I can’t” and “I can” as a response to a customer (internal or external)?

Kata is a way to get from “I can’t” to “I can.” It is a model of human creativity.

Invention is the mother of necessity.

Who needed a smart phone before it was invented?

What is your organization’s purpose in serving others? This is a powerful question to ask top leadership.

Dan Vermeech and Chris Schmidt

Dan and Chris had a dynamic presentation about their journey to transition their company away from being an action-item-driven company (that won the Shingo Medal for being so good at it!) toward being a learning organization.

Amy Mervak

Amy gave us an update on her journey introduced at a previous conference. She relayed how kata is helping build higher levels of performance AND higher levels of compassion and empathy in a hospice.

Empathy

  • Cognitive – intellectual understanding of what another person is likely feeling.
  • Emotional – a state of co-feeling with another person.
  • Compassionate – response to another person’s feelings.

Her final question was “How does practicing the kata help you and your organization connect?”

——-

Prior to the break:

The gap between knowing and doing is far bigger than anybody thinks it is.

Joe Ross

(Joe Ross is the CEO of Meritus Health in Hagerstown, Maryland. They have been a client, and I nominated Joe to be a keynote at the conference.)

“It took me a while to be as happy about a failure as a success.” … “GREAT! We can try again tomorrow!”

Joe was connecting with the principle of predictive failure that Mike had described in the “it isn’t 14, it’s 2” exercise.

“Innovators appear in the strangest places and some superstars are lousy at daily improvement.”

Joe related some cases where the “hero culture” leaders were having a hard time adapting to no longer just “having the answers,” while some quieter, more behind-the-scenes leaders turned out to be the ones with the breakthroughs.

“We learned that Kata isn’t just a problem solving tool – it’s a leadership development program.”

Jeremiah Davis

Jeremiah has the “Kata at Home” YouTube channel. I highly suggest you check it out. There is a compelling story about kids, family and parenting developing there.

A child’s mind is designed to learn. An adult’s mind is designed to perform.

My thoughts: We, in the continuous improvement community, have been telling people for decades that they must “think like a child” to have fresh ideas.

Jeremiah nails what is different about kids with this quote, which I am already incorporating into my own material (with attribution, of course). This is the difference. It reflects what a child’s brain, and an adult’s brain, are each optimized to do. It doesn’t mean, of course, that adults can’t learn. Nor does it mean kids can’t perform. But in neither case is it optimal.

What kids are learning, as they grow up, is how to perform. They are learning, through experimentation, which automatic patterns work well to get through everyday life.

If adults are willing to practice they can learn to learn the way kids learn naturally. This is what the Improvement Kata provides – a way to practice.

At the same time, Jeremiah is working very hard as a parent to teach his kids how to learn deliberately so that, as their minds mature, their performance optimization – the habits they have acquired to get through everyday life – includes learning itself. How cool is that?

Adults have “Functional Fixedness” – we (adults) see things as they are. Kids don’t have that constraint.

They see things for what they could be; they see things with features that are only in their imagination, but those features are as real to them as the physical attributes.

And any parent will know this one (or will very much sooner than they expect):

“Puberty is when parents become difficult to deal with.”

Skip Stewart

Skip is the Chief Improvement Officer of Baptist Memorial Health Care, a large multi-hospital and clinic system in the Southeastern USA. He talked about his efforts to integrate various initiatives – Kata, TWI, A3, Hoshin, and more – into a single system, as well as his impressive scaling of the transition across a multi-state, multi-site, 15,000-person organization.

Skip made a great distinction between coaching and scientific coaching. The term “coaching” is a major buzzword these days – I suspect at least some of this is the “Lean Bazaar” tracking in behind the Toyota Kata momentum.

A system cannot be separated into individual parts. Only the interactions between the pieces produce the system behavior.

An automobile consists of an engine, transmission, drive train, wheels, axles, steering system, brakes, body, seats, etc. Its purpose is to carry you from place to place. None of its constituent sub-systems will do that by themselves. You only have an “automobile” when all of the parts interact correctly.

Skip was quoting Russell Ackoff (whom I had the opportunity to listen to and meet a long while ago), perhaps one of the greatest systems thinkers ever. Skip’s point is that we have all of these different techniques and tools, but none of them, alone, is going to do everything. Furthermore, deploying them separately – without regard for integration – may well make things worse.

Q&A Session

Some notes I took during the Q&A session with the presenters:

Actions to take = hypotheses. They must be tested, not implemented.

Leader standard work is the responsibility of the leader above.

Don’t sweat the obstacle list. If you left something off the list, don’t worry. Obstacles will find you if you don’t find them. – Mike Rother

My follow-on to the above: The original question was asked by someone relating that, as a coach, he could see obstacles that his learner was missing. Mike’s response was right on – if the coach is keeping the learner on track in the improvement corridor, then the obstacles that must be found will, indeed, turn up.

My addendum would be that oftentimes the learner’s path to the target bypasses the obstacles you thought were critical. The learner finds another way. As a coach, you must be open to the possibility that your learner will surprise you with creativity.

There was additional Q&A / discussion time after the day’s events, as we ran out of time during the session.

There was additional discussion about the value of the “Model Line” (See yesterday’s post where I discussed a couple of failure mode alternatives to the intended “let them see the power of this” outcome.)

Constancy of purpose is critical. What is your intent? Why are you doing it?

If you are implementing a model line as a learning laboratory for leaders (this is VERY different from a “demonstration”) then that is much more likely to work than if you are implementing a model line to “demonstrate the power of the lean tools.”

Breakout Session: TK in Software Development

These are more my notes upon reflection than things that were covered in the workshop directly.

There were a number of breakout sessions to choose from. I decided to attend the one with the topic I knew the least about.

The workshop itself was engineered to teach the Improvement Kata pattern to software developers rather than to teach software development management to Kata Geeks. Still, I got a lot out of the discussions about the overlaps.

To anyone casually aware of cutting edge software development today, it is obvious that “Agile” (with scrums, sprints, etc), “Lean Startup,” and related software (and product*) development management techniques are built around the same underlying thinking pattern that Toyota Kata is intended to teach.

At the same time, it is equally obvious that software development suffers from exactly the same “copy the tools and you will be as good as the best” mentality that the “lean” community does.

It is an interesting topic that I want to learn more about. I’ll be attending the Kata in Software Development panel discussion tomorrow.

_____________________

*The first use of the Rugby “scrum” analogy I have found in literature was in a 1986 HBR article about product engineering development at Honda. I found that quite interesting when the engineering development people were pushing back about using software development management to try to manage engineering design. Um… the software guys got the foundation from engineering, not the other way around.

KataCon 2017: Day 0

KataCon officially starts tomorrow as I write this, but today there were a couple of pre-events, and I learned a few things worth sharing. These are from my notes.

Common failure modes for continuous improvement / kata:

Note that these titles and words are not necessarily what I heard in the presentation. They are my notes and interpretations, along with my own similar experiences.

Resistance from a support organization.

The presenter talked about a case where a manager was having great success applying the Toyota Kata thinking pattern, and improving quality in the process. However, the corporate quality department didn’t see the records, paperwork, etc. in the format they expected, and that caused a lot of problems.

I have actually seen this myself in the case of a factory in a multi-plant company. A plant manager figured out good flow pretty much on his own, and got there without kaizen events and without the direct participation of the corporate “lean promotion office.” Very senior people from the LPO engaged in a subtle (and not-so-subtle) campaign to discredit the success. Ultimately the plant manager ended up leaving the company.

In another case I counseled a local C.I. manager in a larger company to “make what you are doing look like what the corporate C.I. office expects to see.” By renaming some forms and using their jargon to describe what he was doing (even though it was a little different), he was able to protect his approach.

SHOULD anyone have to do this? NO!!! But sometimes it is necessary.

Key Point: Understand the political environment and develop countermeasures just like you would for any other obstacle. It doesn’t help to get mad about it. Just deal with it.

Consultant as Surrogate Leader

In this “Fail,” the external consultant was chartered by the owner of the company – who did not participate – to provide coaching to a couple of junior-level managers. Needless to say, this can result in political problems as well, especially for the managers being trained when they start doing things differently than what the owner is used to seeing. (What did he think was going to happen?) Again: If you are a practitioner / consultant (internal OR external), be aware of this kind of situation.

Spotlight Showed Problems

The improvement effort begins with a deep look at the current condition. This inevitably makes issues visible that were previously hidden. Again, if the culture / politics of the organization are not friendly to revealing problems, the groundwork needs to be laid first.

The Guru Model

The “Guru Model” is pretty common, and actually relates to everything above, especially the first topic. If you are working on developing these skills in the line leadership, it can dilute the status of the experts – who may or may not be as truly competent as they claim. We are seeing a shift away from a model of doing what the expert tells us to, and toward a model of learning to figure it out ourselves. The second model is far more flexible and works in almost any situation. We need to let go of the idea of seeking “the answers” from gurus, and embrace the need to learn how to figure things out ourselves.

“Doing Lean” vs. a Challenge of “Getting Lean”

Don’t be a solution in search of a problem. The traditional approach has been to have a “lean program” that pushes deploying tools and models that resemble a snapshot of a benchmark company such as Toyota. That implementation becomes an end unto itself. In the example discussed, the target company was doing fine in the eyes of its owner, but the consultant was trying to sell him on “lean” to “eliminate waste.” Even if there is a lot of possible upside, if the owner isn’t feeling the need to do something different, you are probably not going to get very far.

Note: If you are an internal practitioner, it is even harder. Ultimately it is management’s job to set a performance challenge for the organization that can’t be met by tweaking the status quo. “The challenge is often the challenge” came up a number of times – organizations are typically pretty bad at establishing challenges that actually… challenge anything.

The Value of a “Model Line” as a “Demonstration of the Power”

This was an interesting discussion. Traditional thinking, for decades, has been that those who want to implement a change find a single area to transform in order to demonstrate what is possible. The idea is that once management sees the power, they will want to spread the same approach across the organization.

This effort could be taken on by an outside consultant, such as an MEP, or even an internal centralized improvement office (which may technically be “internal consultants,” but they are “outside” to the other departments in the organization).

In either case, there is a lot of time and effort required, with no real assurance that even dramatic success will be seen in the light intended. My thought is that there are at least two other equally plausible scenarios:

  • The one I discussed earlier: The success is discounted as a special case that can’t be replicated. This is what happened in the company where the corporate lean office was the primary detractor.
  • Possibly worse: Management sees how great it is, and puts together a mass training / deployment plan to standardize the “new process” and rapidly spread it across the organization as a project – an approach that is doomed to fail.

Better, perhaps, to use a simple, short, mass-orientation exercise such as Kata in the Classroom to introduce the concept to as wide an audience as possible. Then see who is interested and help them learn more. You can’t force this stuff upon anyone. Ever.

Other Notes

Developing capability in the organization requires covering both technical and social (people relations, leadership) skills with leaders.

———-

Target conditions and experiments mean “you don’t have to boil the ocean or solve world hunger.” My thoughts: I have seen executives reluctant to accept or commit to a challenge because they did not know ahead of time exactly what would be required to reach it. Breaking down the challenge into target conditions, and further into obstacles, and addressing the obstacles one by one, makes the challenge seem less overwhelming.

———-

The first group to get training needs to be the senior leaders. They learn by doing on the shop floor: Making actual improvements on actual processes. If the execs aren’t willing to learn, you probably aren’t going to get far. Further notes: This doesn’t mean you can’t do anything without full participation from the top. But you will reach a limit to what is possible.

———-

Kata Ideas vs. “Suggestions” or simply soliciting ideas for improvement. A traditional suggestion program solicits any idea that the team member thinks might help. When there is a supervisor-as-learner working against a specific obstacle, striving to achieve a specific target condition, the ideas are much more focused.

Supervisor to team: “I’m trying to figure out how to solve this problem. Does anybody have any ideas we can test?” Then test them one by one as experiments.

———-

“Real” scientists often don’t do true “single factor experiments.” They do mess around and try changing things up to see what happens. But if they see something interesting they then go back and run controlled experiments to isolate variables to understand what is happening.

This is totally different from the common approach of implementing a bunch of changes and hoping the problem goes away. If the problem does go away, and you don’t then rigorously investigate why, you have learned little or nothing. Now you are stuck in the position where you can’t risk changing anything at all, because you don’t actually know what is important.

——-

Measuring the success of “Toyota Kata”

We emphasize that you don’t “implement Toyota Kata.” Instead, you use the Improvement Kata and the Coaching Kata as practice routines to embed a new way of thinking into the organization’s daily habit of the way things are done.

The key is the thinking pattern that remains and self sustains. In companies that are very advanced, you sometimes see few, if any, “Toyota Kata boards” because the thinking pattern is embedded in every conversation they have.

Just as we say “Lean is NOT about the tools” – it isn’t about the physical artifacts – neither is Toyota Kata.

However (my current thinking – THIS IS CRITICAL): the artifacts are what provide the initial structure that lets you get that thinking started, allows people to practice it, and lets you observe how they are doing. My thought: DO NOT TAKE THE BOARDS DOWN until you are confident that someone new joining your organization is going to learn the pattern simply by picking it up from the way people talk to them every day.

These are just raw notes and some thoughts about them. If any of this sparks interest, leave a comment and I can expand on it with a more formal post.

People to Meet at KataCon

KataCon is a couple of weeks out. If you are considering going, you are probably looking at the keynote speakers and breakout workshops.

The other reason to attend KataCon is to meet other people and share experiences with them. I’d like to introduce you to two of those people.

Hal Frohreich is the Chief Operating Officer of Cascade DAFO in Ferndale, Washington. Their product is custom pediatric foot / ankle orthotics that help kids walk. Yup, custom. Every one is different.

Since taking the position, Hal has been using Toyota Kata as a mechanism to develop the leadership and technical skills of the supervisors and, in doing so, make fundamental shifts in the culture of the organization. For you TWI folks, he has also deployed TWI, especially Job Instruction, alongside Toyota Kata for much more consistency in the way work is performed.

Hal provides support to his Production Manager, Tim Grigsby. Tim coaches 4–7 kata boards every day and covers diverse areas including people development, I.T. issues, R&D, and production. Tim views his job as seeing that each work team has the time, education, direction, space, tools and help to improve their work. Toyota Kata provides the structure that he uses to help them develop critical thinking and clarity in their target conditions, obstacles, and their PDCA cycles.

Each afternoon the COO and CEO walk the floor and review the target conditions, obstacles and next steps. This helps keep things aligned as well as ensure nobody is “stuck” on a problem that is outside of their scope to fix.


I believe, and teach, that Toyota Kata is a mechanism for driving culture change, and this is the philosophy that Hal and Tim have embraced. While the performance of the organization has dramatically improved by every measure you care to ask about, that is not the real result of this work.

The real outcome has been to create a cadre of front-line leaders who take initiative and apply creative solutions rather than just getting through the day doing what they are told.

Come to KataCon and find these guys. They are worth talking to.