The Human in the Loop

W. Edwards Deming espouses a “system of profound knowledge” as the way to manage complex systems. The key points are:

  1. Appreciation for a system. (Systems thinking)
  2. Knowledge about variation. (Knowing the difference between variation inherent in the system and variation with an attributable cause.)
  3. Theory of knowledge. (Understanding how the organization learns – summarized as PDCA)
  4. Psychology.

This last point – psychology – is the one I want to discuss.

The common view of business and production systems is a technical one. We look at things that can be easily disaggregated and analyzed – production processes, financial models, defect rates. Even when we consider the role of people it is in terms of “heads” and labor hours; absenteeism, payroll, labor costs.

Then we turn around and talk about “corporate culture” as though it is an abstract thing that can be analyzed as well, and that conversation all too often turns into commiseration and a blame game where things would be great “if only they….”

Reality, however, is even messier than that. The culture of a company emerges from how people interact with each other, and with the work environment. The work environment itself is also the product of interactions between people. People also interact with the processes themselves. Every second of every day, it is the people who are sensing, assessing, and deciding how to respond to what they see, hear, feel, perceive, believe.

If we truly want to construct a work environment where people make the best possible decisions, it behooves us to rid ourselves of decades old stereotypes and convenient beliefs about why people decide what they do.

Those stereotypes were largely established in the 1930’s and 1940’s. Since then, however, we have learned a great deal about psychology. Further, we are now (finally) beginning to make the connections between what happens in the mind – how people think, feel, perceive, behave – and what happens in the brain – the neuroscience behind the feelings.

How We Decide is a layman’s overview of those linkages.

As I was reading it, I found many topics that link directly to business and the workplace, and stuffed my book full of sticky notes.

Learning

Deming famously said “Management is prediction.” We also know this in the context of PDCA and the scientific method. We make observations, collect facts, and make a prediction. “Given what I know, if I do x I should see y.” Or “… if I observe x happening then y should follow.” These are predictions. By making them, we set ourselves up to either be proven correct, or confront something that surprises us. In either of those situations, we learn either by reinforcing the prediction for next time, or by examining what was not understood and trying to understand it better.

Well – as much as we like to imagine this is a logical process, it isn’t. This is pure emotion, which in turn is driven by changes in the level of dopamine in the brain as we have these experiences. In fact, our emotions learn to predict a vague situation before our logical brains catch on. “I don’t know why, but this feels right” – except that if you think through it that much, you will likely get it wrong.

Lehrer cites a number of scientific studies, but what they all have in common is immediate positive or negative consequences. Without those consequences, the emotional mind is never engaged, and the people never develop that “gut feel” for the situation.

Now before anyone jumps all over the word “consequences” let me be really clear on this point. This has nothing to do with punishment or “accountability.” Indeed, neither of those is immediate enough for this kind of learning to occur. Rather, it means that there is a situation where the person has immediate feedback and knows whether he made a good or bad choice.

What is more interesting is when these experiments were conducted with people who were neurologically impaired to the point where they could only engage in logical thought – they experienced little or no emotion. They failed, totally, at detecting the subtle patterns of success and failure in the experiments. The dopamine driven emotional part of the brain “gets it” and the logic part follows.

So – if we want a person to learn to correctly carry out a subtle process, and develop a good feel for how it is going:

  • They need practice.
  • They need immediate feedback.
  • They need safe opportunities to get it wrong.
  • They need emotional support for continuing to try. (More about that later.)

Now think about this in the context of how your people are trained to perform tasks that require skill or developing a “knack.” How well does your work environment provide a safe place to practice?

Another factor that plays a huge role in learning is reflection. Again, this is something that we all know at a logical level, but do we structure our situations to actually do it… or rely on happenstance? Worse yet, do we try to avoid focusing on things that went less than perfectly in our desire to focus on the positive?

Lehrer’s next key point is that, except in trivial cases, practice is not simply repetition. It is equally important to be good at practicing – to know how to practice. Reflection plays a huge role in this. He uses the example of a master game player – chess, poker, backgammon. Bill Robertie plays these radically different games at a world-class level.

Lehrer describes how Robertie learned to play backgammon.

Robertie bought a book on backgammon strategy, memorized a few opening moves, and then started to play. And play. And play. “You’ve got to get obsessed,” he says. […]

After a few years of intense practice, Robertie had turned himself into one of the best backgammon players in the world. “I knew I was getting good when I could just glance at a board and know what I should do… The game started to become… a matter of aesthetics.”

Lehrer is describing the process of training the dopamine receptors in the brain to give a positive emotional response to thoughts of the right move, and a negative emotional response to thoughts of a bad move. That is what happens in the brain of someone who is playing by instinct.

But, he goes on:

But Robertie didn’t become a world champion just by playing a lot of backgammon. “It’s not the quantity of practice, it’s the quality,” he says. According to Robertie, the most effective way to get better is to focus on your mistakes. In other words you need to consciously consider the errors being internalized by your dopamine neurons. After Robertie plays a chess match, or a poker hand, or a backgammon game, he painstakingly reviews what happened. Every decision is critiqued and analyzed.

Actually this kind of reflection can be found behind pretty much any world-class performance you might see. Professional sports teams review the films. The U.S. Army does “after action reviews” in training. The opposing force commanders and the unit being trained first discuss and reconstruct what really happened, and then drill in on cues that might have revealed missed opportunities. By consciously learning from their mistakes in a practice environment – with blanks and lasers – they make far fewer mistakes when the bullets are real.

What about business? If you are a regular reader (meaning you are interested in this stuff), you likely know that “reflection” is a critical part of policy deployment, otherwise known as hoshin kanri. That reflection is the same process – examining the original intention and prediction, and then seeking to understand why things went differently (better or worse) than anticipated. By understanding the why behind the deltas, the leaders are able to make better and better plans. That might look like they are leading by instinct, but just like Robertie’s backgammon game, that instinct is honed deliberately by a process of learning.

Likewise, when organizations try to learn “problem solving” and “A3” I see them start with big, complicated problems. But in my experience, it is far better to start off on small ones that are easy to solve. There are a couple of good reasons for this. First, it gets leaders down to the place where the work is done and shows that they actually care. This is all well and good, but it isn’t the primary reason.

The main reason is so they have an opportunity to practice seeing and solving problems in an environment where they can do it a lot, get immediate feedback, and contain the effects of their mistakes. In other words, it is an environment where:

  • They get practice.
  • They get immediate feedback.
  • They have safe opportunities to get it wrong.

Unfortunately, what many senior leaders fail to give themselves or each other is that last point – an emotionally safe environment in which to make mistakes. And that links to the next key point that Lehrer makes.

He describes research by Carol Dweck, a Stanford psychologist, on the role of making mistakes in learning. In her classic research, she gave groups of school children puzzle tests that were relatively simple for them. All of the kids did well. Two groups, selected at random (the total population was over 1,000), were then praised either for their intelligence (“You must be very smart”) or for their effort (“You must have worked really hard.”)

To cut to the chase: in follow-up tests, the group that was initially praised for their intelligence avoided subsequent challenges, gave up on tough puzzles more easily, and sought out opportunities to see that they had done better than others.

    The group that was praised for their effort, on the other hand, sought out tougher challenges, worked harder on those tougher puzzles, and sought out opportunities to understand why others had done better than they had. In other words, they were driven to learn.

    To be clear, the only difference between the groups was the initial praise. Throughout the remainder of the experiment, each group sought to self-validate the single compliment they had been given – one group by selecting tasks that allowed them to look smart, the other group by selecting tasks that allowed them to work hard.

    At the end, they were all given a final set of puzzles. Guess which group had learned more about how to tackle them? In short, the kids who were praised for their efforts got better results because they worked hard to learn how to learn.

    Now, this is with little kids. But what we all learn as kids are the things which drive us throughout our lives. Each of us seeks to renew the conditions that got us the most acceptance and praise.

    Application

    Let’s look at how to apply all of this when we are trying to transform an organizational culture.

    First, what are we trying to achieve?

    If we are trying to instill a culture of problem solving and kaizen, then we want people to try hard to solve the problems they are confronting, are willing to experiment (and make mistakes), realize they have to discover (rather than already know) the answers, and support others in doing the same.

    So what are the best conditions to learn how to do that, and do it well?

    • Practice.
    • Immediate feedback.
    • Safe opportunities to experiment.
    • Emotional support for continuing to try.

    If I go back and look at the learning environment described in Learning to Lead at Toyota by Steven Spear, I actually see these characteristics being deliberately put into play. And not simply for the benefit of the senior executive who is the main character, but for the team leaders in training as well.

Think about how much more effective this gradually building, hands-on practice and experience is than sitting people through a three-day, classroom-based “Lean Overview.” Just as you can’t learn backgammon (or infantry tactics) in a lecture, neither is it possible to really understand what kaizen is about from one. You have to play, and play, and play. You have to reflect, which means you have to know what you expected, know what actually happened, and study the differences.

    No matter what you think people’s motivations should be, let go of your judgments, and look at what we know about how people really learn. Use that information to create the best possible space to do it in.

    The next section covers the neurological and psychological aspects of what Deming calls “tampering” – why we are tempted to do it – and the psychology behind relying on hope and luck as a risk management plan. Pretty interesting stuff.

    Takt Time – Cycle Time

There has been an interesting discussion thread in the “Kaizen (Continuous Improvement) Experts” group on LinkedIn over the last few weeks on the differences between takt time and cycle time.

    This is one of the fundamentals I’d have thought was well understood out there, along with some nuances, but I was quite surprised by the number (and “quality”) of misconceptions posted by people with “lean” and “Sigma” in their job titles.

    I see two fundamental sources of confusion, and I would like to clarify each here.

    “Cycle Time” has multiple definitions.

    “Cycle time” can mean the total elapsed time between when a customer places an order and when he receives it. This definition can be used externally, or with internal customers. This definition actually pre-dates most of the English publications about the Toyota Production System.

    It can also express the dock-to-dock flow time of the entire process, or some other linear segment of the flow. The value stream mapping in Learning to See calls this “production lead time” but some people call the same thing “cycle time.”

In early publications about the TPS, such as Suzaki’s New Manufacturing Challenge and Hirano’s JIT Implementation Manual, the term “cycle time” is used to represent what, today, we call “takt time.” Just to confuse things more, “cycle time” is also used to represent the actual work cycle, which may, or may not, be balanced to the takt time.

We also have machine cycle time, which is the start-to-start time of a machine. It is used to balance the machine against a manual work cycle and, in conjunction with the batch size, is a measure of the machine’s theoretical capacity.

    “Cycle time” is used to express the total manual work involved in a process, or part of a process.

    And, of course, “cycle time” is used to express the work cycle of a single person, not including end-of-cycle wait time.

    None of these definitions is wrong. The source of confusion is when the users have not first been clear on their context. Therefore, it is critically important to establish context when you are talking. Adjectives like “operator cycle time” help. But the main thing is to be conscious that this can be a major source of confusion until you are certain you and the other person are on the same wavelength.

    Takt time is often over simplified.

    The classic calculation for takt time is:

    Available Minutes for Production / Required Units of Production = Takt Time

This is exactly right. But people tend to get hung up on what constitutes “available time.” The “pure” definition is usually to take the total shift time(s) and subtract breaks, meetings, and other administrative non-working time. Nobody ever has a problem with this. (Maybe because that is the way Shingijutsu teaches it, and people tend to accept what Shingijutsu says at face value.)

So let’s review an example of what we have really done here. For the sake of a simple discussion, let’s assume a single 8-hour shift on a 5-day work week. There is a 1/2 hour unpaid lunch break in the middle of the day, so the workers are actually in the plant “at work” for 8 1/2 hours. (This is typical in the USA; if you are in another country, it might be different for you.)

    So we start with 8 hours:

    8 hours x 60 minutes = 480 total minutes

    But there is a 10 minute start-up process in the morning, two 10 minute breaks during the day, and 15 minutes shut-down and clean up at the end of the shift for a total of 45 minutes. This time is not production time, so it is subtracted from “available minutes”:

    480 – 45 = 435

    A very common mistake at this point would be to subtract the 30 minute lunch break. But notice that we did not include that time to start with. Subtracting it again would count it twice.

So when determining takt time, we would use 435 minutes as the baseline. If leveled customer demand was 50 units / day, then the takt time would be:

    435 available minutes / 50 required units of production = 8.7 minutes (or 522 seconds)

    Note that you can just as easily do this for a week, rather than a day.

    435 minutes x 5 days = 2175 total available minutes

    2175 available minutes / 250 required units of production still equals 8.7 minutes (or 522 seconds)
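The arithmetic above can be captured in a short sketch. This is purely illustrative (Python here); the function name is mine, and the numbers are exactly the ones from the example:

```python
# A minimal sketch of the takt time arithmetic described above.

def takt_time_minutes(available_minutes, required_units):
    """Classic takt time: available production minutes / required units."""
    return available_minutes / required_units

# One 8-hour shift, minus 45 minutes of planned non-production time.
# The unpaid lunch was never included, so it is not subtracted again.
available = 8 * 60 - 45                              # 435 minutes

daily_takt = takt_time_minutes(available, 50)        # 8.7 minutes
daily_takt_seconds = round(daily_takt * 60)          # 522 seconds

# The same calculation over a 5-day week gives the same answer:
weekly_takt = takt_time_minutes(available * 5, 250)  # still 8.7 minutes
```

Note that subtracting the lunch break here would repeat the double-counting mistake described above: the 480-minute starting point already excludes it.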

    All of this is very basic stuff, and I would get few arguments up to this point, so why did I go through it?

Because if you run this factory at a 522-second takt time, you will come up short of your production targets. You will have to work overtime to make up the difference, or simply choose not to make it up.

    Why? Because there are always problems, and problems disrupt production. Those disruptions come at the expense of the 435 minutes, and you end up with less production time than you calculated.

Then there is the fact that the plant manager called an all-hands safety meeting on Thursday. That pulled 30 minutes out of your production time – about three and a half units of production lost there.

    I could go on with a myriad of examples gathered from real production floors, but you get the idea.

    Here is what is even worse, though.

    When are you going to work on improvements?

    If you expect operators to do their daily machine checks, when do you expect that to happen?

    Do you truly expect your team members to “stop the line” when there is a problem?

    All of these things take time away from production.

    The consequence is that the shop floor leadership – the ones who have to deal with the consequences of disrupted production – will look at takt time as a nice theory, or a way to express a quota, but on a minute-by-minute level, it is pretty useless for actually pacing production.

    All because it was oversimplified.

    If you expect people to do something other than produce all day, you have to give them time to do it.

    Let’s get back to the fundamental purpose of takt time and then see what makes sense.

    The Purpose of Takt Time

    Here is some heresy: Running to takt time is wholly unnecessary. Many factories operate just fine without even knowing what it is.

What those factories lose, however, is a fine-grained sense of how things are going minute by minute. Truthfully, if they have another way to immediately see disruptions, act to clear them, and then solve the underlying problems, they are as “lean” as anyone. So here is the second heresy: You don’t NEED takt time to “be lean.”

    What you need is some way to determine the minimum resource necessary to get the job done (JIT), and a way to continuously compare what is actually happening vs. what should be happening, and then a process to immediately act on any difference (jidoka). This is what makes “lean” happen.

    Takt time is just a tool for doing this. It is, however, a very effective tool. It is so effective, in fact, that it is largely considered a necessary fundamental. Honestly, in day to day conversation, that is how I look at it. I made the above statements to get you to think outside the mantras for a minute.

    What is takt time, really?

    Takt time is an expression of your customer demand normalized and leveled over the time you choose to produce. It is not, and never has been, a pure customer demand signal. Customers do not order the same quantity every day. They do not stop ordering during your breaks, or when your shift is over. What takt time does, however, is make customer demand appear level across your working day.

    This has several benefits.

First, it makes capacity calculations really easy through a complex flow. You can easily determine what each and every process must be capable of. You can determine the necessary speeds of machines and other capital equipment. You can determine minimum batch sizes when there are changeovers involved. You can look at any process and quickly determine the optimum number of people required to make it work, plus see opportunities where a little bit of kaizen will make a big difference in productivity.

    More importantly, though, takt time gives your team members a way to know exactly what “success” looks like for each and every unit of production. (assuming you give them a way to compare every work cycle against the takt time – you do that, don’t you?)

This gives your team members the ability to let you know immediately if something is threatening required output. Put another way, it gives your entire team the ability to quickly spot problems and respond to them before little issues accumulate into working on Saturday.

    The key point here is that to get the benefit, you have to have a takt time that actually paces production. It has to be real, tangible, and practically applied on the shop floor. Otherwise it is just an abstract, theoretical number.

    This means holding back “available time” for various planned (and unplanned) events where production would be stopped.

    Further, in a complex flow, there may be local takt times – for example, a process that feeds more than one main line is going to be running to the aggregated demand, and so its takt will be faster than either of them. Likewise, a feeder line that builds up a part or option that is not used on every unit is going to be running slower.
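The local-takt idea can be made concrete with a small sketch. The demand figures below are invented for illustration, not taken from the text:

```python
# Illustrative only: the demand figures below are hypothetical.
available_minutes = 435

demand_line_a = 50   # units/day consumed by main line A (assumed)
demand_line_b = 30   # units/day consumed by main line B (assumed)

# A shared feeder process runs to the combined demand of both lines,
# so its takt is faster (a shorter interval) than either line's takt.
feeder_takt = available_minutes / (demand_line_a + demand_line_b)  # ~5.4 min
line_a_takt = available_minutes / demand_line_a                    # 8.7 min
```

A feeder supplying an option used on only some units would work the other way: its demand is a fraction of the main line’s, so its takt comes out longer.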

And finally, if disruptions do cause shortfalls to the required output, you have to make it up sometime. If you are constrained from running overtime (and many operations are, for various reasons), then your only alternative is to build a slight overspeed into your takt time calculation. The nuances of this are the topic of a much longer essay, but the basics are this:

    – If everything goes well, you will finish early. Stop and use the time for organized improvement of either process or developing people. Continuing to produce is overproduction, and just means you run out of work sooner if you have a good day tomorrow.

– If there are issues, use the buffer time for its intended purpose.

    – If there are more issues than buffer time, there is an operational decision to make. Have a policy in place for this. The simplest is “hope for a better day tomorrow” and use tomorrow’s buffer time to close the gap. If this isn’t enough, then a management decision about overtime or some other remedy is required.

What about just allowing production to fall short? Well… if this is OK, then you were running faster than customer demand already. So pull that “extra” out of your schedule, stop overproducing (which injects its own disruptions into things), and deal with what you actually have to accomplish. Stop inflating the numbers, because inflated numbers hide the problems, the problems accumulate, and you end up having to inflate even more.

    Gee, all of this seems complicated.

Yeah, it can be. But that complexity is usually the result of an ad-hoc culture that makes up its reactions as it goes along, rather than a comprehensive, thought-out, systems-level approach. The key is to work through the “what if…” for what you are doing and thinking about doing, understand how the pieces actually interconnect and interact, and have a plan.

    That plan is the first part of Plan-Do-Check-Act.

    Then, as the real world intrudes, you can test your thinking against reality and get better and better rather than just being glad you survived another day.

And that is the whole point of knowing your cycle times and takt times.

    WSJ: Yes, Everybody Hates Performance Reviews

This article, carried by Yahoo News from the Wall Street Journal, succinctly says something that usually goes unsaid. It goes unsaid for the same reason that “Only Nixon could go to China” – only someone who is heavily invested in the performance review system can dare to criticize it. This, in itself, is an indictment of a process built around control and fear.

    Deming called out the same message decades ago. And even managers who purport to agree immediately start making excuses and justifications for why performance reviews are necessary.

    If the question to be answered was “How does this Team Member know he is succeeding each day?” rather than “How do we rank and compare the Team Members against one another?” what would be different about this process?


    Knowing vs. Knowing How To Learn

    On the way to the airport a few days ago a couple of thoughts occurred to me that I wanted to toss out there and see how you all responded. This is one of them.

What separates an expert from a master? Actually, I need to ask in more prejudicial terms. Some people who are truly experts are also “stuck,” in that they try to fit new things they encounter into an analogy within their (vast) experience. When they find one, they apply the analogy and often come up with a pretty good solution. But they can have problems relating when they encounter something that doesn’t compare with anything they have seen before.

    As an example, the classic elements of standard work are described as

    • a repeating work sequence
    • balanced to a takt time
    • standard in-process stock

    And, indeed, these things are the elements of standard work when there is a repeating work sequence and when there is a takt time.

Some experts at applying standard work, however, have a hard time seeing application outside of this scope. They know work needs to be standardized, but they continue to try to shoehorn what they see into this model.

Another, more general, misconception is the notion that lean is only about manufacturing, or that it “doesn’t apply” to true job-shops or non-repetitive environments. But this, too, is just a limitation of an “analogy” model. It is the analogy that breaks down, not the concept.

    In the analogy model, we try to educate by providing more analogies, more examples of different applications in order to expand the base for comparison.

And, to be honest, this works to a degree. Some people get it; others simply don’t want to expand their analogy base. They are the ones who say “This (model) does not apply to (whatever their current paradigm is).”

Indeed, people who are tightly holding the view that kaizen events led by trained specialists are the only way to drive improvement can easily be blind to the possibility that an organization that is successfully running daily kaizen is operating at a fundamentally different level. I have seen that as well. And I have seen the same excuses made to explain away the difference in performance. “It isn’t different;” “it doesn’t scale;” “it isn’t repeatable.” All of this is defending a mental model – a paradigm – an analogy.

    On the other hand, I want to contend here that a true master is not one who has mastered a process, but rather one who has mastered the process of learning about a process.

    At an organizational level, true continuous improvement starts to engage when “process” and “standard” become baselines to gain higher understanding. Rather than trying to audit and enforce compliance, they are genuinely curious about the reason why a process is not being carried out as it should. This thinking requires far more work because it is empowering – it simply does not allow playing victim to “they won’t.” It puts the spotlight right back on what can (or should) be learned from the experience.

    Put another way, the “expert” knows.

    The “master” knows that there is much to learn.

    The TPS vs. Toyota’s Production System

    Up to this point I have resisted weighing in on the Toyota quality story largely because:

1. I don’t have any more insight than anyone else.
    2. The signal-to-noise ratio in the story seems really low, and I didn’t feel I would contribute much.

    But there is another story in the back channels of the “lean” community.

    Many of us (myself included) have been holding up Toyota as an example of “doing it right,” with good reason.

    Toyota, of course, has never publicly claimed to be an icon of perfection, but we have held it up as one.

    Now, when their imperfections are exposed, I am seeing a backlash of sorts, questioning whether the Toyota Production System is flawed somehow. This raises some really interesting questions cutting across the principles themselves; the psychology of various groups of practitioners; and of course Toyota’s practice of “The Toyota Production System.”

    Are the principles themselves flawed?

    We have a whole industry built on extolling the perfection of Toyota. Now we are seeing a bit of a boomerang effect. Say it ain’t so, but believe it or not, there is a population of people out there who are pretty sick of hearing “Toyota this..” and “Toyota that…” and having themselves held up to Toyota and being told they are coming up short.

    Shame on us, the lean manufacturing community, for setting that situation up, but now we have to defend the principles on merit and establish credibility for ourselves rather than using Toyota as a crutch. Hopefully the adversity will sort out some of the practitioners who are still advocating rote copy of the tools and artifacts.

    So, no, the principles are not flawed, not unless you didn’t believe in scientific thinking to begin with. It is a fallacy to confuse failure to adhere to the principles with failure of the principles themselves. The truth has always been that the Toyota Production System defines an ideal, and Toyota’s practice, like everyone else’s, comes up short sometimes.

    So what will happen?

    I can imagine that consultants the world over are figuring out how to re-brand their offerings to show how they “close the gaps” in the Toyota Production System to go “beyond lean.”

    Meanwhile, though, those who are grounded are going to have to get more grounded. Stay focused on the process, the objectives, what is happening right in front of you. Ask the same questions. Tighten up on your teaching skills because the concepts are going to have to make sense in the here and now. No longer will they be blindly accepted because “That is how Toyota does it.”


    Information Transfer Fail

    While the dentist was looking over my x-rays, he saw something he would like checked out by a specialist. He used words like “sometimes they..” and “might be…” when describing the issue he saw.

    I get a referral. The information on the referral slip is the name of the referring dentist (which I can’t read), no boxes checked, and “#31” in the comments.

    I call the specialist and start getting technical questions about what my dentist wants them to look at / look for, etc.

    So the process is to use the patient as a conduit for vaguely expressed (in layman’s terms) technical information between highly trained specialists.

    Sadly, I think this happens all of the time in the health care industry. It seems that there is so much focus on optimizing the nodes that nobody really “gets” that the patient’s experience (and ultimately the outcome of the process) is defined more by the interactions and interfaces than it is by the nodes themselves.

    I am really not sure how fundamentally different this is from a pilot asking a passenger to find the maintenance supervisor and tell the mechanic about a problem with a plane.

The net effect is, as I am writing this, the specialist’s office is calling the referring dentist and asking them what, exactly, they want done… a net increase of 100% in the time involved for all parties to communicate.

While the national debate is about how we pay for all of this, we aren’t asking why it costs so much (or why medical error kills more people than automobile accidents do).

    Get Your Ducks In A Row For Lean Accounting

I have known Russ Field since working with him on a few projects at a large Seattle-based (now Chicago-based) aerospace company. Recently he posted a very (typically Russ) thorough reply on NWLEAN to a question about value stream accounting. I asked him to take the same basic material, clean it up a little, and let me publish it here as a guest post.

    Added Feb 21: There are some good comments to this post as well.

    Enabling Material-Only Costing in Value Streams

    —————– By Russell Field ——————

For some time now, the value stream concept has been a topic of energetic debate. If you choose to implement that approach, you’ll find there is more than one reasonable way to organize value streams, each with its own requirements for management, measurement and performance assessment.

This discussion centers on the value stream design described in three excellent books:

    1. Practical Lean Accounting by Brian Maskell and Bruce Baggaley (“PLA”)
    2. Who’s Counting? by Jerrold Solomon (“WC”)
    3. Accounting for World Class Operations by Jerrold Solomon and Rosemary Fullerton (“AWCO”)

    Aside from my own experiences and observations, these works are the primary references for this article. I recommend them highly for anyone wanting to better understand the concepts and related impacts on the Finance function as a business “leans out”.

NOTE: This article is not an endorsement of this particular value stream form. Rather, it is an examination of its enabling and prerequisite conditions.

The really short message, as in so many things, is “Don’t get the cart before the horse!” In this case, if material-only cost accounting procedures (discussed later) are implemented before the factory processes have been realigned and proven, the best that can be expected is a different flavor of misrepresentation.

    First, though, some baseline thoughts.

    “TRADITIONAL STANDARD COSTING” vs. COSTING OF (LABOR) STANDARDS

    Remember, words have meaning. I’ve seen many discussions derailed because of confusion between these two phrases. The philosophy and practice of “Traditional Standard Costing” is not the same as the “costing of (labor) standards”.

    The question is not whether I should know the cost of one widget, or the labor content (time) of that widget, or even the labor contribution ($$) to the cost of that widget; rather, the questions are how I should determine the hourly rate ($$) to apply to the labor content (time), and how I should account for other costs of producing that widget.

    BUT – even those questions become academic in a value stream where all products have the same labor content and where all costs are contained within the value stream (more on that later); that’s when we can start talking about the average cost per unit at the value stream level.

    “LEAN ACCOUNTING” vs. “ACCOUNTING FOR LEAN”

    As described in AWCO (pg. 36), these are two different concepts.

    “Lean Accounting” refers to the use of Lean tools and techniques to make the accounting process more efficient.

    “Accounting for Lean” “… represents an accounting process that captures the benefits of a Lean implementation as well as motivates Lean behavior.”

    MATURITY PATH OF “ACCOUNTING FOR LEAN”

    Both PLA (Chapt. 2; pg. 141 et al) and WC (pg. 165 et al) make the point that changes in accounting techniques should be made in conjunction with or immediately following the successful implementation of Lean procedures on the shop floor (we won’t get into Service vs. Production in this article). To change the cost accounting processes BEFORE leaning out Operations merely confuses the situation and causes unnecessary churn.

    That said, let’s proceed.

    In my opinion, the ability to successfully implement and sustain the accounting techniques described in the books noted earlier is dependent upon getting the process “ducks” into four rows:

    1. Organization by value stream
    2. Elimination of task-level labor tracking
    3. Stabilization of overall value stream-level labor costs
    4. Lowering of inventory levels

    Underlying each of these “duck rows” is, of course, a set of enabling conditions.

    DUCK ROW #1) Organization by value stream

    There’s plenty of material out there on value stream mapping, so for this discussion let’s just say there are some key characteristics:

    1. Similar process flows (or “routings”);
    2. Similar production cycle times (AKA “work content”);
    3. Similar physical size of product;
    4. Ideally, personnel dedicated to the value stream; and
    5. Again ideally, no “monuments” or shared resources (there are, of course, ways of dealing with shared resources and personnel, but I did say “ideally”).

    In other words, this approach emphasizes segregation of product families with high similarity in multiple categories. What does all this buy us?

    a) If every product follows the same process flow or path, physical segregation and rearrangement is much simpler. Additionally, there is a high likelihood that each product will use each resource (or resource type), further reducing the need for cost allocation between product lines/families.

    b) If each product takes about the same amount of time at each task/resource, then the overall work content of all products is about the same.

    c) If product size is similar, that helps keep the number of people needed and the need for additional, product-specific moving/handling equipment to a minimum (and supports similar work content).

    d) & e) If people and equipment aren’t shared outside the value stream, then all of their costs can be attributed to the value stream.

    NET RESULTS:

    1. The major sources of cost are captive within the value stream, so the need for allocation is minimized if not eliminated.
    2. Every product in the value stream population takes about the same flow time and has about the same total work content (which also minimizes allocation requirements).
    3. Some key sources of variation in that flow time and labor content are sorted out of the value stream by design.

    DUCK ROW #2) Elimination of task-level labor tracking

    There are two sub-elements here: “Stabilize task cycle times” and “Establish a common wage structure”. In order to reach these goals, we must create certain conditions.

    a) Stabilize task cycle times

    In order to create an environment in which a given task takes the same amount of time and effort regardless of who executes it, we need:

    1. Stable, high-quality processes (“reasonably under control and low variability”, PLA pg. 140/141 et al); this cuts down on how many times a job needs to be done, and how much input is wasted.
    2. Standard work; standardizing process steps helps assure that the job is done the same way each time.
    3. A cross-trained, multi-skilled workforce; in full implementation, this means that any member of the workforce can step in and execute any task.

    b) Common wage structure

Note that cross-training and multi-skilling not only help stabilize task times, but also aid in breaking down the “craftsman/guild” barriers to a common wage. For that, we need:

    1. A cross-trained, multi-skilled workforce
    2. Simplified processes; among other things, this makes cross-training and multi-skilling considerably easier.

    NET RESULTS:

    1) I no longer need to track how much time was spent executing an individual task; with high quality, standard work and cross-training, I know how long it takes because I know the plan.

    2) I no longer need to track who executed the task in order to determine an appropriate rate (accountability/traceability is another issue), because everyone gets paid pretty much the same (within a job category, at least).

    In short, I no longer need to run all over the production floor, tracking activity durations or costs at the task level. In fact, to the degree that the process paths and work content are very similar, I don’t even need to track them at the Item level.

    DUCK ROW #3) Stabilization of overall value stream-level labor costs

    Once I have eliminated the need for task-level labor tracking, then I need only to stabilize my labor population in order to keep my overall value stream labor costs at a fairly constant level.

The high-quality processes I put in place to stabilize my task cycle times will help by assuring that labor is used only for production (and not rework or re-make). But I also need to level my demand, either artificially within my own walls or by working with the customer base toward that end. (I may also need to level-sequence my product mix if there is still a significant difference in work content between products.)

    In that way, I always have about the same amount of work to be done, and don’t need to bring people in, pull them back out, put them back in, or shake them all about (do the “Production Hokey Pokey”!).

    NET RESULT:

    I have a consistent amount of work to be done in the value stream, so I’m able to maintain a fairly fixed workforce population.

    DUCK ROW #4) Lowering inventory levels

Once on-hand inventory gets down to, or drops below, one overall value stream cycle, product is going out the door very soon after completion. Assuming a FIFO approach, I don’t need to put a lot of effort into tracking my inventory costs because a) they’re at current rates and b) there aren’t many pieces anyway.

    NET RESULT:

    I don’t have to keep separate track of what I have invested in each lot, batch or item. Even radical changes in material costs flush through the value stream very quickly.

    SO:

    IF I don’t have to mess with spreading/allocating costs;

    AND IF every time I make a given product, it takes the same amount of time;

    AND IF all the products I make in my value stream take the same amount of time;

    AND IF anyone who makes it gets paid the same wage;

    AND IF I don’t have to constantly change the size of my labor population,

    THEN I can apply a flat hourly $$ rate – based on total value stream cost – to the product’s work standards (planned work content). In fact, at full maturity I may even be able to simply average my total periodic value stream costs over the number of pieces shipped – AKA “average cost per unit” (PLA pg. 124 et al).
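The “average cost per unit” arithmetic can be sketched in a few lines of Python. All of the figures below are hypothetical, purely for illustration of the mechanics:

```python
# Material-only costing sketch (hypothetical figures).
# Once the value stream is segregated and stable, all non-material
# costs are simply averaged over the units shipped in the period.

period_labor_cost = 180_000.0    # stable, fixed workforce (USD per period)
period_facility_cost = 45_000.0  # equipment, space, support captive to the stream
units_shipped = 1_500

# Flat conversion cost per unit: no task-level tracking, no allocation.
conversion_cost_per_unit = (period_labor_cost + period_facility_cost) / units_shipped

def unit_cost(material_cost: float) -> float:
    """Unit cost = actual material consumed + average conversion cost."""
    return material_cost + conversion_cost_per_unit

print(conversion_cost_per_unit)  # 150.0
print(unit_cost(85.0))           # 235.0 for a unit consuming $85 of material
```

The only input that varies by product is the material cost, which is exactly the point of the material-only approach: everything else has been made constant by design.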

    If my value stream is fully segregated and mature, this is the closest I need to get to “overhead allocation”. I don’t have to account for variation in order quantity, labor expense or the like because I’ve designed/driven out those variations. I know what the labor content is because of the stable processes and high reliability; there’s no rework to account for because of the high quality; a worker is a worker is a worker from both a skill and pay standpoint, and I’m controlling the number of workers on the payroll (NOTE: Worker interchangeability must not be confused with worker dispensability / replaceability; it can take quite a while to adequately cross-train good personnel).

    And the ONLY thing I really need to track is material consumption (you can throw in some consideration for features and characteristics costing if appropriate). Hence the term, “material-only cost system” (WC, pg. 38).

    To recap, in order to enable truly simple, material-only costing we need to:

    1. Organize by value stream, driving out key categories of variation;
    2. Work to eliminate the need for task-level tracking of labor input and rates;
    3. Stabilize the overall value stream labor costs; and
    4. Lower inventory levels to less than one value stream cycle (or 5 days as some suggest).

    To do all that, the fundamental enablers include:

    1. A value stream organized around product families with high similarity in routings, process time/work content, physical size and the ability to segregate personnel and resources
    2. Stable, reliable processes
    3. Standard work
    4. Cross-trained, multi-skilled workforce
    5. Simplified processes
    6. Stable, level demand

    Of course, these enablers presuppose the existence of lower-level enablers (e.g. accurate BOMs and routings, and having appropriate metrics in place).

So what? My whole point is that there’s more to successfully implementing the Lean Accounting techniques described in PLA, WC and AWCO than simply ceasing or starting to gather certain data or collect certain costs. If you already have these enablers in place, then you are better positioned to embrace the related accounting practices.

    NOW – the BIG question is, “How do we manage our business until we can get these enablers firmly in place?”

    I refer you again to the books noted above. All offer some good thoughts on this subject, but I will reiterate the sentiments expressed under the heading of “MATURITY PATH OF ‘ACCOUNTING FOR LEAN'”. Bottom line: “Don’t get the cart before the horse!”

    Your current accounting processes are likely adequate for traditional manufacturing environments, especially those which are still largely mass-production oriented (PLA, pg. 4 et al). Both PLA and WC specifically discuss synchronizing the evolution of Lean Production Operations and “accounting for Lean” (and yeah, I capitalized it).

    Metrics and Customers

    Metrics are a fairly common topic of discussion on the various lean manufacturing forums. One theme that comes up fairly frequently is how to determine “what counts” in this-or-that measurement.

For example, a recent post asked about measuring lead times. The measurement counted only the time from when the order was actually started on the shop floor; it did not include the latent time between when the customer placed the order and when processing actually began.

    A rule of thumb I like to apply in these cases is “What is the customer’s experience?”

    In this case, the customer starts waiting on the order the moment he informs the company he wants something. The customer really doesn’t care whether the order is waiting in order-entry, waiting for the computer to run its batch process, or whether it is stuck in production queues. Time is time from the customer’s viewpoint.

    Of course this doesn’t mean I would ignore the internal components of the lead time… but I would include all of them because that is what the customer experiences.
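The gap between the two measurements is easy to see with a small Python sketch. The timestamps are hypothetical, but the pattern is the common one: the shop’s clock starts at floor release, while the customer’s clock started days earlier:

```python
from datetime import datetime

# Hypothetical order history: the customer's clock starts at order
# placement, not at shop-floor release.
order_placed  = datetime(2010, 2, 1, 9, 0)
order_entered = datetime(2010, 2, 2, 14, 0)  # sat in order entry
floor_started = datetime(2010, 2, 4, 8, 0)   # sat in the production queue
order_shipped = datetime(2010, 2, 5, 16, 0)

internal_lead_time = order_shipped - floor_started  # what the shop measures
customer_lead_time = order_shipped - order_placed   # what the customer experiences

print(internal_lead_time)  # 1 day, 8:00:00
print(customer_lead_time)  # 4 days, 7:00:00
```

Measured internally, this order looks fast; measured from the customer’s chair, most of the lead time was queues the shop never counted.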

    While I am on the topic of metrics, I want to reiterate the importance of also having a specification or standard. It is not enough to simply measure something and graph it. That does nothing but consume people’s time. Each instance must be compared against a specification. “Lead time” doesn’t tell me anything. What I want to know is “Was this order on time?” Yes or no? How late was it? What delayed this one? Why? Then launch into the problem solving process.
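A minimal sketch of that comparison, using hypothetical order data, checks each order against its promise date instead of just graphing an average:

```python
# Compare each order against its promise date rather than graphing a trend.
# Hypothetical data: (order_id, days_late); negative means early.
orders = [("A-101", 0), ("A-102", 3), ("A-103", -1), ("A-104", 5)]

late = [(oid, days) for oid, days in orders if days > 0]
on_time_pct = 100 * (len(orders) - len(late)) / len(orders)

print(on_time_pct)  # 50.0
for oid, days in late:
    # Each late order is a specific problem to investigate, not a data point.
    print(f"{oid}: {days} days late -> what delayed it, and why?")
```

The point is that each late order becomes the start of a problem-solving cycle, rather than one more point on a trend chart.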

    Once a standard is being consistently met it is appropriate to then ask whether you want to set the bar higher. But to try to simply measure your way into excellence, without regard for stability and sources of variation, is an exercise in frustration.

    As you look at the various things you measure, ask yourself if your metrics are reflecting the experience of the internal or external customer. That can help reduce some of the questions about what is “in” or “out” of the measurement.