Simon Sinek – Remote Teaming Tips

How Remote Teams Can Connect Meaningfully – Simon Sinek – March 20, 2020

We are all being pushed into the zone beyond our knowledge base right now – having to rapidly adapt and adjust to different ways of working together.

This morning Craig Stritar forwarded a cool little video to me from Simon Sinek’s YouTube channel. In it Steve Shedletzky, a member of Simon’s team, introduces their weekly huddle – a way that this team, which has been working remotely for years, maintains their connection to one another.

One of the keys here is that this meeting is not a conversation about the business at hand. There are other meetings for that. This one is intended to strengthen the social bonds of the group.

They dedicate 75 minutes a week to this task. The video is a condensed version to give us a taste of their structure.

And it is structure that makes it work. It is structure that makes sure no individual dominates the conversation, and structure that keeps it from becoming the kind of wide-ranging conversation that happens over beer and pizza.

It is structure that gives them the freedom to hear and be heard.

With that – here is the video.

For those who can’t see the embedded video, here is the YouTube link: https://youtu.be/tKEtm3HCrsw

The Cancer of Fear

[Image: Nandu River Iron Bridge corrosion]

I am sitting in on a daily production status meeting. The site has been in trouble meeting its schedule, and the division president is on the call.

The fact that a shipment of material hadn’t been loaded onto the truck headed to an outside process comes up. The actual consequence was a small delay, with no impact on production.

The problem was brought up because bringing up process misses is how we learn what we need to work on.

The division president, taking the problem out of context, snaps and questions the competence of the entire organization. The room goes quiet; a few words are spoken in an attempt to smooth over the awkwardness. The call ends.

The conversation among those managers for the rest of that day, and the next, was more about how to carefully phrase what they say in the meeting, and less about how to surface and solve problems.

This is understandable. The division president clearly didn’t want to hear about problems, failures, or the like. He expected perfect execution, and likely believed that by making that expectation loud and clear he would get perfect execution.

That approach, in turn, now has an effect on every decision as the managers concern themselves with how things will look to the division president.

Problems are being discussed in hallways and in side conversations, but not written down. All of this is an unconscious but focused effort to present the illusion that things are progressing according to plan.

Asking for help? An admission of failure or incompetence.

This, of course, gets reflected in the conversations throughout the organization. At lower levels, problems are worked around, fixes are improvised, and issues accumulate and fester until they cannot be ignored.

Then they bubble up to the next level, and another layer of paint is plastered over the corrosion.

Until something breaks. And everyone is surprised – why didn’t you say anything? Because you didn’t want us to!

In a completely different organization, there were pre-meetings before the meeting with the chief of engineering. The purpose of these pre-meetings was to control what things would be brought up, and how they would be brought up.

The staff was concealing information from the boss because snap reaction decisions were derailing the effort to advance the project.

And in yet another organization they are getting long lists of “initiatives” from multiple senior people at the overseas corporate level. Time is being spent debating whether a particular improvement should be credited to this-or-that scope. Is this a “value improvement,” is it a “quality improvement,” is it a “continuous improvement” project?

Why? Because these senior level executives are competing with one another for how much “savings” they can show.

Result at the working level? People are so overwhelmed that they get much less done… and the site leader is accused of “not being committed” to this-or-that program because he is trying to juggle his list of 204 mandated improvement projects and manage the work of the half-a-dozen site people who are on the hook to get it all done.

And one final case study – an organization where the site leader berates people, directly calls them incompetent, diminishes their value… “I don’t know what you do all day”, one-ups any hint of expert opinion with some version of “I already know all of that better than you possibly could.”

In response? Well, I think it actually is driving the staff to unite as a tight team, though perhaps not for the reasons he expects. They are working to support each other emotionally, as well as running the plant the way they know it should be run in spite of this behavior.

He is getting the response he expects – people are not offering thoughts (other than his) for improvements, though they are experimenting in stealth mode in a sort of continuous improvement underground.

And people are sending out resumes and talking to recruiters.

This is all the metastasized result of the cancer of fear.

Five Characteristics of Fear Based Leaders

Back in 2015 Liz Ryan wrote a piece in Forbes online called The Five Characteristics of Fear Based Leaders.

In her intro, Liz Ryan sets out her working hypothesis:

I don’t believe there’s a manager anywhere who would say “I manage my team through fear.”

They have no idea that they are fear-based managers — and no one around them will tell them the truth!

And I think, for the most part, this is true. If I type “how to lead with fear” into Google I get, not surprisingly, no hits that describe the importance of intimidation for a good leader – though there are clearly leaders (as in my example above) who overtly say that intimidation is something they do.

My interpretation of her baseline would be summarized:

People who use fear and intimidation from a position of authority are often tying their own self-esteem to their position within that bureaucratic structure. Their behavior stems from their need to reinforce their externally granted power, because they have very little power that comes from within.

They are, themselves, afraid of being revealed as unqualified, or making mistakes, or uncertain, or needing help or advice.

I have probably extended a bit of my own feelings into this, but it is my take-away.

She then goes on to outline five characteristic behaviors she sees in these “leaders.” I’ll let you read the article and see if anything resonates.

Liz Ryan’s article is, I think, about how to spot these leaders and avoid taking jobs working for them.

This post is about how the organization responds to fear based leadership.

The Breakdown of Trust

A long time ago, I wrote a post about The 3 Elements of “Safety First”. Today I would probably do a better and more nuanced job expressing myself, but here is my key point:

If a team member does not feel safe from emotional or professional repercussions, it means they do not trust you.

Fear based leadership systematically breaks down trust, which chokes off the truth from every conversation.

Here is my question: Do you want people to hide the truth?

If the answer is “No,” then the next question is “What forces in your organization encourage them to do so?” because:

Your organization is PERFECTLY designed to produce the BEHAVIORS you are currently experiencing.

– VitalSmarts via Rich Sheridan

Eliminating Key Points

TWI Job Instruction is a structured process for breaking a job or task into teachable elements, and a 4 step process for teaching that job to someone.

I am not going to try to explain everything about breaking down a job here; this post is primarily for people who already use job breakdowns and job instruction.

The process of breaking down a job involves:

  • Identifying the Important Steps – the sub-tasks that materially advance the work.
  • Then identifying Key Points within important steps. These are things which the team member must pay special attention to, or perform a specific way. The guideline is that a key point is something which:
    • Is a safety issue – could injure the worker, or someone else.
    • Would “make or break the job” – critical to quality or the outcome of the work.
    • Is a “knack” or special technique that an experienced team member would use to make the job easier to do.

Breaking down a job this way takes practice, but it is a great way to identify the elements that are critical to safely getting a quality job done, and ensuring that the team member understands them.
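
If it helps to see the shape of a breakdown in one place, here is a minimal sketch of one expressed as a small data structure – written in Python only because it is compact. The job, the steps, and the key points are invented for illustration; this is not part of the TWI material itself.

from dataclasses import dataclass, field
from typing import List

@dataclass
class KeyPoint:
    what: str   # what to pay special attention to, or how to do it
    why: str    # safety, make-or-break, or knack – the reason behind it

@dataclass
class ImportantStep:
    description: str      # a sub-task that materially advances the work
    key_points: List[KeyPoint] = field(default_factory=list)

# A hypothetical breakdown, for illustration only.
breakdown = [
    ImportantStep(
        description="Seat the part in the fixture",
        key_points=[
            KeyPoint(what="Locating pin fully engaged",
                     why="Make or break: the part is machined off-center otherwise"),
            KeyPoint(what="Keep fingers clear of the clamp path",
                     why="Safety: the clamp closes automatically"),
        ],
    ),
]

for step in breakdown:
    print(step.description)
    for kp in step.key_points:
        print(f"  Key point: {kp.what} ({kp.why})")

The only point of the structure is that every key point hangs off an important step and carries its reason with it.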

But I want to propose this is only the first step.

Every key point is something the team member must remember in order not to get hurt (or hurt others), to avoid scrapping or damaging something, and to perform at the level you need.

Take it to the next level.

Think of these key points, and the training they drive, as temporary countermeasures. They are stopgaps you have in place until you can do something more robust.

Take one key point at a time. Why is it necessary? Why is it possible to do this step any way but the best way?

Can you alter the work environment – the product, the process, the equipment, the visual controls to reduce the things the team member has to remember (and you have to remember to teach)?

Can you make it impossible to perform that step in any way other than the way it should be done?

If you can’t, can you make it impossible to proceed until the error is corrected, before any harm is done?

If you can’t, can you make it so obvious that it is impossible to miss?

If you can’t, can you put in a robust reminder? (Signs and placards generally DON’T WORK for this unless they are especially “sticky.”)

Every key point is something you have identified as critical to doing the job safely and correctly. Therefore every key point should drive a focused effort to mistake-proof the work.

You want to have as few as possible… but no fewer.
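
To make the hierarchy above concrete, here is a small, purely hypothetical sketch (in Python) of the “make it impossible to proceed until the error is corrected” level: a machine cycle that refuses to start until a made-up fixture sensor confirms the part is seated. The sensor, the threshold, and the cycle are assumptions for illustration, not a real control system.

def part_seated(sensor_reading: float, threshold: float = 0.95) -> bool:
    """Pretend proximity-sensor check that the part is fully seated."""
    return sensor_reading >= threshold

def start_cycle(sensor_reading: float) -> bool:
    """Interlock: block the cycle rather than rely on memory of the key point."""
    if not part_seated(sensor_reading):
        print("Cycle blocked: part not seated. Correct it and try again.")
        return False
    print("Cycle started.")
    return True

start_cycle(0.40)   # blocked – the error must be corrected before any harm is done
start_cycle(0.98)   # proceeds

Once something like this is in place, the team member no longer has to remember that key point – the process enforces it.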

Job Instruction for Risk Reduction

I stumbled across this PDF file on The Hanover Group’s web site:

“Job Instruction Training (JIT): Controlling Your Workers’ Compensation Costs Through a Better Work Environment.”

The page essentially summarizes the contents of the TWI Job Instruction pocket card.

There is a reference at the bottom saying to “Access our policyholder education safety series online at www.hanover.com” but I can’t find the link on the main site. It might be buried in a section only available to policy holders, or this PDF might be an orphan page that the Google bot found.

Regardless of the backstory on the Hanover site, it indicates that at least someone in the insurance industry understands the importance of consistent training as a prerequisite to consistent job performance, which in turn is a prerequisite to consistent job results.

Remember: Your front line leaders are teaching every day. If you want to know what they are teaching, go look at what their people are doing.

Also remember that your managers are teaching the front line leaders what is important. If you want to know what the managers consider important, go look at what the front line leaders emphasize.

If you don’t like what you see, consider changing what, and how, YOU are teaching. But we discussed that a while ago.

Safety and Lean Manufacturing

This is a (belated) response to a post from Patsi Sells on The Whiteboard. She asked about safety and kaizen.

When first implementing some of the tools and mechanics of the TPS (especially in a manufacturing environment), many of the initial efforts seem to run afoul of the industrial safety professionals. My experience suggests a couple of basic causes.

  • Safety countermeasures are not seen as necessarily directly contributing to production of the product. Thus, they are often lumped into “not value added” or “waste” – at least by implication, if not directly – by over-enthusiastic kaizen event leaders.
  • Safety professionals can be stuck on specific countermeasures vs. looking at the actual risk and identifying other possible countermeasures.
  • Safety professionals are concerned (and rightly so) about repetitive motion injury. They perceive that takt driven standard work might increase the risk of this.
  • There is overall a general lack of understanding of the difference between regulatory compliance and keeping people safe. While these two things overlap, neither is necessary nor sufficient to assure the other.

Let me start with the last point since it probably raised the most eyebrows.

An industrial safety program has two distinct and discrete objectives:

  1. To keep people from getting hurt.
  2. To comply with all laws and regulations.

As I said above, while meeting both of these objectives is absolutely necessary, meeting the requirements of one does not guarantee meeting the requirements of the other. To put it more directly –

  • It is possible to be fully compliant with all health, safety and environmental regulations and still have a workplace which is extremely dangerous.
  • It is possible to have a very healthy, safe and environmentally friendly workplace and still run afoul of laws and regulations.

Thus, it is necessary to ensure that each of these things is discretely addressed in any successful program.

It is important for both kaizen and safety professionals to be aware of this. Some things are simply required by law, even if they are wasteful, and sometimes interpretation is up to the whim of a local inspector who might be having a bad day. Whatever the case, for each regulatory requirement there must be a specific, targeted countermeasure, just as there must be one for each physical risk. Sometimes these things overlap, but sometimes they do not, and that is the key point here.

I will digress for a paragraph and make this point, however: if you have the basics of workplace organization in place, you make a much better first impression on a health and safety inspector than if the place is a mess. If the fire department sees that the extinguishers are all where they should be and up to date, and that access aisles and evacuation doors are clear, the inspector is more inclined to assume you have your basic act together. Everyone has some violations, but when they are blatant, simple things, you start off with a bad impression, and inspectors usually start digging.

Moving beyond simple regulatory compliance, the objectives of “lean manufacturing” and safety are 100% congruent. Ethically it is never acceptable to knowingly put people in harm’s way for the sake of production. At a more pragmatic level, a hazardous workplace will adversely affect quality, production, cost, delivery and morale. I will not descend to the level of cost-justifying safety simply because it isn’t necessary. But it is equally true that an unsafe workplace has higher costs than a safe one.

The good news is that the kaizen process is incredibly effective at dealing with safety hazards.

Moving up the list above, one of the biggest places where the safety professionals can help smooth out the work is with their expertise in ergonomics.

What the kaizen people should take a little time to understand is this: Motions with poor ergonomics nearly always take longer than motions with good ergonomics. To put it a little more clearly: Ergonomic improvements are kaizen. Once a good team understands the difference between good and bad ergonomics, they can quickly see many small improvements which all accumulate.

By standardizing the method, we standardize the good ergonomics. Where there are no standard methods, the Team Members will each develop their own which may, or may not, be the safest way to do the job.

Standardizing the right motions reduces the chance of repetitive motion injury. And paying attention to why unusual motions are necessary comes back to reducing variation and overload (muri, mura) in the workplace.

At the next level up, standard work is your best friend.

To develop any process you first take a good look and specify exactly what result you expect to achieve.
Define a defect-free outcome.
Part of that definition is “perfectly safe.” No one is going to argue with this. Of course you want a process that is perfectly safe, and produces a defect free outcome. Who wants the opposite? But until “defect free” and “perfectly safe” are explicitly defined or specified, there may be room for interpretation. Do this before working on the process steps.

Next define the work you believe will deliver a perfectly safe, defect-free outcome. Define the content, sequence and timing of the work.

Now you have a specification for content, sequence, timing and outcome – the four elements of activity that Steven Spear called out in “Decoding the DNA of the Toyota Production System.”

Next – try it. Run the process exactly as you specified it. Verify two things:

  • That the Team Member can perform the process as specified – there is nothing keeping him from doing it.
  • That the Team Member actually does it that way – and put in guides, controls, and mistake-proofing where things go off track.

Check your results.
Was it perfectly safe? Did you see any risks? How are the ergonomics?
Adjust as necessary, then repeat.

Was the result defect free? How do you know? Is there a way for the Team Member (or an automatic step) to verify the result?

In practice, you are “trying it” and “checking the results” not just when you are developing the process, but every time it is done.
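
If it helps to see those four elements written down in one place, here is a minimal sketch (in Python) of a process specification and a check of one run against it. The steps, the times, the tolerance, and the run data are all invented for illustration.

# Hypothetical specification: content, sequence, timing, and outcome.
spec = {
    "content_and_sequence": ["pick part", "load fixture", "run machine", "check result"],
    "timing_seconds": {"pick part": 5, "load fixture": 10, "run machine": 40, "check result": 5},
    "outcome_mm": (9.9, 10.1),   # "defect free" means the dimension lands in this range
}

def check_run(steps_done, step_times, measured_mm):
    """Compare one actual run against the specification and list any gaps."""
    problems = []
    if steps_done != spec["content_and_sequence"]:
        problems.append("content or sequence differed from the specification")
    for step, seconds in step_times.items():
        if seconds > spec["timing_seconds"].get(step, float("inf")):
            problems.append(f"'{step}' took longer than planned")
    low, high = spec["outcome_mm"]
    if not (low <= measured_mm <= high):
        problems.append("the outcome was not defect-free")
    return problems or ["the run matched the specification"]

print(check_run(
    ["pick part", "load fixture", "run machine", "check result"],
    {"pick part": 4, "load fixture": 12, "run machine": 38, "check result": 5},
    10.05,
))

Every gap the check surfaces is a prompt to ask “why?”, not a verdict on the Team Member.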

If a defect is produced at some point, then you know either:

  • The process didn’t work as you expected.
  • For some reason (which you don’t know yet), the Team Member did something different, or omitted a step.

Likewise, if the Team Member so much as skins a knuckle, you know the same things: either the process is not perfectly safe, or the process wasn’t (or couldn’t be) followed.

In most cases where the process was not followed you likely had a Team Member doing something in good faith to “get the job done.” This is the time to gently remind him that, when he can’t follow the process as designed, to please pull the andon and let someone know. Maybe you will have to make an exception to correct the current situation, but you HAVE to know if the process isn’t working or is unworkable.

Bottom line?
You get perfect safety exactly the same way you get perfect quality.
The methods and approach for getting it, and the methods and approach for correction and countermeasure are exactly the same.

Remember:
The right process produces the right result.

If you aren’t getting the result you want, then take a look at the process.

What Nukes – a little more clear.

I re-read my “What Nukes?” post and realized I was really rambling. I want to reiterate a key point more clearly because I think it is important.

In the “Bad Apple” theory there is an implied assumption that the cause of an accident or other problem was one person who, at that moment in time, was not following the documented rules or procedures.

Except in the most egregious cases, such as deliberate misconduct, that is likely not the case. Most organizations have a set of “norms” that operate at some level of violation of the written or established procedures. The reasons for this are many, but usually it is because good people are doing the best they can, in the conditions they are given, to get the job done.

Failure to follow the rules does not, by itself, result in an accident or incident.

Have you ever run a red light or a stop sign? It happens thousands of times every day. It almost never results in an accident. Only when other contributing conditions are ripe will an accident result: running a stop sign AND a car coming through the intersection.

The same goes for quality checks, and the more reliable an “almost 100%” process becomes, the more vulnerable you are. If a defect is only rarely produced, it is unlikely that any kind of human-based inspection will catch it. The faster the work cycle, the more this is true. The mind numbs, it is impossible to always pay attention to the detail, and the mind sees what it expects. “Failure to pay attention” is never an adequate root cause. It is blaming an unlucky Team Member for an omission that everyone makes every day just going through life. It is just, in this case, “there was a car coming through the intersection.” It is bad luck. It is being blamed for red beads in Deming’s paddle experiment.

So attributing the failure to an individual, while easy, avoids the core issue:

People’s failure in critical processes is a SYSTEM PROBLEM. You must investigate from the viewpoint of the person at the pointy end. What did he see? What did he perceive? What did he believe was happening, and why was that belief reasonable given his interpretation of the circumstances at the time?

The post about “sticky visual controls” got at this. Your mistake-alerts or problem signals must penetrate consciousness and demand attention if they do not actually shut down the process.

What Nukes?

Cruise Missiles

Warning to Reader: This piece has a lot of free-association flow to it!

Oops. A few weeks ago a story emerged in the press that a B-52 had flown from North Dakota to Louisiana with half-a-dozen nuclear-armed missiles under its wing. The aircrew thought they were transporting disarmed missiles. This is a rather major uh-oh for the USAF since, in general, they are supposed to keep track of nuclear warheads. (Yeah, I am understating this. I can, by the way, speak from a small amount of experience, as I once held a certification to deal with these things, so I have some idea how rigorous the procedures are.)

Normally the military deals with nuclear weapons issues with a simple “We do not confirm or deny…” but in this case they have released an unprecedented amount of information, including a confirmation that nukes were on a particular plane in a particular location at a particular time.

The news story about the report described a culture of casual disregard for the procedures – the standard work – for handling nukes. I quote the gist of it here:

A main reason for the error was that crews had decided not to follow a complex schedule under which the status of the missiles is tracked while they are disarmed, loaded, moved and so on, one official said on condition of anonymity because he was not authorized to speak on the record.

The airmen replaced the schedule with their own “informal” system, he said, though he didn’t say why they did that nor how long they had been doing it their own way.

“This was an unacceptable mistake and a clear deviation from our exacting standards,” Air Force Secretary Michael W. Wynne said at a Pentagon press conference with Newton. “We hold ourselves accountable to the American people and want to ensure proper corrective action has been taken.”

So what’s the point, and what has this got to do with lean manufacturing?

The right process produces the right result.

As true as this is, it isn’t the point. The point is that the Airmen didn’t follow the procedures. And now the Air Force will apply the “Bad Apple” theory, weed out the people who are to blame, re-emphasize the correct procedures everywhere else, and call it good.

How often do you do this when there is a quality problem, an accident or a near miss? How often do you cite “Human Error” or “not following procedures” or “didn’t follow standard work” as a so-called root cause?

You need to keep asking “why” some more, probably three or four more times.


To this end, I believe Sidney Dekker’s book “The Field Guide to Understanding Human Error” should be mandatory reading for all safety and quality professionals.

Dekker has done most of his research in the aviation industry, and mostly around accidents and incidents, but his work applies anywhere that people’s mistakes can result in problems.

In the USAF case cited above, there was (according to the reports in the open press) a culture of casual disregard for the established procedures. This probably worked for months or years because there wasn’t a problem. The “norms” of the organization differed from “the rules” and I would speculate there was considerable peer pressure, and possibly even supervisory pressure, to stick with the “norms” as they seemed to be adequate.

Admittedly, in this case, things went further than they normally do, but let’s take it away from nuclear weapons and into an industrial work environment.

Look at your fork truck drivers. Assuming they got the same training I did, they were taught a set of “rules”: always fasten the seat belt, manage the weight of the load, keep speed down and under control, and check what is behind and to the sides before starting a turn (the rear end swings out – the opposite of a car). All of these things are necessary to ensure safe operation.

Now go to the shop floor. Things are late. The place is crowded. The drivers are under time pressure, real or perceived. They have to continuously mount and dismount. The seatbelt is a pain. They get to work, have the meeting, then are expected to be driving, so there is no real time for the “required” mechanical checks. They start taking little shortcuts in order to get the job done the way they believe they are expected to do it. The Rules become supplemented by The Norms. This works because The Rules apply an extra margin of safety that is well above the other random things that just happen around us every day. The Norms – the way things are actually done – erode that safety margin a little bit, but normally nothing happens.

Murphy’s Law is wrong. Things that could go wrong usually don’t.

The “Bad Apple” theory suggests that accidents (and defects) are the fault of a few people who refuse to follow the correct procedures. “If only ‘they’ followed ‘the rules’ then this would not have happened.” But that does not ask why they didn’t do it that way.

Recall another couple of catastrophes: we have lost two Space Shuttle crews to the same problem. In both the Challenger and Columbia accident reports, the investigators cite a culture where a problem which could have caused the loss of an airframe happened frequently. Eventually concern about it became routine. Then, one time, other factors came into play, what usually happens didn’t happen, and we were left wringing our hands about what happened this time. The truth is it nearly happened every time. But we don’t see that, because we assume that every bad incident is an exception – the result of something different this time. In reality, it is usually just bad luck in a system which had eroded to the point where luck was relied upon to ensure a safe, quality outcome. In this case they didn’t single out “bad apples,” because the investigations were actually done pretty well. Unfortunately the culture at NASA didn’t adjust accordingly. (Plus space flight involves the management of unimaginable amounts of energy, and sometimes that energy goes where we don’t want it to.)

So – those quality checks in your standard work. Do you have explicit time built into the work cycle to do them? Are your team members under pressure, real or perceived, to go faster?

What happens if there is an accident or a defect? Does the single team member who, today, was doing the same thing that everyone does every day get called out and blamed? Just look at your accident reports to find out. If the countermeasure is “Team Member trained” or “Team Member told to pay more attention” or just about anything else that calls out action on a single Team Member then… guilty.

What about everybody else? Following an incident or accident, the organization emphasizes following The Rules. They put up banners, have all-hands meetings, maybe even tape signs up in the work place as reminders and call them “visual controls.” And everything goes great for a few weeks, but then the inevitable pressure returns and The Norms are re-asserted.

Another example: Steve and I were watching an inspection process. The product was small and composed of layers of material assembled by machine. Sometimes the machine screwed up and left one out. More rarely, it screwed up and doubled something up. As a countermeasure, the Team Member was to take each item and place it on a precise scale, note the weight, and compare the weight to a chart of the normal ranges for the various products.

There were a couple of problems with this. First, the human factors were terrible. The scale had a digital readout. The chart was printed and taped to the table. The Team Member had to know what product it was, reference the correct line on the chart, and compare a displayed number with a set of printed numbers expressed to two decimal places. So the scale might say “5.42” and she had to verify whether that was in or out of the range of “5.38 – 5.45”.

Human nature, when reading numbers, is that you see what you expect to see. You might only register that a number was different five or six reads later. So telling the Team Member to “pay more attention” if she made a mistake was unreasonable. Remember, she is doing this for a 12-hour shift. There is no way anyone could pay attention continuously in this kind of work. If a defective item got through, though, there would be a root cause of “Team Member didn’t pay attention.” She is set up to fail.

But wait, there’s more!

She was weighing the items two at a time, mentally dividing the weight by two, and then looking it up. Even if she was very good at the mental math and had the acceptable range memorized, that isn’t going to work. Plus, and this is the key point, in the unlikely but possible scenario where the machine left out a layer in one item, then doubled up the next, the net weight of the two defective items together would look just fine.
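
To put numbers on that, here is a small illustration (in Python). The acceptable range (5.38 – 5.45) comes from the story above; the two defective weights are invented to show the arithmetic.

LOW, HIGH = 5.38, 5.45

def item_ok(weight: float) -> bool:
    """Per-item check: is this single item within the acceptable range?"""
    return LOW <= weight <= HIGH

missing_layer = 5.02   # hypothetical weight of an item with a layer left out
doubled_layer = 5.82   # hypothetical weight of an item with a layer doubled up

# Checked one at a time, both defects are caught:
print(item_ok(missing_layer), item_ok(doubled_layer))   # False False

# Weighed two at a time and divided by two, they slip through:
average = (missing_layer + doubled_layer) / 2            # 5.42
print(item_ok(average))                                  # True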

“Why do you weigh two at a time?” Answer: “It’s faster.” This is true, but:

  • It doesn’t work.
  • She doesn’t need to go faster.

Her cycle time for weighing single items was well within the required work pace. But the supervisor was under pressure for more output because of problems elsewhere, and had translated that pressure to the Team Member in a vague “work faster if you can” way. It was the norm in that area, which was different from the rules.

Where is all of this going?

The Air Force has ruined 70 careers as a result of the cruise missile incident. They may have been right to do so, I wasn’t there, and this was a pretty serious case. But the fact that it got to this point is a process and system breakdown, and it goes way beyond the base involved.

Go to your own shop floor. Stand in the chalk circle. Watch, in detail, what is actually happening. Compare it with what you believe should be happening. Then start asking “Why?” and include:

“Why do people believe they have to take this shortcut?”

Standards Protect the Team Members

One of my kaizen-specialists-in-training just came to me asking for help. The Team Members he is working with are not seeing the need to understand sources of work variation.

I hear that a lot, both in companies I have worked in and in the online forums. Everyone seems to think it is a problem in their company, their culture – that they are unique with this problem.

The idea of a unique problem is a variation on “our process / environment / product is different, so ____ won’t work here.” Someday I will make a list of the standard management “reasons why not,” but that isn’t the topic of this post.

I told him:

  1. This is not unique to China, or to this facility. The same resistance always comes up, and nearly always comes up the same way once the Team Members begin to realize we are serious.
  2. There is no way to just change people’s minds all at once.

Here is something to explain to the concerned Team Members: The standard process is there to protect the team member. If there is a problem, and the standard process was followed, then the only focus for investigation can be where the process itself broke down. Countermeasures are focused on improving the strength of the process.

If, on the other hand, the process was not followed (or if there is no process), then the team member is vulnerable. Instead of the “Five Why’s” the investigation usually starts with the “Five Who’s” – who did it? Countermeasures focus on the individual who happened to be doing the work when the process failure occurred.

As you introduce the concept of standard work into an area that is not used to it, it is probably futile to try to tighten down everything at once. The good news is that you really don’t have to.

Start with the key things that must be done a certain way to preserve safety and quality. If they are explained well and mistake-proofed well, there is usually little disagreement that these things are important.

The next step is to make it clear that the above are totally mandatory. If anything gets in the way of doing those operations exactly as specified, then STOP. Do not just work around the problem, because doing so makes you (the Team Member) vulnerable to the Five Who’s inquisition.

If you focus here for a while, you will start to get more consistent execution leading to more consistent output, which is what you want anyway.

Then start looking at consistent delivery, and all of a sudden the concept of variation in time comes into play. Why was this late? The welder ran out of wire, I had to go get some more, I couldn’t find the guy with the key to the locker… Go work on that. At each step you must establish that the point of all of this is to build a system that responds to the needs of the people doing the work.

The 3 Elements of “Safety First”

When we talk about safety, most people consider the context of accidents and injuries. But if we are to achieve a true continuous improvement environment, where everyone fully participates, we have to consider more.

A good way to sum it up is with three elements that all start with ‘P.’

1) Physical Safety

This is what most people think of when we say “safety is the most important thing.” But, aside from the moral, legal and financial imperatives, what are some other reasons why physical safety is important?

Simple. We want our Team Members 100% engaged in performing their work and improving it. We do not want any of their precious mental bandwidth consumed by worrying about whether or not they will get hurt.

This is a far cry from the “blame the victim” approach I have seen: “Root Cause of Accident: Team Member failed to pay attention. Countermeasure: Team Member given written warning.”

A few years ago I was painting my house. I will tell you right now that I am not fond of ladders. (Go figure, I spent three years in the 82nd Airborne, you’d think I would be over it.) There I was up near the top of an extension ladder painting the eaves of the roof. I can tell you that I was paying a lot more attention to staying on the ladder than to where the paint was going. The quality of the job suffered, for sure.

The truth is that a physically safe environment is more, not less, productive. Ergonomically bad motions take more time than good ones. Well-designed fail-safes and guards prevent quality issues and rework. An even, sustainable pace of work reduces disruptions upstream and downstream. Good lighting lets people catch quality issues and mistakes sooner. Reduced noise levels foster communication; high noise isolates people in invisible bubbles.

2) Psychological Safety

Can your Team Members freely share problems and ideas free of concern for ridicule or rejection by their co-workers? Or is it safer for them to keep to themselves? Do you know who the natural leaders are? Do you know who the influencers are? Do you know who the bullies are? Do you know which line leaders people are afraid of? Which co-workers? Don’t kid yourself, it is only the truly exceptional team that does not have these issues. And most teams that move past these issues become truly exceptional. It is something called “trust” but that is just another way of saying “feeling safe being vulnerable.”

3) Professional Safety

This is a deceptively simple concept. The Team Member is not put in fear (real or implied) of losing his job for doing what is expected of him. That sounds so simple. The really obvious example is the Team Member being asked to contribute to saving cycle time (and therefore labor) when he knows that people who become unnecessary lose their jobs. But it goes further.

How often do we expect, by implication, people to short-cut The Rules in order to get something done more quickly? Sidney Dekker has authored a number of books and publications focusing on human error as the cause of accidents. One of his key points is that within any organization there are The Rules, and a slightly (sometimes greatly) lower standard of the norms – the way people routinely do things. The norms are established by the day-to-day interactions and the real and implied expectations placed on people to get the job done.

Well-meaning Team Members, just trying to meet the real or perceived pressures of everyday work, take shortcuts. They do it because they feel they must in order to avoid some kind of negative consequence.

At this point you can hopefully see that these three elements blur together. The work environment and culture play as much a part in a safe work place as the machine guards and safety glasses.

All of these things, together, set the tone for the other things you say are “important” such as following the quality checks (when there is no time built in to the work cycle to do so), and calling out problems (when halting the line means everybody has to work overtime).

One more point – everything that applies to safety also applies to quality. The causes of problems in both are the same, as are the preventions and countermeasures. Do you use the same problem solving approach in both contexts? More about simplifying your standards sometime in the future.