While we lean practitioners seem to have earned a reputation for disdaining high-speed automation, industries like mass-produced consumables and food and beverage would not be viable without that approach.
These plants are capital intensive, and the main focus of the people is to keep the equipment running. I hinted at some of these things a couple of years ago from the Czech Republic.
Here are some more recent thoughts.
Even though it is about equipment, it is still about people.
This is not a paradox at all. People are the ones who are getting cranky equipment to run, scurrying about clearing jams and removing product that got mangled. Until you have a “lights out” plant, people are critical to keeping things running.
Robust problem solving and improvement skills are more critical.
In a purely manual world, you can get away with burying issues under more people and more inventory.
With interconnected automated equipment, not so much. The hardware has to run. It all has to run or there is no output.
How the organization responds to a technical problem makes the difference between quickly clearing the issue and struggling with it for a couple of hours while everything else backs up.
This is where standard processes are critical, not only for short-term success, but also for capturing new information as it is learned. This is the “chatter as signal” issue I have written about a couple of times.
Quoting from the above link:
Most organizations accept that they cannot possibly think of everything, that some degree of chatter is going to occur, and that people on the spot are paid to deal with it. That is, after all, their job. And the ones that are good at dealing with it are usually the ones who are spotlighted as the star performers.
The underlying assumptions here are:
- Our processes and systems are complex.
- We can’t possibly think of and plan for anything that might go wrong.
- It is not realistic to expect perfection.
- “Chatter is noise” and an inevitable part of the way things are in our business.
Those underlying assumptions say “Our equipment is complicated and difficult to get adjusted. All we can do is try stuff until it runs.”
That assumption lets people off the hook of actually understanding the nuances of the equipment, as well as off the hook of a disciplined approach to troubleshooting. It essentially says “We can’t do anything about it.”
A dark side of this designed ignorance is that the only thing leaders are really able to do is hover about and apply psychological pressure to “do something” or, at best, contribute to the noise of “things to try.”
Neither of those is particularly helpful for an operator who is trying to get the machine running. Both of those actually have a built-in implication that the operator (1) does not know how to do his job, or (2) is somehow withholding his expertise from the situation.
But we get a different result from the alternative assumptions:
On the other hand, the organizations that are pulling further and further ahead take a different view.
Their underlying assumptions start out the same, then take a significant turn.
- Our processes and systems are complex.
- We can’t possibly think of and plan for anything that might go wrong.
- But we believe perfection is possible.
- “Chatter is signal” and it tells us where we need to address something we missed.
What does this look like in practice?
- A known starting condition for all settings, one that is verified.
- A fixed troubleshooting checklist for common problems (that starts with “Verify the correct initial settings.”).
- What things should be verified, and in what sequence? (Understand the dependencies.)
- If a check reveals an issue, what immediate corrective action should be taken?
I would also strongly recommend using the format of a Job Breakdown (from TWI Job Instruction) for all of this. It is much easier to teach, but more importantly, it really forces you to think things through.
Of course, the checklist is unlikely to cover everything, at least at first. But it does establish a common baseline, and documents the limit of your knowledge.
The end of the operator checklist then defines the escalation point – when the operator must involve the next level of help.
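To make this concrete, here is a minimal sketch (in Python) of the logic such a checklist embodies: checks verified in dependency order, each with an immediate corrective action, and the end of the list serving as the escalation point. The machine, settings, and helper functions below are hypothetical, invented purely for illustration; they are not from any specific piece of equipment.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Check:
    """One step of the operator checklist, listed in dependency order."""
    question: str                   # what to verify
    verify: Callable[[], bool]      # returns True when the condition is OK
    corrective_action: str          # immediate action if the check fails


def run_checklist(checks: list[Check]) -> Optional[str]:
    """Walk the checks in sequence and return the first corrective action needed.

    Returning None means every check passed. That is the documented limit of
    our knowledge, and therefore the escalation point.
    """
    for check in checks:
        if not check.verify():
            return check.corrective_action
    return None


# Stubbed machine interface. A real plant would read these from the PLC/HMI.
def read_setting(name: str) -> int:
    return {"feed_speed": 12}.get(name, 0)


def sensor_ok(name: str) -> bool:
    return False  # simulate a mis-seated label roll


# Hypothetical checklist for a labeling machine jam.
label_jam_checklist = [
    Check(
        question="Are the initial settings correct (feed speed = 12)?",
        verify=lambda: read_setting("feed_speed") == 12,
        corrective_action="Reset feed speed to the standard value and restart.",
    ),
    Check(
        question="Is the label stock seated against the guide marks?",
        verify=lambda: sensor_ok("stock_alignment"),
        corrective_action="Re-seat the label roll against the guide marks.",
    ),
]

action = run_checklist(label_jam_checklist)
if action is None:
    print("Checklist exhausted. Escalate to the next level of help.")
else:
    print("Immediate corrective action:", action)
```

The point is not the software; the same structure works just as well on paper as a Job Breakdown. The value is in being forced to spell out the dependencies, the verification for each step, the immediate response, and the point at which the operator escalates.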
It takes robust problem-solving skills (and the willingness to take the time to use them) to develop these processes, but doing so saves a mountain of time and pays back many times over.
The alternative is taking the time to mess with things until they sort of work, and never really understanding what was done or what had an effect – every single time there is an issue.
Cry once, or cry every day.
What does this have to do with improvement?
The obvious answer is that, if done well, it will save time.
The more subtle effect is that it sharpens the organization’s knowledge base, as well as its ability to really understand the nuances of the equipment. But this must be done on purpose. It isn’t going to happen on its own.
By getting things up and running sooner, and by reducing the duration of stoppages, it increases equipment capacity.
But more importantly, all of this increases people capacity.
It gives people time to think about the next level of problems rather than being constantly focused on simply surviving the workday. Of course you need the right organizational and leadership structure to support that.