How Do You Look At Problems?

A couple of posts ago, I tried to emphasize “hypothesis testing” as the key, core thinking behind the TPS. For that matter, I think that anyone who truly understands any of the various improvement approaches out there will find the same thinking at the core. Certainly Six Sigma, Theory of Constraints, and TQM are all about surfacing and solving problems. They may use different language, and might insert the initial lever between different bricks, but in the end, the approaches all embrace the same basic thinking.

I’d like to put out there an idea that it is the way problems are regarded and approached that separates “gets it” from “business as usual.”

What Constitutes “a problem?”

In “traditional thinking” a problem is something which disrupts output. It is something serious enough that it cannot be ignored.

In a true continuous improvement mindset, anything that causes variation from the plan, in any way, is “a problem.” Any barrier between the current condition and the idealized world is “a problem.”

What triggers a response?

In “traditional thinking” if output isn’t disrupted, attention goes elsewhere. There is a caveat to this, however. The parable of the “boiling frog” (whether true for actual frogs or not) can drive an ever higher level of numbness as “normalized deviance” sets in.

Since continuous improvement is a process of discovering the ideal process, variation from the plan is new information. It must be investigated and understood. If everything is running smoothly, then the problem solving shifts to the next barrier to higher performance.

What triggers alarm in the organization?

This one may be the most controversial. While “stopped production” is certainly cause for alarm and immediate response, in the traditional thinking world, it is the only thing that really gets people’s attention.

In a thinking and learning organization, I would add to the above: “No problems are apparent.” If there are no andons, no defects, no line stops, no shortages, no disruptions, then there is a BIG problem. I say that because these conditions are impossible, and it is only because your system is totally numb that you would not see them.

Target Condition

Given the above, I think it is safe to offer that silence is equated with “stability” in the traditionally reacting organization. Of course it isn’t stable at all; it is just that there is so much systemic anesthesia that nobody feels anything.

In the continuous improvement mindset, things are running as they should if there is a continuous flow of problems being surfaced and solved. That is the only way to be 100% certain that things are getting better every day.

“Management Commitment”

The term “management commitment” is tossed around as a prime reason for failure of improvement initiatives. There are lots of good reasons for this, but until we really define exactly what leaders need to do every day, stop using euphemisms, and start getting real about leadership’s actual role in this process, we are just putting a crutch under the problem. This is partly “our fault” because we teach the basics very badly. We put top leaders into “kaizen events” but never explicitly link kaizen to daily problem solving. In doing so, we convince them that if only they support enough kaizen events, the organization will be transformed. The logical result is a monthly report on how many kaizen events have been run. Argh.

If we used kaizen events to explicitly teach the core questions, the rules of good process design, and the concept of applying PDCA to everything, we might get more traction. That can be difficult, but maybe if everyone in the industry starts thinking in terms of a few core mantras we might get a chorus going.

Setting Up For Success (or failure)

Remember when, a few short months ago, everyone was too busy taking orders and building up all of that inventory that you see out of your window now? Times have changed.

Then again, very few can claim lack of a “burning platform” now. Platform? Today it is more about getting out of the building alive!

Still, a few organizations are trying to drive change into the way they operate, and many more will fail than succeed.

The reasons why this is true were articulated by John Kotter back in 1995 in his now classic article Leading Change: Why Transformation Efforts Fail.

The short list is:

  1. Establishing a sense of urgency.
  2. Forming a powerful guiding coalition.
  3. Creating a vision.
  4. Communicating the vision. (Over-communicating!)
  5. Empowering others to act on the vision.
  6. Planning for and creating short-term wins.
  7. Consolidating improvements and producing still more change.
  8. Institutionalizing new approaches.

The question, then, becomes “Are you deploying effective countermeasures against these known failure points?”

I would like to share an exercise I used (admittedly improvised as I went) with a company leadership team a few years ago. It ended up really hitting them between the eyes with the gap between their perception and the reality.

Prior to the day, I had them all read the article.

We spent some time discussing and understanding each of the eight points Kotter discusses.

Then I had each of the small sub-teams we had formed break out and score how effectively they felt they were dealing with each of these eight items. For example, how well did they rate themselves on “Communicating the vision”? It was a simple numeric rating, 1-5.

Each sub-team then debriefed the group, and found everyone was pretty close to consensus.

In the meantime, we had another group going through the same exercise. This group consisted of the direct reports of the top leadership team.

We compared the numbers. They were very different.

The leaders rated themselves as being pretty effective. Their direct reports were not so kind. We didn’t do it, but it would have been interesting to do the same thing another level down again.

The leaders gained a decent understanding of the huge gap that existed between what they thought they were doing vs. how it was being read by their staff. What the leaders thought was a clear, crisp “change” message was pretty mushy by the time it was filtered through words vs. actions.

Try it in your organization. Assess yourselves. Then do the same assessment with another group a couple of levels closer to reality. See what you get.


The TPS In Four Words

In the world of science, great discoveries simplify our understanding. When Copernicus hypothesized that everything in the universe does not revolve around the Earth, explaining the motions of things in the sky got a lot easier.

In general, I have found that if something requires a great deal of detail to explain the fundamentals, there is probably another layer of simplification possible.

Even today, a lot of authors explain “lean manufacturing” with terms like “a set of tools to reduce waste.” Then they set out trying to describe all of these tools and how they are used. This invariably results in a subset of what the Toyota Production System is all about.

Sometimes this serves authors or consultants who are trying to show how their process “fills in the gaps” – how their product or service covers something that Toyota has left out. If you think about that for a millisecond, it is ridiculous. Toyota is a huge, successful global company. They don’t “leave anything out.” They do everything necessary to run their business. Toyota’s management system, by default, includes everything they do. If we perceive there are “gaps” that must be filled, those gaps are in our understanding, not in the system.

So let me throw this out there for thought. The core of what makes Toyota successful can be expressed in four words:

Management By Hypothesis Testing

I am going to leave rigorous proof to the professional academics, and offer up anecdotal evidence to support my claim.

First, there is nothing new here. Let’s start with W. Edwards Deming.

Management is prediction.

What does Deming mean by that?

I think he means that the process of management is to say “If we do these things, in this way, we expect this result.” What follows is the understanding “If we get a result we didn’t expect, we need to dig in and understand what is happening.”

At its most basic level, the process of statistical process control does exactly that. The chart continuously asks and answers the question “Is this sample what we would expect from this process?” If the answer to that question is “No” then the “special cause” must be investigated and understood.

If the process itself is not “in control” then more must be learned about the process so that it can be made predictable. If there is no attempt to predict the outcome, most of the opportunity to manage and to learn is lost. The organization is just blindly reacting to events.
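As a rough illustration of that question in code, here is a minimal sketch (not Deming’s own formulation, and not tied to any particular chart type). The limits are the familiar mean plus or minus three standard deviations, and every name and number below is invented:

    # Minimal sketch: "Is this sample what we would expect from this process?"
    # Control limits are estimated from historical, in-control samples.
    from statistics import mean, stdev

    def control_limits(history):
        """Estimate lower/upper control limits from past samples."""
        m, s = mean(history), stdev(history)
        return m - 3 * s, m + 3 * s

    def is_expected(sample, history):
        """Answer the chart's continuous question for one new sample."""
        lower, upper = control_limits(history)
        return lower <= sample <= upper

    history = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0]
    new_sample = 11.4

    if not is_expected(new_sample, history):
        print("Special cause suspected - go investigate and understand it.")

The point is not the arithmetic; it is that the process makes an explicit prediction and checks every sample against it.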

Here is another quote, attributed to Taiichi Ohno:

Without standards, there can be no kaizen.

Is he saying the same thing as Deming? I think so. To paraphrase, “Until you have established what you expect to do and what you expect to happen when you do it, you cannot improve.” The quote is usually brought up in the context of standard work, but that is a small piece of the concept.

So far all of these things relate to the shop floor, the details. What about the larger concepts?

What is a good business strategy? Is it not a defined method to achieve a desired result? “If we do these things, in this way, at these times, we should see this change in our business results.” The deployment of policy (hoshin planning) is, in turn, multiple layers of similar statements. And the hoshins, along with the activities associated with them, are hypothesized to sum up to the whole.

The process of reflection (which most companies skip over) compares what was planned with what was actually done and achieved. It is intended to produce a deeper level of learning and understanding. In other words, reflection is the process of examining the experimental results and incorporating what was learned into the working theory of operation, which is then carried forward.

Sales and Operations Planning, when done well, carries the same structure. Given a sales and marketing strategy, given execution of that strategy, given the predicted market conditions, given our counters to competitors’ moves, we should sell these things at this time. This process carries the unfortunate term “forecasting,” as though we are looking at the weather rather than influencing it, but when done well, it is proactive, and there is a deliberate and methodical effort to understand each departure from the original plan and assumptions.

Over Deming’s objections, “performance management” and reviews are a fact of life in today’s corporate environment. If done well, this activity is focused not on “goals and objectives” but rather on plans and outcomes, execution and adjustment. In other words, leadership by PDCA. By contrast, a poor “performance management system” is used to set (and sometimes even “cascade”) goals, but either blurs the distinction between “plans” (which are activities over time) and “goals” (which are the intended results)… or worse, doesn’t address plans at all. It gets even worse when there are substantial sums of money tied to “hitting the goals,” as the organization slips into “management by measurement.” For some reason, when the goals are then achieved by methods that later turn out to be unacceptable, there is a big push on “ethics,” but no one ever asks, in advance, “How do you plan to do that?” In short, when done well, the organization manages its plans and objectives using hypothesis testing. But most, sadly, do not.

Let’s look at another process in “people management” – finding and acquiring skills and talent, in other words, hiring.

In average companies, a manager who needs to hire puts in a “requisition” to Human Resources. HR, in turn, puts that req out into the market by various means. They get back applicants, screen them, and turn a few of them over to the hiring manager to assess. One of them gets hired.

What happens next?

The new guy is often dropped into the job, perhaps with minimal orientation on the company’s administrative policies, and there is a general expectation that this person is actually not capable of doing the work until some unspecified time has elapsed. Maybe there is a “probation period,” but even that, while it may be well defined in terms of time, is rarely defined in terms of criteria beyond “Don’t screw anything up too badly.”

Contrast this with a world-class operation.

The desired outcome is a Team Member who is fully qualified to learn the detailed aspects of the specific job. He has the skills to build upon and needs only to learn the sequence of application. He has the requisite mental and physical condition to succeed in the work environment and the culture. In any company, any hiring manager would tell you, for sure, that this is what they want. So why doesn’t HR deliver it? Because there is no hypothesis testing applied to the hiring process. Thus, the process can never learn except in the case of egregious error.

If we can agree that the above criteria define the “defect free outcome” of hiring, then the hiring process is not complete until this person is delivered to the hiring manager.

Think about the implications of this. It means that HR owns the process of development for the skills, and the mental and physical conditioning required of a successful Team Member. It means that when the Team Member reports to work in Operations, there is an evaluation, not of the person, but of the process of finding, hiring, and training the right person with the right skills and conditioning.

HR’s responsibility is to deliver a fully qualified candidate, not “do the best they can.” And if they can’t hire this person right off the street, then they must have a process to turn the “raw material” into fully qualified candidates. There is no blame, but there are no excuses.

Way back in 1944, the TWI programs applied this same thinking. The last question asked on the Job Relations card is “Did you accomplish your objective?” The Job Instruction card ends with the famous statement “If the worker hasn’t learned, the teacher hasn’t taught.” In other words, the job breakdown, key points, and instruction are a hypothesis: if we break down the job and emphasize these things in this way, the worker will learn it through the application of this method. If it didn’t work, take a look at your teaching process. What didn’t you understand about the work that was required for success?

I could go on, but I have yet to find any process in any business that could not benefit from this basic premise. Where we fail is where we have:

  • Failed to be explicit about what we were trying to accomplish.
  • Failed to check if we actually accomplished it.
  • Failed to be explicit about what must be done to get there.
  • Done something, but are not sure it is what we planned.
  • Accepted “problems” and deviation as “normal” rather than as an inconsistency with our original thinking (often because there was no original thinking… no attempt to predict).

As countermeasures, when you look at any action or activity, consistently ask a few questions.

  • What are we trying to get done?
  • How will we know we have done it?
  • What actions will lead to that result?
  • How will we know we have done them as we planned?

And

  • What did we actually do?
  • Why is there a difference between what we planned and what we did?
  • What did we actually accomplish?
  • Why is there a difference between what we expected and what we got?

The short version:

  • What did we expect to do and accomplish?
  • What did we do and get?
  • Why is there a difference?
  • What are we doing about it?
  • What have we learned?

Learning To Sensei: LEAN.org

John Shook’s latest column on LEI’s site is about coaching, and whether it is better to give learners the answers or just ask questions.

Asking questions in a way that actually teaches is a skill that we, as a “lean” community, do not foster very well. Certainly in U.S. corporate culture, we are expected to be the experts, and to have the answers. John’s post is summed up well by his last paragraph:

Learning to Sensei: A prerequisite for the apprentice sensei who is learning to not give solutions is to grasp for himself the fact that he doesn’t actually know the solution. Once you grasped that, then it’s very easy to not give “the answer” you simply don’t really have an answer to give. But, while it is not necessary for you to give or even possess “the solution”, you do have an important obligation, which is to give the question or learning assignment in a way that will lead to the learning, with learning as the goal. Once that is accomplished, all sorts of “solutions” will fall out. Then you can experience the joy, liberation, and humility that come with admitting you don’t know.

You can read the whole thing here:

LEAN.org – Lean Enterprise Institute: Coaching and Questions; Questions and Coaching

Now, as an additional value-add…

This really falls under the general notion of “Socratic teaching.” One of the best overviews of what this is really about is Rick Garlikov’s classic piece in which he recounts his experiment with teaching through questions. If you don’t think this can work for difficult topics, then I suggest you read his account of using only questions to teach binary arithmetic to a typical class of third graders. If he can teach 8-year-olds to understand that 0110 + 0011 = 1001, then surely we can get adults to understand why takt time is important for management.

“What are you trying to do?”

“How will you know you have done it?”

TPS Failure Modes – Part 1

Following on from the buzz created by the last couple of posts, I would like to go back in time a bit.

In 2005 Steven Spear wrote a working paper called “Why General Motors Lost and Toyota Won.” A reader can clearly see the emerging themes that were later developed into his book Chasing the Rabbit.

Spear talks about the leverage of being “operationally outstanding” in changing the game, the capabilities he sees in operationally outstanding organizations, and then the “Failure Modes at Becoming Toyota Like.”

I would like to dwell a bit on these failure modes, and reflect a little on what they look like.

Failure Mode 1: Copy Lean Tools Without Making Work Self Diagnostic

So what does “self diagnostic” mean?

For something to be “self diagnostic” you need two pieces of information:

  • What is supposed to happen.
  • What actually happens.

Then you need some mechanism that compares the two and flags any difference between them. This sounds complicated, but in reality, it isn’t.
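To show how little machinery that takes, here is a minimal sketch. Everything in it (the field names, the task sequence, the times) is hypothetical; the point is only the shape of the comparison:

    # A self-diagnostic check needs only: what is supposed to happen,
    # what actually happened, and a comparison that flags any difference.
    expected = {"task_sequence": ["pick", "place", "fasten", "inspect"],
                "cycle_time_sec": 55}

    actual = {"task_sequence": ["pick", "fasten", "place", "inspect"],
              "cycle_time_sec": 61}

    def flag_differences(expected, actual):
        """Return every way the actual work differed from the plan."""
        problems = []
        for key, planned in expected.items():
            observed = actual.get(key)
            if observed != planned:
                problems.append(f"{key}: planned {planned}, observed {observed}")
        return problems

    for problem in flag_differences(expected, actual):
        print("Surface this:", problem)

A real check would allow for normal variation in the times, but the structure is the same: specify, observe, compare, flag.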

Let’s look at a common example: At the most basic level, this is the purpose of 5S.

5S as a Self Diagnostic Tool

In 5S, you first examine the process and seek to understand it. Then, applying your best understanding, you decide what is necessary to perform the process, and remove everything else. Applying your best understanding again, you seek to locate the items where they would be used if the process goes the way you expect it to.

You improve the self-diagnostic effect by marking where things go, so it is immediately clear if something is missing, out of place, or excess.

Then you watch. As the process is carried out, your “best understanding” is going to be tested.

Can it be done with the things you believed were necessary? If something else is required, then you have an opportunity to improve your understanding. Get that item, determine where it is used, and carry out the operation. But go a little deeper. Ask why you didn’t realize this was needed when you examined the process. Challenge your process of getting that initial understanding, and you improve your own observation skills.

Is everything you believed was necessary actually used? If not, then you have another opportunity to improve your understanding. Is that item only used sometimes? When? Why? Is it for rework or exceptions? (See failure mode #2!) Why did you believe it was necessary when you did your initial observation?

Does something end up out of place? Is it routinely used in a place other than where you put it? This is valuable information if you now seek to understand how the process is different from what you expected.

Now think about the failure mode: What does fake-5S look like? What is 5S without this thinking applied?

How do you “do” 5S?

Look long and hard at whether you are trying to audit your way into compliance rather than using 5S as a diagnostic for your understanding of your process.

Though the jargon “self diagnostic” sounds academic, we have all talked about visual controls “distinguishing the normal from the abnormal” or used other similar terms. It is nothing new. What makes this a failure mode is that people normally don’t think it is that important, while Spear cites a mountain of evidence (and I agree) that it is critical to success. You can only solve the problems you can see.

Another Example: Takt Time

What is takt time? Before you whip out your calculators and show me the math, step back and ask the core question of “What it is” rather than “how to calculate it.” What is it?

Takt time is part of a specification for a process – it defines what should be happening. By setting takt time you are saying “IF our process can routinely cycle at this interval, then we can meet our customer’s demand.” Takt time defines both part of a “defect free” outcome of your work cycle and a specification for process design.

Then you apply your very best knowledge of the process steps and sequence and set out a work cycle which you believe can be accomplished within the takt. It becomes self-diagnostic when you check, every time, whether the actual work cycle was the same as the intended work cycle. An easy way to check is to compare the actual time with the designed time.
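For instance, here is a hedged sketch of that comparison. The demand, shift length, planned cycle time, and recorded cycles are all invented numbers, not from any real line:

    # Takt time as a specification, and a cycle-by-cycle check against the
    # planned work cycle. All figures below are illustrative.
    available_seconds = 7.5 * 3600        # working time in one shift
    customer_demand = 450                 # units required from that shift
    takt_time = available_seconds / customer_demand   # 60 seconds per unit

    planned_cycle_time = 55               # the work cycle designed to fit the takt

    def check_cycle(actual_seconds):
        """Surface the problem the moment a cycle departs from the plan."""
        if actual_seconds > planned_cycle_time:
            print(f"Cycle took {actual_seconds}s vs. planned {planned_cycle_time}s "
                  "- flag it now, not at the end of the shift.")

    for observed in [54, 53, 52, 55, 71]:
        check_cycle(observed)

The seat-installation example Spear describes below does exactly this, using marks on the floor instead of a stopwatch.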

In Decoding the DNA of the Toyota Production System, Spear cites an example of seat installation.

At Toyota’s plants, […] it is instantly clear when they deviate from the specifications. Consider how workers at Toyota’s Georgetown, Kentucky, plant install the right-front seat into a Camry. The work is designed as a sequence of seven tasks, all of which are expected to be completed in 55 seconds as the car moves at a fixed speed through a worker’s zone. If the production worker finds himself doing task 6 (installing the rear seat-bolts) before task 4 (installing the front seat-bolts), then the job is actually being done differently than it was designed to be done, indicating that something must be wrong. Similarly, if after 40 seconds the worker is still on task 4, which should have been completed after 31 seconds, then something, too, is amiss. To make problem detection even simpler, the length of the floor for each work area is marked in tenths. So if the worker is passing the sixth of the ten floor marks (that is, if he is 33 seconds into the cycle) and is still on task 4, then he and his team leader know that he has fallen behind.


On this assembly line, time is used, not so much to pace the work, but as a diagnostic tool to call out instances when the work departs from what is intended – or in simpler terms, when there is a problem.

All of this has been about checking that the process is being carried out as expected. But, bluntly, the customer really doesn’t give a hoot about the process. The customer is interested in the output. How do you know that the output of your process is actually helping your customer?

To know that you first have to be really clear on a couple of things:

  • Who your customer actually is.
  • What value you provide to that customer.

Put another way, can you establish a clear, unambiguous specification of what a defect free outcome is for each supplier-customer interface in your process? Can you follow that trap line of supplier-customer interfaces down to the process of delivering value to your paying customers?

To make this self-diagnostic, though, you have to not only specify (what should be happening), you have to check (what is actually happening), and you have to do it every time.

Poka-yoke (mistake proofing) devices are one way to verify process steps. Other mistake proofing will detect problems with the outputs. All are (or should be) designed to diagnose any problems and immediately halt the process or, at the very least, alert someone.

The theme that is emerging here is the concept of PLAN-DO and CHECK, with the CHECK being thoroughly integrated into the process itself, rather than a separate function. (Not that it can’t be, but it is better if CHECK is continuous.)

The Supply Chain

I was walking through the shop one afternoon and spotted an open bundle of steel tube – about half of the tubes remaining. They used kanban to trigger re-order. The rule of kanban is that the kanban card is pulled and placed in the post when the first part is consumed. To make this clear, they attached the kanban to the bands with a cable tie. Thus, breaking the band would literally release the kanban. Simple, effective.

But, though this bundle had been opened, the kanban card was still attached to the (now broken) banding. It was lurking at the base of the pallet, but visible.

I went and got the supervisor, and told him that “You are going to have a shortage of 4×5 tube on Wednesday, and it is your fault. Would you like to know why?” I guess I should point out that I had a pretty good relationship with this guy, so he knew I was playing him a little. Then I just said “Look, and tell me what you see.”

It took a few seconds, but he saw the kanban, said “Argh” and started to pick it up to put into the post. I stopped him and asked if it was his responsibility to do that. No, it was the welder’s responsibility. “Then you should take a minute with him, and do what I just did with you.”

Then I asked him how often the kanban cards were collected. (I knew, I wanted to check that he knew.) “Daily, about 1 pm.”

“How often do you walk by here every day?” He said at least twice, in the morning and right after lunch. “So all you need to do is take a quick glance as you walk by, twice a day, and you will always get that kanban into the post before 1pm.”

He did, and their periodic shortages dropped pretty much to zero.

Why?

First, the system was set up JIT. We knew the expected rate of use of that part. We knew the supplier’s lead time and delivery frequency. We knew the bundle size. MATH told us the minimum number of kanban cards that would need to circulate in this loop to ensure the weld cell never ran out… unless one of the inputs was wrong or unless the process operated differently than we expected.
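That math is simple enough to sketch. The figures and the 10% buffer below are invented for illustration; the real loop had its own numbers:

    # Minimum kanban cards for one replenishment loop. All inputs are
    # illustrative, not the actual values from that shop.
    import math

    daily_usage = 120          # expected parts consumed per day
    lead_time_days = 3         # supplier lead time, including delivery frequency
    bundle_qty = 50            # parts per bundle (one kanban per bundle)
    safety_factor = 0.10       # small buffer for normal variation

    cards = math.ceil(daily_usage * lead_time_days * (1 + safety_factor) / bundle_qty)
    print(f"Minimum kanban cards in the loop: {cards}")

    # If consumption, lead time, or bundle size turn out different than assumed,
    # the loop runs short (or long) - which is exactly the signal that the
    # process is operating differently than we expected.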

We also knew, given the rate of consumption, how many bundles should be in the shop at any given time, and we had space marked out for just that amount. Thus, we could tell immediately if something was received early, or if our rate of consumption fluctuated downward. (Because a pallet would arrive, but there would be no place to put it.)

We could have just had a Team Member come by once a day and check if a bundle had been opened, and send an order to replenish it. We could have been even more sophisticated and just set up a barcode scan that would trigger a computer order.

But if we had set up the process that way, how could the Supervisor distinguish between “Ordered” and “Not Ordered”? It was that stray out-of-place kanban card that told him.

Thus, physical kanban cards, physically attached to the materials, represent a self-diagnostic check. If the card is there, then that container should be full. Quick to check. If the container is not full, there should be no card. Quick to check. If a container arrives with no card, it was not properly ordered and is likely excess (or at least calls for investigation). Quick to check. The physical card, the seemingly crude manual operation, provides cross-checks… self-diagnosis… simply and cheaply in ways that few automated systems replicate.

So far these have all been manufacturing examples. But Spear is clear that all work can be made self-diagnostic, not just manufacturing.

What non-manufacturing work fits into this model?

(Hint: All of it.)

What about a project plan? It can be made self-diagnostic too, though in reality most project plans aren’t. Read Critical Chain by Goldratt to understand the difference – and to produce and manage project plans that are much more likely to finish on time.

What about a sales plan? Do you predict your monthly sales? Do you check to see if week 1 actual sales deviated from what was expected? Do you plan on activities that you believe will hit your sales targets? Which customers are you calling on? What do you predict they will do? (Which tests how well you really know them.) Do you actually call on those customers? Do they need what you thought they would? Or do you learn something else about them? (Or do you make a best guess “forecast” and wait for the phone to ring?)

Once you have a sales plan, do you put together a manufacturing and operations plan which you believe will match production to sales? How much variation do you expect to accumulate between production and sales over the course of that plan? Do you put alarm thresholds on your finished goods or your backlog levels that would tell you when that predicted variation has been exceeded?

How do you (and how often do you) compare actual sales (volume, models, and revenue) against the plan? Do you know right away if you are off plan? How long will you tolerate being off plan before you decide that something unexpected is happening? Is there a hard trigger point already set out for that? Or do you let the pain accumulate for a while?
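One hedged illustration of such a trigger point follows. The plan, the actuals, and the 10% threshold are all made up, and a real version would watch finished goods and backlog the same way:

    # Compare cumulative actual sales to the plan, week by week, against a
    # threshold that was agreed in advance. All numbers are invented.
    weekly_plan   = [250, 250, 250, 250]
    weekly_actual = [242, 238, 205, 180]

    threshold = 0.10   # cumulative shortfall we agreed, in advance, to tolerate

    cum_plan = cum_actual = 0
    for week, (plan, actual) in enumerate(zip(weekly_plan, weekly_actual), start=1):
        cum_plan += plan
        cum_actual += actual
        shortfall = (cum_plan - cum_actual) / cum_plan
        if shortfall > threshold:
            print(f"Week {week}: {shortfall:.0%} behind plan - the trigger is hit; "
                  "investigate now rather than letting the pain accumulate.")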

A lot of companies were caught totally by surprise a few months ago when their sales seemed to fall off a cliff. Question: Did you have rising inventory levels before that? Was there a threshold that forced attention to something unexpected? Or did you increasingly tolerate and normalize deviance from your intended business plan?

At the next levels up, you have a business plan of some kind. You have financial targets. Do you have specific actions you plan to take at specific times? Do you check if those actions were actually taken? Do those actions have some kind of predicted outcome associated with them? Do you check if the actual outcome matches what you predicted? If you just “try something to see what happens” you might learn, but you might not. If you predict, then try it and see if the prediction holds, you are more likely to gain understanding from that experience.

When you design a product, do you have performance specifications? Do you test your design(s) against those performance specifications? Do you do that at every stage, or just at the end?

When you say “We need training” have you first specified what the work process is? Have you studied the situation and determined that people do not have the skills or knowledge to do the job? Have you specified what those skills are?

Have you developed the training to specifically develop those skills? Do you have a way to verify that people got the skills required? Do you check if, now, they can perform the work that they could not perform before? If you didn’t do the last, why did you “train” them at all? How do you know it was not just entertainment?

“Management Is Prediction”


This quote, attributed to W. Edwards Deming, really sums up what this is about. By specifying an outcome – “defect free” (which implies you know what that is) within a certain time – and by specifying the process steps, you are predicting the outcome.

“If we do these things, in this sequence, it should require this amount of time, and produce this result.”

Then do it, and check:

Does what you really had to do reflect the tasks you thought would be required?

Does the order you really did them in reflect what was in the plan?

Does the time required reflect the time you planned on?

Did it produce the intended result?

Building those checks into the process itself makes it self-diagnostic.

Try This

Go study a process, any process. Just watch. Constantly ask yourself:

How does this person know what to do?

How does this person know if s/he is doing it right?

What alerts this person if s/he makes a mistake?

How does this person know s/he succeeded?

If there are clear, unambiguous answers right in front of you – without research, without requiring vigilance or luck on the part of the Team Member, then you probably have something approaching self-diagnostic work. Likely, you don’t. If that is the case, don’t say you are “doing lean.” You are only going through the motions.

Behind The Scenes Of An Outlier

Yesterday, when I published Gipsie Ranney’s white paper “Remembering Nummi,” I did so because I thought she made some points that others would be interested in.

Let me take you behind the scenes of WordPress. One of the things this little program makes available is a stats tracker. This is the graph of daily “Site Views” over the last month. I think the graph speaks for itself:

Needless to say, “Remembering Nummi” got some legs under it.

Looking at the graph, you can see that this site has a pretty steady pulse to it. The dips are weekends. The little (second highest) spike you see correlates with a link back from a site in Europe. Looking at this, and other information, I can reasonably conclude that I have a couple of dozen regular readers, probably from feeds, and the difference is click-through traffic from other sites and search engine traffic.

Something different obviously was going on today.

I also see the regular sources of click-throughs to this site. This is a pretty typical list:

gembapantarei.com/2009/01/the_essenti…
leanblog.org
google.com/reader/view
leansupermarket.com/servlet/Page?temp…
google.ca
gembapantarei.com
gembapantarei.com/2009/01/finding_tim…
linkedin.com/mbox?displayMBoxItem=…
us.mc354.mail.yahoo.com/mc/showMessag…
search.conduit.com/Results.aspx?q=Typ…
170.2.59.38:15871/cgi-bin/afterWorkOp…
my.yahoo.com/p/3.html
translate.google.com.br/translate?hl=…

But here is today’s:

tmalive.tma.toyota.com/toyota/story.c…
dailynews.tma.toyota.com/story.cfm?st…
clipsheet.ford.com/article_view.cfm?a…
clipsheet.ford.com/article_view.cfm?a…
dailynews.tma.toyota.com
clipsheet.ford.com/article_view.cfm?a…
toyota.lonebuffalo.com/story.cfm?stor…
global.clipsheet.ford.com/article_vie…
tmalive.tma.toyota.com/toyota
global.clipsheet.ford.com/article_vie…
dailynews.toyota.com/story.cfm?story_…

So I conclude that “Remembering NUMMI” got picked up by a couple of news clipping services and fed into Toyota and Ford.

First, then, is a “Welcome” to any new readers from these great companies. Please feel free to peruse, comment, and even offer to write a guest post.

Though I cannot attribute this to anything other than who does, or does not, subscribe to a particular clipping service, I did find it ironic that this article, which really takes a critical look at the need for government-guaranteed “bailout” loans to the automotive industry, was read exclusively by people in two companies who (so far) have not asked for any help. (I speak primarily of Ford here – they conspicuously said “We can get by for right now, thank you.”)

To this I offer a personal comment – as a former Boeing employee I have met (though certainly did not know) Alan Mulally (now CEO of Ford). While no leader is perfect, I believe he is certainly capable of helping the Ford culture to “confront the brutal facts” of their business. My main question about Ford is whether they had already hit the iceberg when they brought him on board.

To GM and Chrysler, though, I guess I would offer: Gipsie Ranney seems to be talking to you guys. It would behoove you to listen to her. Yes, there are devastating external factors at work, but guys… your boat was leaking faster than the bilge pumps were pumping long before the storm. Stop blaming the weather and take a look inside. That is where your issues are.