ARTICLES

Take A Trip To Manhattan - with control technology

The BIS.Net Team

If the quality profession wants others to improve, then it too must seek to improve. Unfortunately there seems to be no sign of improvement in quality technology. The core tools remain virtually unchanged since their introduction 100 years ago. The Shewhart chart is one of these tools. However, Dr Juergen Ude urges quality professionals to turn instead to the Manhattan control chart which goes further than the Shewhart chart in advising when a process is out of control.

The Shewhart chart was first introduced by Dr Walter Shewhart early this century. It has been hailed as one of the most significant quality improvement tools ever introduced to the quality profession. Occasionally, however, some question its technology. When this happens there is usually angry opposition by Shewhart proponents who claim that the chart is timeless.

The Shewhart Chart

The concept of the Shewhart chart is simple. Limits are placed at levels within which the natural variation of the process falls. Any point falling outside these limits indicates the presence of an assignable cause. Perhaps it is this simplicity which has resulted in its popularity. However, although the core concept is simple, Shewhart charts are not that simple to implement. Three-day training courses are not uncommon, and even then many attendees do not know which of the many charts to choose (x-bar, S, individuals, P, NP, C, U), let alone understand the various supplementary rules.
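The core mechanics can be sketched in a few lines of Python. This is a generic illustration of an individuals (X) chart with 3-sigma limits estimated from the average moving range, not any particular vendor's implementation; the readings are invented.

```python
import statistics

def individuals_chart_limits(data):
    """Centre line and 3-sigma limits for an individuals (X) chart.
    Sigma is estimated from the average moving range divided by
    1.128 (the standard d2 constant for subgroups of size 2)."""
    centre = statistics.fmean(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    sigma = statistics.fmean(moving_ranges) / 1.128
    return centre - 3 * sigma, centre, centre + 3 * sigma

def out_of_control_points(data):
    """Indices of points falling outside the control limits."""
    lcl, _, ucl = individuals_chart_limits(data)
    return [i for i, x in enumerate(data) if x < lcl or x > ucl]

# Invented readings: a stable process with one aberrant point.
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 13.5, 10.1]
print(out_of_control_points(readings))  # -> [8]
```

This is all the chart does: a per-point comparison against fixed limits, which is why the supplementary run and zone rules were later bolted on.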

There is also some controversy over the application of the Shewhart chart. One school of thought holds that the Shewhart chart should be used as a tool for continuous improvement, where the objective is to identify and remove assignable causes, thereby improving the process. Another school argues that looking at the process after the event is too late, and encourages use of the Shewhart chart in real time, where operators make a compensating adjustment to the process. In practice either or both of the above are used. This article is based upon the former, more popular application.

Reading a text book, using a Shewhart chart seems a straightforward task. Most examples plot about 25 points with one or two points outside limits, annotated with labels such as ‘new operator change’, as shown in Figure 1.

FIGURE 1: A cause assigned to an out-of-control point

Falling outside the line

The implication is that the process is only out of control when points fall outside the control limits, and that the causes can be easily identified. In practice causes cannot easily be identified from an out-of-control point alone. Moreover, a process can already be out of control before any point falls outside the control limits, but the engineering of the Shewhart chart is too simplistic to show this.

Several years ago a food manufacturer in Australia had a point fall outside the lower control limit for net weight. Believing that the process was out of control at that point only, the manufacturer impounded the pallet corresponding to that point in time. Several weeks later the Weights and Measures Department fined the manufacturer for selling underweight product. The problem occurred because the manufacturer did not realise that points falling inside the control limits do not necessarily indicate that the process is in control. More pallets should have been impounded.

When there is a process parameter shift, then depending on the magnitude of the change, it will take time before a point falls outside the control limits. During this time the process is clearly out of control. The Shewhart chart, designed in an era without modern computing power, is unable to detect when the shift first occurred or how long it lasted. All it can do is advise that the process is out of control.
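This delay can be quantified with the average run length (ARL): the expected number of samples before a point falls outside the 3-sigma limits once the mean has shifted. A short sketch, using the standard normal-theory approximation and assuming the mean and sigma are known:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def average_run_length(shift_in_sigmas):
    """Expected number of samples until a point falls outside the
    3-sigma limits, once the mean has shifted by the given number
    of sigmas.  ARL = 1 / P(a single point signals)."""
    p_signal = (normal_cdf(-3.0 - shift_in_sigmas)
                + 1.0 - normal_cdf(3.0 - shift_in_sigmas))
    return 1.0 / p_signal

for shift in (0.0, 0.5, 1.0, 2.0):
    print(f"shift of {shift:.1f} sigma -> "
          f"alarm after ~{average_run_length(shift):.0f} samples")
```

For a 1-sigma shift the chart takes on average about 44 samples to signal; for a 0.5-sigma shift, about 155. Throughout that run the process is out of control and undetected, which is the point the paragraph above makes.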

It’s a time problem

Not knowing when a problem started or when it finished makes it very hard to determine assignable causes. If a new raw material was added at 9 am and an out-of-control point occurred at 2 pm the following day, operators would search for an assignable cause at around 2 pm that day and not find it. Had they known that the problem really started at 9 am the day before, they would have been far better placed to identify the cause. Figure 2 shows how misleading out-of-control points can be.

FIGURE 2: Occurrence of out-of-control points does not coincide with the onset of the disturbance

Another fundamental theoretical problem with the Shewhart chart is that significance testing is relative to the centre line. This makes the Shewhart chart more appropriate for those applications which are only concerned with a departure from the target, not the popular application of using the Shewhart chart to improve the process by identifying assignable causes.

Consider a process change due to the inclusion of a new raw material batch. Now consider a second process disturbance due to a change in processing condition. As the Shewhart chart was not designed to detect relative changes, operators have no objective means of detecting the presence of another assignable cause. This is shown in Figure 3.

FIGURE 3: Two changes in the process mean. (The two out-of-control points cannot be used to determine whether there are two assignable causes; a single cause may also show two or more out-of-control points.)

In summary, the use of the central limit theorem to determine x-bar chart limits often results in control limits that are too tight; the probability points of attribute charts are often dubious; and supplementary tests, such as zone charts and run and trend tests, increase false alarms and make it even harder to determine assignable causes.

Lack of significant successes can, of course, be due to many factors, and is not necessarily the fault of the Shewhart chart. One must also look at the reasons for lack of success: the Shewhart chart will not work if management does not react to the results, and in many cases incorrect or half-hearted application is the reason for failure.

What is important to the quality practitioner is that there are significant problems in the technology. If an alternative exists that overcomes these problems then this alternative should be given serious attention. Such an alternative is the Manhattan control chart.

The Manhattan control chart

The Manhattan control chart uses a change analysis algorithm developed by Dr Juergen Ude. It is called a Manhattan control chart because its profile is not unlike the Manhattan skyline. The chart was engineered for the very application for which the Shewhart chart is promoted: to detect the onset and duration of a change, and to detect relative changes.

Many process changes are sustained for some period of time. If a new operator takes over, the effect of that operator will remain for some time. If a new batch of raw material is added, the effect of that batch will remain until the material is finished. If a process setting is changed, the effect will remain until the change is reversed or a compensating adjustment is made. The Manhattan control chart was engineered to show the effect, i.e. when it started and when it finished. The Shewhart chart is unable to do this because it only looks at individual points. The Manhattan control chart can also highlight outliers. An example of a Manhattan control chart is shown in Figure 4.

FIGURE 4: A typical Manhattan control chart showing sustained and unsustained changes in the process

The top chart is for central tendency, the bottom for variability. Each plateau is an estimate of a statistically significant local mean (or standard deviation).

The important difference between the Shewhart and Manhattan control charts is that the Shewhart chart only advises that the process is out of control, while the Manhattan chart maps statistically significant changes in the process mean. Shewhart control limits are simply decision limits for hypothesis testing; because the chart's technology is very basic, the limit calculations are simple and can be displayed on the chart.

Manhattan technology, on the other hand, uses more comprehensive significance testing. Decision limits are still used, but they are invisible to the user. Users only need to know that whenever there is a change in the plotted Manhattan level, there has been a statistically significant change in the process mean. Just as a point outside control limits shows instability, so does a change in a Manhattan plateau. If there is no process change, i.e. the Manhattan is a flat line (a ‘flat liner’), then the process is stable.
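Dr Ude's change analysis algorithm is proprietary and not described here, so the sketch below is only a generic illustration of the plateau idea: recursively splitting the series where a split most reduces the within-segment sum of squares, with a crude gain threshold standing in for a proper significance test. The series and threshold are invented.

```python
import statistics

def sse(segment):
    """Sum of squared deviations from the segment mean."""
    m = statistics.fmean(segment)
    return sum((x - m) ** 2 for x in segment)

def best_split(data):
    """Split index that most reduces total within-segment SSE."""
    best_i, best_gain = None, 0.0
    total = sse(data)
    for i in range(2, len(data) - 1):  # keep at least 2 points per side
        gain = total - sse(data[:i]) - sse(data[i:])
        if gain > best_gain:
            best_i, best_gain = i, gain
    return best_i, best_gain

def manhattan_plateaus(data, min_gain):
    """Recursively segment the series; each segment's mean becomes
    one plateau of a Manhattan-style chart."""
    i, gain = best_split(data)
    if i is None or gain < min_gain:
        return [statistics.fmean(data)] * len(data)
    return (manhattan_plateaus(data[:i], min_gain)
            + manhattan_plateaus(data[i:], min_gain))

# Invented series: a sustained upward shift that is later reversed.
series = [10, 10, 11, 10, 10, 14, 15, 14, 15, 14, 10, 11, 10, 10]
plateaus = manhattan_plateaus(series, min_gain=10.0)
print([round(p, 1) for p in plateaus])
```

The plateaus show the shift starting at the sixth point and lasting five points, i.e. both onset and duration — exactly the information a Shewhart chart cannot provide.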

The simple success of Manhattan

It is important to understand that the Manhattan control chart does not represent a statistical breakthrough. Very basic statistics are used. Manhattan technology simply harnesses the number-crunching power of the modern computer.

The benefits of the Manhattan control chart, as an alternative to the Shewhart charts, are that it:

  • Is far more sensitive than a Shewhart chart, for the same Type I error
  • Is robust to non-normality – operators no longer have to differentiate between attributes and variables charts
  • Reduces training from several days to 15 minutes
  • Provides a more realistic picture of the process – this makes it easier to identify assignable causes, and the extent of process instability can be better assessed
  • Can be stacked on top of other charts for a fingerprint analysis, whereas conventional regression analysis fails
  • Can be complemented with change analysis to identify assignable causes
  • Can be enhanced and built on.

There are many success stories. For example, an aluminum manufacturer has made dramatic reductions in coolant costs, and a cigarette manufacturer has solved a cigarette firmness problem. Advances have also been made with d-charts, which are able to display changes in the mean, variability and conformance all on the same chart. No other tool is able to do this.

Manhattan control improves process settings

A more recent contribution is the stability index. This is arguably the most powerful new statistic since the Cp and Cpk indices, which provide information on process capability in a concise way. Management does not have time to scan hundreds of capability histograms or look at hundreds of control charts; it needs a convenient index instead. Historically the number of out-of-control points was used to summarise the degree of control, but this statistic provides little useful information. The stability index, by contrast, not only advises whether the process is out of control but also quantifies the effect of the out-of-control situation.

The index can vary between 0 and 100. An index of 100 implies that the process is 100 per cent in control, whereas an index of 80 means that 20 per cent of the variation is due to the process being out of control. Through Manhattan control technology, management can glance through tabular reports of perhaps thousands of products and characteristics and determine at a glance which processes require action. Management can also prioritise process improvement changes by relating the stability index to the Cp and Cpk indices. This was not possible before, but controlling many characteristics is now simple.
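The article does not define how the stability index is computed. One plausible construction, consistent with "20 per cent of the variation is due to the process being out of control", is the share of total variation not explained by shifts in the Manhattan plateau means. A hypothetical sketch on invented data:

```python
import statistics

def stability_index(data, plateaus):
    """A plausible stability index: the percentage of total variation
    NOT explained by shifts in the local (plateau) means.
    100 means no statistically significant shifts; 80 would mean
    20 per cent of the variation is due to out-of-control changes.
    `plateaus` is the locally estimated mean at each point, e.g. from
    a Manhattan-style segmentation."""
    overall = statistics.fmean(data)
    total_ss = sum((x - overall) ** 2 for x in data)
    within_ss = sum((x - m) ** 2 for x, m in zip(data, plateaus))
    return 100.0 * within_ss / total_ss

# Invented series with one sustained shift, and its plateau means:
series   = [10, 11, 10, 9, 10, 14, 15, 14, 15, 14]
plateaus = [10, 10, 10, 10, 10, 14.4, 14.4, 14.4, 14.4, 14.4]
print(round(stability_index(series, plateaus), 1))  # -> 6.2
```

A low index like this flags a process whose variation is dominated by out-of-control shifts — an obvious candidate for investigation, which is how a single number can rank thousands of characteristics.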

It is nicer in Manhattan

The Manhattan control chart can be used wherever a Shewhart chart is currently used. It can also be used in real time, with the added advantage of enabling better decision rules for the operator. As a tool to identify assignable causes of variation the Manhattan control chart, too, is not perfect. If there are many changes, and if the changes are small, it will show a distorted picture of the true process. However, it will still show it better than a Shewhart control chart.

Knowing when a problem occurred and how long it lasted does not necessarily mean that the cause will be identified. In the chocolate industry a viscosity problem could be due to problems such as bad mixing, temperature, fat content, surfactant content, sugar particle size, conching time, chocolate particle size, moisture content and raw material variations. Even though it is still difficult to identify the causes of problems, the probability of doing so is greater with Manhattan control.

As a tool, Manhattan technology (not just the control chart, but also fingerprint analysis, change analysis, d-charts and stability analysis) is an advancement. If the quality profession is genuine in its drive for improvement, it must move towards improved technology, not dogmatically adhere to the old. Although there have been many advances in statistical theory, much of it is inappropriate for application by quality practitioners. The actual core technology used in the profession is basic (run charts, scatter diagrams and Pareto charts). So much more is possible. With computers it is now possible to interface statistical and operational research advances in a way which enables the quality profession to use the technology without needing to be professionally trained in these areas.