
# Problems with statistical process control theory

This article discusses problems that can arise from the direct textbook application of statistical process control chart theory.

SPC is currently receiving considerable attention from industry as a means to improve quality and productivity. Provided it is correctly and appropriately applied, significant improvements can indeed be achieved in these areas. To achieve these improvements, it is important that the control chart is set up to reflect changes in the process parameters rapidly whilst minimising the number of false alarms. This requires careful choice of sample size and placement of control limits. If the control limits are too tight, the control chart will lead to frequent and unnecessary process adjustments, which can be as costly as an out-of-control process. If the control limits are too wide, the process mean can drift within the control limits without any call for action. Industries attempting to control net weight to a minimum will appreciate how expensive this can be.

Both problems can arise when sub-group data are used to set control limits. This article will focus on the x-bar chart.

Placement of control limits for the x-bar chart rests on the fact that the sampling distribution of averages sampled from the SAME population is approximately normal, with a standard deviation (the standard error) equal to the standard deviation of the individual results divided by the square root of the sample size. Many texts recommend that the standard deviation of the individuals be estimated by pooling the standard deviations obtained from each sub-group. For simplicity, the range method is often used to estimate the standard deviation.
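As a sketch of the range method, here applied to the first three sub-groups that appear in Table I below (d2 = 2.326 is the standard bias-correction constant for sub-groups of size 5):

```python
# Estimate the process standard deviation from sub-group ranges
# (the "range method"), then derive the standard error of the
# sub-group averages.  R-bar / d2 is an approximately unbiased
# estimate of sigma; d2 = 2.326 for sub-groups of size n = 5.
import math

subgroups = [  # first three sub-groups from Table I
    [9.8, 9.9, 10.6, 8.7, 9.2],
    [7.5, 8.7, 9.1, 8.0, 8.0],
    [9.3, 11.0, 12.6, 10.7, 11.6],
]
n = 5
d2 = 2.326

r_bar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)
sigma_hat = r_bar / d2                # estimated sigma of individuals
std_error = sigma_hat / math.sqrt(n)  # sigma of the sub-group averages

print(round(r_bar, 3), round(sigma_hat, 3), round(std_error, 3))
```

In practice all available sub-groups would be pooled, as the article does below with the full table.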

To demonstrate how this approach can produce inadequate control limits, I will use the data in Table I to set up an x-bar chart.

Table I

| Obs 1 | Obs 2 | Obs 3 | Obs 4 | Obs 5 | Avg | Range |
|-------|-------|-------|-------|-------|-----|-------|
| 9.8 | 9.9 | 10.6 | 8.7 | 9.2 | 9.6 | 1.9 |
| 7.5 | 8.7 | 9.1 | 8.0 | 8.0 | 8.3 | 1.6 |
| 9.3 | 11.0 | 12.6 | 10.7 | 11.6 | 11.0 | 3.3 |
| 9.2 | 8.7 | 9.5 | 8.5 | 10.3 | 9.2 | 1.8 |
| 8.4 | 9.3 | 11.2 | 10.9 | 11.3 | 10.2 | 2.9 |
| 13.0 | 11.9 | 13.4 | 11.9 | 12.3 | 12.5 | 1.5 |
| 9.4 | 9.2 | 9.5 | 11.2 | 9.8 | 9.8 | 2.0 |
| 9.6 | 9.6 | 11.6 | 10.1 | 10.7 | 10.3 | 2.0 |
| 8.6 | 6.9 | 9.4 | 8.4 | 8.6 | 8.4 | 2.5 |
| 10.6 | 11.9 | 10.3 | 10.7 | 12.9 | 11.3 | 2.6 |
| 12.6 | 11.8 | 11.1 | 11.2 | 11.3 | 11.6 | 1.5 |
| 9.4 | 9.3 | 8.5 | 10.7 | 10.5 | 9.7 | 2.2 |
| 10.0 | 9.0 | 9.6 | 8.5 | 10.5 | 9.5 | 2.0 |
| 7.5 | 10.0 | 9.2 | 9.7 | 11.0 | 9.5 | 3.5 |
| 6.3 | 9.0 | 7.8 | 8.9 | 9.5 | 8.3 | 3.2 |
| 10.6 | 9.3 | 9.5 | 9.5 | 10.0 | 9.8 | 1.3 |
| 10.1 | 12.9 | 9.9 | 11.4 | 10.0 | 10.9 | 3.0 |
| 10.1 | 9.4 | 10.3 | 12.0 | 10.9 | 10.5 | 2.6 |
| 10.7 | 10.0 | 9.8 | 8.6 | 8.6 | 9.5 | 2.1 |
| 7.6 | 8.6 | 7.8 | 8.9 | 7.4 | 8.1 | 1.5 |
| **Average** | | | | | **9.9** | **2.3** |

Using the averages and an appropriate factor (0.577), obtained from tables, the lower and upper control limits are set at 8.6 (9.9 - 0.577*2.3) and 11.2 (9.9 + 0.577*2.3) units respectively. Plotting the sub-group averages on a chart with these limits, as in Figure 1, shows that several points fall above and below the control limits. A natural tendency is to discard these points as out-of-control, and then recalculate the control limits, probably after replacing the removed data with new results. This approach is certainly valid if the points were due to assignable causes unlikely to recur frequently.
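These limits can be reproduced with a short script (a sketch in Python; the means and ranges are recomputed from the raw values, so they may differ slightly from the rounded entries in Table I):

```python
# x-bar chart limits from Table I:
# LCL/UCL = x-double-bar -/+ A2 * R-bar, with A2 = 0.577 for n = 5.
data = [
    [9.8, 9.9, 10.6, 8.7, 9.2],
    [7.5, 8.7, 9.1, 8.0, 8.0],
    [9.3, 11.0, 12.6, 10.7, 11.6],
    [9.2, 8.7, 9.5, 8.5, 10.3],
    [8.4, 9.3, 11.2, 10.9, 11.3],
    [13.0, 11.9, 13.4, 11.9, 12.3],
    [9.4, 9.2, 9.5, 11.2, 9.8],
    [9.6, 9.6, 11.6, 10.1, 10.7],
    [8.6, 6.9, 9.4, 8.4, 8.6],
    [10.6, 11.9, 10.3, 10.7, 12.9],
    [12.6, 11.8, 11.1, 11.2, 11.3],
    [9.4, 9.3, 8.5, 10.7, 10.5],
    [10.0, 9.0, 9.6, 8.5, 10.5],
    [7.5, 10.0, 9.2, 9.7, 11.0],
    [6.3, 9.0, 7.8, 8.9, 9.5],
    [10.6, 9.3, 9.5, 9.5, 10.0],
    [10.1, 12.9, 9.9, 11.4, 10.0],
    [10.1, 9.4, 10.3, 12.0, 10.9],
    [10.7, 10.0, 9.8, 8.6, 8.6],
    [7.6, 8.6, 7.8, 8.9, 7.4],
]
A2 = 0.577  # x-bar chart factor for sub-groups of size 5

means = [sum(g) / len(g) for g in data]
ranges = [max(g) - min(g) for g in data]
x_bar_bar = sum(means) / len(means)
r_bar = sum(ranges) / len(ranges)

lcl = x_bar_bar - A2 * r_bar
ucl = x_bar_bar + A2 * r_bar
out = [m for m in means if m < lcl or m > ucl]
print(round(lcl, 1), round(ucl, 1), len(out))
```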

However, the approach is not valid if the process mean itself is variable. Standard process control theory assumes that the underlying process average is constant. Experience shows that real industrial processes are often not so simple. Instead, the process average may wander according to some autoregressive relationship, as shown in Figure 2, or simply fluctuate randomly over time; that is, the process average itself is variable. The variability of the process average (between-sample variation) can be as inherent to the process as the within-sample variation, and impossible to remove through increased inspection frequency. In these situations the control limits must incorporate the natural variation of the process average. Setting control limits on the basis of sub-group data, i.e. short-term variation, will not capture the total inherent process variation and will result in too-tight control limits, as occurred with our example. Removing outside-limit points and recalculating the limits will not work. As soon as the chart is applied, operators will be forced to adjust the process frequently, thereby further increasing variability. In the presence of inherent process-mean variation, control limits must reflect this component of variation.

FIGURE 1: X-bar chart
FIGURE 2: A process with a variable process average
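A small simulation makes the point (the AR(1) coefficient, drift and within-sub-group standard deviations below are illustrative assumptions, not from the article's data): when the process average wanders, the range-based standard error understates the true spread of the sub-group averages.

```python
# Simulate sub-groups whose underlying mean wanders as an AR(1)
# process around a target of 10.  The range-based estimate of the
# standard error of the averages (which sees only within-subgroup
# variation) understates the actual spread of the sub-group averages.
import math
import random
import statistics

random.seed(42)

n, k = 5, 200              # sub-group size, number of sub-groups
phi, drift_sd = 0.8, 0.5   # AR(1) coefficient and innovation sd (assumed)
within_sd = 1.0            # within-sub-group sd (assumed)
d2 = 2.326                 # range-to-sigma constant for n = 5

mean = 10.0
subgroup_means, ranges = [], []
for _ in range(k):
    # process average wanders: AR(1) around the target of 10
    mean = 10.0 + phi * (mean - 10.0) + random.gauss(0, drift_sd)
    g = [random.gauss(mean, within_sd) for _ in range(n)]
    subgroup_means.append(sum(g) / n)
    ranges.append(max(g) - min(g))

# standard error implied by the range method (short-term variation only)
se_range = (sum(ranges) / k) / d2 / math.sqrt(n)
# actual spread of the sub-group averages (includes the mean drift)
sd_actual = statistics.stdev(subgroup_means)

print(round(se_range, 2), round(sd_actual, 2))
```

Limits set at three range-based standard errors would therefore be far too tight for this process, exactly the situation described above.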

Some authors state specifically that control charts should not include between-sample variation, i.e. that they SHOULD be based on sub-groups. It can only be assumed that these authors have in mind a stationary process, where between-sample variation is not inherent to the process and should therefore not be included in the estimate of the process standard deviation.

Before applying SPC, the quality practitioner should first study his or her process and identify whether there is time-to-time variation, i.e. whether the process average is variable. This can be done with the aid of an analysis of variance. He or she must then decide whether the variability is truly inherent to the process. If not, the sources of the variation should be removed and control limits computed using standard methods. If so, the variability must be accepted and control limits computed to reflect the variability of the process average. This is not easy: for example, the standard error is no longer inversely proportional to the square root of the sample size, and the distribution of the averages may not be normal. How to deal with these situations depends on the process.
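As a sketch of such an analysis of variance, applied to the Table I sub-groups (an F-ratio well above the 5% critical value, roughly 1.7 for 19 and 80 degrees of freedom, signals time-to-time variation in the process average):

```python
# One-way analysis of variance on the Table I sub-groups:
# compare the between-group mean square with the within-group
# mean square.  An F-ratio well above 1 indicates that the
# process average varies from sub-group to sub-group, beyond
# what the within-sub-group scatter can explain.
data = [
    [9.8, 9.9, 10.6, 8.7, 9.2],
    [7.5, 8.7, 9.1, 8.0, 8.0],
    [9.3, 11.0, 12.6, 10.7, 11.6],
    [9.2, 8.7, 9.5, 8.5, 10.3],
    [8.4, 9.3, 11.2, 10.9, 11.3],
    [13.0, 11.9, 13.4, 11.9, 12.3],
    [9.4, 9.2, 9.5, 11.2, 9.8],
    [9.6, 9.6, 11.6, 10.1, 10.7],
    [8.6, 6.9, 9.4, 8.4, 8.6],
    [10.6, 11.9, 10.3, 10.7, 12.9],
    [12.6, 11.8, 11.1, 11.2, 11.3],
    [9.4, 9.3, 8.5, 10.7, 10.5],
    [10.0, 9.0, 9.6, 8.5, 10.5],
    [7.5, 10.0, 9.2, 9.7, 11.0],
    [6.3, 9.0, 7.8, 8.9, 9.5],
    [10.6, 9.3, 9.5, 9.5, 10.0],
    [10.1, 12.9, 9.9, 11.4, 10.0],
    [10.1, 9.4, 10.3, 12.0, 10.9],
    [10.7, 10.0, 9.8, 8.6, 8.6],
    [7.6, 8.6, 7.8, 8.9, 7.4],
]
k, n = len(data), len(data[0])         # 20 sub-groups of size 5
means = [sum(g) / n for g in data]
grand = sum(means) / k

ss_between = n * sum((m - grand) ** 2 for m in means)
ss_within = sum((x - m) ** 2 for g, m in zip(data, means) for x in g)
ms_between = ss_between / (k - 1)      # 19 degrees of freedom
ms_within = ss_within / (k * (n - 1))  # 80 degrees of freedom
f_ratio = ms_between / ms_within

print(round(f_ratio, 1))
```

For these data the F-ratio is large, which supports the article's diagnosis that the process average itself is variable.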
