It costs, too...it's not as though you can use the two-sigma "warning" limits for free. Every false signal they produce increases the chance that you will take action on a stable process. Those familiar with the Nelson Funnel experiment understand the dangers of that...tampering. It can increase the variation in a number of different ways, and it will inevitably increase cost (you are paying someone to take action when no action is warranted or wanted).
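To put a rough number on that false-signal rate: assuming the stable process is approximately normal (a back-of-the-envelope sketch, not a claim about any particular process), you can compute how often a perfectly stable process will throw a point outside k-sigma limits:

```python
import math

def false_alarm_rate(k: float) -> float:
    """Probability that a point from a stable, normally distributed
    process falls outside +/- k sigma -- i.e., the false-signal rate."""
    return math.erfc(k / math.sqrt(2.0))

p2 = false_alarm_rate(2.0)  # two-sigma "warning" limits
p3 = false_alarm_rate(3.0)  # conventional three-sigma limits

print(f"2-sigma false alarms: {p2:.4f} (about 1 in {1 / p2:.0f} points)")
print(f"3-sigma false alarms: {p3:.4f} (about 1 in {1 / p3:.0f} points)")
```

Under normality, roughly one point in 22 breaches the two-sigma limits by chance alone, versus about one in 370 for three-sigma limits; that is a lot of occasions to pay someone to hunt for a special cause that isn't there.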
I had a profound experience with this when I attended Dr. Wheeler's "Advanced Topics in Statistical Process Control" seminar back in the mid-90s. He used a quincunx to demonstrate a stable process, and derived 1-, 2-, and 3-sigma limits from the quincunx data. He ran a lot of beads to show that the process was stable, and that his Empirical Rule held up. Then he covered the top section so you couldn't see what was happening after the beads dropped through the funnel, and drew specification limits on the board. He started dropping beads again, and when a couple of beads landed near a spec limit, some of the engineers in the room suggested that he shift the funnel away from the spec limit. So he did, then resumed dropping beads. Every time a bead fell close to the upper or lower spec limit, he would shift the funnel (at the students' request). After he'd done this for a couple of minutes, he pulled the paper off the front of the board and showed them that the distribution was significantly wider. Then he asked, "Do any of you use p-controllers in your processes? This is just a p-controller..."
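You can reproduce the gist of that demonstration in a few lines. This is an illustrative sketch, not Wheeler's exact setup: the spec limits, guard band, and adjustment rule below are my assumptions, chosen only to mimic "shift the funnel away whenever a bead lands near a spec limit":

```python
import random
import statistics

def drop_beads(n, tamper, lsl=-2.0, usl=2.0, guard=0.5, seed=42):
    """Simulate bead drops from a stable process (aim + N(0, 1) noise).
    If `tamper` is True, mimic the seminar: whenever a bead lands
    within `guard` of a spec limit, shift the funnel's aim away from
    that limit by the amount of the encroachment."""
    rng = random.Random(seed)
    aim = 0.0
    beads = []
    for _ in range(n):
        bead = aim + rng.gauss(0.0, 1.0)
        beads.append(bead)
        if tamper:
            if bead > usl - guard:           # too close to the upper limit:
                aim -= bead - (usl - guard)  #   nudge the funnel down
            elif bead < lsl + guard:         # too close to the lower limit:
                aim += (lsl + guard) - bead  #   nudge the funnel up
    return beads

hands_off = drop_beads(10_000, tamper=False)
tampered = drop_beads(10_000, tamper=True)
print(f"stdev, hands off: {statistics.stdev(hands_off):.3f}")
print(f"stdev, tampered:  {statistics.stdev(tampered):.3f}")
```

The hands-off run stays near the process sigma of 1.0, while the tampered run comes out wider: every well-meant adjustment adds the funnel's wandering on top of the bead-to-bead noise, which is exactly what the pulled-back paper revealed.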
This was well before the age of ubiquitous cell phones, and at the next class break, several of these engineers raced to the bank of pay phones out in the lobby and called back to their factories to tell their people, "Shut down the p-controllers! They're killing us!"