Davis Bothe's book Measuring Process Capability has the confidence interval formulas for the process capability metrics. The easiest one is for Cp because you only have to estimate one parameter (the population standard deviation). The other intervals are wider, but the Cp interval alone is enough to show that to get a useful confidence interval (one narrow enough to support a single interpretation over its whole range) you need a very large sample size. For 10% precision with 95% confidence, i.e. P(0.9*Cp_hat < Cp < 1.1*Cp_hat) = 0.95, you need a sample size of about 200 units. Suppose a sample of 200 units delivers an estimated Cp of Cp_hat = 1.5. The 95% confidence interval (with 10% precision) is then P(1.35 < Cp < 1.65) = 0.95. That's a huge range of possible Cp values considering that everything interesting happens between Cp = 1 (awful) and Cp = 2 (spectacular). Claiming process capability from small sample sizes, e.g. n = 30, is delusional unless all of the metrics are very large and you're confident that all of the assumptions (normality, a single stable process, etc.) are satisfied.
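For illustration only (a sketch, not a formula transcribed from Bothe's book): the standard chi-squared interval for Cp, assuming a normal, stable process and an estimate based on n independent units, reproduces the n = 200 example above and shows how much wider the interval gets at n = 30. Python with scipy is assumed here.

```python
from scipy.stats import chi2

def cp_confidence_interval(cp_hat, n, confidence=0.95):
    """Two-sided confidence interval for Cp from an estimate cp_hat on n units."""
    alpha = 1.0 - confidence
    dof = n - 1
    # Cp/Cp_hat = s/sigma, and (n-1)*s^2/sigma^2 ~ chi-squared with n-1 df
    lower = cp_hat * (chi2.ppf(alpha / 2.0, dof) / dof) ** 0.5
    upper = cp_hat * (chi2.ppf(1.0 - alpha / 2.0, dof) / dof) ** 0.5
    return lower, upper

# Worked example from the text: n = 200 units, Cp_hat = 1.5
print(cp_confidence_interval(1.5, 200))   # roughly (1.35, 1.65)
# Same estimate from only n = 30 units -- the interval balloons
print(cp_confidence_interval(1.5, 30))    # roughly (1.12, 1.88)
```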
If the process capability data are collected under SPC protocol (as they're supposed to be) and there is evidence that the process mean is unstable (Pp << Cp), then you can't use Bothe's simple confidence intervals. In that case there are two standard deviations in play - the within-subgroup and the between-subgroup standard deviations - and the confidence intervals have to be determined with an appropriate degrees-of-freedom weighting of the two components.
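As a sketch of what those two components look like (my own generic one-way ANOVA decomposition, not Bothe's specific procedure), assuming equal-size rational subgroups collected under SPC:

```python
import numpy as np

def variance_components(subgroups):
    """subgroups: 2-D array, one row per subgroup, all subgroups the same size."""
    data = np.asarray(subgroups, dtype=float)
    k, n = data.shape
    grand_mean = data.mean()
    subgroup_means = data.mean(axis=1)

    # Pooled within-subgroup variance (the "short-term" sigma behind Cp/Cpk)
    ss_within = ((data - subgroup_means[:, None]) ** 2).sum()
    var_within = ss_within / (k * (n - 1))

    # Mean square between subgroups; its expectation is var_within + n*var_between,
    # so back out the between-subgroup component (floored at zero)
    ms_between = n * ((subgroup_means - grand_mean) ** 2).sum() / (k - 1)
    var_between = max((ms_between - var_within) / n, 0.0)

    return np.sqrt(var_within), np.sqrt(var_between)
```

When the between-subgroup component is appreciable, the total standard deviation that drives Pp/Ppk exceeds the within-subgroup one that drives Cp/Cpk, and the interval's effective degrees of freedom have to combine both components (e.g. a Satterthwaite-type weighting), which is why the simple chi-squared interval above no longer applies.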
In general, Harish is correct that process capability (with confidence intervals), normal tolerance intervals, and variables sampling plans are all doing the same thing - characterizing the proportion defective relative to the specification limits - so their results will converge.
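As a toy illustration of that common target quantity (my sketch, assuming a normal model and scipy), the fraction outside the specification limits is just the two normal tail areas:

```python
from scipy.stats import norm

def fraction_nonconforming(mu, sigma, lsl, usl):
    """Fraction of a normal(mu, sigma) process outside [lsl, usl]."""
    return norm.cdf(lsl, mu, sigma) + norm.sf(usl, mu, sigma)

# e.g. a centered process with Cp = Cpk = 1.5 (spec width = 9 sigma):
print(fraction_nonconforming(0.0, 1.0, -4.5, 4.5))   # about 6.8 parts per million
```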
To Stan Alekman and Mike M: It's good to see you're keeping track of these things. P.