Issue: Volume 3, Number 9
Date: September 2003
From: Mark J. Anderson, Stat-Ease, Inc.

Dear Experimenter,

Here's another set of frequently asked questions (FAQs) about doing design of experiments (DOE), plus alerts to timely information and free software updates. If you missed previous DOE FAQ Alerts, please click on the links at the bottom of this page. Feel free to forward this newsletter to your colleagues; they can subscribe via the list-server link at the bottom of this page. If this newsletter prompts you to ask your own questions about DOE, please send them to me.

Here's an appetizer to get this Alert off to a good start: an experiment to see if people swim faster in guar-thickened water! In my days at General Mills I traveled the world, primarily in India, buying beans of this natural thickener, which you often see listed as an ingredient in foods such as salad dressing. The instigator of this bizarre experiment, Ed Cussler, is a professor at my alma mater, the University of Minnesota department of chemical engineering and materials science. With the aid of powdered guar, he created a gummed-up pool of slime and somehow induced several people (mainly his students!) to actually swim in it. See whether they went faster or slower (or could not tell the difference), and decide for yourself whether guar should be added as an ingredient in Olympic swimming pools. It may not be a good idea for the high divers!

Here's what I cover in the body text of this DOE FAQ Alert (topics that delve into statistical detail are designated "Expert"):

1. Info alert: The September issue of the Stat-Teaser (a link is provided) features response surface methods (RSM) on a trebuchet
2. FAQ: A Rule-of-thumb for assessing outliers
3. Events alert: See us at the Fall Technical Conference (a forum for both statistics and quality) plus the Medical Design and Manufacturing Show
4. Workshop alert: See when and where to learn about DOE—consider advanced training (Six Sigma Black Belt+)

PS. Quote for the month—Comments by Fisher that apply well to the appetizer provided above (on the pool of goo)


1. Info alert: The September issue of the Stat-Teaser (a link is provided)
features response surface methods (RSM) on a trebuchet

Many of you by now have received a printed copy of the latest Stat-Teaser, but others, by choice or because you reside outside of North America, will get your first look at the September issue online.

The feature article, "Messing with Medieval Missile Machines (Part 2)," is a follow-up to my previous report on a simulated trebuchet. This time I got my hands on the real thing: a sizable scale model made by the South Dakota School of Mines and Technology (SDSMT). I successfully applied response surface methods (RSM) to zero in on a backyard target. For enlightenment on this powerful optimization tool (and some amusement), read about my experiments on the SDSMT 'treb'.

The other stories in the Stat-Teaser, authored by consultant Shari Kraber, provide details on RSM designs and training.

(Learn more about RSM designs by attending the "Response Surface Methods for Process Optimization" workshop. A description is posted on our web site; link from that page to the course outline and schedule. You can enroll online via the Stat-Ease e-commerce page for workshops.)


2. FAQ: Rule-of-thumb for assessing outliers

-----Original Question-----
"What is the basis of the 3.5 value shown in your software for the outlier T plot?"

Excellent question! Bear with me a bit and I will try to address your question, but first let's go over some background on this statistic.

The outlier t, more properly described in statistical terms as the "externally studentized residual," is a type of "deletion diagnostic": it measures the influence of each response by deleting it from the data set. Each response is set aside in turn, the model is re-fitted to the runs that remain, and that re-fitted model becomes the benchmark: the deleted run's residual is calculated against it and plotted on a standard deviation scale. The end result looks much like a control chart, with data plotted in run order and limits imposed to discourage tampering with the process.
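The leave-one-out recipe just described can be sketched in a few lines of code. This is a generic illustration, not the Stat-Ease implementation; the function name and the model setup are mine, and X is assumed to already contain an intercept column.

```python
import numpy as np

def outlier_t(X, y):
    """Externally studentized residuals ("outlier t"): set each run
    aside, re-fit the model to the remaining runs, and scale the
    deleted run's prediction error by its standard deviation."""
    n, p = X.shape
    t = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        Xi, yi = X[keep], y[keep]
        # Re-fit the model with run i deleted; this fit is the benchmark.
        beta, *_ = np.linalg.lstsq(Xi, yi, rcond=None)
        resid = yi - Xi @ beta
        s2 = resid @ resid / (n - 1 - p)        # deletion variance estimate
        # Variance multiplier for predicting the held-out run.
        v = 1.0 + X[i] @ np.linalg.solve(Xi.T @ Xi, X[i])
        t[i] = (y[i] - X[i] @ beta) / np.sqrt(s2 * v)
    return t
```

An equivalent closed form divides each ordinary residual by s_(i) times the square root of one minus its leverage, avoiding the n re-fits, but the loop above mirrors the set-aside-and-refit description of the diagnostic.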

As a general rule, the upper and lower control limits should be placed at plus-or-minus 3.5 to be conservative. Any individual runs that fall outside the limits should be investigated for special causes, such as typographical errors or mechanical breakdowns. In such cases, it may be prudent to set the result aside and re-analyze the remainder of the data. Results that fall within the control limits should be considered common-cause variation; removing any of this data would likely bias the outcome of your experiment.

OK, now where did we get the value of 3.5? (I am finally getting to your question!) The answer can be found in an elegant book by Weisberg: "Applied Linear Regression," 2nd ed., New York: John Wiley and Sons, 1985. On page 116 he provides the formula for the externally studentized residual (outlier t) and then gives guidelines for determining critical values. His technique is based on the Bonferroni inequality, which is described in the NIST/SEMATECH "Engineering Statistics Handbook." Weisberg presents a table of critical values for the outlier test; we used the one for a risk (alpha) of 0.05. The table is laid out as a function of n, the number of runs in the experiment, and p, the number of parameters in the model. It turns out that for n's from 16 to 32 the value of p makes little difference: critical t's stabilize at 3.5 or so. That's why we use this value for the red lines on the outlier t plots in our software.

(Learn more about diagnostic plots and other statistical tools by attending the 3-day computer-intensive workshop "Experiment Design Made Easy." A complete description is posted on our web site; link from that page to the course outline and schedule. Then, if you like, enroll online.)
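A quick way to reproduce the flavor of Weisberg's table is to apply the Bonferroni correction directly: divide alpha by twice the number of runs and look up the Student's t quantile on n - p - 1 degrees of freedom. A minimal sketch, assuming SciPy is available (the function name is mine, not Weisberg's):

```python
from scipy.stats import t as t_dist

def bonferroni_outlier_limit(n, p, alpha=0.05):
    """Two-sided Bonferroni critical value for the outlier t test:
    the upper alpha/(2*n) quantile of Student's t on n - p - 1 df."""
    return t_dist.ppf(1.0 - alpha / (2.0 * n), n - p - 1)

# For experiments of roughly 16 to 32 runs and modest p, these limits
# land in the neighborhood of the 3.5 used for the red lines.
limit = bonferroni_outlier_limit(n=32, p=3)
```

Note that the limit shrinks slowly as n grows (more runs mean more chances for a false alarm, but each t quantile gains degrees of freedom), which is why a single rounded value works over this range of run counts.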


3. Events alert: See us at the Fall Technical Conference (a forum for both statistics and quality) plus the Medical Design and Manufacturing Show

Our web site lists where Stat-Ease professionals will be giving talks and doing DOE demos. Next month we set up shop at:

- Fall Technical Conference, El Paso, TX, October 16-17 (Update 3/07: Link no longer active.)
- Medical Design and Manufacturing (MD&M), Minneapolis, MN, October 29-30, 2003 at Booth #1816. We also posted an exciting movie for the occasion. (Update 3/07: Link no longer active.) (OK, it's not quite up to the standards of "Terminator 3," but Arnold decided to terminate the Governor of California rather than be featured in our flick.)

We hope to see you sometime in the near future!


4. Workshop alert: See when and where to learn about DOE—consider advanced training (Six Sigma Black Belt +)

If you are already well-versed in design of experiments (having mastered RSM*) and want to move to the next level, attend our Robust Design: DOE Tools for Reducing Variability (RDRV) workshop September 16-18 (next week!) in the Stat-Ease training center (Minneapolis). This class will be extremely useful for Six Sigma Black Belts who work on improvement of manufacturing processes. Seats remain for this RDRV presentation, but you'd better act fast: see our web site for details and from there link to the online registration.

*(If you lack this knowledge, come to our three-day, computer-intensive Response Surface Methods for Process Optimization workshop, which will be presented October 28-30, 2003 in Minneapolis.)

Our web site provides schedule and site information on all Stat-Ease workshops open to the public. To enroll, click the "register online" link on our web site or call Stat-Ease at 1.612.378.9449. If spots remain available, bring along several colleagues and take advantage of quantity discounts in tuition, or consider bringing in an expert from Stat-Ease to teach a private class at your site. Call us to get a quote.

PS. The "Statistics for Technical Professionals" workshop scheduled for October 7-9 has been postponed until February 17-19.


I hope you learned something from this issue. Please address your general questions and comments to me.



Mark J. Anderson, PE, CQE
Principal, Stat-Ease, Inc.
Minneapolis, Minnesota USA

PS. Quote for the month—Comments applicable to the 'appetizer' provided (on how a pool of goo affects swim times):

"That's not an experiment you have there, that's an experience."
—Sir R. A. Fisher

Trademarks: Design-Ease, Design-Expert and Stat-Ease are registered trademarks of Stat-Ease, Inc.

Acknowledgements to contributors:

—Students of Stat-Ease training and users of Stat-Ease software
—Fellow Stat-Ease consultants Pat Whitcomb and Shari Kraber
—Statistical advisor to Stat-Ease: Dr. Gary Oehlert
—Stat-Ease programmers, especially Tryg Helseth
—Heidi Hansel, Stat-Ease marketing director, and all the remaining staff


Interested in previous FAQ DOE Alert e-mail newsletters?
To view a past issue, choose it below.

#1 Mar 01, #2 Apr 01, #3 May 01, #4 Jun 01, #5 Jul 01, #6 Aug 01, #7 Sep 01, #8 Oct 01, #9 Nov 01, #10 Dec 01, #2-1 Jan 02, #2-2 Feb 02, #2-3 Mar 02, #2-4 Apr 02, #2-5 May 02, #2-6 Jun 02, #2-7 Jul 02, #2-8 Aug 02, #2-9 Sep 02, #2-10 Oct 02, #2-11 Nov 02, #2-12 Dec 02, #3-1 Jan 03, #3-2 Feb 03, #3-3 Mar 03, #3-4 Apr 03, #3-5 May 03, #3-6 Jun 03, #3-7 Jul 03, #3-8 Aug 03, #3-9 Sep 03 (see above)

Click here to add your name to the FAQ DOE Alert newsletter list server.

Statistics Made Easy™

DOE FAQ Alert ©2003 Stat-Ease, Inc.
All rights reserved.



Stat-Ease, Inc.
2021 E. Hennepin Avenue, Ste 480
Minneapolis, MN 55413-2726
p: 612.378.9449, f: 612.378.2152