Issue: Volume 8, Number 1
Date: January 2008
From: Mark J. Anderson, Stat-Ease, Inc., Statistics Made Easy® Blog

Dear Experimenter,

Here's another set of frequently asked questions (FAQs) about doing design of experiments (DOE), plus alerts to timely information and free software updates. If you missed the previous DOE FAQ Alert, please click on the links at the bottom of this page. If you have a question that needs answering, click the Search tab and enter the key words. This finds not only answers from previous Alerts, but also other documents posted to the Stat-Ease web site.

Feel free to forward this newsletter to your colleagues. They can subscribe via the sign-up link at the bottom of this page. If this newsletter prompts you to ask your own questions about DOE, please address them to me via e-mail.

For an assortment of appetizers to get this Alert off to a good start, see these new posts on the Statistics Made Easy blog (beginning with the most recent on down):
— Medical writer's '08 resolution: Do not report results from poorly designed experiments
— Tidings and tools for Yule
— Sports ‘randomination’?
— Is anyone out there, or is it just bla, bla blog?

Also check out the feedback to the "bla, bla" blog post by Curious Cat, the pen name of John Hunter, whose late father, Bill, energized the field of DOE alongside George Box and Stu Hunter, co-authors of the landmark book "Statistics for Experimenters" (now in its second edition).

Topics in the body text of this DOE FAQ Alert are headlined below (the "Expert" ones, if any, delve into statistical details).

1. Newsletter Alert: December issue of the Stat-Teaser details DOE for sales and marketing, plus power from factorial designs
2. FAQ: Problems analyzing a two-by-two factorial design
3. Webinar Alert: 10 Ways to Mess Up an Experiment & 8 Ways to Clean It Up
4. Reader Response: Advantages of actual replication
5. Reader Contribution: Designed experiment solves welding problem
6. Event Alert: Lean Six Sigma Conference, European DOE User Meeting
7. Workshop Alert: See when and where to learn about DOE

PS. Quote for the month: See the light. (Page through to the end of this e-mail to enjoy the actual quote.)


1. Newsletter Alert: December issue of the Stat-Teaser details DOE for sales and marketing, plus power from factorial designs

Many of you will soon receive a printed copy of the latest Stat-Teaser, but others, by choice or because you reside outside of North America, will view the December issue only on our web site. It features an article by me titled "A Crash Course on DOE for Sales and Marketing," which details a workshop developed by Paul Selden, author of "Sales Process Engineering" (ASQ Quality Press).* This issue of the Stat-Teaser also provides part 2 of a primer on power, titled "When Power is Too Low in Factorial Designs." It demonstrates a design that lacks sufficient power and discusses how to deal with this. Thank you for reading the Stat-Teaser newsletter. If you get the hard copy but find it just as convenient to read what we post to the Internet, consider contacting us to be taken off our mailing list, thus conserving resources. However, we do appreciate you passing around hard copies of the Stat-Teaser, so do not feel obliged to forgo this. Also, it would be great (in my opinion!) if you forward the link from my DOE FAQ Alert, especially for this issue, which should be of particular interest to your sales and marketing colleagues.

*(For a complete description of "A Crash Course on DOE for Sales and Marketing," contact me directly.)


2. FAQ: Problems analyzing a two-by-two factorial design

-----Original Message-----
From: Chicago
"When attempting to analyze a simple 2 x 2 factorial design, I clicked left-to-right through the progressive lineup of Design-Expert® software buttons: [Transform]>[Effects]>[ANOVA]> [Diagnostics]>[Model Graphs]. The program warned me that I had better select some terms for the model or it would use the mean, so I went back to [Effects] and clicked the biggest one — the AB term. Then it suggested that I correct the model for hierarchy. I did this, but the ANOVA table came up empty — no F or P value.
Pressing ahead, I was alerted that:

>Diagnostic graphs cannot be created because the model is over specified. All degrees of freedom are in the model and none are assigned to the residual (error). Also the ANOVA had no calculated p-values because without residual error there is nothing to test against. To fix the problem, return to the Effects or Model button and assign at least one term to error.<

However, this did not stop me — I proceeded to [Model Graphs] and got a nice interaction plot, only it was missing the handy least-significant-difference (LSD) bars. What am I doing wrong?"

Answer (from Stat-Ease Consultant Wayne Adams):
"In the help system there is a short but sweet paragraph about two-factor experiments:

>If only two factors are tested, the two main effects can be estimated and tested statistically. However, if the interaction is also estimated, that leaves no room for error. Thus none of the three effects can be tested. We recommend replicating this experiment to provide estimation of pure error. Then the interaction and error can both be estimated and all effects tested statistically.<

Two-factor experiments need to have some extra runs in order to produce a meaningful ANOVA. This applies no matter how many levels the factors have. For example, a three-by-three design, such as three suppliers that each provide three types of material, would be just as troublesome for statistical analysis. Replication is a good way to get these extra runs. Once the experiment has three factors, the extra runs are not necessary (but replication is still a good idea)."

To recap, the user from Chicago ran only the four combinations, unreplicated, of two factors each at two levels. This design can estimate the overall mean, the two main effects, and the two-factor interaction (2FI). The 2FI produced the biggest effect, so the user picked it. However, hierarchy is advised for polynomial modeling, as discussed in prior DOE FAQ Alerts. Hierarchy puts the two main effects into the model to support the 2FI, but that leaves no estimate of error. (In other words, all three effects can be estimated, but not the error.) Things go downhill from there for the statistical analysis and diagnostics. Luckily, the user realized something must not be right about the interaction graph, because it showed no LSD bars. I suppose one might say this was a comedy of no error.
— Mark
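The degrees-of-freedom accounting above can be sketched in a few lines of Python. This is an illustrative sketch only (not Design-Expert itself), and the response values are invented for the example:

```python
# Hypothetical sketch of the df accounting for a 2x2 factorial.
# The response values are made up for illustration.
import numpy as np

# Coded levels for the four factor combinations (standard order)
A = np.array([-1, 1, -1, 1])
B = np.array([-1, -1, 1, 1])
y = np.array([12.0, 20.0, 14.0, 30.0])  # invented responses

# Effect estimates: average at the high level minus average at the low level
effect_A = 2 * np.mean(y * A)       # 12.0
effect_B = 2 * np.mean(y * B)       #  6.0
effect_AB = 2 * np.mean(y * A * B)  #  4.0

def residual_df(n_runs, n_model_terms):
    """Degrees of freedom left for error after fitting the model."""
    return n_runs - n_model_terms

# Unreplicated: 4 runs vs. 4 terms (mean, A, B, AB) leaves no error df,
# so the ANOVA has no F or p values.
print(residual_df(4, 4))  # 0

# One full replicate: 8 runs, same 4 terms, leaves 4 df of pure error,
# so all three effects can be tested.
print(residual_df(8, 4))  # 4
```

All three effects can still be calculated in the unreplicated case, as shown; what is missing is any error term to test them against, which is exactly why the ANOVA table came up empty.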

(Learn more about design and analysis of two-level factorials by attending the three-day computer-intensive workshop "Experiment Design Made Easy." See our web site for a description of this class, then link from that page to the course outline and schedule. Then, if you like, enroll online.)


3. Webinar Alert: 10 Ways to Mess Up an Experiment & 8 Ways to Clean It Up

You are invited to attend a free web conference by Stat-Ease on "10 Ways to Mess Up an Experiment & 8 Ways to Clean It Up" at 8 AM Central USA time on Tuesday, February 5, and again at 11 AM on Wednesday, February 6. I plan to present this talk and keep it relatively basic. It is intended for actual experimenters and applied statisticians looking for practical advice. The presentation is based on contributions from my colleague Shari Kraber and independent consultant Jeff Hybarger. Attendance may be limited for one or both of these two one-hour webinar sessions. Contact our Communications Specialist, Karen Dulski, to sign up. If you can be accommodated, she will send you the WebConnect link and the ConferenceNow telephone dial-in. Toll-free access extends worldwide, but not to all countries.


4. Reader Response: Advantages of actual replication

From: Matthew L. Barrows, Process Engineering Manufacturing Technologist and Six Sigma Master Black Belt, Monsanto, Luling, Louisiana
"Mark, I read the Trimming the Fat article you had a link to in the last newsletter. I like the replication concept here, I had never really thought through or heard the comparison between OFAT and DOE described this way. I really like it, I also like the terminology of parallel vs. serial processing. But I have a
specific question on the replication. If I do an OFAT with 4 replications at each point, those are pure real replications where everything else was held the same. That seems different than a DOE with replication of one side of the cube where each replicate had different combinations of the other factors. So assuming the 3 factors all have some real effect, the DOE replications of say the high for Factor A, will have more
variation than the 4 replications of the high for factor A on the OFAT. So my estimate of the mean for the high level of factor A for the OFAT ought to be more precise than my estimate of the mean for the high level of A for the DOE. This is thinking simplistically as the DOE solves for the effects of all three factors and separates them. But I would not mind to hear your answer to the above simplistic comeback to your replication argument."
— Matt Barrows

Great question! First off, I agree that true replication, such as that done in the OFAT case, cannot be beat for estimating 'pure' error, provided it is done properly by re-setting all process factors for each replicate. In fact, if the experimenter believes the optimum lies within the region being explored, it may pay to run a number of center points; we recommend at least four. These should be mixed in randomly, or at intervals, throughout the design as it will actually be run. The residual error, derived from insignificant effects, can then be compared to the pure error provided by the replicated center points in a lack-of-fit test. Amazingly, it is not uncommon to see no significant lack of fit, which supports the practice of running unreplicated two-level designs as a screening tool. The bottom line: it works! However, if you want to be conservative, replicate the center point, or some other point(s) in your design, to get a pure-error estimate.
— Mark
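The pure-error calculation from replicated center points is simple enough to sketch. The four response values below are invented for illustration, not from any real process:

```python
# Hypothetical sketch: estimating pure error from replicated center points.
# The four response values are invented for illustration.
import numpy as np

center = np.array([50.1, 49.7, 50.4, 49.9])  # four center-point runs

# Sum of squared deviations of the replicates about their own mean
pure_error_ss = np.sum((center - center.mean()) ** 2)
pure_error_df = len(center) - 1  # 3 df of pure error
mse_pure = pure_error_ss / pure_error_df

# In a lack-of-fit test, the lack-of-fit mean square from the fitted
# model would be compared, via an F ratio, against mse_pure.
print(round(mse_pure, 4))
```

Because the center points repeat one identical combination of factor settings, their scatter reflects only run-to-run noise, which is what makes this a 'pure' error estimate.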

Response (from Matthew):
"Mark, Actually, when I do DOE’s on my process or consult others in the company, very often I recommend exactly the same: To do replicated center points. And often we execute them randomly in time throughout the experiment or for a long experiment, we might do a center-point run each day. Then with that we can understand the time series effects and it very nicely gets to an estimate of long term variation (I like to call it 'wobble') in the process. In extreme cases, I have used the daily center points to correct out a trend in the process over the course of the experiment by adding it in as a covariate term in the DOE model. Anyway, beside the point, but glad to see your comments support that. But to support this line of argument for DOE, I think it would be good to do a simulation or work out the calculation, how the error estimate compares in the OFAT vs. the DOE for the example you provided. So take an assumed effect size for A, B, and C factors and run the numbers theoretically to see what the error estimate would be for the factor effect for A, B, and C in the 16-run OFAT vs. the 8-run DOE. And/or, what if you kept the total number of runs the same say 8 and 8 and see what the errors are comparatively. Just sort of thinking out loud here: When I argue for DOE’s vs. OFAT, I mostly use arguments around interaction effects and potentially finding sub-optimal optimums. And just the fact that in execution by their nature, very seldom are OFAT’s done randomly. I usually quote the assumed facts of the efficiency of DOE’s but had not really worked that argument in detail like you are attempting to do, so that is why I am so interested in it, especially as an instructor and consultant myself."
— Matt

My response:
Matt, I agree that it is a bit magical to derive error estimates from unreplicated designs. I, too, think interactions are what make factorials exciting. Let’s see if this thread generates further comments and perhaps enlightenment.
— Mark
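For the curious, here is a rough Monte Carlo along the lines Matt suggests. The effect sizes, noise level, and run counts are assumptions chosen for illustration, not data from any real process:

```python
# Rough Monte Carlo sketch of OFAT vs. factorial precision on factor A.
# Effect sizes and noise are assumed values, for illustration only.
import itertools
import numpy as np

rng = np.random.default_rng(2008)
sigma = 1.0                               # assumed run-to-run noise
true_effects = np.array([3.0, 2.0, 1.0])  # assumed effects of A, B, C

# Full 2^3 factorial in coded (-1/+1) units: 8 runs, no interactions assumed
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

def factorial_A_sd(n_sims):
    """SD of the factor-A effect estimate from the 8-run factorial.
    All 8 runs contribute: 4 at A = +1 vs. 4 at A = -1."""
    estimates = []
    for _ in range(n_sims):
        y = design @ (true_effects / 2) + rng.normal(0, sigma, 8)
        estimates.append(y[design[:, 0] == 1].mean()
                         - y[design[:, 0] == -1].mean())
    return np.std(estimates)

def ofat_A_sd(n_sims, reps):
    """SD of the factor-A effect from an OFAT: `reps` runs at each of
    the low and high levels of A, with B and C held fixed."""
    estimates = []
    for _ in range(n_sims):
        lo = -true_effects[0] / 2 + rng.normal(0, sigma, reps)
        hi = +true_effects[0] / 2 + rng.normal(0, sigma, reps)
        estimates.append(hi.mean() - lo.mean())
    return np.std(estimates)

# Both come out near sigma/sqrt(2), about 0.71: equal precision on A.
# But the factorial delivers B and C (and interactions) from the SAME
# 8 runs, while the OFAT must spend another 8 runs on each factor.
print(factorial_A_sd(5000))
print(ofat_A_sd(5000, 4))
```

Under these assumptions, the two approaches estimate the A effect with roughly equal precision, but the factorial gets all three effects from its 8 runs while the OFAT needs 8 runs per factor: the 'hidden replication' behind the efficiency claims Matt mentions.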


5. Reader Contribution: Designed experiment solves welding problem

From: David W. Gore, P.E., Associate Professor, Middle Tennessee State University
"I was "sold" on DOE when I did a multi-welder study for an automobile seat manufacturer with a group of my students. The DOE solved the poor weld-nut quality problem as you can see from the attached report. If you think it has merit, feel free to use it!"

See the article "University And Industry Collaboration to Solve Welding Quality Problem Using Design Of Experiments (DOE)," along with its illustrations, posted on our web site.


6. Event Alert: Lean Six Sigma Conference, European DOE User Meeting

At the 2008 ASQ Lean Six Sigma Conference, held February 11-12 in Phoenix, Stat-Ease Consultant Pat Whitcomb will explain "How to Plan and Analyze a Verification DOE." Conference details are posted online, and Stat-Ease will exhibit. Pat's talk previously garnered an "extremely overwhelming" positive review. The Second European DOE User Meeting will be held March 10-12 in Berlin, Germany. Come to increase your understanding of design of experiments (DOE) techniques, learn of successful real-life applications of DOE, and attend presentations specific to Stat-Ease software and its features. To receive more information when it becomes available, send an e-mail to Heidi Hansel.

Click here for a list of upcoming appearances by Stat-Ease professionals. We hope to see you sometime in the near future!


7. Workshop Alert: See when and where to learn about DOE

Seats are filling fast for the following DOE classes:

—> Experiment Design Made Easy (EDME)
(Detailed on our web site)
> January 22-24 (San Diego, CA) **SOLD OUT**
> February 12-14 (Minneapolis, MN)
> March 4-6 (Philadelphia, PA)

—> Mixture Design for Optimal Formulations (MIX)
> January 29-31 (Minneapolis, MN)

—> Response Surface Methods for Process Optimization (RSM)
> February 26-28 (Minneapolis, MN)

—> DOE for DFSS: Variation by Design (DDFSS)
> March 11-12 (Minneapolis, MN)

See our web site for the complete schedule and site information on all Stat-Ease workshops open to the public. To enroll, click the "register online" link on our web site or call Elicia at 612.746.2038. If spots remain available, bring along several colleagues and take advantage of quantity discounts in tuition. Or consider bringing in an expert from Stat-Ease to teach a private class at your site.*

*Once you achieve a critical mass of about 6 students, it becomes very economical to sponsor a private workshop, which is most convenient and effective for your staff. For a quote, contact us by e-mail.


I hope you learned something from this issue. Please address your general questions and comments to me via e-mail.



Mark J. Anderson, PE, CQE
Principal, Stat-Ease, Inc.
2021 East Hennepin Avenue, Suite 480
Minneapolis, Minnesota 55413 USA

PS. Quote for the month (see the light):

"Sometimes in order to see the light you have to risk the darkness."
—Iris Hineman (Lois Smith) in Steven Spielberg's movie "Minority Report" (2002)

Trademarks: Design-Ease, Design-Expert and Stat-Ease are registered trademarks of Stat-Ease, Inc.

Acknowledgements to contributors:
—Students of Stat-Ease training and users of Stat-Ease software
—Stat-Ease consultants Pat Whitcomb, Shari Kraber and Wayne Adams
—Statistical advisor to Stat-Ease: Dr. Gary Oehlert
—Stat-Ease programmers, especially Tryg Helseth and Neal Vaughn
—Heidi Hansel, Stat-Ease marketing director, and all the remaining staff


Interested in previous FAQ DOE Alert e-mail newsletters?
To view a past issue, choose it below.

#1 Mar 01, #2 Apr 01, #3 May 01, #4 Jun 01, #5 Jul 01, #6 Aug 01, #7 Sep 01, #8 Oct 01, #9 Nov 01, #10 Dec 01, #2-1 Jan 02, #2-2 Feb 02, #2-3 Mar 02, #2-4 Apr 02, #2-5 May 02, #2-6 Jun 02, #2-7 Jul 02, #2-8 Aug 02, #2-9 Sep 02, #2-10 Oct 02, #2-11 Nov 02, #2-12 Dec 02, #3-1 Jan 03, #3-2 Feb 03, #3-3 Mar 03, #3-4 Apr 03, #3-5 May 03, #3-6 Jun 03, #3-7 Jul 03, #3-8 Aug 03, #3-9 Sep 03, #3-10 Oct 03, #3-11 Nov 03, #3-12 Dec 03, #4-1 Jan 04, #4-2 Feb 04, #4-3 Mar 04, #4-4 Apr 04, #4-5 May 04, #4-6 Jun 04, #4-7 Jul 04, #4-8 Aug 04, #4-9 Sep 04, #4-10 Oct 04, #4-11 Nov 04, #4-12 Dec 04, #5-1 Jan 05, #5-2 Feb 05, #5-3 Mar 05, #5-4 Apr 05, #5-5 May 05, #5-6 Jun 05, #5-7 Jul 05, #5-8 Aug 05, #5-9 Sep 05, #5-10 Oct 05, #5-11 Nov 05, #5-12 Dec 05, #6-1 Jan 06, #6-2 Feb 06, #6-3 Mar 06, #6-4 Apr 06, #6-5 May 06, #6-6 Jun 06, #6-7 Jul 06, #6-8 Aug 06, #6-9 Sep 06, #6-10 Oct 06, #6-11 Nov 06, #6-12 Dec 06, #7-1 Jan 07, #7-2 Feb 07, #7-3 Mar 07, #7-4 Apr 07, #7-5 May 07, #7-6 Jun 07, #7-7 Jul 07, #7-8 Aug 07, #7-9 Sep 07, #7-10 Oct 07, #7-11 Nov 07, #7-12 Dec 07, #8-1 Jan 08 (see above)

Click here to add your name to the DOE FAQ Alert newsletter list server.

Statistics Made Easy®

DOE FAQ Alert ©2008 Stat-Ease, Inc.
All rights reserved.



Stat-Ease, Inc.
2021 E. Hennepin Avenue, Ste 480
Minneapolis, MN 55413-2726
p: 612.378.9449, f: 612.378.2152