Stat-Ease
 
Vol: 11 | No: 3 | May/Jun'11
The DOE FAQ Alert
     
 

Heads-up (below!)
Two free webinars, from simple to sublime, coming soon—enroll now if you can participate!

Dear Experimenter,

Here’s another set of frequently asked questions (FAQs) about doing design of experiments (DOE), plus alerts to timely information and free software updates. If you missed the previous DOE FAQ Alert, click here.

TIP: Get immediate answers to questions about DOE via the Search feature on the main menu of the Stat-Ease® web site. This not only pores over previous alerts, but also the wealth of technical publications posted throughout the site.

Feel free to forward this newsletter to your colleagues. They can subscribe by going to this registration page.

Also, Stat-Ease offers an interactive website—The Support Forum for Experiment Design.  Anyone (after gaining approval for registration) can post questions and answers to the forum, which is open for all to see (with moderation).  Furthermore, the forum provides program help for Design-Ease® and Design-Expert® software.  Check it out and search for answers.  If you come up empty, do not be shy: Ask your question!  Also, this being a forum, we encourage you to weigh in with answers!  The following Support Forum topic provides a sample of the threads that developed since my last Alert:

  • Design Selection:
    • Could you explain why the centroid is the only point replicated in a mixture screening design?

To open yet another avenue of communications with fellow DOE aficionados, sign up for The Stat-Ease Professional Network on Linked In and start or participate in discussions with other software users.

 
Stats Made Easy Blog

StatsMadeEasy offers wry comments weekly from an engineer with a bent for experimentation and statistics. To get new StatsMadeEasy entries click this link:

Subscribe with Feedburner

“Your StatsMadeEasy Blog brightens up a dreary work day...”
—Applied Statistician, Florida

Topics discussed since the last issue of the DOE FAQ Alert (latest one first):

*Check out the first comment on this blog—it links to a very funny clip of Andy Griffith trying to explain some math and stats to his TV son Opie (Ron Howard).  Please do not be shy about adding your take on any news or views you see in StatsMadeEasy.  Thanks for paying attention.

 
     
 

If this newsletter prompts you to ask your own questions about DOE, please address them via e-mail to: [email protected].




Topics in the body text of this DOE FAQ Alert are headlined below (the expert ones, if any, delve into statistical details):

1:  FAQ: For operational reasons I segmented my factorial screening-design into blocks: Why are these not shown in the coded predictive-equation?
2:  FAQ: Graphical optimization plot with the confidence interval checked
3:  FAQ: If we are changing relatively small levels of components, would it be OK to use a factorial rather than mixture design?
4:  Webinar alert: Encores of “How to Get Started with DOE” (beginner) and re-scheduling of “Practical Aspects of Algorithmic Design of Physical Experiments” (advanced)
5:  Info alert: Case studies detailing the application of DOE to purification of minerals and to optimization of injection molding of plastic
6:  Reader response: Why not default directly to the designed-for model for response surface methods (RSM)?
7:  Events alert: Using DOE with Tolerance Intervals to Verify Specifications
8:  Workshop alert: See when and where to learn about DOE
 
 
PS. Quote for the month: Classic joke about statistics classes
(Page down to the end of this e-zine to enjoy the actual quote.)


- Back to top -
 
 

1: FAQ: For operational reasons I segmented my factorial screening-design into blocks: Why are these not shown in the coded predictive-equation?

Original Question:

From a Graduate Researcher:
“For operational reasons I segmented my factorial screening-design into blocks.  It went well: I discovered the vital factors in my process.  However, in the equation for coded-factors the block is not listed.  Why not?”

Answer:

From Stat-Ease Consultant Wayne Adams:
“Blocks cannot be used as predictors because we assume anything blocked out is a random effect.  We are simply accounting for a known source of variation so that the factor coefficients chosen for the model can be estimated as efficiently as possible.  It is further assumed that blocking variables do not interact with the control factors.

For example, let’s say you conduct an experiment on Tuesday and Thursday.  Rather than attempt to control or measure all the things that might vary from day to day, you simply block them out by design.  Let’s further assume that the average response from Tuesday comes out higher than Thursday’s.  Most likely this just occurred by chance; that is, Tuesdays will not always be higher than Thursdays.  A random effect such as this cannot be used in a predictive model.  It’s fair to say that day-to-day variation exists.  However, the blocking effect cannot be relied upon as a consistent estimate for next week, month, year, etc.”
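Wayne’s Tuesday/Thursday example can be sketched numerically.  The following Python sketch (hypothetical numbers, not from any actual experiment) simulates a two-factor factorial run in two day-blocks: the block column absorbs the day-to-day shift, leaving the factor estimates clean, which is why the block term never appears in the coded predictive equation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2^2 factorial run in two blocks (Tuesday / Thursday), coded units.
# Made-up truth: y = 10 + 3*A - 2*B, plus a random day-to-day shift and noise.
A = np.array([-1, 1, -1, 1, -1, 1, -1, 1])
B = np.array([-1, -1, 1, 1, -1, -1, 1, 1])
block = np.array([-1, -1, -1, -1, 1, 1, 1, 1])  # first four runs Tuesday, last four Thursday

day_shift = 1.5  # the random block effect -- unknowable for next week's runs
y = 10 + 3 * A - 2 * B + day_shift * block + rng.normal(0, 0.3, 8)

# Least-squares fit of intercept, A, B, and block (all columns orthogonal here).
X = np.column_stack([np.ones(8), A, B, block])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # the block column soaks up the day shift; A and B estimates are unaffected
```

The fitted block coefficient recovers the day shift, but since that shift is random it is excluded from the prediction—only the intercept, A, and B terms carry forward.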

(Learn more about blocking by attending the two-day computer-intensive workshop “Experiment Design Made Easy.”  Click on the title for a description of this class and link from this page to the course outline and schedule.  Then, if you like, enroll online.)


- Back to top -


2: FAQ: Graphical optimization plot with the confidence interval checked

Original Question:

From a Manufacturing Engineer:
“I need help understanding what the graphical contour shading indicates.  The darker yellow in particular.  We are trying to optimize a heat-sealing process based on the graphical optimization plot with the confidence interval checked.  Was this the best way to determine our optimal range?  Is it typical for the optimal region to be so small?  I appreciate all your help.”

Answer:

From Stat-Ease Consultant Brooks Henderson:
“Including the confidence interval by checking the box in the Criteria screen is usually a good idea.  This will narrow down your region.  If you did not use it and operated on the edge of the operating window, you would have a 50% chance of getting something outside of what you want (on average the response will equal the value of the contour line, so half the time it falls above and half the time below that value).  This graphical optimization is a good way to choose your optimal region.  The size of the region depends on the problem.  Sometimes, you will get no region that works.

One thing you should do is use numerical optimization to find a solution first; this sets which “slice” of the design space you are looking at in the graphical optimization and may give you a bigger region.
Remember when looking at the graphical optimization plot that you are only looking at two factors at a time.  Therefore, the values that the other factors are set to will change how the graph looks.  Notice in the graph below that I set the Pressure (factor C) to its high level by dragging the red bar to the right (see the blue arrow).  This changes the look of the graph and where the yellow operating window lies, because the graph depends on the level of factor C.

Graphical Optimization

Graphical optimization moved to a new location by sliding a Factor Tool bar

If you use numerical optimization first to find a solution, then the solutions will be preloaded in buttons across the top of the screen (see below).  If you click on these buttons, the levels of all factors not shown on the plot (factor C in this case) will be adjusted accordingly to get to that solution.

Numerical Optimization

Numerical optimization solutions ranked by desirability

I just thought of one other thing that might help you.  If you were referring to the shading in the graphical optimization plot, the yellow region is where all your criteria are met.  This is where you would meet your goals.  I call this the “operating window”.  If you add confidence intervals to your criteria, a grayed-out portion of the yellow region appears; this narrows the operating window as I described, ensuring 95% confidence that your average values will fall within the yellow region.  For more information on the intervals (CI, PI, etc.) and how to use them, see the third page of our August 2008 Stat-Teaser newsletter.”
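Brooks’s point about the 50% risk at the contour edge is easy to verify with a quick simulation (made-up numbers, not from the actual heat-sealing process): on the contour line the model’s predicted mean equals the response limit, so about half of future runs land on the wrong side.

```python
import numpy as np

rng = np.random.default_rng(7)

# Made-up numbers: the model predicts a mean response exactly on the spec limit
# (i.e., the operating point sits right on the contour line); process sd = 2.
pred_mean, sigma, spec = 50.0, 2.0, 50.0

runs = rng.normal(pred_mean, sigma, 100_000)  # simulated future runs at that point
frac_out = np.mean(runs > spec)
print(round(frac_out, 3))  # ~0.5 -- half the runs fall outside the spec
```

Pulling the criteria inward by a confidence interval moves the operating point off the contour line, which is what shrinks that 50% risk.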

(Learn more about graphical optimization by attending the two-day computer-intensive workshop “Response Surface Methods for Process Optimization.” Click on the title for a description of this class and link from this page to the course outline and schedule.  Then, if you like, enroll online.)


- Back to top -


3: FAQ: If we are changing relatively small levels of components, would it be OK to use a factorial rather than mixture design?

Original Question:

From a Global Technology Director, Engineered Materials:
“My team makes polymer formulations, which are typically made up of polymer resins, fillers, additives (such as impact modifiers), or stabilizers.  Sometimes we are changing the levels of major components of a formulation, and I understand that in those cases a mixture design is appropriate.  However, a lot of the time we are changing relatively small levels of an additive, and I have previously been trained that for those cases a factorial design is a reasonable approximation.  For example, I often keep an entire formulation constant except for the stabilizers, which in total comprise 1% of the formulation.

Is there a rule of thumb or a way to estimate the error for when a mixture is required versus a factorial design?”

Answer:

From Stat-Ease Consultant Wayne Adams:
“We get this question quite a bit.  A mixture design is used when the changes in the response are a function of the relative proportions of the variable components in the mixture.  If the amount of everything (not just variables) in the mixture were halved (thus the total decreases by half), the response would stay about the same.

In your factorial scenario the variable components are the stabilizers, which add up to a total of 1% while the remaining 99% makes up everything else.  You are really doing a mixture experiment, but with an inefficient factorial design.  Let’s assume there are three stabilizers that always make up 1% of the total.  This produces a three-component mixture design.  On the other hand, if each of the stabilizers can range from 0 to 1%, thus totaling up to 3%, then the “everything else” is varying too.  You’ll have a four-component mixture: stabilizer 1, plus stabilizer 2, plus stabilizer 3, plus everything else.

Now, to add to the confusion, sometimes the effect of the formulation depends on the amount.  This happens with things like fertilizer: the perfect formulation for the lawn won’t work correctly unless applied in the correct amount; too little does nothing but make the weeds grow, too much can burn the lawn.  In these cases a process variable is used to test the amount in a combined design.  Combined designs marry the mixture with a factorial.”
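A small numeric sketch (hypothetical amounts, in Python) of the proportion view Wayne describes: in a true mixture situation the response depends only on the relative proportions of the components, so scaling the whole batch changes nothing.

```python
import numpy as np

# Hypothetical 100 g batch: three stabilizers fixed at a combined 1% of the total.
stab = np.array([0.2, 0.3, 0.5])  # grams; sums to 1.0 g
rest = 99.0                       # grams of "everything else"

total = stab.sum() + rest
props = np.append(stab, rest) / total  # proportions of a 4-component mixture
print(props)

# Halve the entire batch: the proportions -- and hence the predicted response
# in a true mixture situation -- are unchanged.
props_half = (np.append(stab, rest) / 2) / (total / 2)
print(np.allclose(props, props_half))  # True
```

This invariance under rescaling is the litmus test Wayne gives for choosing a mixture design over a factorial one.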

(Learn more about how to ideally vary ingredients by attending the computer-intensive two-day workshop “Mixture Design for Optimal Formulations.” Click on the title for a complete description.  Link from this page to the course outline and schedule.  Then, if you like, enroll online.)


- Back to top -


4: Webinar alert: Encores of “How to Get Started with DOE” (beginner) and re-scheduling of “Practical Aspects of Algorithmic Design of Physical Experiments” (advanced)

Stat-Ease Consultant Brooks Henderson will present encores of his webinar showing “How to Get Started with DOE” on Thursday, June 9 at 10:30 AM CDT USA* and again on June 15 at 5:00 PM CDT.  He will incorporate his Whirley-Pop DOE and some tips from past webinars.  If you are new to DOE, this webinar is for you!  Stat-Ease webinars vary somewhat in length depending on the presenter and the particular session—mainly due to breaks for questions: Plan for 45 minutes to 1.5 hours, with 1 hour being the target median.  When developing these one-hour educational sessions, our presenters often draw valuable material from Stat-Ease DOE workshops.  Attendance may be limited, so sign up soon by contacting our Communications Specialist, Karen Dulski, via [email protected].  If you can be accommodated, she will provide immediate confirmation and, in timely fashion, the link with instructions from our web-conferencing vendor GotoWebinar.

Stat-Ease Consultant Patrick Whitcomb is the featured speaker for the June 20th webinar (re-scheduled from May) sponsored by the American Society for Quality (ASQ) Statistics Division.  It begins at 2:00 PM CDT USA* that Monday.  Pat will detail “Practical Aspects of Algorithmic Design of Physical Experiments”, an advanced DOE topic.  Register for his webinar here.

*(To determine the time in your zone of the world, try using this link.  We are based in Minneapolis, which appears on the city list that you must manipulate to calculate the time correctly.  Evidently, correlating the clock on international communications is even more complicated than statistics!  Good luck!)


- Back to top -


5: Info alert: Case studies detailing the application of DOE to purification of minerals and to optimization of injection molding of plastic

The Engineering & Mining Journal (E&MJ) recently posted this article detailing how Design-Expert software helped FLSmidth Minerals optimize Rio Tinto’s Kennecott (Utah) copper concentrator.

A case study showing how “Design of Experiments helps optimize injection molding of conductive compounds” was published in the March 2011 Injection Molding magazine.  See this posting by its publisher, Plastics Today.


- Back to top -


6: Reader response: Why not default directly to the designed-for model for response surface methods (RSM)?

Comment:

From Niels Dekker, Statistician*:
*(Automotive & Aerospace Coatings, AkzoNobel Car Refinishes B.V., The Netherlands)
Dear Stat-Ease Consultants,

“Reading the always very informative January–February issue of the DOE FAQ Alert, I was struck by the answer given to FAQ #2 (Model selection and reduction from response surface method (RSM) experiments).  I recall having seen similar discussions about this on the internet in the past (I forget where, I have to admit).  However, I take a different approach from what you described.

I always start with the model that I (or someone else) designed the experiment for and only glance at the Fit Summary your software provides.  It is useful to see directly which model orders are aliased (especially when I am helping out with data from someone else), but the listed model orders only test (logically) the added value (sequential model SS) of all the respective higher-order model terms on top of a lower model order.

First, because the design was built with a model in mind (like a hypothesis), I think it is good practice to always start off with that model and then begin term selection as you described.  More importantly, the Fit Summary occasionally suggests a lower-order model even though a few significant higher-order terms are hidden among many terms of the same order that are not significant.

In many cases, both approaches [using the designed-for model versus the one recommended by the software] will result in the same ‘final’ model, but not always.  What are your thoughts on these different approaches?”

Response:

From Stat-Ease Consultant Wayne Adams:
“In general I agree: one should consider the designed-for model.  It is very easy to check both models (designed-for vs. suggested)—that is what I usually do.  The suggested model can be thought of as the floor, that is, do not fit a lower-order model than this.

Let’s assume the designed-for model is quadratic, but the suggested model is the two-factor interaction (2FI) model.  I take a look at the adjusted and predicted R-squares for the designed-for and suggested models.  If the designed-for model’s adjusted R-square is higher, but its predicted R-square is lower than the suggested model’s, this indicates that some of the quadratic terms may be contributing to the fit.  However, including all of the terms may only help to fit the noise around the true trend in the data.

Model fitting is more art than science.  However, the art always needs to be grounded in science.  I never rely on the statistics alone.  I often find it necessary to ask a subject-matter expert whether a model makes sense.

Try the designed-for model: The experiment was built with it in mind.  However, use caution to make sure that the model is not being over-fit.  Compare the fit of the suggested model to that of the reduced, designed-for model (R-squares, F-values, model graphs).  As you said, they are often the same, but every once in a while a small subset of the quadratic terms completes the picture.

Thanks for contributing to the discussion.  Let us know if you have any further concerns; we’re always happy to hear them.”
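The adjusted and predicted R-squares that Wayne compares can be computed directly.  The sketch below (illustrative one-factor data, not Design-Expert output) uses the standard PRESS shortcut, in which each leave-one-out residual is the ordinary residual divided by one minus its hat-matrix leverage.

```python
import numpy as np

def adj_pred_r2(X, y):
    """Adjusted R-square and predicted R-square (via PRESS) for a least-squares fit."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    hat = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)  # leverages h_ii
    press = np.sum((resid / (1 - hat)) ** 2)         # sum of leave-one-out residuals^2
    sst = np.sum((y - y.mean()) ** 2)
    sse = np.sum(resid ** 2)
    r2_adj = 1 - (sse / (n - p)) / (sst / (n - 1))
    r2_pred = 1 - press / sst
    return r2_adj, r2_pred

# Toy data: the true trend is quadratic, so the quadratic (designed-for) model
# should beat the lower-order linear model on both measures.
rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 12)
y = 5 + 2 * x + 3 * x ** 2 + rng.normal(0, 0.5, 12)

lin = np.column_stack([np.ones(12), x])
quad = np.column_stack([np.ones(12), x, x ** 2])
r2a_lin, r2p_lin = adj_pred_r2(lin, y)
r2a_quad, r2p_quad = adj_pred_r2(quad, y)
print(r2a_lin, r2p_lin, r2a_quad, r2p_quad)
```

The troublesome case Wayne flags is when adding terms raises the adjusted R-square but lowers the predicted R-square—a sign the extra terms are fitting noise rather than trend.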


- Back to top -


7: Events alert: Using DOE with Tolerance Intervals to Verify Specifications

Stat-Ease Consultant Patrick Whitcomb will detail “Using DOE with Tolerance Intervals to Verify Specifications” at the Quality and Productivity Research Conference (QPRC) in Roanoke, Virginia.  Click this link for more details on this QPRC, which runs from June 8 through 10.  If you can make it to this conference, please come to our demo of Design-Expert software for some tips from one of its programmers, Hank Anderson.

Later this summer another Stat-Ease expert, Wayne Adams, will also make an appearance in Virginia—in Newport News for the Thermal & Fluids Analysis Workshop (TFAWS) hosted by NASA Langley Research Center on August 15-19.  See details on TFAWS here.

Click on our Events page for a list of upcoming appearances by Stat-Ease professionals.  We hope to see you sometime in the near future!

PS.  Do you need a speaker on DOE for a learning session within your company or technical society at regional, national, or even international levels?  If so, contact me.  It may not cost you anything if Stat-Ease has a consultant close by, or if a web conference will be suitable.  However, for presentations involving travel, we appreciate reimbursement for travel expenses.  In any case, it never hurts to ask Stat-Ease for a speaker on this topic.


- Back to top -


8: Workshop alert: See when and where to learn about DOE

Seats are filling fast for the following DOE classes.  If possible, enroll at least 4 weeks prior to the date so your place can be assured.  However, do not hesitate to ask whether seats remain in classes that are fast approaching!  Also, take advantage of a $395 discount when you take two complementary workshops that are offered on consecutive days.

All classes listed below will be held at the Stat-Ease training center in Minneapolis unless otherwise noted.

* Take both EDME and RSM in June to earn $395 off the combined tuition!

** Attend both SDOE and DELS to save $295 in overall cost.

*** Take both MIX and MIX2 to earn $395 off the combined tuition!

See this web page for complete schedule and site information on all Stat-Ease workshops open to the public.  To enroll, click the "register online" link on our web site or call Elicia at 612-746-2038.  If spots remain available, bring along several colleagues and take advantage of quantity discounts in tuition.  Or, consider bringing in an expert from Stat-Ease to teach a private class at your site.****

****Once you achieve a critical mass of about 6 students, it becomes very economical to sponsor a private workshop, which is most convenient and effective for your staff.  For a quote, e-mail [email protected].


- Back to top -

 

Please do not send me requests to subscribe or unsubscribe—follow the instructions at the very end of this message.
I hope you learned something from this issue. Address your general questions and comments to me at: [email protected].

Sincerely,

Mark

Mark J. Anderson, PE, CQE
Principal, Stat-Ease, Inc.
2021 East Hennepin Avenue, Suite 480
Minneapolis, Minnesota 55413 USA


PS. Quote for the month—Classic joke about statistics classes:


"
“If I had only one day left to live, I would live it in my statistics class—it would seem so much longer.”

—Anonymous


Trademarks: Stat-Ease, Design-Ease, Design-Expert and Statistics Made Easy are registered trademarks of Stat-Ease, Inc.

Acknowledgements to contributors:
—Students of Stat-Ease training and users of Stat-Ease software
Stat-Ease consultants Pat Whitcomb, Shari Kraber, Wayne Adams and Brooks Henderson
—Statistical advisor to Stat-Ease: Dr. Gary Oehlert
Stat-Ease programmers led by Neal Vaughn
—Heidi Hansel Wolfe, Stat-Ease marketing director, Karen Dulski, and all the remaining staff that provide such supreme support!

DOE FAQ Alert ©2011 Stat-Ease, Inc.
Circulation: Over 5500 worldwide
All rights reserved.


 