1: FAQ: How to experiment on a multi-step process
Original Question:
From a Research Engineer:
“I am a user of Design-Expert v8 software. The product that my company fabricates undergoes a two-step process—injection molding followed by thermal bonding.
The injection molding is affected by process parameters such as injection speed, mold temperature, back pressure and melt temperature. The measured responses after this step are warpage, dimension and birefringence.
The second step involves thermal bonding of the injected parts, which is affected by temperature, speed and pressure, among other factors. The primary responses at this stage are overall warpage and bond strength.
In this two-step case how should I conduct the DOE? Ultimately I just want a good part from the thermal bonding process. However the response from the preceding injection molding process affects this final result.
Your advice is very much appreciated.”
Answer from me:
This is a hard question, but one that I can relate to as a chemical engineer who worked for years on manufacturing process improvement. In cases such as this, which involve a series of unit operations, I would first work on the one that stands out as a bottleneck or as problematic due to quality and/or yield. Another approach is to start at the unit operation furthest upstream and work your way down. In any case, you will do well to focus on only one unit operation at a time, if at all possible. The goal of the DOE will be to develop a predictive model that helps you control the outputs.
Since you do not indicate that thermal bonding is particularly problematic, I suggest taking the latter approach—starting with the first unit operation: injection molding. Set up a two-level design on the four factors listed plus, perhaps, a number of others. Proceed according to the strategy of experimentation outlined below.
Strategy of experimentation flowchart
After developing profound knowledge of injection molding, turn your attention to thermal bonding and, finally, to the entire process.
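As a concrete sketch of that first step, the 16 runs of a full two-level factorial on the four injection-molding factors can be enumerated in a few lines of Python. The coded levels and factor names below are illustrative assumptions for demonstration, not output from Design-Expert:

```python
from itertools import product

# Hypothetical coded levels (-1 = low, +1 = high) for the four
# injection-molding factors named in the question.
factors = ["injection speed", "mold temperature",
           "back pressure", "melt temperature"]

# Full 2^4 factorial: every combination of low/high settings.
runs = list(product([-1, 1], repeat=len(factors)))

print(len(runs))        # 16 runs in the full two-level design
for run in runs[:3]:    # first few rows of the coded design matrix
    print(dict(zip(factors, run)))
```

In practice the software would also randomize the run order and may add center points or replicates; this sketch shows only the basic factorial grid.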
Consultant Pat Whitcomb adds:
“If the best settings for thermal bonding depend on the settings used in injection molding, i.e., the two steps cannot be optimized independently, then you may need to run a split plot design. In a split plot, a number of parts would be made during one injection molding run and then these parts would be used for a factorial on thermal bonding. Then on to the next injection molding run to make parts for another factorial on thermal bonding, and so on. For more on split plots see: http://statease.com/dex8files/manual/dx/DX8-03D-TwoLevelSplitPlot.pdf.”
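To make the structure Pat describes more tangible, here is a minimal Python sketch of a split-plot layout. The sizes are hypothetical assumptions (two hard-to-change molding factors varied across whole plots, a 2^3 factorial on the three bonding factors within each); your own design would supply the real factors and counts:

```python
from itertools import product

# Whole plots: hard-to-change injection-molding settings (2 factors assumed).
whole_plots = list(product([-1, 1], repeat=2))

# Subplots: full 2^3 factorial on the easy-to-change thermal-bonding
# factors (temperature, speed, pressure), run within each molding batch.
sub_plots = list(product([-1, 1], repeat=3))

design = []
for wp_id, molding in enumerate(whole_plots, start=1):
    for bonding in sub_plots:  # bonding runs nested within one molding run
        design.append({"whole_plot": wp_id,
                       "molding": molding,
                       "bonding": bonding})

print(len(design))  # 4 whole plots x 8 subplots = 32 runs
```

Note that a proper split-plot analysis accounts for the two error structures (whole-plot vs. subplot); this sketch only lays out the nested run structure.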
(Learn more about the strategy of experimentation by attending the two-day computer-intensive workshop Experiment Design Made Easy. Click on the title for a description of this class and link from this page to the course outline and schedule. Then, if you like, enroll online.)
2: FAQ: Ignoring a discrepant response on a run that otherwise succeeded
Original Question:
From a Lean Six Sigma Consultant:
“Generally, Design-Expert does not allow Y2 to use all the data if Y1’s model needed some omissions.
In other words, why does Design-Expert force Y2 to depend on Y1’s data which has ignored points?”
Answer:
From Stat-Ease Consultant Wayne Adams:
“Try right-clicking in the cell you want to remove and setting the cell status to ignore. [See the screen shot below.] This way the data will be available to the other responses and just removed from the single response. Make sure you set the row status to normal or highlight first.”
Screen shot of Stat-Ease software showing how to ignore a single response value (row highlighted).
3: FAQ: Where to compare model coefficients for multiple responses
Original Question:
From a Technical Consultant on Energy and Nuclear Power:
“I have a historical design with 5 independent variables (not categorical) and 74 responses. The data seem good: fits to the approximations are excellent. My question is: Is there a simple graph showing which factors and interactions affect each of the 74 responses? It is basically the data in the ANOVA summary, but presented in a way that is clear at a glance. The idea is that when doing trade-offs, etc., it is clear which variables are the main effects and which interactions (if any) are significant. Thanks.”
Answer:
From Stat-Ease Consultant Brooks Henderson:
“There is one tool in Design-Expert version 8 that may help you out. Go to the “Summary” node in the software and click on the “coefficients table” button on the floating “summary tool” palette. You will see something like the image below. Notice the three responses down the side in rows (Burst, Push, and Track). Then observe the list of all factors across the top in columns. For the terms kept in the model for each response, the table displays the coefficient and p-value, color-coded by the size of the p-value (see the legend at the bottom). This gives a clear picture of which terms and interactions affect each response.”
Coefficients Table
(Learn more about modeling historical data by attending the two-day computer-intensive workshop Response Surface Methods for Process Optimization. Click on the title for a complete description. Link from this page to the course outline and schedule. Then, if you like, enroll online.)
4: FAQ: What to make of multiple confirmation runs
Original Question:
From a Life Prediction Engineer:
“I completed a successful experiment that led us to a new and improved formulation that now might meet all customer specifications. A dozen (12) follow-up blends exhibited average responses that fell within the prediction intervals (PI) presented by the new Confirmation node that came out with Design-Expert version 8.0.4.* However, should we also worry whether each of the individual blends falls within the PI shown under the Point Prediction screen? Perhaps this creates a ‘double jeopardy,’ that is, being overly harsh in prosecuting the confirmation results.”
Answer:
From Stat-Ease Consultant Wayne Adams:
"Your instincts are correct: Focus on the average of the n runs you complete for the confirmation, not the individual results. Statistical models predict only the average behavior of the system. If the average confirmation response is within the confirmation node’s prediction interval, then the model is confirmed.
Do not worry whether each of the individual blends falls within the original PI shown under the Point Prediction screen. That would require another statistical interval, one constructed to contain the next outcome, and the next, and so on. The formula for such an interval can be found in Hahn and Meeker, Statistical Intervals, Wiley, 1991, pp. 62-64, Section 4.8, “Prediction Interval to Contain All of m Observations.” The prediction interval that contains m future outcomes is quite a bit wider than the prediction interval for one future outcome.
Even with this “all of m” corrected interval, the conclusion depends on how many observations fall outside the limits, and by how far. Take a look at the general distribution of the confirmation observations. One requirement of confirmation work is that it be done at the same conditions as the original block of experimental runs. If there is a consistent bias toward one side of the interval, then something, whether random or an unaccounted-for fixed effect, probably differed between the original design runs and the confirmation runs. Unfortunately (but realistically), a whole host of things can cause the confirmation to fail, not the least of which is that the model is wrong.”
Confirmation node
*Design-Expert is now at version 8.0.5. If you have version 8, download the latest update here.
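For readers who want a feel for how much wider the “all of m” interval is, here is a rough Python sketch. It uses a Bonferroni-style adjustment (alpha divided by m) as a conservative stand-in for the exact Hahn and Meeker formula, and a normal quantile in place of the t distribution to stay dependency-free, so treat the numbers as illustrative only:

```python
import math
from statistics import NormalDist

def pi_half_width(s, n, alpha, m=1):
    """Approximate half-width of a prediction interval meant to
    contain all m future observations (Bonferroni sketch, NOT the
    exact Hahn & Meeker result).

    s: residual standard deviation from the fitted model
    n: number of runs used to fit the model
    """
    # Splitting alpha across the m future outcomes widens the interval.
    z = NormalDist().inv_cdf(1 - alpha / (2 * m))
    return z * s * math.sqrt(1 + 1 / n)

one_blend = pi_half_width(s=2.0, n=20, alpha=0.05, m=1)
all_twelve = pi_half_width(s=2.0, n=20, alpha=0.05, m=12)
print(all_twelve > one_blend)  # True: the "all of m" interval is wider
```

The illustrative values of s and n here are made up; the point is simply that holding twelve individual blends to the single-outcome interval is the wrong yardstick, since the interval that must capture all twelve is noticeably wider.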
5: Info alert: Interaction revealed by factorial design leads to 65% yield increase; response surface methods (RSM) leveraged by Monte Carlo simulation; updated “DOE it Yourself” list of fun projects to do at home or school
A Willamette Valley Company (WVCO) chemist designed a two-level factorial experiment that revealed substantial interactions in their polyurethane process. Knowing this, WVCO implemented changes that increased first-pass yields by 65 percent and overall plant yields by 20 percent. For details and the inspirational story, see this case study published in the August issue of Adhesives & Sealants Industry. As detailed in this Desktop Engineering story on how “Two-step optimization for product design takes manufacturing variability into account,” Chad Johnson and his TRW team used a combination of response surface methodology (RSM) and Monte Carlo analysis to optimize a braking system.
“DOE It Yourself”, a list of fun science projects compiled by me, has been updated (with new links mainly)—see it posted here. Enjoy!
6: Reader response: Selecting effects via the half-normal versus a backwards regression
Original Question:
From Chad Johnson TRW Certified 6-Sigma Master Black Belt Manager:
(Re: Jul/Aug DOE FAQ Alert #4. Expert FAQ: “Selecting effects via the half-normal versus a backwards regression: How do you explain discrepancies between these two approaches?”)
“Mark, I've learned something here about the caution required in de-selecting model terms using the backward selection algorithm. OK, so... lesson learned when I have a factorial model implemented and I can see the half-normal plot. What do you suggest when using an RSM model? No half-normal plots.”
Answer from me:
Yes, Wayne and Shari have provided some good food for thought here on the advantage of graphical versus numerical selection of effects. This is especially apropos for screening designs, where one simply wants to separate the vital few factors from the trivial many. The purpose of RSM differs: it is intended to provide a mapping adequate for moving the process into a more desirable and/or robust region. Taking out insignificant model terms serves little purpose other than parsimony (though there is no reason not to keep things simple, if possible!).
7: Webinar alert: (Encore) Basics of Response Surface Methodology (RSM) for Process Optimization, Part 1
Response Surface Methods (RSM) can lead you to the peak of process performance. In this intermediate-level webinar presented on Tuesday, October 18 at 10:30 AM CDT,* Stat-Ease Consultant Shari Kraber will introduce the fundamental concepts of response surface methods (RSM).
If you are new to RSM, this webinar is for you! Stat-Ease webinars vary somewhat in length depending on the presenter and the particular session, mainly due to breaks for questions: Plan for 45 minutes to 1.5 hours, with 1 hour being the typical duration. When developing these one-hour educational sessions, our presenters often draw valuable material from Stat-Ease DOE workshops. Attendance may be limited, so sign up soon by contacting our Communications Specialist, Karen Dulski, via [email protected]. If you can be accommodated, she will provide immediate confirmation and, in a timely fashion, the link with instructions from our web-conferencing vendor GotoWebinar.
*(To determine the time in your zone of the world, try using this link. We are based in Minneapolis, which appears on the city list that you must manipulate to calculate the time correctly. Evidently, coordinating clocks for international communications is even more complicated than statistics! Good luck!)
8: Events alert: Learn about “Managing Uncertainty in Design Space”
In back-to-back conferences, Consultant Pat Whitcomb will talk about “Managing Uncertainty in Design Space.” He will do so at the gathering of industrial statisticians and the like for their Fall Technical Conference in Kansas City on October 13-14. He follows up with the same presentation for chemical engineers and others attending AIChE's Annual Meeting in Minneapolis on October 17-19. For details on the talk, see this abstract. We hope you can attend one presentation or the other.
Those of you who work on development of medical devices should look up Stat-Ease at their exhibit (Booth #729) for the MD&M Minneapolis expo on November 2-3.
Click here for a list of upcoming appearances by Stat-Ease professionals. We hope to see you sometime in the near future!
9: Workshop alert: “Designed Experiments for Industry” in India (last chance); (New!) “Designed Experiments for Assay Optimization”
Seats are filling fast for the following DOE classes. If possible, enroll at least 4 weeks prior to the start date so your place can be assured. However, do not hesitate to ask whether seats remain in classes that are fast approaching! Also, take advantage of a $395 discount when you take two complementary workshops offered on consecutive days.
All classes listed below will be held at the Stat-Ease training center in Minneapolis unless otherwise noted.
* Attend both SDOE and EDME to save $295 in overall cost.
** Take both EDME and RSM in February to earn $395 off the combined tuition!
*** Take both SDOE and DELS in February to earn $295 off the combined tuition!
**** Take both MIX and MIX2 to earn $395 off the combined tuition!
See this web page for complete schedule and site information on all Stat-Ease workshops open to the public. To enroll, click the "register online" link on our web site or call Elicia at 612-746-2038. If spots remain available, bring along several colleagues and take advantage of quantity discounts in tuition. Or, consider bringing in an expert from Stat-Ease to teach a private class at your site.****
****Once you achieve a critical mass of about 6 students, it becomes very economical to sponsor a private workshop, which is most convenient and effective for your staff. For a quote, e-mail [email protected].