Module 9.

Response Surface Methodology

Learning Outcomes

After successfully completing Module 9 on Response Surface Methodology (RSM), students will be able to

  1. Describe the different types of response surface designs
  2. Design, analyze, and interpret the results for
    1. the Central Composite Design, CCD
    2. the Box-Behnken Design, BBD
  3. Choose between the Central Composite and Box-Behnken Designs to better optimize the predictor variables with respect to the response
  4. Use MS Excel to design both the CCD and the BBD, and analyze them using Minitab software
  5. Optimize predictors with respect to the response
  6. Design, analyze, and interpret the results for
    1. Multiple Response Surface Methodology
  7. Perform simultaneous optimization for multiple responses

1. What is Response Surface Methodology, RSM?

Video 1 defines the response surface methodology, RSM with examples.

Video 1. Introduction to Response Surface Methodology

Video 2 demonstrates response surface methodology analysis with examples in Minitab.

Video 2. Introduction to Response Surface Methodology Explained with Analysis

Response Surface Methodology, RSM (also known as Response Surface Modeling) is a technique to optimize the response(s) when two or more quantitative factors are involved. The dependent variables are known as responses, and the independent variables or factors are known as the predictor variables in response surface methodology. While p-values apply to a particular point, such as testing the hypothesis of whether 70 degrees Fahrenheit is the most comfortable temperature, the response surface is useful in determining a range of temperatures for the same comfort level. As maintaining the temperature at exactly 70 degrees could be very expensive, maintaining the temperature within a range is often desired for a cost-effective solution. Moreover, keeping a room very cool in summer or very hot in winter would be very wasteful. Response Surface Methodology, RSM, is very useful to optimize variables/factors more practically, as compared to a statistical significance test for a particular point (a point estimate, in statistical jargon). For example, to optimize humidity and temperature for the best comfort, the response surface is plotted in Figure 1. The human comfort is measured on a scale from 0 to 10, where 10 is the most comfortable.

Figure 1. Response Surface Plot of Comfort vs Humidity and Temperature

While the response surface is visually appealing and provides a quick, meaningful overview of the relationship, a contour plot is easier to read for the optimized values of the independent variables at which the same level of comfort (the response, or dependent variable) can be achieved. For example, the contour plot in Figure 2 shows that statistically the same level of comfort is achieved within each same-color region of the plot. The middle dark oval region of the contour plot in Figure 2 indicates that a comfort level over 7.5 can be achieved for temperatures approximately between 65 and 78 degrees Fahrenheit and relative humidity between 25 and 70 percent. Similarly, the contour plot can be used to find the ranges of temperature and humidity that achieve any other level of comfort (the response, y).

Figure 2. Contour Plot of Comfort vs Humidity and Temperature

In this particular human comfort study, maximizing comfort is the goal; however, minimizing the response (dependent variable) would be desired in other situations, such as the human discomfort study illustrated in Figure 3 and Figure 4. While discomfort is not simply the opposite of comfort, a discomfort study certainly aims to minimize the response (discomfort) while optimizing the independent variables (temperature and humidity in this case). Therefore, response surface methodology can be used either to maximize or to minimize the response. These maximum and minimum responses are known as stationary points. If optimum points exist for the independent variables, the partial derivatives of the response at these points equal zero (Equation 1), as can be seen for both the maximum and minimum responses in Figure 1 and Figure 3.

∂y/∂x₁ = ∂y/∂x₂ = ⋯ = ∂y/∂xₖ = 0

Equation 1

Figure 3. Response Surface Plot of Discomfort vs Humidity and Temperature

Figure 4. Contour Plot of Discomfort vs Humidity and Temperature
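The stationary-point condition of Equation 1 can be checked numerically. Writing a fitted second-order surface as y = b0 + b'x + x'Bx, the gradient b + 2Bx vanishes at the stationary point, and the signs of the eigenvalues of B separate maximum, minimum, and saddle point. A minimal Python sketch with made-up coefficients (the temperature terms echo values reported later in this module; the humidity entries are invented for illustration only):

```python
import numpy as np

# Hypothetical fitted surface y = b0 + b @ x + x @ B @ x with x = [T, H].
# Temperature coefficients mirror the module's uncoded equation; the
# humidity entries are invented purely for illustration.
b0 = -241.4
b = np.array([6.869, 0.222])            # linear coefficients (T, H)
B = np.array([[-0.04837, 0.0],          # symmetric quadratic matrix;
              [0.0,     -0.00234]])     # off-diagonals = half the interaction

# Equation 1: gradient = b + 2 B x = 0  ->  stationary point
x_s = -0.5 * np.linalg.solve(B, b)
print("stationary point (T, H):", np.round(x_s, 1))

# Eigenvalues of B classify the point: all negative -> maximum,
# all positive -> minimum, mixed signs -> saddle point (Figure 5)
eigs = np.linalg.eigvalsh(B)
print("maximum" if np.all(eigs < 0) else "not a maximum")
```

Here both eigenvalues are negative, so the sketched surface has a maximum, matching the comfort study.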

In addition to the maximum and minimum response stationary points, a third type of stationary point, called the saddle point, can also exist. Similar to a horse saddle or a bicycle saddle, this shape can be visualized through Figure 5 and Figure 6.

Figure 5. Response Surface Plot of Saddle Point Type Optimization

Figure 6. Contour Plot of Saddle Point Type of Optimization

2. How to Design the Response Surface Methodology, RSM Study?

Response surfaces are usually approximated by a second-order regression model, as the higher-order effects are usually unimportant (Box, Hunter et al. 2005; Kutner, Nachtsheim et al. 2005). A second-order regression model (also known as the full quadratic) for k factors can be written as in Equation 2. In addition to the most popular method, the central composite design, CCD, the Box-Behnken design will also be demonstrated in the following sections.

y = β₀ + Σᵢ βᵢxᵢ + Σᵢ βᵢᵢxᵢ² + Σᵢ<ⱼ βᵢⱼxᵢxⱼ + ε

Equation 2
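To make the second-order model of Equation 2 concrete, the sketch below expands an n × k factor matrix into the full quadratic model terms; the parameter count is 1 + 2k + k(k−1)/2, which is 6 for k = 2. `quadratic_design_matrix` is a hypothetical helper name, not a function from any statistical package:

```python
import itertools
import numpy as np

def quadratic_design_matrix(X):
    """Expand an (n x k) factor matrix into the full second-order model:
    intercept, linear, pure quadratic, and two-way interaction columns."""
    n, k = X.shape
    cols = [np.ones(n)]                       # intercept (beta_0)
    cols += [X[:, i] for i in range(k)]       # linear terms (beta_i)
    cols += [X[:, i] ** 2 for i in range(k)]  # pure quadratic terms (beta_ii)
    cols += [X[:, i] * X[:, j]                # interactions (beta_ij, i < j)
             for i, j in itertools.combinations(range(k), 2)]
    return np.column_stack(cols)

# Five coded runs of a two-factor design expand to 6 model columns
X = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
D = quadratic_design_matrix(X)
print(D.shape)   # 5 runs x 6 parameters (1 + 2k + k(k-1)/2 for k = 2)
```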

2.1. Central Composite Design of RSM

Video 3 demonstrates the central composite design.

Video 3. Response Surface Methodology Basic, the Central Composite Design

The most popular response surface design is the Central Composite Design, CCD. The CCD is a two-level full factorial or fractional factorial design with added center points and axial points (also known as star points), as shown in Figure 7. While the center point is added at the center of the design, each axial point sets one factor at an extreme level while holding the other factors at their mid-level. Therefore, the coordinates of the axial points are (-1, 0), (1, 0), (0, -1), and (0, 1). For the design in Figure 7 (b), the axial (star) points are placed on the faces of the square of the 2² design. The design in Figure 7 (b) therefore becomes a 3² factorial, for which a full quadratic model (second-order regression) can be fitted to the response surface. With the addition of multiple center points, the lack-of-fit can also be tested. The distance from the center to the axial points is denoted by α (α = 1 for the design in Figure 7 (b)).

Figure 7. Central Composite Design, CCD (right) from the 2² Factorial Design (left)
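The CCD layout described above (factorial corners, axial points at distance α, center replicates) can be generated in a few lines. `central_composite` is a hypothetical helper, not a function from any statistical package; with α = 1 it reproduces the nine points of the face-centered design in Figure 7 (b):

```python
import itertools
import numpy as np

def central_composite(k, alpha=1.0, n_center=1):
    """Coded points of a CCD: 2^k factorial corners, 2k axial (star)
    points at distance alpha, plus center replicates. A minimal sketch;
    software such as Minitab offers many more variants."""
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = np.zeros(k)
            pt[i] = a          # one factor at its extreme, others at mid-level
            axial.append(pt)
    center = np.zeros((n_center, k))
    return np.vstack([corners, np.array(axial), center])

# Face-centered CCD of Figure 7 (b): alpha = 1 gives the 3^2 layout
design = central_composite(k=2, alpha=1.0, n_center=1)
print(design)    # 4 corners + 4 axial + 1 center = 9 runs
```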

The CCD in Figure 7 (b) is sufficient to develop a full quadratic model. However, the axial points can be placed more systematically to extract even more information from the same number of experiments. For example, the CCD in Figure 8 provides a couple of advantages over the CCD in Figure 7 (b). While the CCD in Figure 7 (b), with only three levels for each factor, allows at most a full quadratic model, the CCD in Figure 8 contains five distinct levels for each factor, which allows a higher-order model, such as a cubic model, to be fitted if any lack-of-fit is observed in the quadratic model.

Figure 8. Central Composite Design, CCD from the Base 2² Factorial Design with Alpha Value 1.41 for Rotatability

The systematic placement of the axial points along the perimeter of the circle drawn through the corner points of the 2² factorial design provides additional advantages, including rotatability. Rotatability means the design provides good prediction over the range of interest of the independent variables (x-variables, or predictor variables in RSM and regression): the variance of the predicted response depends only on the distance from the design center, not on the direction. A good prediction model produces consistent and stable variance over the entire range of the independent/predictor variables. Figure 9 shows such a model: any two points equidistant from the center have equal prediction variance because the design is rotatable. The rotatability can also be seen in Figure 8.
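A quick numeric check of the geometry behind Figure 8: with α = 2^(2/4) = √2, every corner and axial point of the two-factor CCD lies on one circle around the center, the arrangement that makes this design rotatable:

```python
import numpy as np

# Geometry of Figure 8: with alpha = sqrt(2) (= 2^(2/4) for k = 2),
# all corner and axial points of the CCD lie on one circle about the center.
alpha = np.sqrt(2.0)
corners = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)
axial = np.array([[-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha]])
points = np.vstack([corners, axial])

# distance of every non-center point from the design center
radii = np.linalg.norm(points, axis=1)
print(np.round(radii, 3))    # all equal to sqrt(2) ~ 1.414
```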

Figure 9. Central Composite Design, CCD with Consistent and Stable Variance

Video 4 demonstrates the design process for the central composite design using MS Excel.

Video 4. Response Surface Methodology (RSM) Central Composite Design using MS Excel

Video 5 demonstrates the design process for the central composite design using Minitab.

Video 5. Response Surface Design Layout Construction using Minitab

Table 1 provides a two-factor central composite design, CCD, with five replications at the center point. The design uses an alpha value of 1.41. The design in Table 1 simply lists, in standard order, the coordinates of the design shown in Figure 8.

Table 1. Two-Factor Central Composite Design with 5 Replications at the Center Point


A generalization of the two-factor design is provided in Table 2 for some typical, useful central composite designs. Many of these suggested designs can be found in most statistical software. Generally, for the common central composite designs, the total number of experimental trials is N = nc·2^k + 2k·ns + n0, where nc is the number of replicates of each corner (factorial) point, ns the number of replicates of each axial point, and n0 the number of center points (Kutner, Nachtsheim et al. 2005).

Finally, the alpha value for rotatability is calculated as α = (nc·2^k ⁄ ns)^(1/4).

For example, for a two-factor central composite design without any replications of the corner and axial points (nc = ns = 1), α = (1·2² ⁄ 1)^(1/4) = √2 ≈ 1.41.
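The alpha calculation can be wrapped in a one-line helper. The formula form assumed here is α = (nc·2^k ⁄ ns)^(1/4), which reduces to (2^k)^(1/4) when neither the corner nor the axial points are replicated:

```python
# Alpha for a rotatable CCD, assuming the form (n_c * 2^k / n_s)^(1/4):
# n_c = replicates of the factorial portion, n_s = replicates of each axial point.
def rotatable_alpha(k, n_c=1, n_s=1):
    return (n_c * 2 ** k / n_s) ** 0.25

# Two factors, no replication of corner or axial points (n_c = n_s = 1):
print(round(rotatable_alpha(2), 2))   # 1.41, matching Table 1 and Figure 8
print(round(rotatable_alpha(3), 2))   # three-factor rotatable CCD
```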

Table 2. Suggested Central Composite Designs (Kutner, Nachtsheim et al. 2005)

3. Response Surface Methodology Analysis Using Minitab

Let’s analyze the data for the human comfort study provided in Table 3. Video 6 provides a detailed analysis of RSM using Minitab, with explanations.

Table 3. Human Comfort vs the Temperature and Humidity Study Data

*The data set used in the video is different from this data set.

Video 6. Response Surface Methodology Design of Experiments Analysis Explained Example using Minitab

4. Analysis Results Explained for the Response Surface Methodology, RSM

Video 6 provides a detailed analysis of RSM using Minitab, with explanations. The Minitab analysis output is provided below. Output sequences vary from software to software, and even between versions of the same software. However, the explanation will follow the sequence suggested in Module 8, Applied Regression Analysis.

Figure 10. Minitab Analysis Output for the Comfort vs Temperature and Humidity

4.1. Step # 1: The Statistical Significance Test for the Response Surface Methodology, RSM

The statistical significance is checked using the analysis of variance (ANOVA) table. The overall model p-value (0.000) is less than the level of significance (0.05). Therefore, we reject the null hypothesis of no relationship between the dependent and the independent variables. Therefore, the full quadratic model in the temperature and humidity factors (independent variables) significantly affects the response, comfort (the dependent variable).

The p-values for the linear terms of both factors, temperature and humidity, are also lower than the level of significance. Therefore, the linear terms significantly affect comfort.

The p-values for the quadratic terms of both factors are also observed to be lower than the level of significance. Therefore, the quadratic terms for temperature and humidity significantly affect comfort.

The interaction between the temperature and the humidity is observed to be insignificant with respect to the comfort.

The model shows no lack-of-fit because the p-value (0.051) is larger than the level of significance (0.05). Therefore, the quadratic model with the predictor variables temperature and humidity significantly predicts human comfort.

4.2. Step # 2: The Practical Significance Test for the Response Surface Methodology, RSM

The practical significance test is performed using the model summary output table. The coefficient of determination, the adjusted R-square value, is observed to be 98%, indicating that the model explains the variation in the dependent variable, the comfort response, very well. Therefore, the model has good practical significance.
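The model-summary numbers can be reproduced from any least-squares fit. The sketch below uses randomly generated stand-in data (n = 13 runs and p = 6 parameters, matching the comfort study's dimensions), not the actual study data:

```python
import numpy as np

# Stand-in data: n = 13 observations, p = 6 parameters (like the comfort model)
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(13), rng.normal(size=(13, 5))])
beta_true = np.array([2.0, 1.0, -1.0, 0.5, 0.0, 0.3])
y = X @ beta_true + rng.normal(scale=0.1, size=13)

# Ordinary least squares fit, then R^2 and adjusted R^2 from the residuals
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
n, p = X.shape
sse = resid @ resid
sst = ((y - y.mean()) ** 2).sum()
r2 = 1 - sse / sst
r2_adj = 1 - (sse / (n - p)) / (sst / (n - 1))   # penalizes extra parameters
print(round(r2, 3), round(r2_adj, 3))
```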

4.3. Step # 3: Explanation of the Coefficients and the Functional Relationship for the Response Surface Methodology, RSM

The coefficient table and the response equation are utilized to explain the coefficients. The response equations for the coded and the uncoded levels are provided in Equation 3 and Equation 4, respectively. The regression equation in coded units (Equation 3) is developed from the coefficient table of the Minitab output in Figure 10.

Equation 3

Equation 4

The sign of a coefficient indicates the direction of the relationship, while its value represents the strength of the relationship. To explain the coefficients, let’s use the uncoded equation, which is easier to understand in the context of the problem.

4.3.1. Explanation of the Constant

If all other terms are set to zero (0), the predicted comfort equals -241.4.

4.3.2. Explanation of the Linear Coefficients

The linear coefficient of temperature is 6.869, which indicates that if all other terms are held constant in the model, comfort increases by 6.869 (= 6.869×1) when the temperature increases by 1 degree Fahrenheit, by 68.69 (= 6.869×10) for a 10-degree increase, and so on. Nevertheless, explaining coefficients one at a time like this is completely misleading. Comfort was measured on a scale from 0 to 10, so a comfort of 68.69 does not make any sense; the quadratic temperature term cannot be held constant while the linear term changes. Therefore, the interpretation of a single term is NOT of interest in RSM or any multiple polynomial regression model.

4.3.3. Explanation of the Quadratic Coefficients

The quadratic coefficient of Temperature*Temperature is -0.04837, which indicates that, with the other terms held constant, comfort decreases by 0.04837 (= 0.04837×1×1) when the temperature increases by 1 degree Fahrenheit, and by 4.837 (= 0.04837×10×10) for a 10-degree increase. Negative comfort does not exist on the scale. Therefore, the explanation of individual terms does not make much sense and is not of interest in response surface methodology.
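The point of the two subsections above can be made numerically: using the uncoded temperature coefficients quoted in the text (6.869 linear, -0.04837 quadratic), the actual marginal effect of temperature combines both terms and nearly vanishes close to the optimum:

```python
# Uncoded temperature coefficients reported in the text
b_lin, b_quad = 6.869, -0.04837

def d_comfort_d_temp(T):
    # derivative of b_lin*T + b_quad*T^2 with respect to T:
    # the real marginal effect of a 1-degree change near temperature T
    return b_lin + 2 * b_quad * T

print(round(d_comfort_d_temp(70), 3))   # ~0.097 per degree, not 6.869
print(round(b_lin / (-2 * b_quad), 1))  # ~71.0 F, where the effect changes sign
```

Reading the linear term alone suggests a huge effect; combining it with the quadratic term shows the comfort surface is nearly flat around 71 degrees Fahrenheit, which is exactly why single-term interpretation misleads.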

4.4. Step # 4: The Model Diagnostic for the Response Surface Methodology, RSM

4.4.1. Multicollinearity

The multicollinearity between the independent variables (predictor variables) is checked using the Variance Inflation Factor (VIF). The Minitab coefficient output table provides the VIF values. The following guideline is used for checking the multicollinearity between predictors.

  • VIF = 1: not correlated
  • VIF between 1 and 5: moderately correlated
  • VIF greater than 5: highly correlated

The Variance Inflation Factors (VIF) for all predictors are observed to be around 1, meaning that there is no multicollinearity between any predictor and the other predictors.
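VIF values can be reproduced by regressing each predictor on the remaining ones and computing 1/(1 − R²). A small sketch (`vif` is a hypothetical helper) using orthogonal coded factors, for which the VIFs come out as 1:

```python
import numpy as np

def vif(X):
    """Variance inflation factors: regress each column of X on the
    other columns; VIF_j = 1 / (1 - R_j^2)."""
    n, k = X.shape
    out = []
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        sst = ((X[:, j] - X[:, j].mean()) ** 2).sum()
        r2 = 1 - (resid @ resid) / sst
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Orthogonal coded factors from a designed experiment: VIFs near 1
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], dtype=float)
print(np.round(vif(X), 3))   # no multicollinearity between the predictors
```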

4.4.2. Normality, Constant, and Uncorrelated Variance

The residuals follow approximately a straight line in the normal probability plot in Figure 11, indicating that the residuals are normally distributed.

Figure 11. RSM Diagnostic the Normality of the Residuals

The residuals vs fitted plot in Figure 12 shows no obvious pattern, indicating no predictability of the residuals. Therefore, the residual variance is considered homogeneous (constant).

Figure 13 shows the residuals vs observation order plot, which shows no obvious pattern. Therefore, there is no correlation between the observation order and the residuals.

Figure 12. RSM Diagnostic Homogeneousness (Constancy) of Variance

Figure 13. RSM Diagnostic Uncorrelated Variance

4.4.3. Outlier, Leverage, and Influential Point

Outlier – an outlier is a point whose residual is relatively large compared to other data points. The residuals for points #1 and #5 are observed to be larger than usual (Table 4). Therefore, these two points are considered outliers for this data set.

Leverage – a leverage point is one whose x-value is extreme while its y-value follows the fitted response. The diagonal elements of the hat matrix, hii, are used to detect an x-outlier (leverage point). As this is a systematically designed experiment, there is no reason to expect an x-outlier or leverage point, and no unusually large hii value is observed in the residual diagnostic analysis in Table 4. Therefore, no leverage point is observed in the residual analysis output.

Influential – a point is considered influential if the probability calculated from Cook’s distance is over 50%. The probability is calculated by evaluating Cook’s distance against the F-distribution with p and n-p degrees of freedom for the numerator and the denominator, respectively. Points #1 and #5 show probabilities over 50%, indicating these two points have a large influence on the response surface. A DFIT value larger than 1 for a small-to-medium data set, or larger than 2√(p⁄n) for a large data set, also marks an influential point. Here 2√(p⁄n) = 2√(6⁄13) ≈ 1.36, but this data set is considered small, so any DFIT value over 1 is taken to indicate some influence on the data. According to the DFIT values, points #1, 3, 5, 7, and 8 could be considered influential.
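The large-data-set DFIT cutoff quoted above is a one-line calculation:

```python
from math import sqrt

# DFIT cutoff for a large data set: 2*sqrt(p/n),
# with p = 6 model parameters and n = 13 runs as in this study
p, n = 6, 13
large_cutoff = 2 * sqrt(p / n)
print(round(large_cutoff, 2))   # ~1.36, the 2*sqrt(6/13) value quoted in the text
```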

4.4.3.1. Delete the Outlier, Leverage, and Influential Points?

The decision whether to delete or keep outlier and influential data points varies from situation to situation. Often, influential points are candidates for deletion. In this human-subject study, however, it is considered normal that individuals experience a very wide range of comfort with respect to temperature and humidity. Moreover, the response surface analysis would not be possible if the single observations at these points were deleted. Therefore, rather than deleting these influential points, it is recommended to collect more data to see whether these data points are really unusual or not. If any points are deleted from the study, the analysis must be rerun.

Table 4. Residual Analysis for Outlier, Leverage, and Influential Data Point

p-value = FDIST(COOK, df1, df2); df1 = p = 6 (five parameters + constant), df2 = n-p = 13-6 = 7

5. Box-Behnken Response Surface Methodology

5.1. What is Box-Behnken Design for RSM

The Box-Behnken Design, BBD, for response surface methodology, RSM, is specially constructed to fit a second-order model, which is the primary interest in most RSM studies. To fit a second-order (quadratic) regression model, the BBD needs only three levels for each factor (Figure 15), rather than the five levels of the CCD (Figure 14). The BBD sets a mid-level between the original low and high levels of the factors, avoiding the extreme axial (star) points of the CCD. Moreover, the BBD uses edge mid-points, often more practical than the corner points of the CCD. The addition of the mid-level points allows efficient estimation of the coefficients of a second-order model (Box, Hunter et al. 2005). The BBD is nearly as rotatable as the CCD. Moreover, the BBD often requires a smaller number of experimental runs.

Figure 14. Central Composite Design, CCD for Rotatability (left) and Face-Centered Design (right) [The central composite design with the axial points placed at the face is known as the face-centered central composite design, or face-centered cube.]

Figure 15. Two Representations of the Box-Behnken Design, BBD for RSM

5.2. How to Design the Box-Behnken Design, BBD for RSM

Video 8 demonstrates an overview of the design, analysis, and explanation of the results for the Box-Behnken Design, BBD.

Video 8. Box Behnken Response Surface Methodology RSM Design and Analysis Example using Minitab & MS Excel

The BBD uses the 2² full factorial as a base design and generates designs for higher numbers of factors by systematically adding a mid-level between the low and high levels of the factors. Table 5 and Table 6 provide the Box-Behnken designs for three, four, and five factors. Many such designs can be found in any standard statistical package such as Minitab, Design-Expert, JMP, or SAS. To construct the BBD, the 2² full factorial is used as the base design, and orthogonal blocks are created using the mid-levels of the other factors. The result is a three-level factorial design for which a full quadratic (second-order) model can be fitted to the response surface.
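The pairwise construction just described can be sketched as a generator: for each pair of factors, run the 2² factorial at ±1 while the remaining factors sit at the mid-level 0, then append center points. `box_behnken` is a hypothetical helper, and three center points are assumed here, which yields the usual 15-run, three-factor design:

```python
import itertools
import numpy as np

def box_behnken(k, n_center=3):
    """Coded Box-Behnken design: for every pair of factors, run the 2^2
    factorial at +/-1 with all remaining factors held at mid-level 0,
    then append center points. A sketch of the construction above."""
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product([-1.0, 1.0], repeat=2):
            pt = np.zeros(k)
            pt[i], pt[j] = a, b
            runs.append(pt)
    runs += [np.zeros(k)] * n_center
    return np.array(runs)

design = box_behnken(3)
print(design.shape)   # (15, 3): 3 factor pairs x 4 runs + 3 center points
```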

Table 5. Box-Behnken Design for Three and Four Factors

Table 6. Box-Behnken Design for Five Factors

5.3. How to Analyze and Explain the Results

Video 8 demonstrates an overview of the design, analysis, and explanation of the results for the Box-Behnken Design, BBD. The analysis and the explanation of the results are exactly the same as for the central composite design, CCD. Let’s look at the human comfort study with a lighting factor, X3, added to it. X1 and X2 are the temperature (degrees Fahrenheit) and the humidity (%), respectively. Data are collected from 15 randomly selected individuals under the specified room temperature, humidity, and lighting conditions (Table 7).

Table 7. Human Comfort vs Temperature, Humidity and Lighting

The design can be created in MS Excel, in Minitab, or in any software demonstrated in Video 8. Although the CCD and the BBD are different, their analyses are exactly the same, and so are the interpretations of the results. For example, the overall model and the quadratic (square) terms are significant (ANOVA table in Figure 16). The variation in the response is also explained well by the model terms, at about 99% (model summary table in Figure 16). The Pareto analysis shows that most of the effect comes from the square terms of the humidity and the temperature only.

Figure 16. Box-Behnken Example Analysis Results

Figure 17. Box-Behnken Pareto Analysis of the Effects

The response surfaces and the contour plots for all combinations of variables are provided in Figure 18. The response surface between the humidity and the temperature shows curvature, the effect of both square terms (top-left graph in Figure 18). The dark oval region in the contour plot between the humidity and the temperature shows the maximum comfort (top-right graph in Figure 18). As the lighting effect was observed to be insignificant, the contour plots show almost vertical parallel lines when lighting is plotted against either of the significant variables, humidity or temperature (bottom four graphs of Figure 18). These parallel lines indicate that moving along the X3 variable (the lighting factor) produces no change in comfort. However, moving along either X1 or X2 produces a significant change in the comfort values, as can be seen in the color gradients. The response surfaces show significant curvature (the effect of the square terms) for both humidity and temperature when plotted against the lighting factor.

Figure 18. Box-Behnken Response Surfaces and Contour Plots

The optimization results in Figure 19 indicate that the middle levels of both the humidity and the temperature are best for achieving the maximum human comfort. The selected lighting conditions do not affect human comfort in this study.

Figure 19. Box-Behnken Response Surfaces Optimization Output

5.4. Is Box-Behnken Better than the Central Composite Design in the Response Surface Methodology?

Video 9 shows a comparison analysis between the central composite design, CCD and the Box-Behnken Design, BBD.

Video 9. Is Box Behnken Better than the Central Composite Design in the Response Surface Methodology

All models are wrong (Box)! Therefore, developing a less inaccurate model is preferable. As any model over a smaller region is more accurate than one over a wider range, the Box-Behnken design will arguably provide better estimation of the parameters, because its levels are not as extreme as those of the central composite design. For example, in the study of human comfort by temperature and humidity, the low temperature level of 65 degrees Fahrenheit is already low. Placing axial points even lower would waste resources, because we already know that such temperatures produce a low comfort rating, and it does not make much sense to study something we already know. Moreover, in the Box-Behnken design, most treatment combinations use the mid-levels of the other factors. Therefore, most experimental runs are conducted around the expected optimum region. For example, there are a total of 150 mid-levels as compared to only 40 low or high levels in the five-factor Box-Behnken design in Table 6. This experiment will be more practical to conduct than a five-factor central composite design, which would contain many extreme points.

Table 8. Comparison between the Central Composite and the Box-Behnken Designs

Nevertheless, the central composite design builds on the traditional two-level full or fractional factorial design of experiments and therefore retains all the advantages of the fractional factorial design. Moreover, the CCD is rotatable, while the Box-Behnken design is only nearly rotatable, or rotatable for some specific designs.

As the central composite design consists of five levels for each factor, models up to fourth order can be tested. The Box-Behnken design, however, consists of only three levels for each factor. Therefore, only up to a second-order model is possible for the Box-Behnken design.

Generally, the Box-Behnken design can be more useful for well-understood processes, while the central composite design can be more useful for relatively unknown processes. This could be why the central composite design is used more often than the Box-Behnken design: most studies are conducted to find something new. Nevertheless, for refinement and optimization, the Box-Behnken design provides more precision.

In summary, both designs have their advantages and disadvantages. The designers can choose any of these two depending on the optimization goals.

6. How to Design and Analyze Multiple Response Surface?

Video 10 provides the analysis and explanation of the results for the multiple response surface optimization.

The design of multiple response surfaces follows the exact same procedure, except that multiple response variables, such as y1, y2, y3, y4, and so on, are added. Often, multiple responses (dependent variables, or y-variables) are already available, easier to collect, and more economical, considering the amount of information gained through the additional responses. The experimental conditions or experimental units are the costly part of any designed experiment. Once the experimental conditions are set, collecting multiple responses is usually justified rather than not doing it.

Video 10. Multiple Response Optimization Explained with Example using Minitab Response Surface Methodology RSM

7. Multiple Response Surface Optimization Example Problem

There are many situations in which contradictory targets must be achieved, such as better quality at reduced cost. For example, consider the following bicycle riding speed and heart rate responses: we try to increase the riding speed while keeping the heart rate as low as possible. Assume that we are optimizing the speed and the heart rate with respect to the pedaling speed and the tire pressure. Therefore, two independent variables (factors or predictors) and two dependent variables (responses) are used in this study of bicycle riding efficiency optimization. The descriptions of the variables are provided below.

  1. Independent Variable 1, X1: Pedaling Speed in Revolution Per Minute, RPM
  2. Independent Variable 2, X2: Tire Pressure in Pounds Per Square Inch, PSI
  3. Dependent Variable 1, Y1: The Average Speed of the Bicycle in Miles per Hour, MPH
  4. Dependent Variable 2, Y2: The Heart Rate in Beats Per Minute, BPM

Bicycles with multiple gears are used in the study so that any pedaling speed could be selected without increasing the effort or for any desired effort levels.

Some initial research on the variables are provided below.

  1. Heart rate (BPM)
    1. increases with the pedaling speed.
    2. may increase if the tire pressure is too low or too high, due to too much rolling resistance or a bumpy ride, respectively.
  2. The average bicycle speed (MPH)
    1. is related to the pedaling speed in RPM. While too low or too high a pedaling speed will reduce the average speed, an optimum pedaling speed exists for the maximum average speed (MPH).
    2. is dependent on the tire pressure. The average speed is maximized at an optimum tire pressure, while too low or too high a pressure will reduce the speed through increased rolling resistance or a bumpy ride, respectively.

The response surface methodology design and the collected data are provided in Table 9. Assume that all riding conditions, including the test roads, riding distance, fitness level, weight, bicycle, and riding equipment, are kept very similar for all 13 test subjects to reduce any bias in the study. All experimental units are assumed to be identical twins, yet independent.

Table 9. Multiple Response Optimization for Bicycling Efficiency

8. Multiple Response Surface Optimization Analysis Results Explained

The analysis output results are provided in Figure 20, Figure 21, Figure 22, Figure 23, Figure 24, Figure 25, and Table 10.

8.1. RSM Regression Analysis Output Explained

In Figure 20 and Figure 21, the quadratic models fitted with the pedaling speed and the tire pressure as predictor variables are observed to be significant for both responses, the heart rate and the average speed. Moreover, the models show excellent R-square values. However, while the response surface model for the heart rate shows no lack-of-fit, the model for the average speed shows some lack-of-fit. The average speed model also shows only a moderately high prediction R-square value, indicating that more investigation is necessary for the average speed prediction.

The relative effects are visualized using the Pareto chart in Figure 22. Any parameter crossing the dashed line is statistically significant. Only the quadratic terms of the pedaling speed and the tire pressure are large and significant in predicting the average speed (top graph in Figure 22). Although both the linear and quadratic terms of the tire pressure are observed to be significant, a relatively larger effect comes from the linear term of the pedaling speed in predicting the heart rate (bottom graph in Figure 22).

Figure 20. Multiple Response Surface Regression: Speed versus Pedaling, Pressure

Figure 21. Multiple Response Surface Regression: HR versus Pedaling, Pressure

Figure 22. Multiple Response Surface Comparison of Effects, Top (Average Speed), Bottom (Heart Rate)

8.2. Individual Responses and Contour Plots Explained

The individual response surfaces and contour plots for both responses, the heart rate and the average speed, are visualized in Figure 23. The average speed (top two graphs) is significantly and almost equally affected by the tire pressure and the pedaling RPM, though slightly more by the pedaling speed than by the tire pressure (top-left graph in Figure 23). While the heart rate stays stable with respect to the tire pressure, it increases with the pedaling speed (bottom two graphs). No significant interactions are observed between the tire pressure and the pedaling speed for either the heart rate or the average speed.

Figure 23. Multiple Response Surfaces and Contour Plots for Individual Responses of the Average Speed (top two graphs) and the Heart Rate (bottom two graphs)

8.3. Overlaid Contour Plot Explained

Figure 24 shows the overlaid contour plot for a heart rate between 149 and 155 BPM and a speed between 18 and 20 miles per hour. Selecting these response windows involves some trial and error to find the overlaid plot most useful for drawing reasonable conclusions about the multiple response surface optimization. For example, Figure 24 shows that the maximum speed of 20 miles per hour can be achieved while the heart rate stays between 149 and 155 beats per minute.

Figure 24. Multiple Response Surface Overlaid Contour Plot to Optimize both the Riding Speed and the Heart Rate Together

To achieve these optimum responses for both the heart rate and the average speed, the tire pressure must be set between 95 and 105 psi and the pedaling speed between 88 and 92 revolutions per minute (both approximated from the plot). While Figure 24 provides a visual look at the approximate optimization, the detailed optimization results can be found in Table 10.

The multiple trial-and-error runs for optimizing both the heart rate and the average speed are provided in Table 10. The optimization results show that the target heart rate should be close to 152 beats per minute to achieve the maximum average speed. This optimization is shown in Figure 25.

Table 10. Simultaneous Multiple Response Surface Optimization

8.4. The Composite Desirability

The composite desirability measures how well the predictor settings satisfy the goals for both responses simultaneously. While minimizing the heart rate is desirable when maximizing the average speed, the composite desirability of the model is much lower (72%) for this ideal situation. The composite desirability is highest (98%) when the heart rate is targeted at 152 beats per minute and the average speed is set to maximize.
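A sketch of how a composite desirability is formed, in the style of Derringer-Suich desirability functions (the general approach behind such optimizers): each response gets a 0-to-1 score, and the scores are combined by a geometric mean. All limits and values below are illustrative assumptions, not the study's numbers:

```python
import numpy as np

def d_maximize(y, low, high):
    # score for a maximize goal: 0 below `low`, 1 above `high`, linear between
    return float(np.clip((y - low) / (high - low), 0.0, 1.0))

def d_target(y, low, target, high):
    # score for a target goal: 1 at `target`, falling to 0 at `low` and `high`
    if y <= target:
        return float(np.clip((y - low) / (target - low), 0.0, 1.0))
    return float(np.clip((high - y) / (high - target), 0.0, 1.0))

# Illustrative settings: speed to maximize within an 18-20 mph window,
# heart rate targeted at 152 BPM within a 149-155 BPM window
d_speed = d_maximize(19.9, low=18.0, high=20.0)
d_hr = d_target(152.5, low=149.0, target=152.0, high=155.0)
composite = (d_speed * d_hr) ** 0.5    # geometric mean of the two scores
print(round(d_speed, 3), round(d_hr, 3), round(composite, 3))
```

The geometric mean drives the composite to zero whenever any single response is unacceptable, which is why a setting must serve both goals at once to score well.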

Figure 25. Simultaneous Multiple Response Surface Optimization

Figure 26 shows the final overlaid contour plot of pedaling speed (RPM) vs tire pressure (PSI) for maximizing the speed while keeping the heart rate at the target level.

Figure 26. Multiple Response Surface Overlaid Contour Plot for the Optimized Settings

Reference

Box, G. E., J. S. Hunter, et al. (2005). Statistics for Experimenters: Design, Innovation, and Discovery. Wiley-Interscience.

Kutner, M. H., C. J. Nachtsheim, et al. (2005). Applied Linear Statistical Models. McGraw-Hill/Irwin, New York.