# Analysis of a response surface design

Analyzing a response surface design allows you to identify the parameter values that optimize a response. Available in Excel using the XLSTAT software.

## What is an analysis of a response surface design?

The analysis of a response surface design uses the same statistical and conceptual framework as linear regression. The main difference comes from the model that is used.

### Response optimization and desirability

When there are several response values *y1, ..., ym*, it is possible to optimize each response individually, and to build and analyze a combined desirability function. Proposed by Derringer and Suich (1980), this approach first converts each response *yi* into an individual desirability function *di* that varies over the range 0 ≤ *di* ≤ 1.

When *yi* has reached its target, then *di* = 1. If *yi* is outside an acceptable region around the target, then *di* = 0. Between these two extreme cases, intermediate values of *di* exist, as shown below.

The three different optimization cases for *di* are presented with the following definitions:

- *L* = lower value. Every value smaller than *L* has *di* = 0.
- *U* = upper value. Every value larger than *U* has *di* = 0.
- *T(L)* = left target value and *T(R)* = right target value. Every value between *T(L)* and *T(R)* has *di* = 1.
- *s*, *t* = weighting parameters that define the shape of the optimization function between *L* and *T(L)*, and between *T(R)* and *U*.
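These definitions can be sketched in a few lines of NumPy. The function names and the geometric-mean combination of the individual *di* are illustrative, not XLSTAT's internal implementation:

```python
import numpy as np

def desirability_target(y, L, T_L, T_R, U, s=1.0, t=1.0):
    """Derringer-Suich desirability for a target-is-best response.

    Returns 1 inside [T_L, T_R], 0 outside [L, U], and a power-weighted
    ramp in between (s shapes the left side, t the right side).
    """
    y = np.asarray(y, dtype=float)
    d = np.zeros_like(y)
    left = (y >= L) & (y < T_L)
    right = (y > T_R) & (y <= U)
    d[left] = ((y[left] - L) / (T_L - L)) ** s
    d[right] = ((U - y[right]) / (U - T_R)) ** t
    d[(y >= T_L) & (y <= T_R)] = 1.0
    return d

def overall_desirability(d_values):
    """Combined desirability: geometric mean of the individual d_i."""
    d_values = np.asarray(d_values, dtype=float)
    return float(np.prod(d_values) ** (1.0 / d_values.size))
```

For example, with *L* = 0, *T(L)* = 4, *T(R)* = 6, *U* = 10 and *s* = *t* = 1, a response of 2 gets *di* = 0.5 and a response of 5 gets *di* = 1.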

## Results for the analysis of a response surface design in XLSTAT

**Variables information**: This table shows the information about the factors. For each factor the short name, long name, unit and physical unit are displayed.

**Responses optimization**: This table gives the 5 best solutions obtained during the responses optimization.

**Goodness of fit statistics**: The statistics relating to the fitting of the regression model are shown in this table:

- **Observations**: The number of observations used in the calculations. In the formulas shown below, *n* is the number of observations.
- **Sum of weights**: The sum of the weights of the observations used in the calculations. In the formulas shown below, *W* is the sum of the weights.
- **DF**: The number of degrees of freedom for the chosen model (corresponding to the error part).
- **R²**: The determination coefficient for the model. This coefficient, whose value is between 0 and 1, is only displayed if the constant of the model has not been fixed by the user. The R² is interpreted as the proportion of the variability of the dependent variable explained by the model. The nearer R² is to 1, the better the model. The problem with the R² is that it does not take into account the number of variables used to fit the model.
- **Adjusted R²**: The adjusted determination coefficient for the model. The adjusted R² can be negative if the R² is near to zero. This coefficient is only calculated if the constant of the model has not been fixed by the user. The adjusted R² is a correction to the R² which takes into account the number of variables used in the model.
- **MSE**: The mean squared error.
- **RMSE**: The root mean square of the errors, the square root of the MSE.
- **MAPE**: The Mean Absolute Percentage Error.
- **DW**: The Durbin-Watson statistic. This coefficient is the order-1 autocorrelation coefficient and is used to check that the residuals of the model are not autocorrelated, given that the independence of the residuals is one of the basic hypotheses of linear regression. The user can refer to a table of Durbin-Watson statistics to check if the independence hypothesis for the residuals is acceptable.
- **Cp**: Mallows' Cp coefficient.
- **AIC**: Akaike's Information Criterion. This criterion, proposed by Akaike (1973), is derived from information theory and uses Kullback and Leibler's measurement (1951). It is a model selection criterion which penalizes models for which adding new explanatory variables does not supply sufficient information to the model, the information being measured through the MSE. The aim is to minimize the AIC criterion.
- **SBC**: Schwarz's Bayesian Criterion. This criterion, proposed by Schwarz (1978), is similar to the AIC, and the aim is to minimize it.
- **PC**: Amemiya's Prediction Criterion. This criterion, proposed by Amemiya (1980), is used, like the adjusted R², to take account of the parsimony of the model.
- **Press RMSE**: Press' statistic is only displayed if the corresponding option has been activated in the dialog box. Press's RMSE can then be compared to the RMSE. A large difference between the two shows that the model is sensitive to the presence or absence of certain observations in the model.
- **Q²**: The Q² statistic is displayed. The closer Q² is to 1, the better and more robust the model.
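Several of these statistics follow directly from the residuals. As a minimal NumPy sketch (the function name and return layout are ours, and only a subset of the table's statistics is computed):

```python
import numpy as np

def goodness_of_fit(y, y_pred, n_params):
    """A few of the fit statistics listed above, computed from scratch.

    y        : observed response values
    y_pred   : model predictions
    n_params : number of estimated parameters, intercept included
    """
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    n = y.size
    resid = y - y_pred
    sse = np.sum(resid ** 2)                      # sum of squared errors
    sst = np.sum((y - y.mean()) ** 2)             # total sum of squares
    df = n - n_params                              # error degrees of freedom
    r2 = 1.0 - sse / sst
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / df
    mse = sse / df
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs(resid / y))      # assumes no zero responses
    # Durbin-Watson: squared successive residual differences over SSE
    dw = np.sum(np.diff(resid) ** 2) / sse
    return {"DF": df, "R2": r2, "Adj R2": adj_r2,
            "MSE": mse, "RMSE": rmse, "MAPE": mape, "DW": dw}
```

A DW value near 2 suggests uncorrelated residuals; values near 0 or 4 suggest positive or negative autocorrelation.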

The **analysis of variance table** is used to evaluate the explanatory power of the explanatory variables. Where the constant of the model is not set to a given value, the explanatory power is evaluated by comparing the fit (as regards least squares) of the final model with the fit of the rudimentary model including only a constant equal to the mean of the dependent variable. Where the constant of the model is set, the comparison is made with respect to the model for which the dependent variable is equal to the constant which has been set.

If the Type I/II/III SS (SS: Sum of Squares) is activated, the corresponding tables are displayed.

The table of **Type I SS** values is used to visualize the influence that progressively adding explanatory variables has on the fitting of the model, as regards the sum of the squares of the errors (SSE), the mean squared error (MSE), Fisher's F, or the probability associated with Fisher's F. The lower the probability, the larger the contribution of the variable to the model, all the other variables already being in the model. The sums of squares in the Type I table always add up to the model SS. Note: the order in which the variables are selected in the model influences the values obtained.

The table of **Type II SS** values is used to visualize the influence that removing an explanatory variable has on the fitting of the model, all other variables being retained, as regards the sum of the squares of the errors (SSE), the mean squared error (MSE), Fisher's F, or the probability associated with Fisher's F. The lower the probability, the larger the contribution of the variable to the model, all the other variables already being in the model. Note: unlike Type I SS, the order in which the variables are selected in the model has no influence on the values obtained.

The table of **Type III SS** values is used to visualize the influence that removing an explanatory variable has on the fitting of the model, all other variables being retained, except those where the effect is present (interactions), as regards the sum of the squares of the errors (SSE), the mean squared error (MSE), Fisher's F, or the probability associated with Fisher's F. The lower the probability, the larger the contribution of the variable to the model, all the other variables already being in the model. Note: unlike Type I SS, the order in which the variables are selected in the model has no influence on the values obtained. Type II and Type III are identical if there are no interactions or if the design is balanced.
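The sequential (Type I) decomposition can be illustrated with successive least-squares fits. This NumPy-only sketch (function name and interface are ours) adds terms one at a time and records each term's drop in SSE, which is why the order of the terms matters:

```python
import numpy as np

def type_I_ss(y, columns):
    """Type I (sequential) sums of squares via successive OLS fits.

    `columns` is an ordered list of (name, 1-D array) terms; an intercept
    is always included first. Each term's SS is the reduction in SSE when
    that term is added to the model containing all earlier terms.
    """
    y = np.asarray(y, float)
    X = np.ones((y.size, 1))                        # intercept-only model
    sse_prev = np.sum((y - y.mean()) ** 2)          # its SSE is the total SS
    result = []
    for name, col in columns:
        X = np.column_stack([X, np.asarray(col, float)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        result.append((name, sse_prev - sse))
        sse_prev = sse
    return result
```

By construction, the Type I sums of squares add up to the model SS, matching the note above.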

The **parameters of the model** table displays the estimate of the parameters, the corresponding standard error, the Student's t, the corresponding probability, as well as the confidence interval.

The **equation of the model** is then displayed to make it easier to read or re-use the model.

The table of **standardized coefficients** (also called beta coefficients) is used to compare the relative weights of the variables. The higher the absolute value of a coefficient, the more important the weight of the corresponding variable. When the confidence interval around a standardized coefficient includes the value 0 (this can be easily seen on the chart of standardized coefficients), the weight of that variable in the model is not significant.
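Standardized coefficients can be obtained by rescaling the ordinary OLS coefficients, which is equivalent to fitting the regression on z-scored variables. A small NumPy sketch (the function name is ours):

```python
import numpy as np

def standardized_coefficients(X, y):
    """Standardized (beta) coefficients from an OLS fit.

    beta_j = b_j * sd(x_j) / sd(y): the raw coefficients become unit-free,
    so their magnitudes can be compared across explanatory variables.
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    Xc = np.column_stack([np.ones(y.size), X])      # add intercept column
    b, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return b[1:] * X.std(axis=0, ddof=1) / y.std(ddof=1)
```

A variable with no effect on the response gets a beta coefficient near 0, regardless of its original units.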

The **predictions and residuals** table shows, for each observation, its weight, the value of the qualitative explanatory variable, if there is only one, the observed value of the dependent variable, the model's prediction, the residuals, the confidence intervals together with the fitted prediction and Cook's D if the corresponding options have been activated in the dialog box. Two types of confidence interval are displayed: a confidence interval around the mean (corresponding to the case where the prediction would be made for an infinite number of observations with a set of given values for the explanatory variables) and an interval around the isolated prediction (corresponding to the case of an isolated prediction for the values given for the explanatory variables). The second interval is always wider than the first, since the variability of an isolated prediction is larger.
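The difference between the two interval types can be made concrete for simple linear regression. In this sketch (function name and interface are ours), the only difference between the two half-widths is the extra "1 +" term in the isolated-prediction variance, which is why that interval is always wider:

```python
import numpy as np

def prediction_intervals(x, y, x0, t_crit):
    """Both interval types for a simple linear regression at point x0.

    t_crit is the Student's t quantile for the chosen confidence level
    with n - 2 degrees of freedom (taken from a table, so no SciPy needed).
    Returns (mean_interval, isolated_interval).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    sxx = np.sum((x - x.mean()) ** 2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx   # slope
    b0 = y.mean() - b1 * x.mean()                        # intercept
    s2 = np.sum((y - b0 - b1 * x) ** 2) / (n - 2)        # residual variance
    y0 = b0 + b1 * x0
    half_mean = t_crit * np.sqrt(s2 * (1 / n + (x0 - x.mean()) ** 2 / sxx))
    half_pred = t_crit * np.sqrt(s2 * (1 + 1 / n + (x0 - x.mean()) ** 2 / sxx))
    return (y0 - half_mean, y0 + half_mean), (y0 - half_pred, y0 + half_pred)
```

Both intervals are centered on the same fitted value; only their widths differ.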

The **charts** which follow show the results mentioned above. If there is only one explanatory variable in the model, the first chart displayed shows the observed values, the regression line and both types of confidence interval around the predictions. The second chart shows the standardized residuals as a function of the explanatory variable. In principle, the residuals should be distributed randomly around the X-axis. If there is a trend or a shape, this shows a problem with the model.

The **three charts** displayed next respectively show the evolution of the standardized residuals as a function of the dependent variable, the distance between the predictions and the observations (for an ideal model, the points would all be on the bisector), and the standardized residuals on a bar chart. The last chart quickly shows whether an abnormal number of values lie outside the interval ]-2, 2[, given that the latter, assuming that the sample is normally distributed, should contain about 95% of the data.
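That visual check on the bar chart amounts to counting how many standardized residuals fall outside ]-2, 2[. A tiny sketch of the rule (function name is ours):

```python
import numpy as np

def flag_outlying_residuals(std_resid, bound=2.0):
    """Count standardized residuals outside ]-bound, bound[.

    Under normality, roughly 95% of standardized residuals should fall
    inside ]-2, 2[, so a much larger share outside is a warning sign.
    """
    std_resid = np.asarray(std_resid, float)
    outside = np.abs(std_resid) >= bound
    return int(outside.sum()), float(outside.mean())
```

A fraction well above 0.05 suggests the model (or the normality assumption) deserves a closer look.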

Then the **contour plot** is displayed, if the design has two factors and the corresponding option is activated. The contour plot is shown as a two-dimensional projection and as a 3D chart. Using these charts it is possible to analyze the dependence of the response on the two factors simultaneously.

Then the **trace plots** are displayed, if the corresponding option is activated. The trace plots show, for each factor, the response variable as a function of that factor, with all other factors set to their mean value. These charts are shown in two versions: with the standardized factors and with the factors in their original values. Using these plots the dependence of a response on a given factor can be analyzed.

### Analyze your data with XLSTAT
