--- title: "Multiple Regression with R" author: "D.-L. Couturier / R. Nicholls / C. Chilamakuri" date: '`r format(Sys.time(), "Last modified: %d %b %Y")`' output: html_document: theme: united highlight: tango code_folding: show toc: true toc_depth: 2 toc_float: true fig_width: 8 fig_height: 6 --- ```{r message = FALSE, warning = FALSE, echo = FALSE} # change working directory: should be the directory containg the Markdown files: #setwd("/Volumes/Files/courses/cruk/LinearModelAndExtensions/git_linear-models-r/") ``` # Section 1: Multiple Regression The in-built dataset `trees` contains data pertaining to the `Volume`, `Girth` and `Height` of 31 felled black cherry trees. In the Simple Regression session, we constructed a simple linear model for `Volume` using `Girth` as the independent variable. Now we will expand this by considering `Height` as another predictor. Start by plotting the dataset: ```{r} plot(trees) ``` This plots all variables against each other, enabling visual information about correlations within the dataset. Re-create the original model of `Volume` against `Girth`: ```{r} m1 = lm(Volume~Girth,data=trees) summary(m1) ``` Now include `Height` as an additional variable: ```{r} m2 = lm(Volume~Girth+Height,data=trees) summary(m2) ``` Note that the R^2 has improved, yet the `Height` term is less significant than the other two parameters. Try including the interaction term between `Girth` and `Height`: ```{r} m3 = lm(Volume~Girth*Height,data=trees) summary(m3) ``` All terms are highly significant. Note that the `Height` is more significant than in the previous model, despite the introduction of an additional parameter. We'll now try a different functional form - rather than looking for an additive model, we can explore a multiplicative model by applying a log-log transformation (leaving out the interaction term for now). ```{r} m4 = lm(log(Volume)~log(Girth)+log(Height),data=trees) summary(m4) ``` All terms are significant. Note that the residual standard error is much lower than for the previous models. However, this value cannot be compared with the previous models due to transforming the response variable. The R^2 value has increased further, despite reducing the number of parameters from four to three. ```{r} confint(m4) ``` Looking at the confidence intervals for the parameters reveals that the estimated power of `Girth` is around 2, and `Height` around 1. This makes a lot of sense, given the well-known dimensional relationship between `Volume`, `Girth` and `Height`! For completeness, we'll now add the interaction term. ```{r} m5 = lm(log(Volume)~log(Girth)*log(Height),data=trees) summary(m5) ``` The R^2 value has increased (of course, as all we've done is add an additional parameter), but interestingly none of the four terms are significant. This means that none of the individual terms alone are vital for the model - there is duplication of information between the variables. So we will revert back to the previous model. Given that it would be reasonable to expect the power of `Girth` to be 2, and Height to be 1, we will now fix those parameters, and instead just estimate the one remaining parameter. ```{r} m6 = lm(log(Volume)-log((Girth^2)*Height)~1,data=trees) summary(m6) ``` Note that there is no R^2 (as only the intercept was included in the model), and that the Residual Standard Error is incomparable with previous models due to changing the response variable. 
We can alternatively construct a model with `Volume` itself as the response, making the error term additive rather than multiplicative.
```{r}
m7 = lm(Volume~0+I(Girth^2):Height,data=trees)
summary(m7)
```

Note that the parameter estimates for the last two models are slightly different: this is due to the difference in the error model.

# Section 2: Model Selection

Of the last two models, the one with the log-Normal error model would seem to have the more Normal residuals. This can be inspected by looking at diagnostic plots, and by using `shapiro.test()`:
```{r}
plot(m6)
plot(m7)
shapiro.test(residuals(m6))
shapiro.test(residuals(m7))
```

The Akaike Information Criterion (AIC) can help in deciding which model is the most appropriate. Now calculate the AIC for each of the above models:
```{r}
summary(m1)
AIC(m1)
summary(m2)
AIC(m2)
summary(m3)
AIC(m3)
summary(m4)
AIC(m4)
summary(m5)
AIC(m5)
summary(m6)
AIC(m6)
summary(m7)
AIC(m7)
```

Whilst the AIC can help differentiate between similar models, it cannot help in deciding between models that have different response variables. Which model would you select as the most appropriate?

# Section 3: Stepwise Regression

The in-built dataset `swiss` contains data pertaining to fertility, along with a variety of socioeconomic indicators. We want to select a sensible model using stepwise regression.

First regress `Fertility` against all available indicators:
```{r}
m8 = lm(Fertility~.,data=swiss)
summary(m8)
```

Are all terms significant?

Now use stepwise regression, performing backward elimination in order to automatically remove inappropriate terms:
```{r}
library(MASS)
summary(stepAIC(m8))
```

Are all terms significant? Is this model suitable? What are the pros and cons of this approach?

# Section 4: Non-Linear Models

The in-built dataset `trees` contains data pertaining to the `Volume`, `Girth` and `Height` of 31 felled black cherry trees. In the Simple Regression session, we constructed a simple linear model for `Volume` using `Girth` as the independent variable. Using Multiple Regression we trialled various models, including some that had multiple predictor variables and/or involved log-log transformations to explore power relationships. However, due to limitations of the method, we were not able to explore other options, such as a parameterised power relationship with an additive error model. We will now attempt to fit this model:

$y = \beta_0x_1^{\beta_1}x_2^{\beta_2}+\varepsilon$

Parameters for non-linear models may be estimated using the `nls()` function in R.
```{r}
volume = trees$Volume
height = trees$Height
girth = trees$Girth
m9 = nls(volume~beta0*girth^beta1*height^beta2,start=list(beta0=1,beta1=2,beta2=1))
summary(m9)
```

Note that the parameters `beta0`, `beta1` and `beta2` weren't defined prior to the function call: `nls` knew what to do with them. Also note that we had to provide starting points for the parameters. What happens if you change them? Are all terms significant? Is this model appropriate? What else could be tried to achieve a better model?
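As an optional sketch (not part of the original exercises), one way to start answering those questions: `m7` and `m9` both model `Volume` on its original scale, so their AICs are directly comparable, and plotting observed against fitted values gives a quick visual check of the fit.
```{r}
# Both m7 and m9 have Volume as the (untransformed) response, so their
# AICs can be compared directly.
AIC(m7, m9)
# Observed versus fitted volumes: points should hug the y = x line.
plot(fitted(m9), volume, xlab = "Fitted volume", ylab = "Observed volume")
abline(0, 1)
```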
# Section 5: Practical Exercises

## Puromycin

The in-built R dataset `Puromycin` contains data regarding the reaction velocity versus substrate concentration in an enzymatic reaction involving untreated cells or cells treated with Puromycin.

- Plot `conc` (concentration) against `rate`. What is the nature of the relationship between `conc` and `rate`?

```{r message = FALSE, warning = FALSE, echo = TRUE}
plot(conc~rate,data=Puromycin)
# There is a non-linear positive relationship between conc and rate
```

- Find a transformation that linearises the data and stabilises the variance, making it possible to use linear regression. Create the corresponding linear regression model. Are all terms significant?

```{r message = FALSE, warning = FALSE, echo = TRUE}
plot(log(conc)~rate,data=Puromycin)
m10 = lm(log(conc)~rate,data=Puromycin)
plot(m10)
summary(m10)
# Both terms are significant
```

- Add the `state` term to the model. What type of variable is this? Is the inclusion of this term appropriate?

```{r message = FALSE, warning = FALSE, echo = TRUE}
m11 = lm(log(conc)~rate+state,data=Puromycin)
plot(m11)
summary(m11)
# `state` is a two-level factor, i.e. a categorical indicator variable
# The inclusion of `state` is appropriate, as the term is significant and the diagnostic plots look reasonable
```

- Now add a term representing the interaction between `rate` and `state`. Are all terms significant? What can you conclude?

```{r message = FALSE, warning = FALSE, echo = TRUE}
m12 = lm(log(conc)~rate*state,data=Puromycin)
summary(m12)
# The `state` main effect is not significant when the interaction between `rate` and `state` is included in the model. So it may be better to remove the `state` main effect from the model.
```

- Given this information, create the regression model you believe to be the most appropriate for modelling `conc`. Regenerate the plot of `conc` against `rate`. Draw curves corresponding to the fitted values of the final model onto this plot (note that two separate curves should be drawn, corresponding to the two levels of `state`).

```{r message = FALSE, warning = FALSE, echo = TRUE}
m13 = lm(log(conc)~rate+rate:state,data=Puromycin)
summary(m13)

# Solution one: back-transform the fitted values, sorted by rate
plot(conc~rate,data=Puromycin)
idx = order(Puromycin$rate)
treated = Puromycin$state[idx] == "treated"
untreated = Puromycin$state[idx] == "untreated"
lines(exp(fitted(m13))[idx][treated]~Puromycin$rate[idx][treated])
lines(exp(fitted(m13))[idx][untreated]~Puromycin$rate[idx][untreated],col="red")

# Solution two (better - more general): evaluate the fitted curves on a grid
plot(conc~rate,data=Puromycin)
xvals = range(Puromycin$rate)[1]:range(Puromycin$rate)[2]
lines(exp(coef(m13)[1] + coef(m13)[2]*xvals) ~ xvals)                       # treated (baseline)
lines(exp(coef(m13)[1] + coef(m13)[2]*xvals + coef(m13)[3]*xvals) ~ xvals, col="red")  # untreated
```

## Attitude

The in-built R dataset `attitude` contains data from a survey of clerical employees.

- Create a linear model regressing `rating` on `complaints`, and store the model in a variable.

```{r message = FALSE, warning = FALSE, echo = TRUE}
m14 = lm(rating~complaints,data=attitude)
```

- Use the `step` function to perform forward selection stepwise regression, in order to automatically add appropriate terms, using a command similar to: `new_model = step(original_model,.~.+privileges+learning+raises+critical+advance)`

```{r message = FALSE, warning = FALSE, echo = TRUE}
m15 = step(m14,.~.+privileges+learning+raises+critical+advance)
```
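As a small aside (not required for the exercise), `step()` attaches a record of the selection path to the model it returns, which can be inspected afterwards:
```{r}
# Aside: step() stores the sequence of steps taken in the `anova`
# component of the returned model.
m15$anova
```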
- Which term(s) were added? What is Akaike's Information Criterion (AIC) corresponding to the final model? Are all terms in the resulting model significant? Check diagnostic plots. Do you think this is a suitable model?

```{r message = FALSE, warning = FALSE, echo = TRUE}
# Just the `learning` term was added
# AIC = 118.00 for the final model
summary(m15)
# Only the `complaints` term is significant in the final model - the intercept and the coefficient of `learning` are not.
plot(m15)
# Despite the residuals not being perfectly Normally distributed, the model does seem reasonable.
```
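A final caveat, offered as an aside rather than part of the exercise: the AIC printed in the `step()` trace is computed with `extractAIC()`, which for `lm` objects omits an additive constant, so it will not match the log-likelihood-based `AIC()` used in Section 2. Either is fine for comparing models of the same response, but the two numbers differ:
```{r}
# Aside: step() reports AIC via extractAIC(), which drops an additive
# constant for lm objects, so it differs from AIC() on the same model.
extractAIC(m15)  # (edf, AIC) - the AIC value reported in the step() trace
AIC(m15)         # the full log-likelihood-based AIC from Section 2
```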