In this lab, you will begin to get oriented with R and work with some data.
Attempt each exercise in order.
In each code chunk, if you see “# INSERT CODE HERE”, you are expected to replace that comment with code that creates the intended output.
If my instructions say to “Run the code below…” then you do not need to add any code to the chunk.
Many exercises may require you to type some text below the code chunk, interpreting the output and answering the questions.
Please follow the Davidson Honor Code and rules from the course syllabus regarding seeking help with this assignment.
When you are finished, click the “Knit” button at the top of this panel. If there are no errors, a Word file should pop up after a few seconds.
Take a look at the resulting Word file. Make sure everything looks correct, your name is listed at the top, and there is no ‘junk’ code or output.
Save the Word file (to your local computer, and/or to a cloud location) as: Lab 5 “Insert Your Name”.
Use this link to upload your Word file to my Google Drive folder. Do not upload the original .Rmd version.
This assignment is due Thursday, July 14, 2022, no later than 9:30 am Eastern. Points will be deducted for late submissions.
TIP: Start early so that you can troubleshoot any issues with knitting to Word.
There are 6 possible points on this assignment.
Baseline (C level work)
Average (B level work)
Advanced (A level work)
In this exercise, we will generate simulated data, and will then use this data to perform best subset selection.
Use the rnorm() function to generate a predictor \(X\) of length \(n = 100\), as well as a noise vector \(\epsilon\) of length \(n = 100\) (Hint: remember to set a seed!).
Generate a response vector \(Y\) of length \(n = 100\) according to the model \(Y = \beta_0 + \beta_1 X + \beta_2 X^2 + \beta_3 X^3 + \epsilon\), where the \(\beta\) coefficients are constants of your choosing.
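One possible starting sketch for the simulation step (not the graded answer — the seed and the coefficient values below are arbitrary choices):

```r
# Sketch only: seed and beta coefficients are arbitrary choices.
set.seed(1)
x   <- rnorm(100)          # predictor X of length n = 100
eps <- rnorm(100)          # noise vector epsilon of length n = 100
y   <- 2 + 3 * x - 1 * x^2 + 0.5 * x^3 + eps   # response Y
```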
Use the regsubsets() function (from the leaps package) to perform best subset selection in order to choose the best model containing the predictors \(X, X^2, \ldots, X^{10}\). What is the best model obtained according to \(C_p\), BIC, and adjusted \(R^2\)? Show some plots to provide evidence for your answer, and report the coefficients of the best model obtained. (Note: you might need to use the data.frame() function to create a single data set containing both \(X\) and \(Y\). Additionally, you need to assign your regsubsets() model summary to an object in order to view the variables of interest.)
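As a hedged sketch of this step, assuming vectors x and y have already been simulated as described above (the nvmax = 10 argument allows models with up to ten predictors):

```r
library(leaps)

# Assumes x and y were simulated in the previous step.
df <- data.frame(y = y, x = x)

# Best subset selection over X, X^2, ..., X^10 (raw polynomial terms)
fit_best <- regsubsets(y ~ poly(x, 10, raw = TRUE), data = df, nvmax = 10)
best_sum <- summary(fit_best)   # assign the summary to inspect Cp, BIC, adj-R^2

which.min(best_sum$cp)      # model size minimizing Cp
which.min(best_sum$bic)     # model size minimizing BIC
which.max(best_sum$adjr2)   # model size maximizing adjusted R^2

plot(best_sum$bic, type = "b", xlab = "Number of predictors", ylab = "BIC")
coef(fit_best, which.min(best_sum$bic))   # coefficients of the BIC-best model
```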
Repeat (C), using forward stepwise selection and also using backward stepwise selection. How does your answer compare to the results in (C)?
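A possible sketch, assuming the data frame df from the best-subset step; only the method argument changes:

```r
library(leaps)

# Assumes df = data.frame(y, x) from the best-subset step.
fit_fwd <- regsubsets(y ~ poly(x, 10, raw = TRUE), data = df,
                      nvmax = 10, method = "forward")
fit_bwd <- regsubsets(y ~ poly(x, 10, raw = TRUE), data = df,
                      nvmax = 10, method = "backward")

summary(fit_fwd)   # compare selected variables to the best-subset results
summary(fit_bwd)
```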
Now fit a lasso model to the simulated data, again using \(X,X^2, . . . , X^{10}\) as predictors. Use cross-validation to select the optimal value of \(\lambda\). Create plots of the cross-validation error as a function of \(\lambda\). Report the resulting coefficient estimates, and discuss the results obtained.
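One way to start the lasso step, assuming the simulated data frame df from above (glmnet requires a numeric predictor matrix, so model.matrix() is used and its intercept column dropped):

```r
library(glmnet)

# Assumes df = data.frame(y, x) from the earlier steps.
xmat <- model.matrix(y ~ poly(x, 10, raw = TRUE), data = df)[, -1]

set.seed(1)                                  # arbitrary seed for CV folds
cv_lasso <- cv.glmnet(xmat, df$y, alpha = 1)  # alpha = 1 gives the lasso
plot(cv_lasso)                               # CV error vs. log(lambda)

best_lambda <- cv_lasso$lambda.min
predict(cv_lasso, type = "coefficients", s = best_lambda)
```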
# INSERT CODE HERE
ANSWER:
In this exercise, we will predict the number of applications received using the other variables in the College data set.
Split the data set into a training set and a test set.
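A minimal sketch of the split, assuming the College data come from the ISLR package; the seed and the 50/50 split proportion are arbitrary choices:

```r
library(ISLR)   # assumption: College is loaded from the ISLR package

set.seed(1)     # arbitrary seed so the split is reproducible
train_idx     <- sample(nrow(College), nrow(College) / 2)
college_train <- College[train_idx, ]
college_test  <- College[-train_idx, ]
```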
Fit a linear model using least squares on the training set, and report the test error obtained.
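One possible sketch, assuming the college_train / college_test split from the previous step, with test error measured as mean squared error:

```r
# Assumes college_train and college_test from the split step.
lm_fit  <- lm(Apps ~ ., data = college_train)
lm_pred <- predict(lm_fit, newdata = college_test)
mean((college_test$Apps - lm_pred)^2)   # test MSE
```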
Fit a ridge regression model on the training set, with \(\lambda\) chosen by cross-validation. Report the test error obtained.
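A hedged sketch of the ridge fit, assuming the train/test split from above (in glmnet, alpha = 0 selects ridge regression):

```r
library(glmnet)

# Assumes college_train and college_test from the split step.
x_train <- model.matrix(Apps ~ ., data = college_train)[, -1]
x_test  <- model.matrix(Apps ~ ., data = college_test)[, -1]

set.seed(1)                                                      # arbitrary seed
cv_ridge   <- cv.glmnet(x_train, college_train$Apps, alpha = 0)  # ridge
ridge_pred <- predict(cv_ridge, s = cv_ridge$lambda.min, newx = x_test)
mean((college_test$Apps - ridge_pred)^2)   # test MSE
```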
Fit a lasso model on the training set, with \(\lambda\) chosen by cross-validation. Report the test error obtained, along with the number of non-zero coefficient estimates.
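A sketch of the lasso fit under the same assumptions (model matrices are rebuilt here so the chunk stands alone; alpha = 1 selects the lasso):

```r
library(glmnet)

# Assumes college_train and college_test from the split step.
x_train <- model.matrix(Apps ~ ., data = college_train)[, -1]
x_test  <- model.matrix(Apps ~ ., data = college_test)[, -1]

set.seed(1)                                                      # arbitrary seed
cv_lasso   <- cv.glmnet(x_train, college_train$Apps, alpha = 1)  # lasso
lasso_pred <- predict(cv_lasso, s = cv_lasso$lambda.min, newx = x_test)
mean((college_test$Apps - lasso_pred)^2)   # test MSE

lasso_coef <- predict(cv_lasso, type = "coefficients", s = cv_lasso$lambda.min)
sum(lasso_coef != 0) - 1   # non-zero coefficients, excluding the intercept
```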
Fit a PCR model on the training set, with \(M\) chosen by cross-validation. Report the test error obtained, along with the value of \(M\) selected by cross-validation.
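A possible sketch using the pls package, assuming the train/test split from above; the ncomp value below is a hypothetical placeholder to be replaced with the \(M\) read off the cross-validation plot:

```r
library(pls)

# Assumes college_train and college_test from the split step.
set.seed(1)   # arbitrary seed for the CV folds
pcr_fit <- pcr(Apps ~ ., data = college_train, scale = TRUE, validation = "CV")
validationplot(pcr_fit, val.type = "MSEP")   # choose M from this plot

# Hypothetical M = 10: substitute the value selected by cross-validation.
pcr_pred <- predict(pcr_fit, college_test, ncomp = 10)
mean((college_test$Apps - pcr_pred)^2)   # test MSE
```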
Fit a PLS model on the training set, with \(M\) chosen by cross-validation. Report the test error obtained, along with the value of \(M\) selected by cross-validation.
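The PLS sketch mirrors the PCR step under the same assumptions, swapping pcr() for plsr(); the ncomp value is again a hypothetical placeholder:

```r
library(pls)

# Assumes college_train and college_test from the split step.
set.seed(1)   # arbitrary seed for the CV folds
pls_fit <- plsr(Apps ~ ., data = college_train, scale = TRUE, validation = "CV")
validationplot(pls_fit, val.type = "MSEP")   # choose M from this plot

# Hypothetical M = 10: substitute the value selected by cross-validation.
pls_pred <- predict(pls_fit, college_test, ncomp = 10)
mean((college_test$Apps - pls_pred)^2)   # test MSE
```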
Comment on the results obtained. How accurately can we predict the number of college applications received? Is there much difference among the test errors resulting from these five approaches?
# INSERT CODE HERE
ANSWER: