--- title: "VariableSelection Vignette" output: rmarkdown::html_vignette: toc: TRUE number_sections: TRUE vignette: > %\VignetteIndexEntry{VariableSelection Vignette} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` # Performing variable selection - Stepwise variable selection, and branch and bound selection can be done using `VariableSelection()`. - `VariableSelection()` can accept either a `BranchGLM` object or a formula along with the data and the desired family and link to perform the variable selection. - Available metrics are AIC, BIC and HQIC, which are used to compare models and to select the best models. - `VariableSelection()` returns some information about the search, more detailed information about the best models can be seen by using the `summary()` function. - Note that `VariableSelection()` will properly handle interaction terms and categorical variables. - `keep` can also be specified if any set of variables are desired to be kept in every model. ## Metrics - The 3 different metrics available for comparing models are the following - Akaike information criterion (AIC), which typically results in models that are useful for prediction - $AIC = -2logLik + 2 \times p$ - Bayesian information criterion (BIC), which results in models that are more parsimonious than those selected by AIC - $BIC = -2logLik + \log{(n)} \times p$ - Hannan-Quinn information criterion (HQIC), which is in the middle of AIC and BIC - $HQIC = -2logLik + 2 * \log({\log{(n)})} \times p$ ## Stepwise algorithms - Forward selection, backward elimination, fast backward elimination, double backward elimination, and fast double backward elimination are all stepwise variable selection algorithms. - They are not guaranteed to find the best model or even a good model, but they are very fast. - Forward selection is recommended if the number of variables is greater than the number of observations or if many of the larger models don't converge. - Parallel computation can be used for these algorithms, but is generally only necessary for large datasets. - The `plot` function can be used to see the path taken by the stepwise algorithms ### Forward selection ```{r, fig.height = 4, fig.width = 6} # Loading BranchGLM package library(BranchGLM) # Fitting gamma regression model cars <- mtcars # Fitting gamma regression with inverse link GammaFit <- BranchGLM(mpg ~ ., data = cars, family = "gamma", link = "inverse") # Forward selection with mtcars forwardVS <- VariableSelection(GammaFit, type = "forward") forwardVS ## Getting final coefficients coef(forwardVS, which = 1) ## Plotting path plot(forwardVS) ``` ### Backward elimination #### Traditional variant ```{r, fig.height = 4, fig.width = 6} # Backward elimination with mtcars backwardVS <- VariableSelection(GammaFit, type = "backward") backwardVS ## Getting final coefficients coef(backwardVS, which = 1) ## Plotting path plot(backwardVS) ``` #### Fast variant Fast backward elimination is equivalent to traditional backward elimination, except much faster. The results from the two algorithms may differ if many of the larger models in the candidate set of models are difficult to fit numerically. 

## Stepwise algorithms

- Forward selection, backward elimination, fast backward elimination, double backward elimination, and fast double backward elimination are all stepwise variable selection algorithms.
- They are not guaranteed to find the best model, or even a good model, but they are very fast.
- Forward selection is recommended if the number of variables is greater than the number of observations or if many of the larger models do not converge.
- Parallel computation can be used for these algorithms, but it is generally only necessary for large datasets.
- The `plot()` function can be used to see the path taken by the stepwise algorithms.

### Forward selection

```{r, fig.height = 4, fig.width = 6}
# Loading BranchGLM package
library(BranchGLM)

# Using the mtcars data
cars <- mtcars

# Fitting gamma regression with inverse link
GammaFit <- BranchGLM(mpg ~ ., data = cars, family = "gamma", link = "inverse")

# Forward selection with mtcars
forwardVS <- VariableSelection(GammaFit, type = "forward")
forwardVS

## Getting final coefficients
coef(forwardVS, which = 1)

## Plotting path
plot(forwardVS)
```

### Backward elimination

#### Traditional variant

```{r, fig.height = 4, fig.width = 6}
# Backward elimination with mtcars
backwardVS <- VariableSelection(GammaFit, type = "backward")
backwardVS

## Getting final coefficients
coef(backwardVS, which = 1)

## Plotting path
plot(backwardVS)
```

#### Fast variant

Fast backward elimination is equivalent to traditional backward elimination, except much faster. The results from the two algorithms may differ if many of the larger models in the candidate set are difficult to fit numerically.

```{r, fig.height = 4, fig.width = 6}
# Fast backward elimination with mtcars
fastbackwardVS <- VariableSelection(GammaFit, type = "fast backward")
fastbackwardVS

## Getting final coefficients
coef(fastbackwardVS, which = 1)

## Plotting path
plot(fastbackwardVS)
```

We got the same model as traditional backward elimination, but we only had to fit `r fastbackwardVS$numchecked` models, while traditional backward elimination had to fit `r backwardVS$numchecked` models.

### Double backward elimination

One of the double backward elimination variants can also be used to (potentially) find higher-quality models than those found by traditional backward elimination.

```{r, fig.height = 4, fig.width = 6}
# Fast double backward elimination with mtcars
fastdoublebackwardVS <- VariableSelection(GammaFit, type = "fast double backward")
fastdoublebackwardVS

## Getting final coefficients
coef(fastdoublebackwardVS, which = 1)

## Plotting path
plot(fastdoublebackwardVS)
```

However, in this case it selects the same final model as backward elimination while fitting slightly more models.

## Branch and bound

- The branch and bound algorithms can be much slower than the stepwise algorithms, but they are guaranteed to find the best models.
- The branch and bound algorithms are typically much faster than an exhaustive search and can be made even faster with parallel computation.

### Branch and bound example

- If `showprogress` is `TRUE`, then the progress of the branch and bound algorithm is reported occasionally.
- Parallel computation can be used with these algorithms and can lead to very large speedups; a sketch follows the example below.

```{r}
# Branch and bound with mtcars
VS <- VariableSelection(GammaFit, type = "branch and bound", showprogress = FALSE)
VS

## Getting final coefficients
coef(VS, which = 1)
```
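
The chunk below sketches how parallel computation might be enabled. It assumes the `parallel` and `nthreads` arguments (check `?VariableSelection` for the exact argument names in your version) and is not evaluated here, since parallelism mainly pays off on datasets much larger than `mtcars`.

```{r, eval = FALSE}
# A sketch of parallel branch and bound; the parallel and nthreads
# arguments are assumptions, see ?VariableSelection for the exact names
parallelVS <- VariableSelection(GammaFit, type = "branch and bound",
                                showprogress = FALSE,
                                parallel = TRUE, nthreads = 4)
parallelVS
```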

- A formula with the data and the necessary `BranchGLM` fitting information can also be used instead of supplying a `BranchGLM` object.

```{r}
# Can also use a formula and data
formulaVS <- VariableSelection(mpg ~ ., data = cars, family = "gamma",
                               link = "inverse", type = "branch and bound",
                               showprogress = FALSE, metric = "AIC")
formulaVS

## Getting final coefficients
coef(formulaVS, which = 1)
```

### Using bestmodels

- The `bestmodels` argument can be used to find the top k models according to the metric.

```{r, fig.height = 4, fig.width = 6}
# Finding top 10 models
formulaVS <- VariableSelection(mpg ~ ., data = cars, family = "gamma",
                               link = "inverse", type = "branch and bound",
                               showprogress = FALSE, metric = "AIC",
                               bestmodels = 10)
formulaVS

## Plotting results
plot(formulaVS, type = "b")

## Getting all coefficients
coef(formulaVS, which = "all")
```

### Using cutoff

- The `cutoff` argument can be used to find all models that have a metric value within `cutoff` of the minimum metric value found.

```{r, fig.height = 4, fig.width = 6}
# Finding all models with an AIC within 2 of the best model
formulaVS <- VariableSelection(mpg ~ ., data = cars, family = "gamma",
                               link = "inverse", type = "branch and bound",
                               showprogress = FALSE, metric = "AIC",
                               cutoff = 2)
formulaVS

## Plotting results
plot(formulaVS, type = "b")
```

## Using keep

- Specifying variables via `keep` ensures that those variables are kept through the selection process.

```{r, fig.height = 4, fig.width = 6}
# Example of using keep
keepVS <- VariableSelection(mpg ~ ., data = cars, family = "gamma",
                            link = "inverse", type = "branch and bound",
                            keep = c("hp", "cyl"), metric = "AIC",
                            showprogress = FALSE, bestmodels = 10)
keepVS

## Plotting results
plot(keepVS, type = "b")

## Getting coefficients for top 10 models
coef(keepVS, which = "all")
```

## Categorical variables

- Categorical variables are automatically grouped together; if this behavior is not desired, then the indicator variables for that categorical variable should be created before using `VariableSelection()`.
- First, we show an example of the default behavior of the function with a categorical variable. In this example the categorical variable of interest is `Species`.

```{r, fig.height = 4, fig.width = 6}
# Variable selection with grouped beta parameters for Species
Data <- iris
VS <- VariableSelection(Sepal.Length ~ ., data = Data, family = "gaussian",
                        link = "identity", metric = "AIC", bestmodels = 10,
                        showprogress = FALSE)
VS

## Plotting results
plot(VS, cex.names = 0.75, type = "b")
```

- Next, we show an example where the beta parameters for each level of `Species` are handled separately.

```{r, fig.height = 4, fig.width = 6}
# Treating categorical variable beta parameters separately
## This function automatically groups together parameters from a categorical variable;
## to avoid this, you need to create the indicator variables yourself
x <- model.matrix(Sepal.Length ~ ., data = iris)
Sepal.Length <- iris$Sepal.Length
Data <- cbind.data.frame(Sepal.Length, x[, -1])
VSCat <- VariableSelection(Sepal.Length ~ ., data = Data, family = "gaussian",
                           link = "identity", metric = "AIC", bestmodels = 10,
                           showprogress = FALSE)
VSCat

## Plotting results
plot(VSCat, cex.names = 0.75, type = "b")
```

## Convergence issues

- It is not recommended to use the branch and bound algorithms if many of the upper models do not converge, since this can make the algorithms very slow.
- When using one of the backward elimination algorithms, if none of the upper models that are tested converge, then no final model can be selected.
- For these reasons, if there are convergence issues it is recommended to use forward selection, as sketched below.
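
As a concrete illustration of this advice, the sketch below simply swaps the selection type to `"forward"`. It reuses the `mtcars` setup from above and is not evaluated, since it mirrors the forward selection example shown earlier.

```{r, eval = FALSE}
# A minimal sketch: falling back to forward selection when larger models
# have convergence problems (mirrors the earlier forward selection example)
fallbackVS <- VariableSelection(mpg ~ ., data = cars, family = "gamma",
                                link = "inverse", type = "forward",
                                metric = "AIC")
fallbackVS
```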