This learner provides fitting procedures for elastic net models, including both lasso (L1-penalized) and ridge (L2-penalized) regression, using the glmnet package. The function cv.glmnet is used to select an appropriate value of the regularization parameter lambda. For details on these regularized regression models and glmnet, consider consulting Friedman et al. (2010).
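As a rough illustration of the lambda selection step, here is a minimal sketch calling cv.glmnet directly (the learner's actual internal call may differ):

library(glmnet)

# cross-validated selection of the regularization parameter lambda
x <- as.matrix(mtcars[, c("cyl", "disp", "hp", "wt")])
y <- mtcars$mpg
cv_fit <- cv.glmnet(x, y, alpha = 1, nfolds = 10)
cv_fit$lambda.min # lambda minimizing the cross-validated loss
cv_fit$lambda.1se # largest lambda within one standard error of the minimum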
A learner object inheriting from Lrnr_base with methods for training and prediction. For a full list of learner functionality, see the complete documentation of Lrnr_base.
lambda = NULL: An optional vector of lambda values to compare.

type.measure = "deviance": The loss to use when selecting lambda. Options are documented in cv.glmnet.

nfolds = 10: Number of k-fold/V-fold cross-validation folds for cv.glmnet to consider when selecting the optimal lambda with cross-validation. The smallest nfolds value allowed by glmnet is 3. For further details, consult the documentation of cv.glmnet.

alpha = 1: The elastic net parameter: alpha = 0 is ridge (L2-penalized) regression, while alpha = 1 specifies lasso (L1-penalized) regression. Values in the closed unit interval specify a weighted combination of the two penalties. For further details, consult the documentation of glmnet. A sketch combining this and the parameters below appears after this list.

nlambda = 100: The number of lambda values to fit. Comparing fewer values will speed up computation, but may hurt statistical performance. For further details, consult the documentation of cv.glmnet.

use_min = TRUE: If TRUE, the smallest value of the lambda regularization parameter is used for prediction (i.e., lambda = cv_fit$lambda.min); otherwise, a larger value is used (i.e., lambda = cv_fit$lambda.1se). The distinction between the two variants is clarified in the documentation of cv.glmnet.
...: Other parameters passed to cv.glmnet and glmnet, and additional arguments defined in Lrnr_base, such as the formula parameter.
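These parameters can be combined when constructing the learner; a minimal sketch (the values below are arbitrary illustrations, not recommendations):

enet_custom <- Lrnr_glmnet$new(
  alpha = 0.5, # weighted combination of the L1 and L2 penalties
  nlambda = 50, # compare fewer lambda values to speed up computation
  nfolds = 5, # fewer cross-validation folds (glmnet requires at least 3)
  use_min = FALSE # predict with lambda.1se rather than lambda.min
)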
Friedman J, Hastie T, Tibshirani R (2010). “Regularization Paths for Generalized Linear Models via Coordinate Descent.” Journal of Statistical Software, 33(1), 1-22.
Other Learners: Custom_chain, Lrnr_HarmonicReg, Lrnr_arima, Lrnr_bartMachine, Lrnr_base, Lrnr_bayesglm, Lrnr_bilstm, Lrnr_caret, Lrnr_cv_selector, Lrnr_cv, Lrnr_dbarts, Lrnr_define_interactions, Lrnr_density_discretize, Lrnr_density_hse, Lrnr_density_semiparametric, Lrnr_earth, Lrnr_expSmooth, Lrnr_gam, Lrnr_ga, Lrnr_gbm, Lrnr_glm_fast, Lrnr_glm_semiparametric, Lrnr_glmtree, Lrnr_glm, Lrnr_grfcate, Lrnr_grf, Lrnr_gru_keras, Lrnr_gts, Lrnr_h2o_grid, Lrnr_hal9001, Lrnr_haldensify, Lrnr_hts, Lrnr_independent_binomial, Lrnr_lightgbm, Lrnr_lstm_keras, Lrnr_mean, Lrnr_multiple_ts, Lrnr_multivariate, Lrnr_nnet, Lrnr_nnls, Lrnr_optim, Lrnr_pca, Lrnr_pkg_SuperLearner, Lrnr_polspline, Lrnr_pooled_hazards, Lrnr_randomForest, Lrnr_ranger, Lrnr_revere_task, Lrnr_rpart, Lrnr_rugarch, Lrnr_screener_augment, Lrnr_screener_coefs, Lrnr_screener_correlation, Lrnr_screener_importance, Lrnr_sl, Lrnr_solnp_density, Lrnr_solnp, Lrnr_stratified, Lrnr_subset_covariates, Lrnr_svm, Lrnr_tsDyn, Lrnr_ts_weights, Lrnr_xgboost, Pipeline, Stack, define_h2o_X(), undocumented_learner
library(sl3)

data(mtcars)
mtcars_task <- sl3_Task$new(
  data = mtcars,
  covariates = c(
    "cyl", "disp", "hp", "drat", "wt", "qsec", "vs", "am",
    "gear", "carb"
  ),
  outcome = "mpg"
)
# simple prediction with lasso penalty
lasso_lrnr <- Lrnr_glmnet$new()
lasso_fit <- lasso_lrnr$train(mtcars_task)
lasso_preds <- lasso_fit$predict()
# simple prediction with ridge penalty
ridge_lrnr <- Lrnr_glmnet$new(alpha = 0)
ridge_fit <- ridge_lrnr$train(mtcars_task)
ridge_preds <- ridge_fit$predict()
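# elastic net with an intermediate alpha: a weighted combination of the
# L1 and L2 penalties (the value 0.5 is an arbitrary illustration)
enet_lrnr <- Lrnr_glmnet$new(alpha = 0.5)
enet_fit <- enet_lrnr$train(mtcars_task)
enet_preds <- enet_fit$predict()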