This learner is the cross-validated (CV) selector, and it is intended for use as the `metalearner` in `Lrnr_sl`. `Lrnr_cv_selector` selects the candidate with the best CV predictive performance (i.e., lowest CV risk). Specifically, it aims to optimize the CV risk, and it is defined by a constrained weighted combination: the weights can either be zero or one, and they must sum to one. `Lrnr_cv_selector` optimizes the CV predictive performance under these constraints by assigning a weight of one to the candidate with the best CV predictive performance and a weight of zero to all others. Thus, `Lrnr_cv_selector` and its predictions will be identical to the best-performing candidate learner and its predictions; this is why we say `Lrnr_cv_selector` "selects" the candidate with the best CV predictive performance.
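
The selection rule itself is simple. The following minimal base-R sketch (simulated stand-in predictions and a hand-rolled squared-error risk, not sl3 internals) illustrates the zero/one weighting:

```
# Hypothetical cross-validated predictions: one column per candidate learner,
# each row a validation-set prediction for the corresponding observation.
set.seed(1)
y <- rnorm(100)
cv_preds <- cbind(
  glm    = y + rnorm(100, sd = 0.5),
  ranger = y + rnorm(100, sd = 0.7),
  mean   = rep(mean(y), 100)
)

# CV risk of each candidate under squared-error loss.
cv_risks <- apply(cv_preds, 2, function(pred) mean((pred - y)^2))

# The CV selector gives weight one to the lowest-risk candidate and zero to the
# rest, so the weights are zero/one and sum to one.
weights <- as.numeric(seq_along(cv_risks) == which.min(cv_risks))
names(weights) <- names(cv_risks)
weights
```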

A learner object inheriting from `Lrnr_base` with methods for training and prediction. For a full list of learner functionality, see the complete documentation of `Lrnr_base`.

`eval_function = loss_squared_error`

: A function that takes a vector of predicted values as its first argument and a vector of observed outcome values as its second argument, and returns either a vector of losses or a numeric risk. See loss_functions and risk_functions for options.

`folds = NULL`

: Optional origami-structured cross-validation folds from the task for training `Lrnr_sl`, e.g., `task$folds`. This argument is only required and used when `eval_function` is not a loss function, since the risk must then be calculated on each validation set separately and averaged across validation sets to estimate the cross-validated risk. This argument is ignored when `eval_function` is a loss.
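
As a concrete illustration of the `eval_function` interface described above, here is a minimal sketch of a custom loss function; `loss_absolute_error` is a hypothetical name used for illustration, not necessarily an sl3 export:

```
# Predictions as the first argument, observed outcomes as the second, returning
# one loss per observation (absolute-error loss here).
loss_absolute_error <- function(pred, observed) {
  abs(pred - observed)
}

# It could then be supplied in place of the default, e.g.:
# metalrnr <- Lrnr_cv_selector$new(eval_function = loss_absolute_error)
```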

Other Learners:
`Custom_chain`, `Lrnr_HarmonicReg`, `Lrnr_arima`, `Lrnr_bartMachine`, `Lrnr_base`, `Lrnr_bayesglm`, `Lrnr_bilstm`, `Lrnr_caret`, `Lrnr_cv`, `Lrnr_dbarts`, `Lrnr_define_interactions`, `Lrnr_density_discretize`, `Lrnr_density_hse`, `Lrnr_density_semiparametric`, `Lrnr_earth`, `Lrnr_expSmooth`, `Lrnr_gam`, `Lrnr_ga`, `Lrnr_gbm`, `Lrnr_glm_fast`, `Lrnr_glm_semiparametric`, `Lrnr_glmnet`, `Lrnr_glmtree`, `Lrnr_glm`, `Lrnr_grfcate`, `Lrnr_grf`, `Lrnr_gru_keras`, `Lrnr_gts`, `Lrnr_h2o_grid`, `Lrnr_hal9001`, `Lrnr_haldensify`, `Lrnr_hts`, `Lrnr_independent_binomial`, `Lrnr_lightgbm`, `Lrnr_lstm_keras`, `Lrnr_mean`, `Lrnr_multiple_ts`, `Lrnr_multivariate`, `Lrnr_nnet`, `Lrnr_nnls`, `Lrnr_optim`, `Lrnr_pca`, `Lrnr_pkg_SuperLearner`, `Lrnr_polspline`, `Lrnr_pooled_hazards`, `Lrnr_randomForest`, `Lrnr_ranger`, `Lrnr_revere_task`, `Lrnr_rpart`, `Lrnr_rugarch`, `Lrnr_screener_augment`, `Lrnr_screener_coefs`, `Lrnr_screener_correlation`, `Lrnr_screener_importance`, `Lrnr_sl`, `Lrnr_solnp_density`, `Lrnr_solnp`, `Lrnr_stratified`, `Lrnr_subset_covariates`, `Lrnr_svm`, `Lrnr_tsDyn`, `Lrnr_ts_weights`, `Lrnr_xgboost`, `Pipeline`, `Stack`, `define_h2o_X()`, `undocumented_learner`
```
if (FALSE) {
  library(sl3)

  # Task: predict height-for-age z-score (haz) from baseline covariates.
  data(cpp_imputed)
  covs <- c("apgar1", "apgar5", "parity", "gagebrth", "mage", "meducyrs")
  task <- sl3_Task$new(cpp_imputed, covariates = covs, outcome = "haz")

  # Candidate learners for the super learner library.
  hal_lrnr <- Lrnr_hal9001$new(
    max_degree = 1, num_knots = c(20, 10), smoothness_orders = 0
  )
  lasso_lrnr <- Lrnr_glmnet$new()
  glm_lrnr <- Lrnr_glm$new()
  ranger_lrnr <- Lrnr_ranger$new()
  lrnrs <- c(hal_lrnr, lasso_lrnr, glm_lrnr, ranger_lrnr)
  names(lrnrs) <- c("hal", "lasso", "glm", "ranger")
  lrnr_stack <- make_learner(Stack, lrnrs)

  # Discrete super learner: the CV selector metalearner picks the single
  # candidate with the lowest cross-validated mean squared error.
  metalrnr_discrete_MSE <- Lrnr_cv_selector$new(loss_squared_error)
  discrete_sl <- Lrnr_sl$new(
    learners = lrnr_stack, metalearner = metalrnr_discrete_MSE
  )
  discrete_sl_fit <- discrete_sl$train(task)

  # Cross-validated risk estimates for each candidate learner.
  discrete_sl_fit$cv_risk(loss_squared_error)
}
```