This meta-learner provides fitting procedures for any pairing of loss or risk
function and metalearner function, subject to constraints. The optimization
problem is solved with solnp from the Rsolnp package, using Lagrange
multipliers. An important note from the solnp documentation is that the
control parameters tol and delta are key to any possibility of successful
convergence; it is therefore suggested that the user change these
appropriately to reflect their problem specification. For further details,
consult the documentation of the Rsolnp package.
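As a concrete illustration of that convergence note, the solver controls can
be adjusted when constructing the learner. A minimal sketch using only the
parameters documented below (the values are illustrative, not
recommendations):

```r
library(sl3)

# Loosen the Rsolnp tolerances for a difficult problem specification.
# tol and delta are the control parameters documented below.
solnp_meta <- Lrnr_solnp$new(
  tol = 1e-5,   # relative tolerance on feasibility and optimality
  delta = 1e-5  # relative step size in forward-difference evaluation
)
```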
A learner object inheriting from Lrnr_base with methods for training and
prediction. For a full list of learner functionality, see the complete
documentation of Lrnr_base.
learner_function = metalearner_linear: A function(alpha, X) that takes a
vector of coefficients (alpha) and a matrix of data (X) and combines them
into a vector of predictions. See metalearners for options.
eval_function = loss_squared_error: A function(pred, truth) that takes
prediction and truth vectors and returns a loss vector or a risk scalar. See
loss_functions and risk_functions for options and more detail.
make_sparse = TRUE: If TRUE, zeros out small alpha values.
convex_combination = TRUE: If TRUE, constrains alpha to sum to 1.
init_0 = FALSE: If TRUE, alpha is initialized to all 0's, which is useful for
TMLE. Otherwise, it is initialized to equal weights summing to 1, which is
useful for Super Learner.
rho = 1: Penalty weighting scale factor for infeasibility in the augmented
objective function. Higher values weight the solution more heavily toward the
feasible region (default 1), but very high values can lead to numerical
ill-conditioning or significantly slow convergence.
outer.iter = 400: Maximum number of major (outer) iterations.
inner.iter = 800: Maximum number of minor (inner) iterations.
delta = 1e-7: Relative step size in forward-difference evaluation.
tol = 1e-8: Relative tolerance on feasibility and optimality.
trace = FALSE: If TRUE, the value of the objective function and the
parameters are printed at every major iteration.
...: Additional arguments defined in Lrnr_base, such as params (like formula)
and name.
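For example, several of the parameters above can be combined to configure a
convex-combination metalearner initialized at zero (the TMLE use case noted
under init_0). A sketch using only the arguments documented here:

```r
library(sl3)

# Convex-combination metalearner: alpha constrained to sum to 1,
# initialized at all 0's as suggested for TMLE.
solnp_meta <- Lrnr_solnp$new(
  learner_function = metalearner_linear,
  eval_function = loss_squared_error,
  convex_combination = TRUE,
  init_0 = TRUE
)
```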
Other Learners: Custom_chain, Lrnr_HarmonicReg, Lrnr_arima, Lrnr_bartMachine,
Lrnr_base, Lrnr_bayesglm, Lrnr_bilstm, Lrnr_caret, Lrnr_cv_selector, Lrnr_cv,
Lrnr_dbarts, Lrnr_define_interactions, Lrnr_density_discretize,
Lrnr_density_hse, Lrnr_density_semiparametric, Lrnr_earth, Lrnr_expSmooth,
Lrnr_gam, Lrnr_ga, Lrnr_gbm, Lrnr_glm_fast, Lrnr_glm_semiparametric,
Lrnr_glmnet, Lrnr_glmtree, Lrnr_glm, Lrnr_grfcate, Lrnr_grf, Lrnr_gru_keras,
Lrnr_gts, Lrnr_h2o_grid, Lrnr_hal9001, Lrnr_haldensify, Lrnr_hts,
Lrnr_independent_binomial, Lrnr_lightgbm, Lrnr_lstm_keras, Lrnr_mean,
Lrnr_multiple_ts, Lrnr_multivariate, Lrnr_nnet, Lrnr_nnls, Lrnr_optim,
Lrnr_pca, Lrnr_pkg_SuperLearner, Lrnr_polspline, Lrnr_pooled_hazards,
Lrnr_randomForest, Lrnr_ranger, Lrnr_revere_task, Lrnr_rpart, Lrnr_rugarch,
Lrnr_screener_augment, Lrnr_screener_coefs, Lrnr_screener_correlation,
Lrnr_screener_importance, Lrnr_sl, Lrnr_solnp_density, Lrnr_stratified,
Lrnr_subset_covariates, Lrnr_svm, Lrnr_tsDyn, Lrnr_ts_weights, Lrnr_xgboost,
Pipeline, Stack, define_h2o_X(), undocumented_learner
library(sl3)

# define ML task
data(cpp_imputed)
covs <- c("apgar1", "apgar5", "parity", "gagebrth", "mage", "meducyrs")
task <- sl3_Task$new(cpp_imputed, covariates = covs, outcome = "haz")
# build relatively fast learner library (not recommended for real analysis)
lasso_lrnr <- Lrnr_glmnet$new()
glm_lrnr <- Lrnr_glm$new()
ranger_lrnr <- Lrnr_ranger$new()
lrnrs <- c(lasso_lrnr, glm_lrnr, ranger_lrnr)
names(lrnrs) <- c("lasso", "glm", "ranger")
lrnr_stack <- make_learner(Stack, lrnrs)
# instantiate SL with solnp metalearner
solnp_meta <- Lrnr_solnp$new()
sl <- Lrnr_sl$new(lrnr_stack, solnp_meta)
sl_fit <- sl$train(task)
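Continuing the example, the trained Super Learner can be used for prediction
via the methods inherited from Lrnr_base; the coefficients accessor below is
an assumption about the fit object, shown for illustration only:

```r
# predictions on the training task
sl_preds <- sl_fit$predict(task)
head(sl_preds)

# metalearner weights found by solnp (assumed accessor on the fit object)
sl_fit$coefficients
```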