This meta-learner provides fitting procedures for any pairing of loss or risk function and metalearner function, subject to constraints. The optimization problem is solved with solnp, which uses an augmented Lagrange multiplier method. The solnp documentation notes that the control parameters tol and delta are key to successful convergence, so users should adjust them to reflect their problem specification. For further details, consult the documentation of the Rsolnp package.
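
As a minimal sketch of the kind of constrained optimization delegated to Rsolnp (illustrative only, not the internals of Lrnr_solnp), the following uses Rsolnp::solnp to estimate convex weights that minimize the squared-error risk of a linear combination of two simulated candidate predictions; the simulated data, starting values, and control settings are assumptions for illustration.

library(Rsolnp)

set.seed(1)
Z <- cbind(rnorm(100), rnorm(100)) # simulated candidate learner predictions
y <- 0.7 * Z[, 1] + 0.3 * Z[, 2] + rnorm(100, sd = 0.1) # simulated outcome

# squared-error risk of the weighted combination Z %*% alpha
risk <- function(alpha) mean((y - Z %*% alpha)^2)

fit <- solnp(
  pars = rep(1 / ncol(Z), ncol(Z)), # equal weights summing to 1
  fun = risk,
  eqfun = function(alpha) sum(alpha), eqB = 1, # convex combination: weights sum to 1
  LB = rep(0, ncol(Z)), UB = rep(1, ncol(Z)), # weights bounded in [0, 1]
  control = list(trace = 0, tol = 1e-8, delta = 1e-7)
)
fit$pars # estimated weights (should be close to the simulated 0.7 and 0.3)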

Format

An R6Class object inheriting from Lrnr_base.

Value

A learner object inheriting from Lrnr_base with methods for training and prediction. For a full list of learner functionality, see the complete documentation of Lrnr_base.

Parameters

  • learner_function = metalearner_linear: A function(alpha, X) that takes a vector of coefficients alpha and a matrix of data X (e.g., candidate learner predictions) and combines them into a vector of predictions. See metalearners for options.

  • eval_function = loss_squared_error: A function(pred, truth) that takes prediction and truth vectors and returns a loss vector or a risk scalar. See loss_functions and risk_functions for options and more detail.

  • make_sparse = TRUE: If TRUE, zeros out small alpha values.

  • convex_combination = TRUE: If TRUE, constrain alpha to sum to 1.

  • init_0 = FALSE: If TRUE, alpha is initialized to all 0's, useful for TMLE. Otherwise, it is initialized to equal weights summing to 1, useful for Super Learner.

  • rho = 1: Penalty weighting scalar for infeasibility in the augmented objective function. Higher values place more weight on bringing the solution into the feasible region, but very high values can lead to numerical ill-conditioning or significantly slow convergence.

  • outer.iter = 400: Maximum number of major (outer) iterations.

  • inner.iter = 800: Maximum number of minor (inner) iterations.

  • delta = 1e-7: Relative step size in forward-difference evaluation.

  • tol = 1e-8: Relative tolerance on feasibility and optimality. Overriding tol and delta for a particular problem is shown in the sketch after this list.

  • trace = FALSE: If TRUE, the value of the objective function and the parameters are printed at every major iteration.

  • ...: Additional arguments defined in Lrnr_base, such as params (like formula) and name.
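
As an illustration of these parameters, a metalearner can be constructed with non-default solnp controls. This is a hypothetical instantiation; the particular values are illustrative, not recommendations:

solnp_meta_tuned <- Lrnr_solnp$new(
  learner_function = metalearner_linear,
  eval_function = loss_squared_error,
  make_sparse = TRUE,
  convex_combination = TRUE,
  tol = 1e-6, # looser tolerance for a harder problem
  delta = 1e-5 # larger forward-difference step
)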

Examples

library(sl3)

# define ML task
data(cpp_imputed)
covs <- c("apgar1", "apgar5", "parity", "gagebrth", "mage", "meducyrs")
task <- sl3_Task$new(cpp_imputed, covariates = covs, outcome = "haz")

# build relatively fast learner library (not recommended for real analysis)
lasso_lrnr <- Lrnr_glmnet$new()
glm_lrnr <- Lrnr_glm$new()
ranger_lrnr <- Lrnr_ranger$new()
lrnrs <- c(lasso_lrnr, glm_lrnr, ranger_lrnr)
names(lrnrs) <- c("lasso", "glm", "ranger")
lrnr_stack <- make_learner(Stack, lrnrs)

# instantiate SL with solnp metalearner
solnp_meta <- Lrnr_solnp$new()
sl <- Lrnr_sl$new(lrnr_stack, solnp_meta)
sl_fit <- sl$train(task)
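
# usage sketch (not part of the original example): predictions from the
# fitted super learner
sl_preds <- sl_fit$predict(task)
head(sl_preds)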