The Highly Adaptive Lasso (HAL) is a nonparametric regression estimator that has been shown to optimally estimate functions with bounded (finite) variation norm. The algorithm proceeds by first building an adaptive basis (i.e., the HAL basis) of indicator basis functions (or higher-order spline basis functions) representing the covariates and interactions of the covariates up to a pre-specified degree. The fitting procedures included in this learner use fit_hal from the hal9001 package. For details on HAL regression, consult Benkeser and van der Laan (2016), Coyle et al. (2020), and Hejazi et al. (2020).
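As a point of reference, the underlying hal9001 call is sketched below on a small simulated dataset; the toy data and argument values are illustrative assumptions, since the learner constructs this call internally from the sl3 task.

# illustrative sketch only: calling hal9001's fit_hal directly on toy data
library(hal9001)
set.seed(67391)
x <- matrix(rnorm(200), ncol = 2)
y <- sin(x[, 1]) + x[, 2]^2 + rnorm(100, sd = 0.1)
hal_fit_direct <- fit_hal(
  X = x, Y = y, max_degree = 2, smoothness_orders = 1, num_knots = 5
)
preds <- predict(hal_fit_direct, new_data = x)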
A learner object inheriting from Lrnr_base with methods for training and prediction. For a full list of learner functionality, see the complete documentation of Lrnr_base.
max_degree = 2: An integer specifying the highest order of interaction terms for which basis functions should be generated.
smoothness_orders = 1: An integer specifying the smoothness of the basis functions. See the documentation of the hal9001 package's fit_hal function for more information.
num_knots = 5: An integer vector of length 1 or of length max_degree, specifying the maximum number of knot points (i.e., bins) for each covariate. If num_knots is a unit-length vector, then the same number of knots is used for each degree. See the documentation of the hal9001 package's fit_hal function for more information.
fit_control: A list of arguments, including those documented for fit_hal's fit_control argument, as well as any additional arguments to be passed to cv.glmnet or glmnet. See the hal9001 package's fit_hal function documentation for more information.
...: Other parameters passed to fit_hal, as well as additional arguments defined in Lrnr_base, such as params (e.g., formula). An illustrative constructor call using several of these arguments is sketched after this argument list.
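To illustrate how these arguments combine, the following sketch instantiates the learner with explicit values; the specific numbers and the nfolds entry (a cv.glmnet argument forwarded via fit_control) are illustrative choices, not defaults.

# illustrative sketch: per-degree knot budgets and a fit_control list whose
# extra entries (here, nfolds) are forwarded to cv.glmnet
lrnr_hal <- Lrnr_hal9001$new(
  max_degree = 3,
  smoothness_orders = 1,
  num_knots = c(20, 10, 5), # 20 knots for main terms, 10 for 2-way, 5 for 3-way
  fit_control = list(nfolds = 5)
)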
Other Learners: Custom_chain, Lrnr_HarmonicReg, Lrnr_arima, Lrnr_bartMachine, Lrnr_base, Lrnr_bayesglm, Lrnr_bilstm, Lrnr_caret, Lrnr_cv_selector, Lrnr_cv, Lrnr_dbarts, Lrnr_define_interactions, Lrnr_density_discretize, Lrnr_density_hse, Lrnr_density_semiparametric, Lrnr_earth, Lrnr_expSmooth, Lrnr_gam, Lrnr_ga, Lrnr_gbm, Lrnr_glm_fast, Lrnr_glm_semiparametric, Lrnr_glmnet, Lrnr_glmtree, Lrnr_glm, Lrnr_grfcate, Lrnr_grf, Lrnr_gru_keras, Lrnr_gts, Lrnr_h2o_grid, Lrnr_haldensify, Lrnr_hts, Lrnr_independent_binomial, Lrnr_lightgbm, Lrnr_lstm_keras, Lrnr_mean, Lrnr_multiple_ts, Lrnr_multivariate, Lrnr_nnet, Lrnr_nnls, Lrnr_optim, Lrnr_pca, Lrnr_pkg_SuperLearner, Lrnr_polspline, Lrnr_pooled_hazards, Lrnr_randomForest, Lrnr_ranger, Lrnr_revere_task, Lrnr_rpart, Lrnr_rugarch, Lrnr_screener_augment, Lrnr_screener_coefs, Lrnr_screener_correlation, Lrnr_screener_importance, Lrnr_sl, Lrnr_solnp_density, Lrnr_solnp, Lrnr_stratified, Lrnr_subset_covariates, Lrnr_svm, Lrnr_tsDyn, Lrnr_ts_weights, Lrnr_xgboost, Pipeline, Stack, define_h2o_X(), undocumented_learner
data(cpp_imputed)
covs <- c("apgar1", "apgar5", "parity", "gagebrth", "mage", "meducyrs")
task <- sl3_Task$new(cpp_imputed, covariates = covs, outcome = "haz")
# instantiate with max 2-way interactions, 0-order splines, and binning
# (i.e., num_knots) that decreases with increasing interaction degree
hal_lrnr <- Lrnr_hal9001$new(
  max_degree = 2, smoothness_orders = 0, num_knots = c(5, 3)
)
hal_fit <- hal_lrnr$train(task)
hal_preds <- hal_fit$predict()
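As a quick, illustrative follow-up (not part of the original example), the in-sample predictions can be compared to the observed outcome, which the sl3 task exposes as task$Y.

# summarize the in-sample fit with mean squared error
mse <- mean((hal_preds - task$Y)^2)
mse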