This learner provides fitting procedures for xgboost models via the xgboost package, using xgb.train. Such models are gradient-boosted classification and regression trees. For details on the fitting procedure, consult the documentation of the xgboost package and Chen and Guestrin (2016).

Format

An R6Class object inheriting from Lrnr_base.

Value

A learner object inheriting from Lrnr_base with methods for training and prediction. For a full list of learner functionality, see the complete documentation of Lrnr_base.

Parameters

  • nrounds=20: Number of fitting iterations.

  • ...: Other parameters passed to xgb.train.
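As a sketch of how the ... argument is used, additional hyperparameters can be passed at initialization and are forwarded to xgb.train; the specific values below (and the choice of max_depth and eta, which are standard xgboost arguments) are illustrative, not recommendations:

```r
library(sl3)

# hypothetical tuning example: extra arguments are forwarded to xgb.train
xgb_lrnr_tuned <- Lrnr_xgboost$new(
  nrounds = 50,   # number of boosting iterations
  max_depth = 4,  # maximum tree depth
  eta = 0.1       # learning rate (shrinkage)
)
```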

References

Chen T, Guestrin C (2016). “XGBoost: A Scalable Tree Boosting System.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785--794.

Examples

data(mtcars)
mtcars_task <- sl3_Task$new(
  data = mtcars,
  covariates = c(
    "cyl", "disp", "hp", "drat", "wt", "qsec", "vs", "am",
    "gear", "carb"
  ),
  outcome = "mpg"
)

# initialization, training, and prediction with the defaults
xgb_lrnr <- Lrnr_xgboost$new()
xgb_fit <- xgb_lrnr$train(mtcars_task)
xgb_preds <- xgb_fit$predict()

# get feature importance from fitted model
xgb_varimp <- xgb_fit$importance()