This learner provides feed-forward neural networks with a single hidden layer, as well as multinomial log-linear models, via the nnet package.
An R6Class object: a learner object with methods for both training and prediction. See Lrnr_base for documentation on learners.
formula: A formula of the form class ~ x1 + x2 + ...
weights: (case) weights for each example; if missing, defaults to 1.
size: number of units in the hidden layer. Can be zero if there are skip-layer units.
entropy: switch for entropy (= maximum conditional likelihood) fitting. Default is least-squares fitting.
decay: parameter for weight decay. Default 0.
maxit: maximum number of iterations. Default 100.
linout: switch for linear output units. Default is logistic output units.
...: Other parameters passed to nnet.
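As a brief sketch of how these parameters are used (the values below are illustrative choices, not defaults), they can be passed at construction and are forwarded to the underlying nnet call:

```r
library(sl3)

# Illustrative only: a regression-style nnet learner with weight decay.
# size, decay, maxit, and linout are forwarded to nnet.
lrnr_nnet_decay <- Lrnr_nnet$new(
  size = 5,      # 5 hidden units
  decay = 0.1,   # weight-decay regularization
  maxit = 500,   # allow more optimization iterations
  linout = TRUE  # linear output units, for a continuous outcome
)
```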
Individual learners have their own sets of parameters. Below is a list of shared parameters, implemented by Lrnr_base and shared by all learners.
covariates: A character vector of covariates. The learner will use this to subset the covariates for any specified task.
outcome_type: A variable_type object used to control the outcome_type used by the learner. Overrides the task outcome_type if specified.
...: All other parameters should be handled by the individual learner classes. See the documentation for the learner class you're instantiating.
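For example, the shared covariates parameter can restrict a learner to a subset of the task's covariates (a sketch; the covariate names are illustrative):

```r
library(sl3)

# Illustrative only: this learner will train on the listed columns only,
# regardless of which covariates the task itself specifies.
lrnr_nnet_subset <- Lrnr_nnet$new(
  size = 10,
  linout = TRUE,
  covariates = c("bmi", "mage")  # shared parameter from Lrnr_base
)
```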
Other Learners: Custom_chain, Lrnr_HarmonicReg, Lrnr_arima, Lrnr_bartMachine, Lrnr_base, Lrnr_bayesglm, Lrnr_bilstm, Lrnr_caret, Lrnr_cv_selector, Lrnr_cv, Lrnr_dbarts, Lrnr_define_interactions, Lrnr_density_discretize, Lrnr_density_hse, Lrnr_density_semiparametric, Lrnr_earth, Lrnr_expSmooth, Lrnr_gam, Lrnr_ga, Lrnr_gbm, Lrnr_glm_fast, Lrnr_glm_semiparametric, Lrnr_glmnet, Lrnr_glmtree, Lrnr_glm, Lrnr_grfcate, Lrnr_grf, Lrnr_gru_keras, Lrnr_gts, Lrnr_h2o_grid, Lrnr_hal9001, Lrnr_haldensify, Lrnr_hts, Lrnr_independent_binomial, Lrnr_lightgbm, Lrnr_lstm_keras, Lrnr_mean, Lrnr_multiple_ts, Lrnr_multivariate, Lrnr_nnls, Lrnr_optim, Lrnr_pca, Lrnr_pkg_SuperLearner, Lrnr_polspline, Lrnr_pooled_hazards, Lrnr_randomForest, Lrnr_ranger, Lrnr_revere_task, Lrnr_rpart, Lrnr_rugarch, Lrnr_screener_augment, Lrnr_screener_coefs, Lrnr_screener_correlation, Lrnr_screener_importance, Lrnr_sl, Lrnr_solnp_density, Lrnr_solnp, Lrnr_stratified, Lrnr_subset_covariates, Lrnr_svm, Lrnr_tsDyn, Lrnr_ts_weights, Lrnr_xgboost, Pipeline, Stack, define_h2o_X(), undocumented_learner
set.seed(123)
# load example data
data(cpp_imputed)
covars <- c("bmi", "parity", "mage", "sexn")
outcome <- "haz"
# create sl3 task
task <- sl3_Task$new(cpp_imputed, covariates = covars, outcome = outcome)
# train neural networks and make predictions
lrnr_nnet <- Lrnr_nnet$new(linout = TRUE, size = 10, maxit = 1000)
fit <- lrnr_nnet$train(task)
#> # weights: 61
#> initial value 3473.327147
#> iter 10 value 2332.708133
#> iter 20 value 2285.802094
#> iter 30 value 2259.282969
#> iter 40 value 2186.037648
#> iter 50 value 2163.163175
#> iter 60 value 2158.274072
#> iter 70 value 2157.252818
#> iter 80 value 2153.548324
#> iter 90 value 2147.689763
#> iter 100 value 2140.508801
#> iter 110 value 2136.151335
#> iter 120 value 2131.667979
#> iter 130 value 2127.812539
#> iter 140 value 2126.292145
#> iter 150 value 2121.169605
#> iter 160 value 2119.921691
#> iter 170 value 2117.912191
#> iter 180 value 2116.976429
#> iter 190 value 2110.064796
#> iter 200 value 2105.265439
#> iter 210 value 2090.487599
#> iter 220 value 2083.288260
#> iter 230 value 2078.431944
#> iter 240 value 2075.836591
#> iter 250 value 2072.891518
#> iter 260 value 2069.692797
#> iter 270 value 2067.111192
#> iter 280 value 2065.639707
#> iter 290 value 2064.105394
#> iter 300 value 2059.875667
#> iter 310 value 2059.346266
#> iter 320 value 2059.315552
#> iter 330 value 2059.104485
#> final value 2059.103001
#> converged
preds <- fit$predict(task)
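The predictions above can then be scored against the observed outcome (a sketch; since haz is continuous, mean squared error is a natural choice of metric):

```r
# Compare predictions to the task's outcome vector (task$Y).
mse <- mean((preds - task$Y)^2)
mse
```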