Run the BART algorithm for supervised learning.
Usage
bart(
X_train,
y_train,
leaf_basis_train = NULL,
rfx_group_ids_train = NULL,
rfx_basis_train = NULL,
X_test = NULL,
leaf_basis_test = NULL,
rfx_group_ids_test = NULL,
rfx_basis_test = NULL,
num_gfr = 5,
num_burnin = 0,
num_mcmc = 100,
previous_model_json = NULL,
previous_model_warmstart_sample_num = NULL,
general_params = list(),
mean_forest_params = list(),
variance_forest_params = list()
)
Arguments
- X_train
Covariates used to split trees in the ensemble. May be provided either as a dataframe or a matrix. Matrix covariates will be assumed to be all numeric. Covariates passed as a dataframe will be preprocessed based on the variable types (e.g. categorical columns stored as unordered factors will be one-hot encoded, while categorical columns stored as ordered factors will be passed as integers to the core algorithm, along with the metadata that the column is ordered categorical).
- y_train
Outcome to be modeled by the ensemble.
- leaf_basis_train
(Optional) Bases used to define a regression model y ~ W in each leaf of each regression tree. By default, BART assumes constant leaf node parameters, implicitly regressing on a constant basis of ones (i.e. y ~ 1).
- rfx_group_ids_train
(Optional) Group labels used for an additive random effects model.
- rfx_basis_train
(Optional) Basis for "random-slope" regression in an additive random effects model. If rfx_group_ids_train is provided without a regression basis, an intercept-only random effects model will be estimated.
- X_test
(Optional) Test set of covariates used to define "out of sample" evaluation data. May be provided either as a dataframe or a matrix, but the format of X_test must be consistent with that of X_train.
- leaf_basis_test
(Optional) Test set of bases used to define "out of sample" evaluation data. While a test set is optional, the structure of any provided test set must match that of the training set (i.e. if both X_train and leaf_basis_train are provided, then a test set must consist of X_test and leaf_basis_test with the same number of columns).
- rfx_group_ids_test
(Optional) Test set group labels used for an additive random effects model. We do not currently support test set evaluation for group labels that were not in the training set (but plan to in the near future).
- rfx_basis_test
(Optional) Test set basis for "random-slope" regression in an additive random effects model.
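As an illustration of the random effects arguments above, the sketch below fits an intercept-only ("random intercept") model. The group labels and basis are simulated here for illustration; X_train and y_train are assumed to exist as in the Examples section.

```r
# Sketch: additive random intercepts by group (illustrative values).
group_ids <- sample(1:5, size = nrow(X_train), replace = TRUE)
rfx_basis <- matrix(1, nrow = nrow(X_train), ncol = 1)  # constant basis => random intercepts
bart_model <- bart(X_train = X_train, y_train = y_train,
                   rfx_group_ids_train = group_ids,
                   rfx_basis_train = rfx_basis)
```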
- num_gfr
Number of "warm-start" iterations run using the grow-from-root algorithm (He and Hahn, 2021). Default: 5.
- num_burnin
Number of "burn-in" iterations of the MCMC sampler. Default: 0.
- num_mcmc
Number of "retained" iterations of the MCMC sampler. Default: 100.
- previous_model_json
(Optional) JSON string containing a previous BART model. This can be used to "continue" a sampler interactively after inspecting the samples or to run parallel chains "warm-started" from existing forest samples. Default: NULL.
- previous_model_warmstart_sample_num
(Optional) Sample number from previous_model_json that will be used to warm-start this BART sampler. One-indexed (so that the first sample is used for warm-starting by setting previous_model_warmstart_sample_num = 1). Default: NULL.
- general_params
(Optional) A list of general (non-forest-specific) model parameters, each of which has a default value processed internally, so this argument list is optional.
- cutpoint_grid_size: Maximum size of the "grid" of potential cutpoints to consider in the GFR algorithm. Default: 100.
- standardize: Whether or not to standardize the outcome (and store the offset / scale in the model object). Default: TRUE.
- sample_sigma2_global: Whether or not to update the sigma^2 global error variance parameter based on IG(sigma2_global_shape, sigma2_global_scale). Default: TRUE.
- sigma2_global_init: Starting value of the global error variance parameter. Calibrated internally as 1.0*var(y_train), where y_train is the (possibly standardized) outcome, if not set.
- sigma2_global_shape: Shape parameter in the IG(sigma2_global_shape, sigma2_global_scale) global error variance model. Default: 0.
- sigma2_global_scale: Scale parameter in the IG(sigma2_global_shape, sigma2_global_scale) global error variance model. Default: 0.
- variable_weights: Numeric weights reflecting the relative probability of splitting on each variable. Does not need to sum to 1 but cannot be negative. Defaults to rep(1/ncol(X_train), ncol(X_train)) if not set here. Note that if the propensity score is included as a covariate in either forest, its weight will default to 1/ncol(X_train).
- random_seed: Integer parameterizing the C++ random number generator. If not specified, the C++ random number generator is seeded according to std::random_device.
- keep_burnin: Whether or not "burn-in" samples should be included in the stored samples of forests and other parameters. Default: FALSE. Ignored if num_mcmc = 0.
- keep_gfr: Whether or not "grow-from-root" samples should be included in the stored samples of forests and other parameters. Default: FALSE. Ignored if num_mcmc = 0.
- keep_every: How many iterations of the burned-in MCMC sampler should be run before forests and parameters are retained. Default: 1. Setting keep_every = k for some k > 1 will "thin" the MCMC samples by retaining every k-th sample, rather than every sample. This can reduce the autocorrelation of the MCMC samples.
- num_chains: How many independent MCMC chains should be sampled. If num_mcmc = 0, this is ignored. If num_gfr = 0, then each chain is run from root for num_mcmc * keep_every + num_burnin iterations, with num_mcmc samples retained. If num_gfr > 0, each MCMC chain will be initialized from a separate GFR ensemble, with the requirement that num_gfr >= num_chains. Default: 1.
- verbose: Whether or not to print progress during the sampling loops. Default: FALSE.
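As an illustration of how these options are passed, the sketch below thins the MCMC chain and fixes the random seed. Parameter names are as documented above; the specific values are illustrative, and X_train / y_train are assumed to exist as in the Examples section.

```r
# Sketch: pass general (non-forest) options via the general_params list.
# keep_every = 5 retains every 5th MCMC sample (thinning);
# random_seed fixes the C++ RNG for reproducibility.
general_params <- list(
    keep_every = 5,
    random_seed = 1234,
    standardize = TRUE,
    verbose = FALSE
)
bart_model <- bart(X_train = X_train, y_train = y_train,
                   num_gfr = 10, num_burnin = 0, num_mcmc = 500,
                   general_params = general_params)
```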
- mean_forest_params
(Optional) A list of mean forest model parameters, each of which has a default value processed internally, so this argument list is optional.
- num_trees: Number of trees in the ensemble for the conditional mean model. Default: 200. If num_trees = 0, the conditional mean will not be modeled using a forest, and the function will only proceed if num_trees > 0 for the variance forest.
- alpha: Prior probability of splitting for a tree of depth 0 in the mean model. The tree split prior combines alpha and beta via alpha*(1+node_depth)^-beta. Default: 0.95.
- beta: Exponent that decreases split probabilities for nodes of depth > 0 in the mean model. The tree split prior combines alpha and beta via alpha*(1+node_depth)^-beta. Default: 2.
- min_samples_leaf: Minimum allowable size of a leaf, in terms of training samples, in the mean model. Default: 5.
- max_depth: Maximum depth of any tree in the ensemble in the mean model. Default: 10. Can be overridden with -1, which does not enforce any depth limits on trees.
- sample_sigma2_leaf: Whether or not to update the leaf scale variance parameter based on IG(sigma2_leaf_shape, sigma2_leaf_scale). Cannot (currently) be set to TRUE if ncol(leaf_basis_train) > 1. Default: FALSE.
- sigma2_leaf_init: Starting value of the leaf node scale parameter. Calibrated internally as 1/num_trees if not set here.
- sigma2_leaf_shape: Shape parameter in the IG(sigma2_leaf_shape, sigma2_leaf_scale) leaf node parameter variance model. Default: 3.
- sigma2_leaf_scale: Scale parameter in the IG(sigma2_leaf_shape, sigma2_leaf_scale) leaf node parameter variance model. Calibrated internally as 0.5/num_trees if not set here.
- keep_vars: Vector of variable names or column indices denoting variables that should be included in the forest. Default: NULL.
- drop_vars: Vector of variable names or column indices denoting variables that should be excluded from the forest. Default: NULL. If both drop_vars and keep_vars are set, drop_vars will be ignored.
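A sketch of overriding a few mean forest defaults, using the parameter names documented above. The values are illustrative, not recommendations; X_train / y_train are assumed to exist as in the Examples section.

```r
# Sketch: a smaller, shallower mean forest restricted to selected columns.
mean_forest_params <- list(
    num_trees = 100,        # fewer trees than the default 200
    max_depth = 5,          # cap tree depth (default 10)
    min_samples_leaf = 10,  # require larger leaves
    keep_vars = c(1, 2, 3)  # only split on the first three covariates
)
bart_model <- bart(X_train = X_train, y_train = y_train,
                   mean_forest_params = mean_forest_params)
```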
- variance_forest_params
(Optional) A list of variance forest model parameters, each of which has a default value processed internally, so this argument list is optional.
- num_trees: Number of trees in the ensemble for the conditional variance model. Default: 0. Variance is only modeled using a tree / forest if num_trees > 0.
- alpha: Prior probability of splitting for a tree of depth 0 in the variance model. The tree split prior combines alpha and beta via alpha*(1+node_depth)^-beta. Default: 0.95.
- beta: Exponent that decreases split probabilities for nodes of depth > 0 in the variance model. The tree split prior combines alpha and beta via alpha*(1+node_depth)^-beta. Default: 2.
- min_samples_leaf: Minimum allowable size of a leaf, in terms of training samples, in the variance model. Default: 5.
- max_depth: Maximum depth of any tree in the ensemble in the variance model. Default: 10. Can be overridden with -1, which does not enforce any depth limits on trees.
- leaf_prior_calibration_param: Hyperparameter used to calibrate the IG(var_forest_prior_shape, var_forest_prior_scale) conditional error variance model. If var_forest_prior_shape and var_forest_prior_scale are not set below, this calibration parameter is used to set them to num_trees / leaf_prior_calibration_param^2 + 0.5 and num_trees / leaf_prior_calibration_param^2, respectively. Default: 1.5.
- var_forest_leaf_init: Starting value of the root forest prediction in the conditional (heteroskedastic) error variance model. Calibrated internally as log(0.6*var(y_train))/num_trees, where y_train is the (possibly standardized) outcome, if not set.
- var_forest_prior_shape: Shape parameter in the IG(var_forest_prior_shape, var_forest_prior_scale) conditional error variance model (which is only sampled if num_trees > 0). Calibrated internally as num_trees / leaf_prior_calibration_param^2 + 0.5 if not set.
- var_forest_prior_scale: Scale parameter in the IG(var_forest_prior_shape, var_forest_prior_scale) conditional error variance model (which is only sampled if num_trees > 0). Calibrated internally as num_trees / leaf_prior_calibration_param^2 if not set.
- keep_vars: Vector of variable names or column indices denoting variables that should be included in the forest. Default: NULL.
- drop_vars: Vector of variable names or column indices denoting variables that should be excluded from the forest. Default: NULL. If both drop_vars and keep_vars are set, drop_vars will be ignored.
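Since the variance forest is disabled by default (num_trees = 0), heteroskedastic error modeling must be switched on explicitly. A minimal sketch, assuming X_train / y_train as in the Examples section (the tree count is illustrative):

```r
# Sketch: enable a conditional (heteroskedastic) error variance forest
# alongside the mean forest by setting num_trees > 0.
variance_forest_params <- list(num_trees = 50)
bart_model <- bart(X_train = X_train, y_train = y_train,
                   num_gfr = 10, num_burnin = 0, num_mcmc = 100,
                   variance_forest_params = variance_forest_params)
```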
Value
List of sampling outputs and a wrapper around the sampled forests (which can be used for in-memory prediction on new data, or serialized to JSON on disk).
Examples
# Simulate a piecewise-constant mean function with Gaussian noise
n <- 100
p <- 5
X <- matrix(runif(n*p), ncol = p)
f_XW <- (
    ((0 <= X[,1]) & (0.25 > X[,1])) * (-7.5) +
    ((0.25 <= X[,1]) & (0.5 > X[,1])) * (-2.5) +
    ((0.5 <= X[,1]) & (0.75 > X[,1])) * (2.5) +
    ((0.75 <= X[,1]) & (1 > X[,1])) * (7.5)
)
noise_sd <- 1
y <- f_XW + rnorm(n, 0, noise_sd)

# Hold out 20% of observations as a test set
test_set_pct <- 0.2
n_test <- round(test_set_pct*n)
n_train <- n - n_test
test_inds <- sort(sample(1:n, n_test, replace = FALSE))
train_inds <- (1:n)[!((1:n) %in% test_inds)]
X_test <- X[test_inds,]
X_train <- X[train_inds,]
y_test <- y[test_inds]
y_train <- y[train_inds]

# Sample the BART model: 10 grow-from-root "warm-start" iterations
# followed by 10 retained MCMC iterations
bart_model <- bart(X_train = X_train, y_train = y_train, X_test = X_test,
                   num_gfr = 10, num_burnin = 0, num_mcmc = 10)
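The previous_model_json argument can be used to continue the sampler above. The sketch below assumes a JSON serialization helper named saveBARTModelToJsonString(); that name is an assumption, so check your package version for the exact serialization function.

```r
# Sketch: continue sampling from the fitted model above.
# saveBARTModelToJsonString() is an assumed helper name for serializing
# the model to a JSON string.
model_json <- saveBARTModelToJsonString(bart_model)
bart_model_continued <- bart(
    X_train = X_train, y_train = y_train, X_test = X_test,
    num_gfr = 0, num_burnin = 0, num_mcmc = 100,
    previous_model_json = model_json,
    previous_model_warmstart_sample_num = 10  # warm-start from the 10th (last) sample
)
```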