BART#

stochtree.bart.BARTModel #

Class that handles sampling, storage, and serialization of stochastic forest models for supervised learning. The class takes its name from Bayesian Additive Regression Trees, an MCMC sampler originally developed in Chipman, George, McCulloch (2010), but supports several sampling algorithms:

  • MCMC: The "classic" sampler defined in Chipman, George, McCulloch (2010). In order to run the MCMC sampler, set num_gfr = 0 (explained below) and then define a sampler according to several parameters:
    • num_burnin: the number of iterations to run before "retaining" samples for further analysis. These "burned in" samples are helpful for allowing a sampler to converge before retaining samples.
    • num_chains: the number of independent sequences of MCMC samples to generate (typically referred to in the literature as "chains")
    • num_mcmc: the number of "retained" samples of the posterior distribution
    • keep_every: after a sampler has "burned in", we run the sampler for keep_every * num_mcmc iterations, retaining every keep_every-th sample in each chain.
  • GFR (Grow-From-Root): A fast, greedy approximation of the BART MCMC sampling algorithm introduced in He and Hahn (2021). GFR sampler iterations are governed by the num_gfr parameter, and there are two primary ways to use this sampler:
    • Standalone: setting num_gfr > 0 and both num_burnin = 0 and num_mcmc = 0 will only run and retain GFR samples of the posterior. This is typically referred to as "XBART" (accelerated BART).
    • Initializer for MCMC: setting num_gfr > 0 and num_mcmc > 0 will use ensembles from the GFR algorithm to initialize num_chains independent MCMC BART samplers, which are run for num_mcmc iterations. This is typically referred to as "warm start BART".

In addition to enabling multiple samplers, we support a broad set of models. First, note that the original BART model of Chipman, George, McCulloch (2010) is

\[\begin{equation*} \begin{aligned} y &= f(X) + \epsilon\\ f(X) &\sim \text{BART}(\cdot)\\ \epsilon &\sim N(0, \sigma^2)\\ \sigma^2 &\sim IG(\nu, \nu\lambda) \end{aligned} \end{equation*}\]

In words, there is a nonparametric mean function governed by a tree ensemble with a BART prior and an additive (mean-zero) Gaussian error term, whose variance is parameterized with an inverse gamma prior.

The BARTModel class supports the following extensions of this model:

  • Leaf Regression: Rather than letting f(X) define a standard decision tree ensemble, in which each tree uses X to partition the data and then serves up constant predictions, we allow for models f(X,Z) in which X and Z together define a partitioned linear model (X partitions the data and Z serves as the basis for regression models). This model can be run by specifying leaf_basis_train in the sample method.
  • Heteroskedasticity: Rather than define \(\epsilon\) parametrically, we can let a forest \(\sigma^2(X)\) model a conditional error variance function. This can be done by setting num_trees > 0 in the variance_forest_params dictionary passed to the sample method.

sample(X_train, y_train, leaf_basis_train=None, rfx_group_ids_train=None, rfx_basis_train=None, X_test=None, leaf_basis_test=None, rfx_group_ids_test=None, rfx_basis_test=None, num_gfr=5, num_burnin=0, num_mcmc=100, general_params=None, mean_forest_params=None, variance_forest_params=None, previous_model_json=None, previous_model_warmstart_sample_num=None) #

Runs a BART sampler on the provided training set. Predictions are cached for the training set and, if provided, the test set. A leaf regression basis is not required.

Parameters:

Name Type Description Default
X_train array

Training set covariates on which trees may be partitioned.

required
y_train array

Training set outcome.

required
leaf_basis_train array

Optional training set basis vector used to define a regression to be run in the leaves of each tree.

None
rfx_group_ids_train array

Optional group labels used for an additive random effects model.

None
rfx_basis_train array

Optional basis for "random-slope" regression in an additive random effects model.

None
X_test array

Optional test set covariates.

None
leaf_basis_test array

Optional test set basis vector used to define a regression to be run in the leaves of each tree. Must be included / omitted consistently (i.e. if leaf_basis_train is provided, then leaf_basis_test must be provided alongside X_test).

None
rfx_group_ids_test array

Optional test set group labels used for an additive random effects model. We do not currently support (but plan to support in the near future) test set evaluation for group labels that were not in the training set.

None
rfx_basis_test array

Optional test set basis for "random-slope" regression in additive random effects model.

None
num_gfr int

Number of "warm-start" iterations run using the grow-from-root algorithm (He and Hahn, 2021). Defaults to 5.

5
num_burnin int

Number of "burn-in" iterations of the MCMC sampler. Defaults to 0. Ignored if num_gfr > 0.

0
num_mcmc int

Number of "retained" iterations of the MCMC sampler. Defaults to 100. If this is set to 0, GFR (XBART) samples will be retained.

100
general_params dict

Dictionary of general model parameters, each of which has a default value processed internally, so this argument is optional.

  • cutpoint_grid_size (int): Maximum number of cutpoints to consider for each feature. Defaults to 100.
  • standardize (bool): Whether or not to standardize the outcome (and store the offset / scale in the model object). Defaults to True.
  • sample_sigma2_global (bool): Whether or not to update the sigma^2 global error variance parameter based on IG(sigma2_global_shape, sigma2_global_scale). Defaults to True.
  • sigma2_init (float): Starting value of global variance parameter. Set internally to the outcome variance (standardized if standardize = True) if not set here.
  • sigma2_global_shape (float): Shape parameter in the IG(sigma2_global_shape, sigma2_global_scale) global error variance model. Defaults to 0.
  • sigma2_global_scale (float): Scale parameter in the IG(sigma2_global_shape, sigma2_global_scale) global error variance model. Defaults to 0.
  • variable_weights (np.array): Numeric weights reflecting the relative probability of splitting on each variable. Does not need to sum to 1 but cannot be negative. Defaults to uniform over the columns of X_train if not provided.
  • random_seed (int): Integer parameterizing the C++ random number generator. If not specified, the C++ random number generator is seeded according to std::random_device.
  • keep_burnin (bool): Whether or not "burnin" samples should be included in predictions. Defaults to False. Ignored if num_mcmc == 0.
  • keep_gfr (bool): Whether or not "warm-start" / grow-from-root samples should be included in predictions. Defaults to False. Ignored if num_mcmc == 0.
  • keep_every (int): How many iterations of the burned-in MCMC sampler should be run before forests and parameters are retained. Defaults to 1. Setting keep_every = k for some k > 1 will "thin" the MCMC samples by retaining every k-th sample, rather than simply every sample. This can reduce the autocorrelation of the MCMC samples.
  • num_chains (int): How many independent MCMC chains should be sampled. If num_mcmc = 0, this is ignored. If num_gfr = 0, then each chain is run from root for num_mcmc * keep_every + num_burnin iterations, with num_mcmc samples retained. If num_gfr > 0, each MCMC chain will be initialized from a separate GFR ensemble, with the requirement that num_gfr >= num_chains. Defaults to 1.
None
mean_forest_params dict

Dictionary of mean forest model parameters, each of which has a default value processed internally, so this argument is optional.

  • num_trees (int): Number of trees in the conditional mean model. Defaults to 200. If num_trees = 0, the conditional mean will not be modeled using a forest and sampling will only proceed if num_trees > 0 for the variance forest.
  • alpha (float): Prior probability of splitting for a tree of depth 0 in the conditional mean model. Tree split prior combines alpha and beta via alpha*(1+node_depth)^-beta. Defaults to 0.95.
  • beta (float): Exponent that decreases split probabilities for nodes of depth > 0 in the conditional mean model. Tree split prior combines alpha and beta via alpha*(1+node_depth)^-beta. Defaults to 2.
  • min_samples_leaf (int): Minimum allowable size of a leaf, in terms of training samples, in the conditional mean model. Defaults to 5.
  • max_depth (int): Maximum depth of any tree in the ensemble in the conditional mean model. Defaults to 10. Can be overridden with -1, which does not enforce any depth limit on trees.
  • sample_sigma2_leaf (bool): Whether or not to update the tau leaf scale variance parameter based on IG(sigma2_leaf_shape, sigma2_leaf_scale). Cannot (currently) be set to True if leaf_basis_train has more than one column. Defaults to False.
  • sigma2_leaf_init (float): Starting value of leaf node scale parameter. Calibrated internally as 1/num_trees if not set here.
  • sigma2_leaf_shape (float): Shape parameter in the IG(sigma2_leaf_shape, sigma2_leaf_scale) leaf node parameter variance model. Defaults to 3.
  • sigma2_leaf_scale (float): Scale parameter in the IG(sigma2_leaf_shape, sigma2_leaf_scale) leaf node parameter variance model. Calibrated internally as 0.5/num_trees if not set here.
  • keep_vars (list or np.array): Vector of variable names or column indices denoting variables that should be included in the mean forest. Defaults to None.
  • drop_vars (list or np.array): Vector of variable names or column indices denoting variables that should be excluded from the mean forest. Defaults to None. If both drop_vars and keep_vars are set, drop_vars will be ignored.
None
variance_forest_params dict

Dictionary of variance forest model parameters, each of which has a default value processed internally, so this argument is optional.

  • num_trees (int): Number of trees in the conditional variance model. Defaults to 0. Variance is only modeled using a tree / forest if num_trees > 0.
  • alpha (float): Prior probability of splitting for a tree of depth 0 in the conditional variance model. Tree split prior combines alpha and beta via alpha*(1+node_depth)^-beta. Defaults to 0.95.
  • beta (float): Exponent that decreases split probabilities for nodes of depth > 0 in the conditional variance model. Tree split prior combines alpha and beta via alpha*(1+node_depth)^-beta. Defaults to 2.
  • min_samples_leaf (int): Minimum allowable size of a leaf, in terms of training samples, in the conditional variance model. Defaults to 5.
  • max_depth (int): Maximum depth of any tree in the ensemble in the conditional variance model. Defaults to 10. Can be overridden with -1, which does not enforce any depth limit on trees.
  • leaf_prior_calibration_param (float): Hyperparameter used to calibrate the [optional] IG(var_forest_prior_shape, var_forest_prior_scale) conditional error variance model. If var_forest_prior_shape and var_forest_prior_scale are not set below, this calibration parameter is used to set these values to num_trees / leaf_prior_calibration_param^2 + 0.5 and num_trees / leaf_prior_calibration_param^2, respectively. Defaults to 1.5.
  • var_forest_leaf_init (float): Starting value of root forest prediction in conditional (heteroskedastic) error variance model. Calibrated internally as np.log(0.6*np.var(y_train))/num_trees, where y_train is the (possibly standardized) outcome, if not set.
  • var_forest_prior_shape (float): Shape parameter in the [optional] IG(var_forest_prior_shape, var_forest_prior_scale) conditional error variance forest (which is only sampled if num_trees > 0). Calibrated internally as num_trees / leaf_prior_calibration_param^2 + 0.5 if not set here.
  • var_forest_prior_scale (float): Scale parameter in the [optional] IG(var_forest_prior_shape, var_forest_prior_scale) conditional error variance forest (which is only sampled if num_trees > 0). Calibrated internally as num_trees / leaf_prior_calibration_param^2 if not set here.
  • keep_vars (list or np.array): Vector of variable names or column indices denoting variables that should be included in the variance forest. Defaults to None.
  • drop_vars (list or np.array): Vector of variable names or column indices denoting variables that should be excluded from the variance forest. Defaults to None. If both drop_vars and keep_vars are set, drop_vars will be ignored.
None
previous_model_json str

JSON string containing a previous BART model. This can be used to "continue" a sampler interactively after inspecting the samples or to run parallel chains "warm-started" from existing forest samples. Defaults to None.

None
previous_model_warmstart_sample_num int

Sample number from previous_model_json that will be used to warmstart this BART sampler. Zero-indexed (so that the first sample is used for warm-start by setting previous_model_warmstart_sample_num = 0). Defaults to None.

None

Returns:

Name Type Description
self BARTModel

Sampled BART Model.

predict(covariates, basis=None, rfx_group_ids=None, rfx_basis=None) #

Return predictions from every forest sampled (mean and/or variance). The return type is a single array of predictions if a BART model includes only a mean or only a variance term, or a tuple of prediction arrays if a BART model includes both.

Parameters:

Name Type Description Default
covariates array

Test set covariates.

required
basis array

Optional test set basis vector, must be provided if the model was trained with a leaf regression basis.

None
rfx_group_ids array

Optional group labels used for an additive random effects model.

None
rfx_basis array

Optional basis for "random-slope" regression in an additive random effects model.

None

Returns:

Name Type Description
mu_x (array, optional)

Mean forest and / or random effects predictions.

sigma2_x (array, optional)

Variance forest predictions.

predict_mean(covariates, basis=None, rfx_group_ids=None, rfx_basis=None) #

Predict expected conditional outcome from a BART model.

Parameters:

Name Type Description Default
covariates array

Test set covariates.

required
basis array

Optional test set basis vector, must be provided if the model was trained with a leaf regression basis.

None

Returns:

Type Description
array

Mean forest predictions.

predict_variance(covariates) #

Predict expected conditional variance from a BART model.

Parameters:

Name Type Description Default
covariates array

Test set covariates.

required

Returns:

Type Description
array

Variance forest predictions.

to_json() #

Converts a sampled BART model to a JSON string representation (which can then be saved to a file or processed using the json library).

Returns:

Type Description
str

JSON string representing model metadata (hyperparameters), sampled parameters, and sampled forests

from_json(json_string) #

Converts a JSON string to an in-memory BART model.

Parameters:

Name Type Description Default
json_string str

JSON string representing model metadata (hyperparameters), sampled parameters, and sampled forests

required

from_json_string_list(json_string_list) #

Converts a list of (in-memory) JSON strings that represent BART models to a single combined BART model object which can be used for prediction, etc.

Parameters:

Name Type Description Default
json_string_list list of str

List of JSON strings, each of which can be parsed into a JSONSerializer object containing the JSON representation of a BART model

required

is_sampled() #

Whether or not a BART model has been sampled.

Returns:

Type Description
bool

True if a BART model has been sampled, False otherwise