bcf.BCFModel
```python
bcf.BCFModel()
```

Class that handles sampling, storage, and serialization of stochastic forest models for causal effect estimation. The class takes its name from Bayesian Causal Forests, an MCMC sampler originally developed in Hahn, Murray, Carvalho (2020), but it supports several sampling algorithms:
- **MCMC**: The "classic" sampler defined in Hahn, Murray, Carvalho (2020). To run the MCMC sampler, set `num_gfr = 0` (explained below) and then configure the sampler with several parameters:
  - `num_burnin`: the number of iterations to run before "retaining" samples for further analysis. These "burned in" samples allow a sampler to converge before samples are retained.
  - `num_chains`: the number of independent sequences of MCMC samples to generate (typically referred to in the literature as "chains").
  - `num_mcmc`: the number of "retained" samples of the posterior distribution.
  - `keep_every`: after a sampler has "burned in", we run the sampler for `keep_every * num_mcmc` iterations, retaining one of each `keep_every` iterations in a chain.
- **GFR (Grow-From-Root)**: A fast, greedy approximation of the BART MCMC sampling algorithm introduced in Krantsevich, He, and Hahn (2023). GFR sampler iterations are governed by the `num_gfr` parameter, and there are two primary ways to use this sampler:
  - **Standalone**: setting `num_gfr > 0` with both `num_burnin = 0` and `num_mcmc = 0` will run and retain only GFR samples of the posterior. This is typically referred to as "XBART" (accelerated BART).
  - **Initializer for MCMC**: setting `num_gfr > 0` and `num_mcmc > 0` will use ensembles from the GFR algorithm to initialize `num_chains` independent MCMC BART samplers, which are run for `num_mcmc` iterations. This is typically referred to as "warm-start BART".
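As a concrete illustration of the iteration accounting described above, a burned-in, thinned chain runs `num_burnin + num_mcmc * keep_every` iterations and retains one draw per thinning block. This is a plain-Python sketch with illustrative parameter values (whether the first or last draw of each block is retained is an implementation detail; here we keep the last):

```python
# Sketch of the MCMC iteration accounting described above.
# Illustrative parameter values, not library defaults.
num_burnin, num_mcmc, keep_every = 100, 50, 4

total_iterations = num_burnin + num_mcmc * keep_every

# 0-based indices of the iterations retained as posterior samples
# (the last draw of each post-burn-in thinning block).
retained = [num_burnin + keep_every * (i + 1) - 1 for i in range(num_mcmc)]

print(total_iterations)  # 300
print(len(retained))     # 50
```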
In addition to enabling multiple samplers, we support a broad set of models. First, note that the original BCF model of Hahn, Murray, Carvalho (2020) is
\[\begin{equation*} \begin{aligned} y &= a(X) + b_z(X) + \epsilon\\ b_z(X) &= (b_1 Z + b_0 (1-Z)) \tau(X)\\ b_0, b_1 &\sim N\left(0, \tfrac{1}{2}\right)\\ a(X) &\sim \text{BART}()\\ \tau(X) &\sim \text{BART}()\\ \epsilon &\sim N(0, \sigma^2)\\ \sigma^2 &\sim IG(a, b) \end{aligned} \end{equation*}\]
for continuous outcome \(y\), binary treatment \(Z\), and covariates \(X\).
In words, there are two nonparametric mean functions, a "prognostic" function and a "treatment effect" function, each governed by a tree ensemble with a BART prior, plus an additive mean-zero Gaussian error term whose variance is given an inverse gamma prior.
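To make the generative model concrete, here is a numpy sketch that simulates data from the model statement above. The choices of `a(X)` and `tau(X)` are purely illustrative, and the coding values `b0 = -0.5`, `b1 = 0.5` mirror the (-1/2, 1/2) style coding discussed later in this page:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical prognostic and treatment effect functions (illustrative only).
X = rng.uniform(size=(n, 2))
a_X = 2.0 * X[:, 0]       # a(X): prognostic function
tau_X = 0.5 + X[:, 1]     # tau(X): treatment effect function

# Binary treatment with the (b_0, b_1) coding from the model statement.
Z = rng.binomial(1, 0.5, size=n)
b0, b1 = -0.5, 0.5
b_z = (b1 * Z + b0 * (1 - Z)) * tau_X

sigma = 1.0
y = a_X + b_z + rng.normal(0.0, sigma, size=n)

# The implied CATE (difference between treated and control codings)
# is (b1 - b0) * tau(X).
cate = (b1 - b0) * tau_X
```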
The BCFModel class supports the following extensions of this model:
- **Continuous Treatment**: If \(Z\) is continuous rather than binary, we define \(b_z(X) = \tau(X, Z) = Z \tau(X)\), where the "leaf model" for the \(\tau\) forest is essentially a regression on continuous \(Z\).
- **Heteroskedasticity**: Rather than define \(\epsilon\) parametrically, we can let a forest \(\sigma^2(X)\) model a conditional error variance function. This can be done by setting `num_trees > 0` in the `variance_forest_params` dictionary passed to the `sample` method.
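For example, a heteroskedastic fit might be configured as follows. This is a sketch: the key name comes from the `variance_forest_params` documentation later on this page, and `50` is an arbitrary illustrative tree count:

```python
# Enable the conditional variance forest by giving it a positive tree count.
variance_forest_params = {"num_trees": 50}

# This dictionary would then be passed to the sampler, e.g.
# model.sample(X_train, Z_train, y_train,
#              variance_forest_params=variance_forest_params)
```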
Methods
| Name | Description |
|---|---|
| sample | Runs a BCF sampler on a provided training set. Outcome predictions and estimates of the prognostic and treatment effect functions will be cached for the training set and (if provided) the test set. |
| predict | Predict outcome model components (CATE function and prognostic function) as well as overall outcome for every provided observation. |
| compute_contrast | Compute a contrast using a BCF model by making two sets of outcome predictions and taking their difference. |
| compute_posterior_interval | Compute posterior credible intervals for specified terms from a fitted BCF model. It supports intervals for mean functions, variance functions, random effects, and overall predictions. |
| sample_posterior_predictive | Sample from the posterior predictive distribution for outcomes modeled by BCF. |
| to_json | Converts a sampled BCF model to a JSON string representation (which can then be saved to a file or processed using the json library). |
| from_json | Converts a JSON string to an in-memory BCF model. |
| from_json_string_list | Convert a list of (in-memory) JSON strings that represent BCF models to a single combined BCF model object. |
| is_sampled | Whether or not a BCF model has been sampled. |
| has_term | Whether or not a model includes a term. |
| extract_parameter | Extract a vector, matrix or array of parameter samples from a BCF model by name. |
| summary | Summarize a BCF fit with a description of the model that was fit and numeric summaries of any sampled quantities |
sample
```python
bcf.BCFModel.sample(
    X_train,
    Z_train,
    y_train,
    propensity_train=None,
    rfx_group_ids_train=None,
    rfx_basis_train=None,
    X_test=None,
    Z_test=None,
    propensity_test=None,
    rfx_group_ids_test=None,
    rfx_basis_test=None,
    num_gfr=5,
    num_burnin=0,
    num_mcmc=100,
    previous_model_json=None,
    previous_model_warmstart_sample_num=None,
    general_params=None,
    prognostic_forest_params=None,
    treatment_effect_forest_params=None,
    variance_forest_params=None,
    random_effects_params=None,
)
```

Runs a BCF sampler on a provided training set. Outcome predictions and estimates of the prognostic and treatment effect functions will be cached for the training set and (if provided) the test set.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| X_train | np.array or pd.DataFrame | Covariates used to split trees in the ensemble. Can be passed as either a matrix or dataframe. | required |
| Z_train | np.array | Array of (continuous or binary; univariate or multivariate) treatment assignments. | required |
| y_train | np.array | Outcome to be modeled by the ensemble. | required |
| propensity_train | np.array | Optional vector of propensity scores. If not provided, this will be estimated from the data. | None |
| rfx_group_ids_train | np.array | Optional group labels used for an additive random effects model. | None |
| rfx_basis_train | np.array | Optional basis for “random-slope” regression in an additive random effects model. | None |
| X_test | np.array | Optional test set of covariates used to define “out of sample” evaluation data. | None |
| Z_test | np.array | Optional test set of (continuous or binary) treatment assignments. Must be provided if X_test is provided. | None |
| propensity_test | np.array | Optional test set vector of propensity scores. If not provided (but X_test and Z_test are), this will be estimated from the data. | None |
| rfx_group_ids_test | np.array | Optional test set group labels used for an additive random effects model. We do not currently support (but plan to in the near future) test set evaluation for group labels that were not in the training set. | None |
| rfx_basis_test | np.array | Optional test set basis for “random-slope” regression in additive random effects model. | None |
| num_gfr | int | Number of "warm-start" iterations run using the grow-from-root algorithm (He and Hahn, 2021). Defaults to 5. | 5 |
| num_burnin | int | Number of "burn-in" iterations of the MCMC sampler. Defaults to 0. Ignored if num_gfr > 0. | 0 |
| num_mcmc | int | Number of "retained" iterations of the MCMC sampler. Defaults to 100. If this is set to 0, GFR (XBART) samples will be retained. | 100 |
| general_params | dict | Dictionary of general model parameters, each of which has a default value processed internally. See Notes for supported keys. | None |
| prognostic_forest_params | dict | Dictionary of prognostic forest model parameters, each of which has a default value processed internally. See Notes for supported keys. | None |
| treatment_effect_forest_params | dict | Dictionary of treatment effect forest model parameters, each of which has a default value processed internally. See Notes for supported keys. | None |
| variance_forest_params | dict | Dictionary of variance forest model parameters, each of which has a default value processed internally. See Notes for supported keys. | None |
| random_effects_params | dict | Dictionary of random effects parameters, each of which has a default value processed internally. See Notes for supported keys. | None |
| previous_model_json | str | JSON string containing a previous BCF model. This can be used to "continue" a sampler interactively after inspecting the samples or to run parallel chains "warm-started" from existing forest samples. Defaults to None. | None |
| previous_model_warmstart_sample_num | int | Sample number from previous_model_json that will be used to warm-start this BCF sampler. Zero-indexed (so the first sample is used for warm-start by setting previous_model_warmstart_sample_num = 0). Defaults to None. If num_chains in general_params is > 1, each successive chain is initialized from a different sample, counting backwards from previous_model_warmstart_sample_num. That is, if previous_model_warmstart_sample_num = 10 and num_chains = 4, then chain 1 is initialized from sample 10, chain 2 from sample 9, chain 3 from sample 8, and chain 4 from sample 7. If previous_model_json is provided but previous_model_warmstart_sample_num is None, the last sample in the previous model is used to initialize the first chain, counting backwards as noted before. If more chains are requested than there are samples in previous_model_json, a warning is raised and only the last sample is used. | None |
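The backwards-counting warm-start rule in the table above can be sketched in plain Python (illustrative, not library code):

```python
# Which previous-model samples initialize which chains, per the rule above:
# chain i is initialized from sample (previous_model_warmstart_sample_num - i).
previous_model_warmstart_sample_num = 10
num_chains = 4

init_samples = [
    previous_model_warmstart_sample_num - chain for chain in range(num_chains)
]
print(init_samples)  # [10, 9, 8, 7]
```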
Returns
| Name | Type | Description |
|---|---|---|
| self | BCFModel | Sampled BCF Model. |
Notes
general_params keys
- `cutpoint_grid_size` (int): Maximum number of cutpoints to consider for each feature. Defaults to `100`.
- `standardize` (bool): Whether or not to standardize the outcome (and store the offset / scale in the model object). Defaults to `True`.
- `sample_sigma2_global` (bool): Whether or not to update the `sigma^2` global error variance parameter based on `IG(sigma2_global_shape, sigma2_global_scale)`. Defaults to `True`.
- `sigma2_global_init` (float): Starting value of the global variance parameter. Set internally to the outcome variance (standardized if `standardize = True`) if not set here.
- `sigma2_global_shape` (float): Shape parameter in the `IG(sigma2_global_shape, sigma2_global_scale)` global error variance model. Defaults to `0`.
- `sigma2_global_scale` (float): Scale parameter in the `IG(sigma2_global_shape, sigma2_global_scale)` global error variance model. Defaults to `0`.
- `variable_weights` (np.array): Numeric weights reflecting the relative probability of splitting on each variable in each of the forests. Does not need to sum to 1 but cannot be negative. Defaults to `np.repeat(1/X_train.shape[1], X_train.shape[1])`. Note that if the propensity score is included as a covariate in either forest, its weight defaults to `1/X_train.shape[1]`. To assign a custom weight to the propensity score, include it as a column in `X_train`, set `propensity_covariate = "none"`, and adjust `keep_vars` accordingly.
- `propensity_covariate` (str): Whether to include the propensity score as a covariate in either or both forests. Options: `"none"`, `"prognostic"`, `"treatment_effect"`, or `"both"`. If not `"none"` and no propensity score is provided, it will be estimated from (`X_train`, `Z_train`) using `BARTModel`. Defaults to `"prognostic"`.
- `adaptive_coding` (bool): Whether to use an "adaptive coding" scheme in which a binary treatment is not coded as (0,1) or (-1,1) but learned via parameters `b_0` and `b_1` that attach to the outcome model `[b_0 (1-Z) + b_1 Z] tau(X)`. Ignored when `Z` is not binary. Defaults to `False`.
- `control_coding_init` (float): Initial value of the "control" group coding parameter. Ignored when `Z` is not binary. Default: `-0.5`.
- `treated_coding_init` (float): Initial value of the "treated" group coding parameter. Ignored when `Z` is not binary. Default: `0.5`.
- `random_seed` (int): Integer parameterizing the C++ random number generator. If not specified, seeded according to `std::random_device`.
- `keep_burnin` (bool): Whether or not "burn-in" samples should be included in predictions. Defaults to `False`. Ignored if `num_mcmc == 0`.
- `keep_gfr` (bool): Whether or not "warm-start" / grow-from-root samples should be included in predictions. Defaults to `False`. Ignored if `num_mcmc == 0`.
- `keep_every` (int): How many iterations of the burned-in MCMC sampler should be run before forests and parameters are retained. Defaults to `1`. Setting `keep_every = k` for some `k > 1` will "thin" the MCMC samples by retaining every `k`-th sample, which can reduce autocorrelation.
- `num_chains` (int): How many independent MCMC chains should be sampled. If `num_mcmc = 0`, this is ignored. If `num_gfr = 0`, each chain is run from root for `num_mcmc * keep_every + num_burnin` iterations with `num_mcmc` samples retained. If `num_gfr > 0`, each chain is initialized from a separate GFR ensemble, requiring `num_gfr >= num_chains`. Defaults to `1`. When `num_chains > 1`, samples from all chains are stored consecutively (chain 1 first, then chain 2, etc.). See the multi-chain vignettes for details.
- `outcome_model` (stochtree.OutcomeModel): An object of class `OutcomeModel` specifying the outcome model. Default: `OutcomeModel(outcome="continuous", link="identity")`. Pre-empts the deprecated `probit_outcome_model` parameter if specified.
- `probit_outcome_model` (bool): Deprecated in favor of `outcome_model`. Whether or not the outcome should be modeled as explicitly binary via a probit link. If `True`, `y` must only contain the values `0` and `1`. Default: `False`.
- `num_threads` (int): Number of threads to use in the GFR and MCMC algorithms, as well as prediction. Defaults to `1` if OpenMP is unavailable, otherwise to the maximum number of available threads.
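The adaptive coding scheme above replaces a fixed (0, 1) treatment coding with learned parameters. A numpy sketch of the resulting outcome contribution, using the documented default initial values for `b_0` and `b_1` and hypothetical values of `tau(X)`:

```python
import numpy as np

Z = np.array([0, 1, 1, 0])
tau_X = np.array([1.0, 2.0, 0.5, 1.5])  # hypothetical tau(X) values

# Initial coding values match the documented defaults
# (control_coding_init = -0.5, treated_coding_init = 0.5).
b0, b1 = -0.5, 0.5
contribution = (b0 * (1 - Z) + b1 * Z) * tau_X
print(contribution)  # [-0.5   1.    0.25 -0.75]
```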
prognostic_forest_params keys
- `num_trees` (int): Number of trees in the prognostic forest. Defaults to `250`. Must be a positive integer.
- `alpha` (float): Prior probability of splitting for a tree of depth 0 in the prognostic forest. The tree split prior combines `alpha` and `beta` via `alpha*(1+node_depth)^-beta`. Defaults to `0.95`.
- `beta` (float): Exponent that decreases split probabilities for nodes of depth > 0 in the prognostic forest. The tree split prior combines `alpha` and `beta` via `alpha*(1+node_depth)^-beta`. Defaults to `2`.
- `min_samples_leaf` (int): Minimum allowable size of a leaf, in terms of training samples, in the prognostic forest. Defaults to `5`.
- `max_depth` (int): Maximum depth of any tree in the prognostic forest. Defaults to `10`. Can be overridden with `-1` to impose no depth limit.
- `variable_weights` (np.array): Numeric weights reflecting the relative probability of splitting on each variable in the prognostic forest. Does not need to sum to 1 but cannot be negative. Defaults to uniform over the columns of `X_train`.
- `sample_sigma2_leaf` (bool): Whether or not to update the `tau` leaf scale variance parameter based on `IG(sigma2_leaf_shape, sigma2_leaf_scale)`. Cannot currently be set to `True` if `basis_train` has more than one column. Defaults to `True`.
- `sigma2_leaf_init` (float): Starting value of the leaf node scale parameter. Calibrated internally as `1/num_trees` if not set here.
- `sigma2_leaf_shape` (float): Shape parameter in the `IG(sigma2_leaf_shape, sigma2_leaf_scale)` leaf node parameter variance model. Defaults to `3`.
- `sigma2_leaf_scale` (float): Scale parameter in the `IG(sigma2_leaf_shape, sigma2_leaf_scale)` leaf node parameter variance model. Calibrated internally as `0.5/num_trees` if not set here.
- `keep_vars` (list or np.array): Variable names or column indices to include in the prognostic (`mu(X)`) forest. Defaults to `None`.
- `drop_vars` (list or np.array): Variable names or column indices to exclude from the prognostic (`mu(X)`) forest. Defaults to `None`. Ignored if `keep_vars` is also set.
- `num_features_subsample` (int): How many features to subsample when growing each tree for the GFR algorithm. Defaults to the number of features in the training dataset.
treatment_effect_forest_params keys
- `num_trees` (int): Number of trees in the treatment effect forest. Defaults to `100`. Must be a positive integer.
- `alpha` (float): Prior probability of splitting for a tree of depth 0 in the treatment effect forest. The tree split prior combines `alpha` and `beta` via `alpha*(1+node_depth)^-beta`. Defaults to `0.25`.
- `beta` (float): Exponent that decreases split probabilities for nodes of depth > 0 in the treatment effect forest. The tree split prior combines `alpha` and `beta` via `alpha*(1+node_depth)^-beta`. Defaults to `3`.
- `min_samples_leaf` (int): Minimum allowable size of a leaf, in terms of training samples, in the treatment effect forest. Defaults to `5`.
- `max_depth` (int): Maximum depth of any tree in the treatment effect forest. Defaults to `5`. Can be overridden with `-1` to impose no depth limit.
- `sample_sigma2_leaf` (bool): Whether or not to update the `tau` leaf scale variance parameter based on `IG(sigma2_leaf_shape, sigma2_leaf_scale)`. Cannot currently be set to `True` if `basis_train` has more than one column. Defaults to `False`.
- `sigma2_leaf_init` (float): Starting value of the leaf node scale parameter. Calibrated internally as `0.5 * np.var(y) / num_trees` if not set here (`0.5 / num_trees` if `y` is continuous and `standardize = True`).
- `sigma2_leaf_shape` (float): Shape parameter in the `IG(sigma2_leaf_shape, sigma2_leaf_scale)` leaf node parameter variance model. Defaults to `3`.
- `sigma2_leaf_scale` (float): Scale parameter in the `IG(sigma2_leaf_shape, sigma2_leaf_scale)` leaf node parameter variance model. Calibrated internally as `0.5/num_trees` if not set here.
- `delta_max` (float): Maximum plausible conditional distributional treatment effect (`P(Y(1)=1|X) - P(Y(0)=1|X)`) for binary outcomes under a probit model. Must be > 0 and < 1. Defaults to `0.9`. Ignored if `sigma2_leaf_init` is set directly.
- `keep_vars` (list or np.array): Variable names or column indices to include in the treatment effect (`tau(X)`) forest. Defaults to `None`.
- `drop_vars` (list or np.array): Variable names or column indices to exclude from the treatment effect (`tau(X)`) forest. Defaults to `None`. Ignored if `keep_vars` is also set.
- `num_features_subsample` (int): How many features to subsample when growing each tree for the GFR algorithm. Defaults to the number of features in the training dataset.
- `sample_intercept` (bool): Whether to sample a global treatment effect intercept `tau_0` so the full CATE is `tau_0 + tau(X)`. Compatible with `adaptive_coding = True`. Defaults to `True`.
- `tau_0_prior_var` (float): Variance of the normal prior on `tau_0` (applied independently to each treatment dimension). Auto-calibrated to the outcome variance when `None` and the outcome is continuous. Only used when `sample_intercept = True`.
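Note that the treatment effect forest's default split prior (`alpha = 0.25`, `beta = 3`) is much more conservative than the prognostic forest's (`alpha = 0.95`, `beta = 2`), regularizing `tau(X)` toward shallower trees. A quick numeric check of `alpha*(1+node_depth)^-beta` with both sets of defaults:

```python
# Split probability alpha * (1 + node_depth) ** (-beta) at depths 0 and 1,
# comparing the two forests' documented default priors.
def split_prob(alpha, beta, depth):
    return alpha * (1 + depth) ** (-beta)

mu_probs = [split_prob(0.95, 2, d) for d in (0, 1)]   # prognostic forest
tau_probs = [split_prob(0.25, 3, d) for d in (0, 1)]  # treatment effect forest

print(mu_probs)   # [0.95, 0.2375]
print(tau_probs)  # [0.25, 0.03125]
```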
variance_forest_params keys
- `num_trees` (int): Number of trees in the conditional variance model. Defaults to `0`. Variance is only modeled using a forest if `num_trees > 0`.
- `alpha` (float): Prior probability of splitting for a tree of depth 0 in the conditional variance model. The tree split prior combines `alpha` and `beta` via `alpha*(1+node_depth)^-beta`. Defaults to `0.95`.
- `beta` (float): Exponent that decreases split probabilities for nodes of depth > 0 in the conditional variance model. The tree split prior combines `alpha` and `beta` via `alpha*(1+node_depth)^-beta`. Defaults to `2`.
- `min_samples_leaf` (int): Minimum allowable size of a leaf, in terms of training samples, in the conditional variance model. Defaults to `5`.
- `max_depth` (int): Maximum depth of any tree in the conditional variance model. Defaults to `10`. Can be overridden with `-1` to impose no depth limit.
- `leaf_prior_calibration_param` (float): Hyperparameter used to calibrate the `IG(var_forest_prior_shape, var_forest_prior_scale)` conditional error variance model. Used to set `var_forest_prior_shape = num_trees / leaf_prior_calibration_param^2 + 0.5` and `var_forest_prior_scale = num_trees / leaf_prior_calibration_param^2` when those are not set directly. Defaults to `1.5`.
- `var_forest_leaf_init` (float): Starting value of the root forest prediction in the heteroskedastic error variance model. Calibrated internally as `np.log(0.6*np.var(y_train))/num_trees` if not set.
- `var_forest_prior_shape` (float): Shape parameter in the `IG(var_forest_prior_shape, var_forest_prior_scale)` conditional error variance forest (only sampled if `num_trees > 0`). Calibrated internally as `num_trees / 1.5^2 + 0.5` if not set here.
- `var_forest_prior_scale` (float): Scale parameter in the `IG(var_forest_prior_shape, var_forest_prior_scale)` conditional error variance forest (only sampled if `num_trees > 0`). Calibrated internally as `num_trees / 1.5^2` if not set here.
- `keep_vars` (list or np.array): Variable names or column indices to include in the variance forest. Defaults to `None`.
- `drop_vars` (list or np.array): Variable names or column indices to exclude from the variance forest. Defaults to `None`. Ignored if `keep_vars` is also set.
- `num_features_subsample` (int): How many features to subsample when growing each tree for the GFR algorithm. Defaults to the number of features in the training dataset.
random_effects_params keys
- `model_spec` (str): Specification of the random effects model. Options are `"custom"`, `"intercept_only"`, and `"intercept_plus_treatment"`. If `"custom"`, a user-provided basis must be passed through `rfx_basis_train`. If `"intercept_only"` or `"intercept_plus_treatment"`, a basis is dispatched internally and `rfx_basis_train` / `rfx_basis_test` are ignored. Default: `"custom"`.
- `working_parameter_prior_mean`: Prior mean for the random effects "working parameter". Default: `None`. Must be a 1D numpy array matching the number of random effects bases, or a scalar expanded to a vector.
- `group_parameter_prior_mean`: Prior mean for the random effects "group parameters". Default: `None`. Must be a 1D numpy array matching the number of random effects bases, or a scalar expanded to a vector.
- `working_parameter_prior_cov`: Prior covariance matrix for the random effects "working parameter". Default: `None`. Must be a square numpy matrix matching the number of random effects bases, or a scalar expanded to a diagonal matrix.
- `group_parameter_prior_cov`: Prior covariance matrix for the random effects "group parameters". Default: `None`. Must be a square numpy matrix matching the number of random effects bases, or a scalar expanded to a diagonal matrix.
- `variance_prior_shape` (float): Shape parameter for the inverse-gamma prior on the variance of the random effects "group parameter". Default: `1`.
- `variance_prior_scale` (float): Scale parameter for the inverse-gamma prior on the variance of the random effects "group parameter". Default: `1`.
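To illustrate what the internally dispatched bases plausibly look like (this is an assumption about their form for exposition, not confirmed library behavior: `"intercept_only"` suggests a constant column, and `"intercept_plus_treatment"` suggests an intercept column stacked with the treatment):

```python
import numpy as np

Z = np.array([0.0, 1.0, 1.0, 0.0])
n = Z.shape[0]

# "intercept_only": a single constant basis column (assumed form).
basis_intercept = np.ones((n, 1))

# "intercept_plus_treatment": intercept column plus the treatment column
# (assumed form; the library constructs its basis internally).
basis_intercept_treatment = np.column_stack([np.ones(n), Z])

print(basis_intercept.shape)            # (4, 1)
print(basis_intercept_treatment.shape)  # (4, 2)
```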
predict
```python
bcf.BCFModel.predict(
    X,
    Z,
    propensity=None,
    rfx_group_ids=None,
    rfx_basis=None,
    type='posterior',
    terms='all',
    scale='linear',
)
```

Predict outcome model components (CATE function and prognostic function) as well as the overall outcome for every provided observation. Predicted outcomes are computed as `yhat = mu_x + Z*tau_x`, where `mu_x` is a sample of the prognostic function and `tau_x` is a sample of the treatment effect (CATE) function. When random effects are present, they are included in `yhat` additively if `rfx_model_spec == "custom"`, included in `mu_x` if `rfx_model_spec == "intercept_only"`, or partially included in `mu_x` and partially included in `tau_x` if `rfx_model_spec == "intercept_plus_treatment"`.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| X | np.array or pd.DataFrame | Test set covariates. | required |
| Z | np.array | Test set treatment indicators. | required |
| propensity | np.array | Optional test set propensities. Must be provided if propensities were provided when the model was sampled. | None |
| rfx_group_ids | np.array | Optional group labels used for an additive random effects model. | None |
| rfx_basis | np.array | Optional basis for "random-slope" regression in an additive random effects model. Not necessary if rfx_model_spec is "intercept_only" or "intercept_plus_treatment", but if rfx_basis is provided, it will supersede the basis implied by rfx_model_spec. | None |
| type | str | Type of prediction to return. Options are "mean", which averages the predictions from every draw of a BCF model, and "posterior", which returns the entire matrix of posterior predictions. Default: "posterior". | 'posterior' |
| terms | str | Which model terms to include in the prediction. This can be a single term or a list of model terms. Options include "y_hat", "prognostic_function", "mu", "cate", "tau", "rfx", "variance_forest", or "all". If a model has random effects fit with either the "intercept_only" or "intercept_plus_treatment" model_spec, then "prognostic_function" refers to the predictions of the prognostic forest plus the random intercept, and "cate" refers to the predictions of the treatment effect forest plus the random slope on the treatment variable. For these models, the forest predictions alone can be requested via "mu" (prognostic forest) and "tau" (treatment effect forest). In all other cases, "mu" returns exactly the same result as "prognostic_function" and "tau" returns exactly the same result as "cate". If a model doesn't have mean forest, random effects, or variance forest predictions, but one of those terms is requested, the request will simply be ignored. If none of the requested terms are present in a model, this function returns None along with a warning. Default: "all". | 'all' |
| scale | str | Scale on which to return predictions. Options are “linear” (the default), which returns predictions on the original outcome scale, and “probit”, which returns predictions on the probit (latent) scale. Only applicable for models fit with probit link. | 'linear' |
Returns
| Name | Type | Description |
|---|---|---|
| dict or np.array | Dict of numpy arrays for each prediction term, or a single numpy array if only one term is requested. |
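The outcome assembly `yhat = mu_x + Z*tau_x` described above can be sketched in numpy with hypothetical posterior draws (each column is one posterior sample):

```python
import numpy as np

rng = np.random.default_rng(1)
n, num_samples = 5, 100

# Hypothetical posterior draws of the prognostic and CATE functions
# (rows: observations, columns: posterior samples).
mu_x = rng.normal(size=(n, num_samples))
tau_x = rng.normal(size=(n, num_samples))
Z = rng.binomial(1, 0.5, size=(n, 1))

# yhat = mu_x + Z * tau_x, applied draw by draw via broadcasting.
y_hat = mu_x + Z * tau_x

print(y_hat.shape)  # (5, 100)
```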
compute_contrast
```python
bcf.BCFModel.compute_contrast(
    X_0,
    X_1,
    Z_0,
    Z_1,
    propensity_0=None,
    propensity_1=None,
    rfx_group_ids_0=None,
    rfx_group_ids_1=None,
    rfx_basis_0=None,
    rfx_basis_1=None,
    type='posterior',
    scale='linear',
)
```

Compute a contrast using a BCF model by making two sets of outcome predictions and taking their difference. This function provides the flexibility to compute any contrast of interest by specifying covariates, leaf basis, and random effects bases / IDs for both sides of a two-term contrast. For simplicity, we refer to the subtrahend of the contrast as the "control" or Y0 term and the minuend as the Y1 term, though the requested contrast need not match the "control vs. treatment" terminology of a classic two-treatment causal inference problem. We mirror the calls and terminology of the `predict` method, labeling each prediction data term with a 1 to denote its contribution to the treatment prediction of a contrast and a 0 to denote inclusion in the control prediction.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| X_0 | np.array or pd.DataFrame | Covariates used for prediction in the “control” case. Must be a numpy array or dataframe. | required |
| X_1 | np.array or pd.DataFrame | Covariates used for prediction in the “treatment” case. Must be a numpy array or dataframe. | required |
| Z_0 | np.array | Treatments used for prediction in the “control” case. Must be a numpy array or vector. | required |
| Z_1 | np.array | Treatments used for prediction in the “treatment” case. Must be a numpy array or vector. | required |
| propensity_0 | np.array | Propensities used for prediction in the "control" case. Must be a numpy array or vector. | None |
| propensity_1 | np.array | Propensities used for prediction in the "treatment" case. Must be a numpy array or vector. | None |
| rfx_group_ids_0 | np.array | Test set group labels used for prediction from an additive random effects model in the "control" case. We do not currently support (but plan to in the near future) test set evaluation for group labels that were not in the training set. Must be a numpy array. | None |
| rfx_group_ids_1 | np.array | Test set group labels used for prediction from an additive random effects model in the "treatment" case. We do not currently support (but plan to in the near future) test set evaluation for group labels that were not in the training set. Must be a numpy array. | None |
| rfx_basis_0 | np.array | Test set basis used for prediction from an additive random effects model in the "control" case. Must be a numpy array. | None |
| rfx_basis_1 | np.array | Test set basis used for prediction from an additive random effects model in the "treatment" case. Must be a numpy array. | None |
| type | str | Aggregation level of the contrast. Options are “mean”, which averages the contrast evaluations over every draw of a BCF model, and “posterior”, which returns the entire matrix of posterior contrast estimates. Default: “posterior”. | 'posterior' |
| scale | str | Scale of the contrast. Options are "linear", which returns a contrast on the original scale of the mean forest / RFX terms, and "probability", which transforms each contrast term into a probability of observing y == 1 before taking their difference. "probability" is only valid for models fit with a probit outcome model. Default: "linear". | 'linear' |
Returns
| Name | Type | Description |
|---|---|---|
| np.array | Array, either 1d or 2d depending on whether type = "mean" or type = "posterior". |
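The contrast computation described above amounts to differencing two prediction matrices. A numpy sketch with hypothetical posterior outcome predictions for the two sides of the contrast:

```python
import numpy as np

rng = np.random.default_rng(2)
n, num_samples = 4, 50

# Hypothetical posterior outcome predictions for the Y1 and Y0 terms.
y_hat_1 = rng.normal(loc=1.0, size=(n, num_samples))  # "treatment" side
y_hat_0 = rng.normal(loc=0.0, size=(n, num_samples))  # "control" side

posterior_contrast = y_hat_1 - y_hat_0            # type = "posterior": full matrix
mean_contrast = posterior_contrast.mean(axis=1)   # type = "mean": one value per row

print(posterior_contrast.shape)  # (4, 50)
print(mean_contrast.shape)       # (4,)
```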
compute_posterior_interval
```python
bcf.BCFModel.compute_posterior_interval(
    X=None,
    Z=None,
    propensity=None,
    rfx_group_ids=None,
    rfx_basis=None,
    terms='all',
    level=0.95,
    scale='linear',
)
```

Compute posterior credible intervals for specified terms from a fitted BCF model. It supports intervals for mean functions, variance functions, random effects, and overall predictions.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| X | np.array | Optional array or data frame of covariates at which to compute the intervals. Required if the requested term depends on covariates (e.g., prognostic forest, treatment effect forest, variance forest, or overall predictions). | None |
| Z | np.array | Optional array of treatment assignments. Required if the requested term is "y_hat" (overall predictions). | None |
| propensity | np.array | Optional array of propensity scores. Required if the underlying model depends on user-provided propensities. | None |
| rfx_group_ids | np.array | Optional vector of group IDs for random effects. Required if the requested term includes random effects. | None |
| rfx_basis | np.array | Optional matrix of basis function evaluations for random effects. Required if the requested term includes random effects. | None |
| terms | str | String specifying the model term(s) for which to compute intervals. Options for BCF models are "prognostic_function", "mu", "cate", "tau", "tau_0", "variance_forest", "rfx", or "y_hat". Defaults to "all". Note that "mu" only differs from "prognostic_function" if random effects are included with a model spec of "intercept_only" or "intercept_plus_treatment", and "tau" only differs from "cate" if random effects are included with a model spec of "intercept_plus_treatment". "tau_0" is only available when the model was fit with sample_intercept = True. | 'all' |
| scale | str | Scale of mean function predictions. Options are "linear", which returns predictions on the original scale of the mean forest / RFX terms, and "probability", which transforms predictions into a probability of observing y == 1. "probability" is only valid for models fit with a probit outcome model. Defaults to "linear". | 'linear' |
| level | float | A numeric value between 0 and 1 specifying the credible interval level. Defaults to 0.95 for a 95% credible interval. | 0.95 |
Returns
| Name | Type | Description |
|---|---|---|
| dict | A dict containing the lower and upper bounds of the credible interval for the specified term. If multiple terms are requested, a dict with intervals for each term is returned. |
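An equal-tailed credible interval of this kind can be computed from a posterior sample matrix with quantiles. This is a sketch of the computation, not the library's internals, using hypothetical posterior draws:

```python
import numpy as np

rng = np.random.default_rng(3)
n, num_samples = 3, 2000

# Hypothetical posterior draws of some model term (rows: observations).
draws = rng.normal(size=(n, num_samples))

level = 0.95
lower = np.quantile(draws, (1 - level) / 2, axis=1)
upper = np.quantile(draws, 1 - (1 - level) / 2, axis=1)

interval = {"lower": lower, "upper": upper}
print(lower.shape, upper.shape)  # (3,) (3,)
```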
sample_posterior_predictive
```python
bcf.BCFModel.sample_posterior_predictive(
    X,
    Z,
    propensity=None,
    rfx_group_ids=None,
    rfx_basis=None,
    num_draws_per_sample=None,
)
```

Sample from the posterior predictive distribution for outcomes modeled by BCF.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| X | np.array | An array or data frame of covariates. | required |
| Z | np.array | An array of treatment assignments. | required |
| propensity | np.array | Optional array of propensity scores. Required if the underlying model depends on user-provided propensities. | None |
| rfx_group_ids | np.array | Optional vector of group IDs for random effects. Required if the requested term includes random effects. | None |
| rfx_basis | np.array | Optional matrix of basis function evaluations for random effects. Required if the requested term includes random effects. | None |
| num_draws_per_sample | int | The number of posterior predictive samples to draw for each posterior sample. Defaults to a heuristic based on the number of samples in a BCF model (i.e. if the BCF model has >1000 draws, we use 1 draw from the likelihood per sample, otherwise we upsample to ensure intervals are based on at least 1000 posterior predictive draws). | None |
Returns
| Name | Type | Description |
|---|---|---|
| np.array | A matrix of posterior predictive samples. |
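For a Gaussian outcome model, a posterior predictive draw composes a posterior draw of the mean function with noise drawn from the corresponding sampled error variance. A hedged numpy sketch of this logic, including the upsampling heuristic described for `num_draws_per_sample` (all arrays here are simulated stand-ins, not the internal implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
n_obs, num_samples = 10, 250  # fewer than 1000 retained draws

# Hypothetical stand-ins for posterior samples of the mean function
# (n_obs x num_samples) and the global error variance (num_samples,).
y_hat = rng.normal(size=(n_obs, num_samples))
sigma2 = rng.uniform(0.5, 1.5, size=num_samples)

# Heuristic: upsample so intervals rest on at least 1000 predictive draws.
num_draws_per_sample = max(1, int(np.ceil(1000 / num_samples)))

# One likelihood draw per (posterior sample, replicate) pair.
draws = np.concatenate(
    [
        y_hat + rng.normal(size=(n_obs, num_samples)) * np.sqrt(sigma2)
        for _ in range(num_draws_per_sample)
    ],
    axis=1,
)
```

With 250 retained samples, the heuristic yields 4 draws per sample, for 1000 predictive draws per observation.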
to_json
bcf.BCFModel.to_json()Converts a sampled BCF model to JSON string representation (which can then be saved to a file or processed using the json library)
Returns
| Name | Type | Description |
|---|---|---|
| str | JSON string representing model metadata (hyperparameters), sampled parameters, and sampled forests |
from_json
bcf.BCFModel.from_json(json_string)Converts a JSON string to an in-memory BCF model.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| json_string | str | JSON string representing model metadata (hyperparameters), sampled parameters, and sampled forests | required |
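Because `to_json` returns a plain string, the usual pattern is to write it to disk and rebuild the model later with `from_json`. A sketch of that round-trip, using a stand-in string in place of a fitted model's output:

```python
import json
import os
import tempfile

# Stand-in for the string a fitted model's to_json() would return.
json_string = '{"hyperparameters": {"num_trees": 200}, "num_samples": 1000}'

# Persist to disk ...
path = os.path.join(tempfile.mkdtemp(), "bcf_model.json")
with open(path, "w") as f:
    f.write(json_string)

# ... reload, and optionally inspect with the json library.
with open(path) as f:
    reloaded = f.read()
metadata = json.loads(reloaded)
# The reloaded string would then be passed to BCFModel.from_json(reloaded).
```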
from_json_string_list
bcf.BCFModel.from_json_string_list(json_string_list)Converts a list of (in-memory) JSON strings that represent BCF models to a single combined BCF model object which can be used for prediction, etc.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| json_string_list | list of str | List of JSON strings which can be parsed to objects of type JSONSerializer containing the JSON representation of a BCF model | required |
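This supports the common workflow of running independent chains in parallel (for example, one process per chain), serializing each fitted model with `to_json`, and recombining the results afterwards. A sketch of the pattern, with stand-in strings in place of fitted models:

```python
import json

# Stand-ins for the to_json() output of three independently run chains.
json_string_list = [
    json.dumps({"chain": i, "num_samples": 100}) for i in range(3)
]

# Each element is itself a complete, valid JSON document ...
chains = [json.loads(s) for s in json_string_list]
total_samples = sum(c["num_samples"] for c in chains)
# ... and the full list would be passed to
# BCFModel.from_json_string_list(json_string_list) to obtain one
# combined model whose draws pool all three chains.
```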
is_sampled
bcf.BCFModel.is_sampled()Whether or not a BCF model has been sampled.
Returns
| Name | Type | Description |
|---|---|---|
| bool | True if a BCF model has been sampled, False otherwise |
has_term
bcf.BCFModel.has_term(term)Whether or not a model includes a term.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| term | str | Character string specifying the model term to check for. Options for BCF models are "prognostic_function", "mu", "cate", "tau", "variance_forest", "rfx", "y_hat", or "all". | required |
Returns
| Name | Type | Description |
|---|---|---|
| bool | True if the model includes the specified term, False otherwise |
extract_parameter
bcf.BCFModel.extract_parameter(term)Extract a vector, matrix or array of parameter samples from a BCF model by name. Random effects are handled by a separate extract_parameter_samples method attached to the underlying RandomEffectsContainer object due to the complexity of the random effects parameters. If the requested model term is not found, an error is thrown. The following conventions are used for parameter names:
- Global error variance: "sigma2", "global_error_scale", "sigma2_global"
- Prognostic forest leaf scale: "sigma2_leaf_mu", "leaf_scale_mu", "mu_leaf_scale"
- Treatment effect forest leaf scale: "sigma2_leaf_tau", "leaf_scale_tau", "tau_leaf_scale"
- Adaptive coding parameters: "adaptive_coding" (returns both the control and treated parameters jointly, with control in the first row and treated in the second row)
- In-sample mean function predictions: "y_hat_train"
- Test set mean function predictions: "y_hat_test"
- In-sample treatment effect forest predictions: "tau_hat_train"
- Test set treatment effect forest predictions: "tau_hat_test"
- Treatment effect intercept: "tau_0", "treatment_intercept", "tau_intercept"
- In-sample variance forest predictions: "sigma2_x_train", "var_x_train"
- Test set variance forest predictions: "sigma2_x_test", "var_x_test"
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| term | str | Name of the parameter to extract (e.g., "sigma2", "y_hat_train", etc.) | required |
Returns
| Name | Type | Description |
|---|---|---|
| np.array | Array of parameter samples. If the underlying parameter is a scalar, this will be a vector of length num_samples. If the underlying parameter is vector-valued, this will be a (parameter_dimension x num_samples) matrix, and if the underlying parameter is multidimensional, this will be an array of dimension (parameter_dimension_1 x parameter_dimension_2 x … x num_samples). |
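The shape conventions above can be checked mechanically. A numpy sketch of what they imply for a scalar parameter such as "sigma2" versus a vector-valued one such as "adaptive_coding" (the arrays are simulated stand-ins, not output of `extract_parameter`):

```python
import numpy as np

num_samples = 500
rng = np.random.default_rng(1)

# Scalar parameter (e.g. "sigma2"): a vector of length num_samples.
sigma2_samples = rng.uniform(0.5, 2.0, size=num_samples)

# Vector-valued parameter (e.g. "adaptive_coding"): a
# (parameter_dimension x num_samples) matrix, with the control
# parameter (b_0) in the first row and the treated parameter (b_1)
# in the second row.
adaptive_coding_samples = rng.normal(size=(2, num_samples))
b0_samples = adaptive_coding_samples[0, :]  # control
b1_samples = adaptive_coding_samples[1, :]  # treated
```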
summary
bcf.BCFModel.summary()Summarize a BCF fit with a description of the model that was fit and numeric summaries of any sampled quantities
Prints summary directly to the console with no return type.
Returns
| Name | Type | Description |
|---|---|---|
| None | No return value; the summary is printed to the console. |