statsmodels.stats.oaxaca.OaxacaBlinder.two_fold

OaxacaBlinder.two_fold(std=False, two_fold_type='pooled', submitted_weight=None, n=None, conf=None)[source]

Calculates the two-fold or pooled Oaxaca-Blinder decomposition.

Returns:
OaxacaResults

A results container for the two-fold decomposition.
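The following is a minimal usage sketch. Only the OaxacaBlinder class and the two_fold signature shown above come from this page; the generated DataFrame, the column names ln_wage, educ, exper, and female, and the use of hasconst=False (assuming exog has no constant column) are illustrative assumptions.

>>> import numpy as np
>>> import pandas as pd
>>> from statsmodels.stats.oaxaca import OaxacaBlinder
>>> # Hypothetical data: log wages, two covariates, and a 0/1 group indicator
>>> rng = np.random.default_rng(0)
>>> df = pd.DataFrame({"educ": rng.normal(12, 2, 500),
...                    "exper": rng.normal(10, 5, 500),
...                    "female": rng.integers(0, 2, 500)})
>>> df["ln_wage"] = (1 + 0.08 * df["educ"] + 0.02 * df["exper"]
...                  - 0.2 * df["female"] + rng.normal(0, 0.3, 500))
>>> # exog lacks a constant column, so hasconst=False lets the model add one
>>> model = OaxacaBlinder(df["ln_wage"], df[["educ", "exper", "female"]],
...                       "female", hasconst=False)
>>> results = model.two_fold()   # OaxacaResults container
>>> results.summary()            # prints the explained/unexplained effects and the gap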

Parameters

std: boolean, optional

If True, bootstrapped standard errors will be calculated.

two_fold_type: string, optional

This option controls how the non-discriminatory model is calculated. Five different types are available at this time: pooled, nuemark, cotton, reimers, and self_submitted. pooled is assumed, and if a non-viable value is given, pooled will be run.

pooled - Assumes that the pooled model's parameters (a normal regression on the combined sample, including the indicator variable) form the non-discriminatory model. This is generally the best choice; use another type only if you have economic justification for it.

nuemark - Similar to the pooled type, but the pooled regression is run without the indicator variable, following Neumark (1988).

cotton - Uses the adjustment in Cotton (1988), which accounts for the undervaluation of one group causing the overvaluation of the other. The sample-size weights are used for a linear combination of the two group models' parameters.

reimers - Uses a linear combination of the two group models, with each set of parameters weighted at 50% in the non-discriminatory model.

self_submitted - Allows the user to submit their own weight. Be sure to submit the weight of the larger-mean group only, in the submitted_weight argument (see the sketch under that parameter below).
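As a sketch, each option is just a string passed as two_fold_type; the model object here is the hypothetical one built in the usage sketch above.

>>> pooled_res = model.two_fold()                          # pooled is the default
>>> neumark_res = model.two_fold(two_fold_type="nuemark")  # pooled regression without the indicator
>>> cotton_res = model.two_fold(two_fold_type="cotton")    # sample-size-weighted combination
>>> reimers_res = model.two_fold(two_fold_type="reimers")  # 50/50 combination of the two group models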

submitted_weight: int/float, required only for self_submitted

This is the submitted weight for the larger-mean group. If that weight is p, the weight for the other group is 1 - p. Only submit the first value, p.
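A sketch of the self_submitted case, reusing the hypothetical model from above; the weight 0.8 is illustrative and refers to the larger-mean group, so the other group implicitly receives 1 - 0.8 = 0.2.

>>> custom_res = model.two_fold(two_fold_type="self_submitted",
...                             submitted_weight=0.8)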

n: int, optional

The number of iterations used to calculate the bootstrapped standard errors. This defaults to 5000.

conf: float, optional

This is the confidence level required for the standard error calculation. It defaults to .99 and can be anything less than or equal to one. A value of one is heavily discouraged, because extreme outliers inflate the variance.
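A sketch combining the bootstrap options with the hypothetical model from above; n=1000 and conf=0.95 are illustrative values, not the defaults (5000 and .99).

>>> boot_res = model.two_fold(std=True, n=1000, conf=0.95)
>>> boot_res.summary()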


Last update: Dec 14, 2023