statsmodels.stats.proportion.tost_proportions_2indep

statsmodels.stats.proportion.tost_proportions_2indep(count1, nobs1, count2, nobs2, low, upp, method=None, compare='diff', correction=True)[source]

Equivalence test based on two one-sided tests (TOST) using test_proportions_2indep.

This assumes that we have two independent binomial samples.

The null and alternative hypotheses for equivalence testing are

for compare = ‘diff’

  • H0: prop1 - prop2 <= low or upp <= prop1 - prop2

  • H1: low < prop1 - prop2 < upp

for compare = ‘ratio’

  • H0: prop1 / prop2 <= low or upp <= prop1 / prop2

  • H1: low < prop1 / prop2 < upp

for compare = ‘odds-ratio’

  • H0: odds_ratio <= low or upp <= odds_ratio

  • H1: low < odds_ratio < upp

where the odds ratio is odds_ratio = (prop1 / (1 - prop1)) / (prop2 / (1 - prop2))
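
For illustration, a minimal sketch of an equivalence test on the risk-difference scale (the counts and the ±0.1 margin below are hypothetical, chosen only for the example):

from statsmodels.stats.proportion import tost_proportions_2indep

# hypothetical data: 48/100 successes versus 52/105 successes
count1, nobs1 = 48, 100
count2, nobs2 = 52, 105

# H1: -0.1 < prop1 - prop2 < 0.1, i.e. equivalence within a +/-0.1 risk difference
res = tost_proportions_2indep(count1, nobs1, count2, nobs2,
                              low=-0.1, upp=0.1, compare='diff')
print(res)  # TOST p-value and the two one-sided test results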

Parameters:
count1, nobs1

count and sample size for the first sample

count2, nobs2

count and sample size for the second sample

low, upp

lower and upper equivalence margins for the difference, risk ratio, or odds ratio, depending on compare

method : str

method for the two one-sided tests. If method is None, then a default method is used. The default might change as more methods are added. The available methods depend on compare:

diff:
  • ‘wald’

  • ‘agresti-caffo’

  • ‘score’: if correction is True, then this uses the degrees of freedom correction nobs / (nobs - 1) as in Miettinen and Nurminen (1985).

ratio:
  • ‘log’: Wald test using log transformation

  • ‘log-adjusted’: Wald test using log transformation, adds 0.5 to counts

  • ‘score’: if correction is True, then this uses the degrees of freedom correction nobs / (nobs - 1) as in Miettinen and Nurminen (1985).

odds-ratio:
  • ‘logit’: Wald test using logit transformation

  • ‘logit-adjusted’: Wald test using logit transformation, adds 0.5 to counts

  • ‘logit-smoothed’: Wald test using logit transformation, biases cell counts towards independence by adding two observations in total.

  • ‘score’: if correction is True, then this uses the degrees of freedom correction nobs / (nobs - 1) as in Miettinen and Nurminen (1985).

compare : str in [‘diff’, ‘ratio’, ‘odds-ratio’]

If compare is ‘diff’, then the equivalence test is in terms of the difference diff = p1 - p2. If compare is ‘ratio’, then the test is in terms of the risk ratio defined by ratio = p1 / p2. If compare is ‘odds-ratio’, then the test is in terms of the odds ratio defined by odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2)).

correction : bool

If correction is True (default), then the Miettinen and Nurminen small sample correction to the variance nobs / (nobs - 1) is used. Applies only if method=’score’.
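
As a sketch of how these options combine (the counts and the 0.8–1.25 margins below are hypothetical), a score-based equivalence test on the risk-ratio scale with the small-sample correction could be called as:

from statsmodels.stats.proportion import tost_proportions_2indep

# hypothetical data for two independent binomial samples
count1, nobs1 = 30, 120
count2, nobs2 = 36, 130

# equivalence margins on the risk-ratio scale: 0.8 < p1 / p2 < 1.25
res = tost_proportions_2indep(count1, nobs1, count2, nobs2,
                              low=0.8, upp=1.25,
                              compare='ratio', method='score',
                              correction=True)  # nobs / (nobs - 1) variance correction
print(res)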

Returns:
pvalue : float

p-value is the maximum of the p-values of the two one-sided tests

t1 : test results

results instance for the one-sided hypothesis test at the lower margin

t2 : test results

results instance for the one-sided hypothesis test at the upper margin

Notes

Status: experimental, API and defaults might still change.
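
A minimal sketch of interpreting the output, assuming the documented return values (recent statsmodels versions expose the p-value as an attribute of a result holder; the counts, margins, and alpha below are hypothetical):

from statsmodels.stats.proportion import tost_proportions_2indep

count1, nobs1 = 48, 100   # hypothetical counts
count2, nobs2 = 52, 105
alpha = 0.05

res = tost_proportions_2indep(count1, nobs1, count2, nobs2,
                              low=-0.1, upp=0.1, compare='diff')

# the TOST p-value is the larger of the two one-sided p-values;
# equivalence within the margin is concluded when it is below alpha
pvalue = res.pvalue if hasattr(res, "pvalue") else res[0]
if pvalue < alpha:
    print("equivalence within (low, upp) at level", alpha, "p-value:", pvalue)
else:
    print("equivalence not established, p-value:", pvalue)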