The goal of jive is to implement jackknife instrumental-variable estimators (JIVE) and various alternatives.
You can install the development version of jive like so:
``` r
devtools::install_github("kylebutts/jive")
```
We are going to use the data from Stevenson (2018). Stevenson leverages the quasi-random assignment of 8 judges (magistrates) in Philadelphia to study the effects of pretrial detention on several outcomes, including whether or not a defendant subsequently pleads guilty.
``` r
library(jive)
data(stevenson)

jive(
  guilt ~ i(black) + i(white) | bailDate | jail3 ~ 0 | judge_pre,
  data = stevenson
)
#> Coefficients:
#>         Estimate Robust SE Z value   Pr(>z)    
#> jail3 -0.0218451 0.0075172  -2.906 0.003661 **
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 331,971 observations, 7 instruments, 2,352 covariates
#> First-stage F: stat = 32.626
#> Sargan: stat = 3.342, p = 0.765
#> CD: stat = 3.319, p = 0.768
```
``` r
ujive(
  guilt ~ i(black) + i(white) | bailDate | jail3 ~ 0 | judge_pre,
  data = stevenson
)
#> Coefficients:
#>       Estimate Robust SE Z value  Pr(>z)  
#> jail3 0.159091  0.070564  2.2546 0.02416 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 331,971 observations, 7 instruments, 2,352 covariates
#> First-stage F: stat = 32.626
#> Sargan: stat = 3.342, p = 0.765
#> CD: stat = 3.319, p = 0.768
```
``` r
ijive(
  guilt ~ i(black) + i(white) | bailDate | jail3 ~ 0 | judge_pre,
  data = stevenson
)
#> Coefficients:
#>       Estimate Robust SE Z value  Pr(>z)  
#> jail3 0.159529  0.070533  2.2618 0.02371 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 331,971 observations, 7 instruments, 2,352 covariates
#> First-stage F: stat = 32.626
#> Sargan: stat = 3.342, p = 0.765
#> CD: stat = 3.319, p = 0.768
```
``` r
# Leave-cluster out
ijive(
  guilt ~ i(black) + i(white) | bailDate | jail3 ~ 0 | judge_pre,
  data = stevenson,
  cluster = ~ bailDate,
  lo_cluster = TRUE # Default, but just to be explicit
)
#> Coefficients:
#>       Estimate Clustered SE Z value  Pr(>z)  
#> jail3 0.174195     0.073551  2.3684 0.01787 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 331,971 observations, 7 instruments, 2,352 covariates
#> First-stage F: stat = 32.626
#> Sargan: stat = 3.342, p = 0.765
#> CD: stat = 3.319, p = 0.768
```
The package also allows you to estimate (leave-out) judge leniency measures:
``` r
out = ijive(
  guilt ~ i(black) + i(white) | bailDate | jail3 ~ 0 | judge_pre,
  data = stevenson, return_leniency = TRUE
)
stevenson$judge_lo_leniency = out$That

hist(stevenson$judge_lo_leniency)
```
``` r
library(binsreg)

# Filter out outliers
stevenson = subset(stevenson, judge_lo_leniency > -0.02 & judge_lo_leniency < 0.02)
binsreg::binsreg(
  y = stevenson$jail3, x = stevenson$judge_lo_leniency,
  cb = TRUE
)
#> Warning in binsreg::binsreg(y = stevenson$jail3, x = stevenson$judge_lo_leniency, :
#> To speed up computation, bin/degree selection uses a subsample of roughly
#> max(5,000, 0.01n) observations if the sample size n > 5,000. To use the full
#> sample, set randcut=1.
#> [1] "Note: A large number of random draws/evaluation points is recommended to obtain the final results."
#> Call: binsreg
#> 
#> Binscatter Plot
#> Bin/Degree selection method (binsmethod)  =  IMSE direct plug-in (select # of bins)
#> Placement (binspos)                       =  Quantile-spaced
#> Derivative (deriv)                        =  0
#> 
#> Group (by)                                =  Full Sample
#> Sample size (n)                           =  297226
#> # of distinct values (Ndist)              =  34888
#> # of clusters (Nclust)                    =  NA
#> dots, degree (p)                          =  0
#> dots, smoothness (s)                      =  0
#> # of bins (nbins)                         =  34
```
## Methods

Consider the following instrumental variables setup

$$y_i = \beta T_i + W_i'\gamma + \varepsilon_i$$

$$T_i = Z_i'\pi + W_i'\psi + \eta_i,$$

where $y_i$ is the outcome, $T_i$ is a scalar endogenous treatment, $W_i$ is a vector of exogenous covariates (e.g. fixed effects), and $Z_i$ is a vector of instruments (e.g. judge indicators). Then, the prediction $\hat{T}_i$ from the estimated first stage is used as the instrument for $T_i$. When the dimension of $Z_i$ is large relative to the sample size, two-stage least squares is biased towards the OLS estimate (the many-instruments problem); the jackknife estimators below remove observation $i$'s own contribution to $\hat{T}_i$ to address this.
In general, the JIVE estimator (and variants) are given by

$$\hat{\beta} = \frac{\hat{T}' y}{\hat{T}' T},$$

where the variants differ in how the first-stage fitted values $\hat{T}$ are constructed (and in how the covariates $W$ are handled).
### JIVE

Source: Kolesár (2013) and Angrist, Imbens, and Krueger (1999)
The original JIVE estimate produces leave-one-out fitted values

$$\hat{T}_i = \frac{X_i'\hat{\delta} - h_{ii} T_i}{1 - h_{ii}},$$

where $X_i = (Z_i', W_i')'$, $\hat{\delta}$ is the full-sample coefficient from regressing $T$ on $X$, and $h_{ii}$ is the $i$-th diagonal element of the hat matrix $H = X(X'X)^{-1}X'$. This is numerically identical to predicting $T_i$ from a first stage estimated without observation $i$.
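The leave-one-out construction can be checked numerically. A minimal sketch on simulated data (plain R, independent of the package; all names are illustrative):

```r
# Check: the jackknife shortcut (X_i' delta_hat - h_ii T_i) / (1 - h_ii)
# equals the prediction from a first stage that drops observation i.
set.seed(42)
n <- 200
Z <- matrix(rnorm(n * 3), n, 3)        # instruments
W <- cbind(1, rnorm(n))                # covariates (incl. intercept)
X <- cbind(Z, W)
T_ <- drop(Z %*% c(0.5, 0.3, -0.2) + W %*% c(1, 0.4) + rnorm(n))

XtX   <- crossprod(X)
delta <- solve(XtX, crossprod(X, T_))  # full-sample first stage
h     <- diag(X %*% solve(XtX, t(X)))  # hat-matrix diagonal
T_hat <- (drop(X %*% delta) - h * T_) / (1 - h)

# Brute force for observation 1: refit without it, then predict
i <- 1
delta_i <- solve(crossprod(X[-i, ]), crossprod(X[-i, ], T_[-i]))
all.equal(T_hat[i], drop(X[i, ] %*% delta_i))
#> [1] TRUE
```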
### UJIVE

Source: Kolesár (2013)
For UJIVE, a leave-out procedure is used in the first stage for the fitted values and, additionally, for the covariates:

$$\hat{T}_i = \frac{X_i'\hat{\delta} - h_{ii} T_i}{1 - h_{ii}} - \frac{W_i'\hat{\psi} - h_{ii}^{W} T_i}{1 - h_{ii}^{W}},$$

where the first term is the leave-one-out prediction from regressing $T$ on $X = (Z, W)$ and the second is the leave-one-out prediction from regressing $T$ on $W$ alone, with $h_{ii}^{W}$ the diagonal of the hat matrix of the covariate-only regression.
### IJIVE

Source: Ackerberg and Devereux (2009)
The IJIVE procedure first residualizes $y$, $T$, and $Z$ with respect to $W$ (denote these $\tilde{y}$, $\tilde{T}$, and $\tilde{Z}$) and then applies the jackknife to the residualized first stage:

$$\hat{\tilde{T}} = (I - D)^{-1} (\tilde{H} - D) \tilde{T},$$

where $\tilde{H} = \tilde{Z}(\tilde{Z}'\tilde{Z})^{-1}\tilde{Z}'$ and $D = \operatorname{diag}(\tilde{H})$. Note that after residualizing, the covariates drop out of the problem, so the leave-out adjustment only involves the instruments.
### CJIVE

Source: Frandsen, Leslie, and McIntyre (2023)
This is a modified version of IJIVE as proposed by Frandsen, Leslie, and McIntyre (2023). This is necessary if the errors are correlated within clusters (e.g. court cases assigned on the same day to the same judge). The modified version is given by:

$$\hat{\tilde{T}} = (I - B)^{-1} (\tilde{H} - B) \tilde{T},$$

where $B$ is the block-diagonal of $\tilde{H}$, with blocks given by clusters, so that each observation's fitted value leaves out its entire cluster rather than just the observation itself. In this package, the same adjustment (replacing the diagonal $D$ with the block-diagonal $B$) can also be applied to the other estimators via the `lo_cluster` argument.
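The leave-cluster-out construction admits the same kind of shortcut as the leave-one-out case: masking the hat matrix to its within-cluster blocks reproduces predictions from a regression that drops an entire cluster. A minimal sketch on simulated data (plain R, not the package's implementation):

```r
# Check: (I - B)^{-1} (H - B) T equals leave-cluster-out predictions,
# where B keeps only the within-cluster entries of the hat matrix H.
set.seed(1)
n  <- 150
cl <- rep(1:30, each = 5)                    # 30 clusters of 5
j  <- factor(sample(1:8, n, replace = TRUE)) # pretend judge assignment
Z  <- model.matrix(~ 0 + j)                  # judge dummies
T_ <- drop(Z %*% rnorm(ncol(Z))) + rnorm(n)

H <- Z %*% solve(crossprod(Z), t(Z))
B <- H * outer(cl, cl, "==")                 # block-diagonal part of H
T_cjive <- drop(solve(diag(n) - B, (H - B) %*% T_))

# Brute force: refit dropping all of cluster 1, predict its observations
idx  <- which(cl == 1)
pi_c <- solve(crossprod(Z[-idx, ]), crossprod(Z[-idx, ], T_[-idx]))
all.equal(T_cjive[idx], drop(Z[idx, ] %*% pi_c))
#> [1] TRUE
```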
Heteroskedastic-robust standard errors are given by

$$\widehat{\operatorname{Var}}(\hat{\beta}) = \frac{\sum_i \hat{T}_i^2 \, \hat{\varepsilon}_i^2}{(\hat{T}' T)^2},$$

where $\hat{\varepsilon}_i$ is the estimated structural residual for observation $i$.
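Putting the pieces together, the point estimate and a robust standard error take only a few lines. A minimal sketch on simulated data using simple leave-one-out fitted values (plain R, illustrative only, not the package's code path):

```r
# beta_hat = (That' y) / (That' T), with robust variance
# sum(That_i^2 * e_i^2) / (That' T)^2.
set.seed(3)
n <- 500
j <- factor(sample(1:8, n, replace = TRUE))
Z <- model.matrix(~ 0 + j)                        # judge dummies
eta <- rnorm(n)
eps <- 0.5 * eta + rnorm(n)                       # endogeneity in T
T_  <- drop(Z %*% seq(-0.5, 0.9, length.out = ncol(Z))) + eta
y   <- 0.2 * T_ + eps                             # true beta = 0.2

H    <- Z %*% solve(crossprod(Z), t(Z))
h    <- diag(H)
That <- (drop(H %*% T_) - h * T_) / (1 - h)       # jackknife fitted values

beta <- sum(That * y) / sum(That * T_)
e    <- y - beta * T_
se   <- sqrt(sum(That^2 * e^2)) / abs(sum(That * T_))
round(c(beta = beta, se = se), 3)
```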
Quoting from the papers that propose each:
- UJIVE: “UJIVE is consistent for a convex combination of local average treatment effects under many instrument asymptotics that also allow for many covariates and heteroscedasticity”
- IJIVE: “We introduce two simple new variants of the jackknife instrumental variables (JIVE) estimator for overidentified linear models and show that they are superior to the existing JIVE estimator, significantly improving on its small-sample-bias properties”
- CJIVE: “In settings where inference must be clustered, however, the [IJIVE] fails to eliminate the many-instruments bias. We propose a cluster-jackknife approach in which first-stage predicted values for each observation are constructed from a regression that leaves out the observation’s entire cluster, not just the observation itself. The cluster-jackknife instrumental variables estimator (CJIVE) eliminates many-instruments bias, and consistently estimates causal effects in the traditional linear model and local average treatment effects in the heterogeneous treatment effects framework.”
This package uses a ton of algebra tricks to speed up the computation of
the JIVE (and variants). First, the package is written using fixest
which estimates high-dimensional fixed effects very quickly. All the
credit to @lrberge for this.
All uses of the projection matrix $H_W = W(W'W)^{-1}W'$ are replaced with fitted values and residuals from fixest estimates, so the $n \times n$ matrix is never formed.
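For instance, sweeping out a fixed effect is a regression rather than a matrix product. A small sketch of the idea using fixest (simulated data; not code from this package):

```r
# Residualizing x by a factor f via a fixest regression matches manual
# group demeaning, i.e. (I - H_W) x, without building the n x n matrix.
library(fixest)

set.seed(5)
df <- data.frame(
  x = rnorm(100),
  f = factor(sample(letters[1:5], 100, replace = TRUE))
)

r_fixest <- resid(feols(x ~ 1 | f, data = df))
r_manual <- df$x - ave(df$x, df$f)   # subtract group means
all.equal(unname(r_fixest), r_manual)
#> [1] TRUE
```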
For IJIVE, another trick is used. Instead of actually residualizing everything by $W$, it's useful to keep the original variables and use fixest to estimate them. For this reason, we can use the block projection matrix decomposition

$$H_{ZW} = H_W + H_{\tilde{Z}}, \qquad \tilde{Z} \equiv (I - H_W) Z,$$

to transform the hat matrix into two components that are easier to calculate with fixest regressions. Then, through some algebra, the IJIVE fitted values can be recovered without ever materializing the residualized matrices.
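The block decomposition of the projection matrix is easy to check numerically. A minimal sketch (plain R, illustrative only):

```r
# Frisch-Waugh-Lovell: projecting onto (Z, W) jointly equals projecting
# onto W plus projecting onto Z residualized with respect to W.
set.seed(7)
n <- 100
W <- cbind(1, rnorm(n))
Z <- matrix(rnorm(n * 4), n, 4)

proj <- function(A) A %*% solve(crossprod(A), t(A))
Zt <- Z - proj(W) %*% Z                 # residualize Z on W
all.equal(proj(cbind(Z, W)), proj(W) + proj(Zt))
#> [1] TRUE
```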
## References

Angrist, J. D., Imbens, G. W., and Krueger, A. B. (1999). "Jackknife Instrumental Variables Estimation." *Journal of Applied Econometrics* 14(1): 57-67.