Part IV: CAUSAL ANALYSIS

Chapter 19: A Framework for Causal Analysis

This chapter introduces a framework for causal analysis. The chapter starts by introducing the potential outcomes framework to define subjects, interventions, outcomes, and effects. We define the individual treatment effect, the average treatment effect, and the average treatment effect on the treated. We then show how these effects can be understood using the closely related definition of ceteris paribus comparisons. We close the conceptual part by introducing causal maps, which visualize data analysts’ assumptions about the relationships between several variables. We start our discussion of how to uncover an average treatment effect using actual data by focusing on the sources of variation in the causal variable, and we distinguish exogenous and endogenous sources. We define random assignment and show how it helps uncover the average effect. We then turn to issues with identifying effects in observational data. We define confounders and discuss how, in principle, we can identify average effects by conditioning on them. We then briefly discuss variables we should not condition on, and the consequences of the typical mismatch between the latent variables we think about and the variables we can measure in real data. Finally, we discuss internal validity and external validity in causal analysis.
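The logic of potential outcomes and confounding can be illustrated with simulated data. The sketch below uses made-up numbers (a constant individual effect of 1.5 and a single confounder that drives both treatment take-up and the untreated outcome) to show why the naive treated–untreated comparison differs from the average treatment effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: confounder x raises both the chance of treatment
# and the untreated outcome, so the naive comparison is biased.
x = rng.normal(size=n)                        # confounder
d = (x + rng.normal(size=n) > 0).astype(int)  # treatment, driven partly by x

y0 = 2.0 + 1.0 * x + rng.normal(size=n)       # potential outcome if untreated
y1 = y0 + 1.5                                 # constant individual effect: 1.5
y = np.where(d == 1, y1, y0)                  # observed outcome

ate = np.mean(y1 - y0)                        # average treatment effect
att = np.mean((y1 - y0)[d == 1])              # ATT (equals ATE here: constant effect)
naive = y[d == 1].mean() - y[d == 0].mean()   # naive treated-untreated gap

print(f"ATE = {ate:.2f}, ATT = {att:.2f}, naive = {naive:.2f}")
```

In real data only `y` and `d` are observed, never both potential outcomes for the same subject; the simulation simply makes visible how selection on `x` pushes the naive comparison well above the true effect.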

Chapter 20: Designing and Analyzing Experiments

This chapter discusses the most important questions about designing an experiment and analyzing data from an experiment to estimate the average effect of an intervention. The first part of the chapter focuses on design; the second part focuses on analysis. We start by discussing different kinds of controlled experiments, such as field experiments, A/B testing, and survey experiments. We discuss how to carry out random assignment in practice, why and how to check covariate balance, and how to actually estimate the effect and carry out statistical inference using the estimate. We introduce imperfect compliance and its consequences, as well as spillovers and other potential threats to internal validity. Among the more advanced topics, we introduce the local average treatment effect and power calculation, or sample size calculation, which determines the number of subjects we would need for our experiment. We conclude the chapter by discussing how we can think about external validity of controlled experiments, and whether and how we can use data to help assess external validity.
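Three of the steps described above, random assignment, a covariate balance check, and the difference-in-means estimate with its standard error, can be sketched in a few lines. The covariate (`age`), the true effect of 2.0, and the power-calculation inputs are made-up numbers for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

age = rng.normal(40, 10, size=n)   # pre-treatment covariate
d = rng.integers(0, 2, size=n)     # random assignment: a fair coin flip

# Covariate balance check: with random assignment, treated and control
# groups should have similar average covariates.
balance_gap = age[d == 1].mean() - age[d == 0].mean()

# Simulated outcome with a true effect of 2.0
y = 2.0 * d + 0.1 * age + rng.normal(size=n)

# Difference-in-means estimate and its standard error
diff = y[d == 1].mean() - y[d == 0].mean()
se = np.sqrt(y[d == 1].var(ddof=1) / (d == 1).sum()
             + y[d == 0].var(ddof=1) / (d == 0).sum())

# Sample size per arm for 80% power, 5% two-sided test, using the
# standard formula n = 2 * (z_alpha + z_beta)^2 * sigma^2 / delta^2
sigma, delta = 1.0, 0.2                               # assumed SD and effect
n_per_arm = 2 * (1.96 + 0.84) ** 2 * sigma ** 2 / delta ** 2

print(f"balance gap in age: {balance_gap:.2f}")
print(f"estimated effect: {diff:.2f} (SE {se:.2f})")
print(f"subjects needed per arm: {n_per_arm:.0f}")
```

The balance gap hovers near zero by design, and the difference in means recovers the true effect of 2.0 up to sampling noise.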

Chapter 21: Regression and Matching with Observational Data

In this chapter we discuss how to condition on potential confounder variables in practice, and how to interpret the results when our question is causal. We start with multiple linear regression, and we discuss how to select the variables to condition on and how to decide on their functional form. We then turn to matching, which is an intuitive alternative that turns out to be quite complicated to carry out in practice. We discuss exact matching and matching on the propensity score. Matching can detect a lack of common support (when some values of confounders appear only among treated or untreated observations). However, with common support, regression and matching, when applied according to good practice, tend to give similar results. We also give a very brief introduction to other methods: instrumental variables and regression discontinuity. These methods can give good effect estimates even when we don’t have all confounders in our data, but they can only be applied in specific circumstances. This chapter reviews methods that can be used for all kinds of observational data in principle but are used mainly for cross-sectional data in practice, because data with a time series dimension, especially panel data, offers additional opportunities, which call for more specific methods.
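The two main strategies, conditioning via regression and exact matching, can be compared on simulated data. The sketch below uses a single binary confounder and a made-up true effect of 1.0; with common support, the two approaches give similar answers, while the unconditional comparison is biased:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

x = rng.integers(0, 2, size=n)              # binary confounder
p = np.where(x == 1, 0.7, 0.3)              # treatment more likely when x = 1
d = (rng.random(n) < p).astype(int)
y = 1.0 * d + 2.0 * x + rng.normal(size=n)  # true effect: 1.0

# Unconditional comparison: biased because x differs across groups
naive = y[d == 1].mean() - y[d == 0].mean()

# (1) Regression: condition on x by including it as a control variable
X = np.column_stack([np.ones(n), d, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
effect_reg = beta[1]

# (2) Exact matching: compare treated and untreated within each cell of x,
# then average the within-cell gaps, weighted by treated counts (the ATT)
gaps, weights = [], []
for val in (0, 1):
    cell = x == val
    gaps.append(y[cell & (d == 1)].mean() - y[cell & (d == 0)].mean())
    weights.append((cell & (d == 1)).sum())
effect_match = np.average(gaps, weights=weights)

print(f"naive {naive:.2f}, regression {effect_reg:.2f}, matching {effect_match:.2f}")
```

With one discrete confounder, exact matching is feasible; with many or continuous confounders, matching on the propensity score plays the role of the cells here.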

Chapter 22: Difference-in-Differences

This chapter introduces difference-in-differences analysis, or diff-in-diffs for short, and its use in understanding the effect of an intervention. We explain how to use xt panel data covering two time periods to carry out diff-in-diffs by comparing average changes from before an intervention to after it, and how to implement this in a simple regression. We discuss the parallel trends assumption that’s needed for the results to show average effects and how we can assess its validity by examining pre-intervention trends. Finally, we discuss some generalizations of the method to include observed confounder variables, to estimate the effect of a quantitative causal variable, or to use pooled cross-sectional data instead of an xt panel, with different subjects before and after the intervention.
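The before–after comparison of average changes, and its equivalence to a regression with an interaction term, can be sketched with simulated two-period data. The group levels, the common trend of 1.0, and the true effect of 0.5 are made-up numbers; parallel trends hold here by construction:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000  # subjects per group

# Two groups, two periods; both groups drift up by 1.0 (parallel trends),
# and treatment adds 0.5 on top for the treated group after the intervention.
pre_t = 3.0 + rng.normal(size=n)
pre_c = 2.0 + rng.normal(size=n)
post_t = 3.0 + 1.0 + 0.5 + rng.normal(size=n)   # trend + true effect
post_c = 2.0 + 1.0 + rng.normal(size=n)         # trend only

# Diff-in-diffs: the difference of before-after changes across groups
did = (post_t.mean() - pre_t.mean()) - (post_c.mean() - pre_c.mean())

# Equivalent regression: y = a + b*treated + c*post + delta*(treated*post);
# the interaction coefficient delta is the diff-in-diffs estimate.
y = np.concatenate([pre_t, pre_c, post_t, post_c])
treated = np.concatenate([np.ones(n), np.zeros(n), np.ones(n), np.zeros(n)])
post = np.concatenate([np.zeros(2 * n), np.ones(2 * n)])
X = np.column_stack([np.ones(4 * n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"diff-in-diffs: {did:.2f}, regression interaction: {beta[3]:.2f}")
```

The regression form is what makes the generalizations mentioned above straightforward: observed confounders enter as additional right-hand-side variables.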

Chapter 23: Methods for Panel Data

This chapter introduces the most widely used regression methods to uncover the effect of an intervention when observational time series (tseries) data or cross-section time-series (xt) panel data is available with more than two time periods. We discuss the potential advantages of having more time periods in allowing within-subject comparisons, assessing pre-intervention trends, and tracing out effects through time. We then review time series regressions, and we discuss what kind of average effect they can estimate, under what conditions, and how adding lags and leads can help uncover delayed effects and reverse effects. Then we discuss when we can pool several similar time series to get a more precise estimate of the effect. Such pooling creates xt panel data with few cross-sectional units, and we discuss how to use such data to estimate an effect for a single cross-sectional unit.
In the second, and larger, part of the chapter, we turn to xt panel data with many cross-sectional units and more than two time periods, to estimate an average effect across the cross-sectional units. We introduce two methods: panel fixed-effects regressions (FE regressions) and panel regressions in first differences (FD regressions). Both can be viewed as generalizations of the diff-in-diffs method we covered in Chapter 22. We explain when each kind of regression may give a good estimate of the average effect, and we show how adding lags and leads can help uncover delayed effects, differences in pre-trends, and reverse effects. We discuss adding binary time variables to deal with aggregate trends of any form in FE or FD regressions, and how we can treat unit-specific linear trends in FD regressions. We discuss clustered standard error estimation that helps address serial correlation and heteroskedasticity at the same time. We briefly discuss how to analyze unbalanced panel data, and we close the chapter by comparing FE and FD regressions.
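The FE and FD estimators described above can be sketched on a simulated panel with made-up numbers: unit fixed effects that are correlated with treatment, and a true effect of 1.0. Pooled OLS is biased by the fixed effects, while the within (FE) transformation and first differencing (FD) both remove them. Clustered standard errors are omitted here for brevity:

```python
import numpy as np

rng = np.random.default_rng(4)
n_units, n_periods = 500, 6

# Unit fixed effects, correlated with treatment probability: units with a
# high fixed effect are more likely to be treated in any period.
alpha = rng.normal(size=n_units)
p_treat = (0.3 + 0.2 * (alpha > 0))[:, None]
d = (rng.random((n_units, n_periods)) < p_treat).astype(float)
y = 1.0 * d + alpha[:, None] + rng.normal(size=(n_units, n_periods))

# Pooled OLS: biased upward, because alpha is an omitted confounder
X = np.column_stack([np.ones(y.size), d.ravel()])
beta_pooled = np.linalg.lstsq(X, y.ravel(), rcond=None)[0][1]

# FD regression: first-difference y and d within each unit
dy = np.diff(y, axis=1).ravel()
dd = np.diff(d, axis=1).ravel()
Xfd = np.column_stack([np.ones(dy.size), dd])
beta_fd = np.linalg.lstsq(Xfd, dy, rcond=None)[0][1]

# FE regression: demean y and d within each unit (the "within" transform)
y_w = (y - y.mean(axis=1, keepdims=True)).ravel()
d_w = (d - d.mean(axis=1, keepdims=True)).ravel()
beta_fe = (d_w @ y_w) / (d_w @ d_w)

print(f"pooled {beta_pooled:.2f}, FD {beta_fd:.2f}, FE {beta_fe:.2f}")
```

With exactly two periods, FE and FD coincide with the diff-in-diffs estimator of Chapter 22; with more periods they weight the within-unit variation differently, which is part of the FE-versus-FD comparison the chapter closes with.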

Chapter 24: Appropriate Control Groups for Panel Data

This chapter discusses how data analysts can select a subset of the untreated observations in the data that are the best to learn about the counterfactual, and when that needs to be a conscious choice instead of using all available observations in the data. We introduce two methods. The first one is the synthetic control method, which creates a single counterfactual to an intervention that affects a single subject. We discuss how to select the donor pool of subjects that are similar to the treated subject and how the synthetic control algorithm uses pre-treatment variables to assign weights to each of them to create a single synthetic control subject. The second part of the chapter discusses the event study method, which helps trace the time path of the effect on many subjects that experience an intervention at different time points. Event studies are FD or FE panel regressions with a twist. Besides introducing the method, we discuss how we can choose an appropriate control group by defining pseudo-interventions and making sure they are similar to treated subjects in terms of average pre-treatment variables. We show how we can include them in event study regressions and how we can visualize the results of such regressions and interpret the estimated coefficients.
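The core of the synthetic control method, finding nonnegative donor weights that sum to one and best reproduce the treated subject's pre-treatment path, can be sketched as a constrained least-squares problem. The donor pool, the true 60/40 mix, and the projected-gradient solver below are all illustrative choices, not the specific algorithm of any software package:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical donor pool: 4 untreated units observed over 8 pre-treatment
# periods. By construction, the treated unit is a 60/40 mix of donors 0 and 1.
donors = rng.normal(size=(4, 8)) + np.array([5.0, 3.0, 8.0, 1.0])[:, None]
treated_pre = 0.6 * donors[0] + 0.4 * donors[1] + 0.05 * rng.normal(size=8)

def project_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > (css - 1))[0][-1]
    return np.maximum(v - (css[rho] - 1) / (rho + 1), 0)

# Projected gradient descent on ||donors.T @ w - treated_pre||^2,
# with step size set from the largest eigenvalue for stable convergence.
step = 1.0 / (2 * np.linalg.eigvalsh(donors @ donors.T).max())
w = np.full(4, 0.25)
for _ in range(10_000):
    grad = 2 * donors @ (donors.T @ w - treated_pre)
    w = project_simplex(w - step * grad)

synthetic = donors.T @ w   # the synthetic control's pre-treatment path
print(np.round(w, 2))      # weights concentrate on donors 0 and 1
```

After the intervention, the effect estimate is the gap between the treated unit's observed path and the synthetic control's path, extended into the post-treatment periods with these fixed weights.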
