In: Operations Management
1. State how study designs compare with respect to validity of causal inference (managerial epidemiology)
2. Define the term controlled clinical trials and give examples.
1) Causal inferences have primarily relied on so-called “gold standard” experimental designs: when treatment is randomly assigned, confounding is eliminated by design, so randomized experiments support the strongest causal claims, whereas observational designs must rest on additional, unverifiable assumptions (see Box 3) and therefore support weaker causal inference.
Box 1
Terminology for Variables
1.
Causal model: A description, most often expressed as a system of equations or a diagram, of a researcher's assumptions about hypothesized or known causal relationships among variables relevant to a particular research question.
2.
Treatment, exposure, or independent variable: The explanatory variable of interest in a study. In this paper, we use these terms synonymously even for exposures that are not medical “treatments”, such as social resources or environmental exposures. Some writers also describe this as the “right-hand-side variable”.
3.
Outcome, dependent variable, or left-hand-side variable: The variable hypothesized to be affected by the treatment. The causal effect of interest in a research study is the impact of an exposure(s) on an outcome(s).
4.
Potential outcome: The outcome that an individual (or other unit of analysis, such as family or neighborhood) would experience if his/her treatment takes any particular value. Each individual is conceptualized as having a potential outcome for each possible treatment value. Potential outcomes are sometimes referred to as counterfactual outcomes. A formal sketch of this notation follows the box.
5.
Exogenous versus endogenous variables: These terms are common in economics, where a variable is described as exogenous if its values are not determined by other variables in the causal model. The variable is called endogenous if it is influenced by other variables in the causal model. If a third variable influences both the exposure and outcome, this implies the exposure is endogenous.
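The potential-outcomes idea in definition 4 can be stated compactly. The following is a minimal sketch in standard counterfactual (Neyman–Rubin) notation; the symbols \(A_i\) and \(Y_i(a)\) are our choices, not the source's:
\[
Y_i(a) = \text{the outcome unit } i \text{ would experience if its treatment were set to } a .
\]
For a binary treatment \(A_i \in \{0, 1\}\), each unit has two potential outcomes, \(Y_i(1)\) and \(Y_i(0)\), and an individual-level causal effect
\[
\tau_i = Y_i(1) - Y_i(0),
\]
but only the single outcome \(Y_i = Y_i(A_i)\) is ever observed; the other potential outcome remains counterfactual.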
In our role evaluating investigator-initiated submissions for a grant-making program focused on improving population health and addressing health inequities (the Evidence for Action program of the Robert Wood Johnson Foundation), these disciplinary methodological divides are evident and have compelled us to take into consideration the pros and cons of different designs. Drawing on examples from the literature on educational attainment and health (Galama, Lleras-Muney, & van Kippersluis, 2018), in this paper we compare confounder-control and instrument-based approaches. Specifically, we apply Shadish, Cook, and Campbell's threats to validity framework to consider the tradeoffs in confounder-control versus instrument-based studies. We also provide simplified summaries of these two approaches, highlighting important distinctions, strengths, and limitations. Because inconsistent terminology is a persistent challenge for interdisciplinary research, we include informal definitions for how we use key terms in this paper in Boxes 1–3 (see also Angrist & Pischke, 2008; Pearl, 2000; Rothman, Greenland, & Lash, 2008; Shadish, Cook, & Campbell, 2002).
Box 2
Terminology for Study Designs and Causal Effects
1.
Confounder-control study: A study in which effects of a treatment are estimated by comparing outcomes of treated to untreated individuals and potential imbalances in confounding variables between treated and untreated groups are addressed with adjustment, stratification, weighting, or similar methods. Treatment in these settings may be determined by the individual's own preferences, behaviors, or other naturally occurring influences. This study type corresponds to causal inference by fulfilling the backdoor criterion (Box 3, definition 5) under Pearl's framework (Pearl, 2000).
2.
Instrument-based study: A study in which effects of a treatment are estimated by leveraging apparently random or arbitrary factors that alter the chances an individual will receive a treatment, e.g., due to external factors such as the timing of policy changes. This is analogous to randomization in a randomized controlled trial, in which random assignment affects the chances an individual will be treated but is otherwise unrelated to the outcome. The source of variation is often called an instrumental variable (Box 2, definition 3). This study type corresponds to causal inference by leveraging an instrumental variable under Pearl's framework (Pearl, 2000).
3.
Instrument or instrumental variable: An external factor that induces differences in the chance an individual will be exposed to the treatment of interest but has no other reason to be associated with the outcome. An instrument—for example, random assignment to treatment—can be used to estimate the effect of treatment on the outcome.
4.
Forcing variable: A variable with a threshold such that people just above the threshold are much more likely to be treated than people just below the threshold (or vice-versa). The threshold provides the discontinuity in regression discontinuity designs. The forcing variable, sometimes called the running variable, may also have a continuous, dose-response association with the outcome.
5.
Population average treatment effect (PATE): The difference in the average outcome if everyone in the population were treated compared to the average outcome if nobody in the population were treated. Because the effect of treatment might not be the same for everybody in the population, the PATE is distinguished from treatment effects in various subgroups.
6.
Average treatment effect among the treated or effect of treatment on the treated (ATT or ETT): The average treatment effect among those people who actually received treatment. This might differ from the PATE, for example, if the people most likely to benefit from treatment were also the most likely to be treated.
7.
Local average treatment effect (LATE): The average treatment effect among those whose treatment status was changed by the instrumental variable. This might differ from the PATE, for example, if the instrumental variable was a policy change that increased the chances of treatment for the people who were most likely to benefit from treatment. Formal statements of the PATE, ATT, and LATE follow this box.
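Definitions 5–7 are standard causal estimands and can be written formally. A minimal sketch, assuming a binary treatment \(A\), potential outcomes \(Y(a)\) as above, and a binary instrument \(Z\); the notation is ours, not the source's:
\[
\text{PATE} = E[Y(1) - Y(0)], \qquad \text{ATT} = E[Y(1) - Y(0) \mid A = 1],
\]
\[
\text{LATE} = E[Y(1) - Y(0) \mid A(Z{=}1) = 1,\ A(Z{=}0) = 0],
\]
where the conditioning event in the LATE picks out the “compliers,” the units whose treatment status is changed by the instrument.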
Box 3
Types of Bias and Assumptions for Causal Inference
1.
Confounding or omitted variable bias or bias from selection into treatment: The key bias introduced by lack of randomization. This bias occurs when the association between treatment and outcome is partially attributable to the influence of a third factor that affects both the treatment and the outcome (e.g., parental education may influence both a child's own education and that child's later health; if not accounted for, parental education confounds the association between the child's education and subsequent health). This bias is often referred to as omitted variables bias because it is a problem when the common cause is omitted from a regression model. Selection bias in this context specifically refers to selection into treatment and is distinct from biases due to selection into the study sample, which is the phenomenon typically referred to as selection bias in epidemiology.
2.
Information bias or measurement error: A bias arising from a flaw in measuring the treatment, outcome, or covariates. This error may result in differential or non-differential accuracy of information between comparison groups.
3.
Reverse causation: When the outcome causes the treatment, rather than the treatment causing the outcome.
4.
Exchangeability, ignorability, no confounding, or randomization assumption: The assumption that which treatment an individual receives is unrelated to her potential outcomes if given any particular treatment. This assumption is violated for example if people who are likely to have good outcomes regardless of treatment are more likely to actually be treated. In the context of instrumental variables analysis, exchangeability is the assumption that the instrument does not have shared causes with the outcome.
5.
Conditional exchangeability, conditional ignorability, or conditional randomization: The assumption that exchangeability, ignorability, or randomization is fulfilled after controlling for a set of measured covariates. When this assumption is met, we say that the set of covariates—known as a sufficient set—fulfills the backdoor criterion with respect to the treatment and outcome.
6.
Relevance: In the context of instrumental variables, the assumption that the instrument affects the treatment.
7.
Exclusion restriction: In the context of instrumental variables, the assumption that, conditional on measured covariates, the instrument only affects the outcome through the treatment.
8.
Monotonicity: In the context of instrumental variables, the assumption that the instrument does not have the opposite direction of effect on chances of treatment for different people in the population.
9.
Positivity or common support: All subgroups of individuals defined by covariate stratum (e.g., every combination of possible covariate values) must have a nonzero chance of experiencing every possible exposure level. Put another way, within every covariate subgroup, all exposure values of interest must be possible.
10.
Consistency: The assumption that an individual's potential outcome setting treatment to a particular value is that person's actual outcome if s/he actually has that particular value of treatment. This could be violated if the outcome might depend on how treatment was delivered or some other variation in the meaning or content of the treatment. Some researchers consider consistency a truism rather than an assumption.
11.
Stable unit treatment value assumption (SUTVA): The assumption that all versions of the treatment have the same effect (i.e., versions of the treatment with differences substantial enough to have different health effects are referred to as some other type of treatment), and that each unit's outcomes are unaffected by the treatment values of other units. Several of the assumptions in this box are written formally in the sketch that follows.
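A minimal formal sketch of these assumptions, reusing the notation introduced above (\(A\) treatment, \(Y(a)\) potential outcomes, \(L\) measured covariates, \(Z\) instrument); none of this notation appears in the source:
\[
\text{Exchangeability: } Y(a) \mathrel{\perp\!\!\!\perp} A \text{ for all } a; \qquad
\text{Conditional exchangeability: } Y(a) \mathrel{\perp\!\!\!\perp} A \mid L;
\]
\[
\text{Positivity: } 0 < \Pr(A = a \mid L = l) < 1 \text{ for all } a \text{ and all } l \text{ with } \Pr(L = l) > 0;
\]
\[
\text{Consistency: } A = a \implies Y = Y(a); \qquad
\text{Relevance: } \operatorname{Cov}(Z, A) \neq 0; \qquad
\text{Monotonicity: } A(Z{=}1) \geq A(Z{=}0) \text{ for every unit.}
\]
The exclusion restriction resists a one-line algebraic form; informally, it requires that, given measured covariates, \(Z\) affects \(Y\) only through \(A\).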
2) Controlled clinical trials: definition and examples.
The primary goal of observational studies, e.g., case-control studies and cohort studies, is to test hypotheses about the determinants of disease. In contrast, the goal of intervention studies is to test the efficacy of specific treatments or preventive measures by assigning individual subjects to one of two or more treatment or prevention options. Intervention studies often test the efficacy of drugs, but one might also use this design to test the efficacy of differing management strategies or regimens. There are two major types of intervention studies: clinical trials, in which individual subjects are allocated to treatment groups, and community trials, in which entire communities or groups are assigned to an intervention.
In many respects the design of a clinical trial is analogous to a prospective cohort study, except that the investigators assign or allocate the exposure (treatment) under study.
This provides clinical trials with a powerful advantage over observational studies, provided the assignment to a treatment group is done randomly with a sufficiently large sample size. Under these circumstances randomized clinical trials (RCTs) provide the best opportunity to control for confounding and avoid certain biases. Consequently, they provide the most effective way to detect small to moderate benefits of one treatment over another. However, in order to provide definitive answers, clinical trials must enroll a sufficient number of appropriate subjects and follow them for an adequate period of time. Consequently, clinical trials can be long and expensive.
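The claim that randomization controls confounding can be made concrete with a small simulation. The sketch below is ours, not from the source; the effect sizes, the variable names, and the assumption that the confounder u can be measured for the adjusted estimate are all invented for illustration.

```python
# Illustrative simulation (not from the original text): why randomization
# protects against confounding. A confounder u raises both the chance of
# treatment and the outcome; the true treatment effect is 2.0.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 2.0

u = rng.normal(size=n)  # confounder affecting both treatment and outcome

# Observational world: u drives selection into treatment.
a_obs = (u + rng.normal(size=n) > 0).astype(float)
y_obs = true_effect * a_obs + 3.0 * u + rng.normal(size=n)

# Trial world: treatment assigned by coin flip, independent of u.
a_rct = rng.binomial(1, 0.5, size=n).astype(float)
y_rct = true_effect * a_rct + 3.0 * u + rng.normal(size=n)

def diff_in_means(y, a):
    """Crude effect estimate: mean outcome of treated minus untreated."""
    return y[a == 1].mean() - y[a == 0].mean()

print("naive observational estimate:", round(diff_in_means(y_obs, a_obs), 2))
print("randomized-trial estimate:   ", round(diff_in_means(y_rct, a_rct), 2))

# Confounder-control repair: adjust for u by least squares (possible here
# only because the simulation lets us "measure" u).
X = np.column_stack([np.ones(n), a_obs, u])
beta = np.linalg.lstsq(X, y_obs, rcond=None)[0]
print("u-adjusted observational est.:", round(beta[1], 2))
```

Under these invented parameters, the naive observational contrast is biased upward (roughly 5.4 instead of 2.0), while the randomized contrast and the u-adjusted regression both recover approximately 2.0, illustrating the advantage described above.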
Clinical research is medical research involving people. There are two types: observational studies and clinical trials.
Observational studies observe people in normal settings. Researchers gather information, group volunteers according to broad characteristics, and compare changes over time. For example, researchers may collect data through medical exams, tests, or questionnaires about a group of older adults over time to learn more about the effects of different lifestyles on cognitive health. These studies may help identify new possibilities for clinical trials.
Clinical trials are research studies performed in people that are aimed at evaluating a medical, surgical, or behavioral intervention. They are the primary way that researchers find out if a new treatment, like a new drug, diet, or medical device (for example, a pacemaker), is safe and effective in people. Often a clinical trial is used to learn if a new treatment is more effective and/or has less harmful side effects than the standard treatment.
Other clinical trials test ways to find a disease early, sometimes before there are symptoms. Still others test ways to prevent a health problem. A clinical trial may also look at how to make life better for people living with a life-threatening disease or a chronic health problem. Clinical trials sometimes study the role of caregivers or support groups.
Before the U.S. Food and Drug Administration (FDA) approves a clinical trial to begin, scientists perform laboratory tests and studies in animals to test a potential therapy’s safety and efficacy. If these studies show favorable results, the FDA gives approval for the intervention to be tested in humans.
What Are the Four Phases of Clinical Trials?
Clinical trials advance through four phases to test a treatment, find the appropriate dosage, and look for side effects. If, after the first three phases, researchers find a drug or other intervention to be safe and effective, the FDA approves it for clinical use and continues to monitor its effects.
Clinical trials of drugs are usually described based on their phase. The FDA typically requires Phase I, II, and III trials to be conducted to determine if the drug can be approved for use.