# Motivation
> “One enemy of robust science is our humanity — our appetite for being right, and our tendency to find patterns in noise, to see supporting evidence for what we already believe is true, and to ignore the facts that do not fit.”
>
> — (“Let’s Think About Cognitive Bias” 2015)
Scientific research is at a unique point in history. The need to improve rigor and reproducibility in our field is greater than ever; corroboration moves science forward, yet there is growing alarm about results that cannot be reproduced and that report false discoveries (Baker 2016). Failing to meet this need will further erode the pace of scientific progress, the reputation of the sciences, and the public’s trust in their findings (Munafò et al. 2017; “How Scientists Fool Themselves – and How They Can Stop” 2015).
> “The key question we want to answer when seeing the results of any scientific study is whether we can trust the data analysis.”
>
> — Peng (2015)
Unfortunately, in its current state, the culture of data analysis and statistics actually enables human bias through improper model selection. All hypothesis tests and estimators are derived from statistical models, so to obtain valid estimates and inference it is critical that the statistical model contains the process that generated the data. Perhaps treatment was randomized, or depended on only a small number of baseline covariates; this knowledge should and can be incorporated in the model. Alternatively, the data may be observational, with no knowledge of the data-generating process (DGP); in that case, the statistical model should contain all data distributions. In practice, however, models are not selected based on knowledge of the DGP; instead, they are often selected based on (1) the p-values they yield, (2) their convenience of implementation, and/or (3) an analyst’s loyalty to a particular model. This practice of “cargo-cult statistics — the ritualistic miming of statistics rather than conscientious practice” (Stark and Saltelli 2018) is characterized by arbitrary modeling choices, even though these choices often yield different answers to the same research question. That is, “increasingly often, [statistics] is used instead to aid and abet weak science, a role it can perform well when used mechanically or ritually,” as opposed to its original purpose of safeguarding against weak science (Stark and Saltelli 2018). This is a fundamental driver of the epidemic of false findings from which scientific research suffers.
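To see how arbitrary modeling choices can change the answer to the same research question, consider the following toy simulation. This is a minimal sketch: the data-generating process and the two regression specifications are invented purely for illustration and do not come from any particular study.

```r
set.seed(34)
n <- 1000
w <- rnorm(n)                             # baseline covariate (confounder)
a <- rbinom(n, 1, plogis(w))              # treatment probability depends on w
y <- 1 + 2 * a + 3 * w + a * w + rnorm(n) # true average treatment effect is 2

# two "convenient" parametric specifications for the same question
fit_crude    <- lm(y ~ a)      # ignores the confounder w entirely
fit_adjusted <- lm(y ~ a + w)  # adjusts for w, but omits the interaction

coef(fit_crude)["a"]     # substantially biased by confounding
coef(fit_adjusted)["a"]  # a different answer from the same data
```

Neither specification encodes what is actually known about the DGP, and the two yield meaningfully different estimates of the treatment effect, which is exactly the arbitrariness described above.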
> “We suggest that the weak statistical understanding is probably due to inadequate ‘statistics lite’ education. This approach does not build up appropriate mathematical fundamentals and does not provide scientifically rigorous introduction into statistics. Hence, students’ knowledge may remain imprecise, patchy, and prone to serious misunderstandings. What this approach achieves, however, is providing students with false confidence of being able to use inferential tools whereas they usually only interpret the p-value provided by black box statistical software. While this educational problem remains unaddressed, poor statistical practices will prevail regardless of what procedures and measures may be favored and/or banned by editorials.”
>
> — Szucs and Ioannidis (2017)
Our team at the University of California, Berkeley is uniquely positioned to provide such an education. Spearheaded by Professor Mark van der Laan, and advanced by many of his students and colleagues who have greatly enriched the field, the aptly named “Targeted Learning” methodology targets the scientific question at hand, countering the current culture of “convenience statistics,” which opens the door to biased estimation, misleading results, and false discoveries. Targeted Learning restores the fundamentals that formalized the field of statistics, such as the facts that a statistical model represents real knowledge about the experiment that generated the data, and that a target parameter represents what we seek to learn from the data as a feature of the distribution that generated it (van der Laan and Starmans 2014). In this way, Targeted Learning defines a truth and establishes a principled standard for estimation, thereby inhibiting all-too-human biases (e.g., hindsight bias, confirmation bias, and outcome bias) from infiltrating the analysis.
> “The key for effective classical [statistical] inference is to have well-defined questions and an analysis plan that tests those questions.”
>
> — Nosek et al. (2018)
Our objective is to train students, researchers, industry professionals, and faculty in science, public health, statistics, and other fields, empowering them with the knowledge and skills to apply the sound methodology of Targeted Learning: a technique that provides tailored, pre-specified machines for answering queries, so that each data analysis is completely reproducible and its estimators are efficient, minimally biased, and accompanied by formal statistical inference.
Just as the conscientious use of modern statistical methodology is necessary to ensure that scientific practice thrives, it remains critical to acknowledge the role that robust software plays in allowing practitioners direct access to published results. We recall that “an article…in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures,” thus making the availability and adoption of robust statistical software key to enhancing the transparency that is an inherent aspect of science (Buckheit and Donoho 1995).
For a statistical methodology to be readily accessible in practice, it is crucial that it is accompanied by robust, user-friendly software (Pullenayegum et al. 2016; Stromberg and others 2004). The tlverse software ecosystem was developed to fulfill this need for the Targeted Learning methodology. Not only does this software facilitate computationally reproducible and efficient analyses, it is also a tool for Targeted Learning education, since its workflow mirrors that of the methodology. In particular, the tlverse paradigm does not focus on implementing a specific estimator or a small set of related estimators. Instead, the focus is on exposing the statistical framework of Targeted Learning itself: all R packages in the tlverse ecosystem directly model the key objects defined in the mathematical and theoretical framework of Targeted Learning. What’s more, the tlverse R packages share a core set of design principles centered on extensibility, allowing them to be used in conjunction with each other and built upon one another in a cohesive fashion.
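To make this design concrete, below is a minimal sketch of the shared workflow using the sl3 package from the tlverse ecosystem. The dataset (the built-in mtcars) and the particular learners are arbitrary stand-ins chosen for illustration; consult the package documentation for the authoritative interface.

```r
library(sl3)  # tlverse package for modular machine learning

# A "task" pairs a dataset with the statistical roles its variables play,
# directly modeling the data-and-question objects of Targeted Learning.
task <- sl3_Task$new(
  data = mtcars,
  covariates = c("cyl", "hp", "wt"),
  outcome = "mpg"
)

# Learners are modular objects sharing a single train/predict interface,
# so they can be swapped or composed (e.g., into a Super Learner ensemble).
lrnr_glm  <- Lrnr_glm$new()
lrnr_mean <- Lrnr_mean$new()
sl <- Lrnr_sl$new(
  learners = list(lrnr_glm, lrnr_mean),
  metalearner = Lrnr_nnls$new()
)

sl_fit <- sl$train(task)    # fit the ensemble on the task
preds  <- sl_fit$predict()  # predictions on the training task
head(preds)
```

Because estimation packages in the ecosystem, such as tmle3, accept these same task and learner objects, an analysis specified this way remains pre-specified from nuisance estimation through to the final targeted estimate.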
In this workshop, the reader will embark on a journey through the tlverse ecosystem. Guided by R programming exercises, case studies, and intuitive explanations, readers will build a toolbox for applying the Targeted Learning statistical methodology, which will translate to real-world causal inference analyses. Participants need not be fully trained statisticians to begin understanding and applying these methods. However, it is highly recommended that participants have an understanding of basic statistical concepts such as confounding, probability distributions, confidence intervals, hypothesis tests, and regression. Advanced knowledge of mathematical statistics may be useful but is not necessary. Familiarity with the R programming language will be essential. We also recommend an understanding of introductory causal inference.
For introductory materials on the R programming language, we recommend the following free resources:

- Software Carpentry’s Programming with R
- Software Carpentry’s R for Reproducible Scientific Analysis
- Grolemund and Wickham’s R for Data Science
For causal inference learning materials, we recommend the following resources:
# References
Baker, Monya. 2016. “Is There a Reproducibility Crisis? A Nature Survey Lifts the Lid on How Researchers View the Crisis Rocking Science and What They Think Will Help.” Nature 533 (7604). Nature Publishing Group: 452–55.
Buckheit, Jonathan B, and David L Donoho. 1995. “Wavelab and Reproducible Research.” In Wavelets and Statistics, 55–81. Springer.
“How Scientists Fool Themselves – and How They Can Stop.” 2015. Nature 526 (7572). Springer Nature.
“Let’s Think About Cognitive Bias.” 2015. Nature 526 (7572). Springer Nature.
Munafò, Marcus R, Brian A Nosek, Dorothy VM Bishop, Katherine S Button, Christopher D Chambers, Nathalie Percie Du Sert, Uri Simonsohn, Eric-Jan Wagenmakers, Jennifer J Ware, and John PA Ioannidis. 2017. “A Manifesto for Reproducible Science.” Nature Human Behaviour 1 (1). Nature Publishing Group: 0021.
Nosek, Brian A, Charles R Ebersole, Alexander C DeHaven, and David T Mellor. 2018. “The Preregistration Revolution.” Proceedings of the National Academy of Sciences 115 (11). National Acad Sciences: 2600–2606.
Peng, Roger. 2015. “The Reproducibility Crisis in Science: A Statistical Counterattack.” Significance 12 (3). Wiley Online Library: 30–32.
Pullenayegum, Eleanor M, Robert W Platt, Melanie Barwick, Brian M Feldman, Martin Offringa, and Lehana Thabane. 2016. “Knowledge Translation in Biostatistics: A Survey of Current Practices, Preferences, and Barriers to the Dissemination and Uptake of New Statistical Methods.” Statistics in Medicine 35 (6). Wiley Online Library: 805–18.
Stark, Philip B, and Andrea Saltelli. 2018. “Cargo-Cult Statistics and Scientific Crisis.” Significance 15 (4). Wiley Online Library: 40–43.
Stromberg, Arnold, and others. 2004. “Why Write Statistical Software? The Case of Robust Statistical Methods.” Journal of Statistical Software 10 (5).
Szucs, Denes, and John Ioannidis. 2017. “When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment.” Frontiers in Human Neuroscience 11. Frontiers: 390.
van der Laan, Mark J, and Richard JCM Starmans. 2014. “Entering the Era of Data Science: Targeted Learning and the Integration of Statistics and Computational Data Analysis.” Advances in Statistics 2014. Hindawi.