
1 About the tlverse

1.1 What is the tlverse?

The tlverse is a new framework for Targeted Learning in R, inspired by the tidyverse ecosystem of R packages.

By analogy to the tidyverse:

The tidyverse is an opinionated collection of R packages designed for data science. All packages share an underlying design philosophy, grammar, and data structures.

So, the tlverse is

An opinionated collection of R packages for Targeted Learning sharing an underlying design philosophy, grammar, and core set of data structures. The tlverse aims to provide tools both for building Targeted Learning-based data analyses and for implementing novel, state-of-the-art Targeted Learning methods.

1.2 Anatomy of the tlverse

All Targeted Learning methods are targeted maximum likelihood (or minimum loss-based) estimators (i.e., TMLEs). The construction of any Targeted Learning estimator proceeds through a two-stage process:

1. Flexibly learning particular components of the data-generating distribution, often through machine learning (e.g., Super Learning), resulting in initial estimates of nuisance parameters.
2. Applying a carefully constructed parametric model-based update, via maximum likelihood estimation (i.e., MLE), that incorporates the initial estimates produced by the prior step, yielding a TML estimator.

The packages making up the core components of the tlverse software ecosystem – sl3 and tmle3 – address the above two goals, respectively. Together, the general functionality exposed by both allows one to build specific TMLEs tailored exactly to a particular statistical estimation problem.

The software packages that make up the core of the tlverse are

• sl3: Modern Super Machine Learning
  • What? A modern object-oriented implementation of the Super Learner algorithm, employing recently developed paradigms in R programming.
  • Why? A design that leverages modern ideas for faster computation, is easily extensible and forward-looking, and forms one of the cornerstones of the tlverse.
• tmle3: An Engine for Targeted Learning
  • What? A generalized framework that simplifies Targeted Learning by identifying and implementing a series of common statistical estimation procedures.
  • Why? A common interface and engine that accommodates current algorithmic approaches to Targeted Learning and yet remains a flexible enough engine to power the implementation of emerging statistical techniques as they are developed.
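As an illustrative sketch of how these two stages fit together in code, consider estimating an average treatment effect (ATE). This assumes the sl3 and tmle3 packages are installed; the simulated data, column names, and the particular learners are purely for illustration:

```r
library(sl3)
library(tmle3)

# Simulated observational data (hypothetical column names W1, W2, A, Y).
set.seed(42)
n <- 500
W1 <- rnorm(n); W2 <- rbinom(n, 1, 0.5)
A <- rbinom(n, 1, plogis(0.2 * W1))
Y <- rbinom(n, 1, plogis(A + W1 + W2))
data <- data.frame(W1, W2, A, Y)

# Stage 1: flexible estimation of nuisance parameters via Super Learning,
# here with a small ensemble combining a GLM and an unadjusted mean.
lrnr_sl <- Lrnr_sl$new(learners = list(Lrnr_glm$new(), Lrnr_mean$new()))
learner_list <- list(A = lrnr_sl, Y = lrnr_sl)

# Stage 2: a targeted, MLE-based update of the initial estimates,
# tailored here to the ATE of a binary treatment.
node_list <- list(W = c("W1", "W2"), A = "A", Y = "Y")
ate_spec <- tmle_ATE(treatment_level = 1, control_level = 0)
tmle_fit <- tmle3(ate_spec, data, node_list, learner_list)
tmle_fit$summary
```

The "spec" object encapsulates the target parameter, so swapping in a different spec (or a different learner ensemble) changes the analysis without restructuring the code.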

Beyond these engines that provide the driving force behind the tlverse, there are a few supporting packages that play important roles in the background:

• origami: A Generalized Framework for Cross-Validation (Coyle and Hejazi 2018)
  • What? A generalized framework for flexible cross-validation.
  • Why? Cross-validation is key both to ensuring that error estimates are honest and to preventing overfitting. It is an essential part of the Super Learner ensemble modeling algorithm and of the construction of TML estimators.
• delayed: Parallelization Framework for Dependent Tasks
  • What? A framework for delayed computations (i.e., futures) based on task dependencies.
  • Why? Efficient allocation of compute resources is essential when deploying computationally intensive algorithms at large scale.
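To give a flavor of origami's interface, the sketch below cross-validates a simple linear regression: a fold-wise function is applied to each training/validation split, and results are combined across folds. The data and model here are purely illustrative:

```r
library(origami)

# Simulated data: a simple regression problem.
set.seed(1)
data <- data.frame(x = rnorm(100))
data$y <- 2 * data$x + rnorm(100)

# A fold-wise function: fit on the training split, assess on validation.
cv_fun <- function(fold, data) {
  train <- training(data)
  valid <- validation(data)
  fit <- lm(y ~ x, data = train)
  preds <- predict(fit, newdata = valid)
  list(mse = mean((valid$y - preds)^2))
}

# Five-fold cross-validation; fold-specific results are combined.
folds <- make_folds(n = nrow(data), V = 5)
results <- cross_validate(cv_fun, folds, data)
mean(results$mse)  # cross-validated mean squared error
```

Because the fold structure is a first-class object, the same fold-wise function can be reused with other schemes (e.g., stratified or clustered folds) by changing only the `make_folds()` call.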

A key principle of the tlverse is extensibility. That is, the software ecosystem aims to support the development of novel Targeted Learning estimators as they reach maturity. To achieve this degree of flexibility, we follow the model of implementing new classes of estimators for distinct causal inference problems in separate packages, all of which rely upon the core machinery provided by sl3 and tmle3. There are currently three examples:

• tmle3mopttx: Optimal Treatments in the tlverse
  • What? Learn an optimal rule and estimate the mean outcome under the rule.
  • Why? Optimal treatments are a powerful tool in precision healthcare and other settings where a one-size-fits-all treatment approach is not appropriate.
• tmle3shift: Stochastic Shift Interventions based on Modified Treatment Policies in the tlverse
  • What? Stochastic shift interventions for evaluating changes in continuous-valued treatments.
  • Why? Not all treatment variables are binary or categorical. Estimating the total effects of intervening on continuous-valued treatments provides a way to probe how an effect changes with shifts in the treatment variable.
• tmle3mediate: Causal Mediation Analysis in the tlverse
  • What? Techniques for evaluating the direct and indirect effects of treatments through mediating variables.
  • Why? Evaluating the total effect of a treatment does not provide information about the pathways through which it may operate. When mediating variables have been collected, one can instead evaluate direct and indirect effect parameters that speak to the action mechanism of the treatment.

1.3 Primer on the R6 Class System

The tlverse is designed using basic object-oriented programming (OOP) principles and the R6 OOP framework. While we’ve tried to make it easy to use the tlverse packages without worrying much about OOP, it is helpful to have some intuition about how the tlverse is structured. Here, we briefly outline some key concepts of OOP. Readers already familiar with OOP basics are invited to skip this section.

1.3.1 Classes, Fields, and Methods

The key concept of OOP is that of an object, a collection of data and functions that corresponds to some conceptual unit. Objects have two main types of elements:

1. fields, which can be thought of as nouns, are information about an object, and
2. methods, which can be thought of as verbs, are actions an object can perform.

Objects are members of classes, which define what those specific fields and methods are. Classes can inherit elements from other classes (sometimes called base classes) – accordingly, classes that are similar, but not exactly the same, can share some parts of their definitions.

Many different implementations of OOP exist, with variations in how these concepts are implemented and used. R has several different implementations, including S3, S4, reference classes, and R6. The tlverse uses the R6 implementation. In R6, methods and fields of a class object are accessed using the $ operator. For a more thorough introduction to R’s various OOP systems, see http://adv-r.had.co.nz/OO-essentials.html, from Hadley Wickham’s Advanced R (Wickham 2014).
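The concepts above — fields, methods, `$` access, and inheritance — can be illustrated with a minimal R6 class. The class names and behavior here are invented for illustration and are not part of any tlverse package:

```r
library(R6)

# A minimal R6 class: 'count' is a field (a noun); 'add' and 'total'
# are methods (verbs).
Counter <- R6Class("Counter",
  public = list(
    count = 0,
    add = function(x = 1) {
      self$count <- self$count + x
      invisible(self)  # returning self enables method chaining
    },
    total = function() self$count
  )
)

# Inheritance: a subclass shares the base class's definition and can
# override parts of it, delegating to the base class via `super`.
DoubleCounter <- R6Class("DoubleCounter",
  inherit = Counter,
  public = list(
    add = function(x = 1) super$add(2 * x)
  )
)

counter <- Counter$new()
counter$add(5)    # methods and fields are accessed with `$`
counter$total()   # 5
dc <- DoubleCounter$new()
dc$add(5)
dc$total()        # 10
```

Note that `add()` modifies the object in place — the reference semantics mentioned below — rather than returning a modified copy, as S3 or S4 methods would.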

Object-Oriented Programming: Python and R

OO concepts (classes with inheritance) were baked into Python from the first published version (version 0.9 in 1991). In contrast, R gets its OO “approach” from its predecessor, S, first released in 1976. For the first 15 years, S had no support for classes; then, suddenly, S got two OO frameworks bolted on in rapid succession: informal classes with S3 in 1991, and formal classes with S4 in 1998. This process continues, with new OO frameworks being periodically released to try to improve the lackluster OO support in R, including reference classes (R5, 2010) and R6 (2014). Of these, R6 behaves most like Python classes (and most like classes in OOP-focused languages such as C++ and Java), including having method definitions be part of class definitions and allowing objects to be modified by reference.