Last updated: 2019-11-26
Checks: 6 passed, 1 warning
Knit directory: simpamm/
This reproducible R Markdown analysis was created with workflowr (version 1.5.0). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.
Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.
Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.
The command set.seed(20180517) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.
Great job! Recording the operating system, R version, and package versions is critical for reproducibility.
To ensure reproducibility of the results, delete the cache directory confidence-intervals_cache and re-run the analysis. To have workflowr automatically delete the cache directory prior to building the file, set delete_cache = TRUE when running wflow_build() or wflow_publish().
Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.
Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.
Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:
Ignored files:
Ignored: .Rhistory
Ignored: .Rproj.user/
Ignored: analysis/confidence-intervals_cache/
Ignored: analysis/pem_vs_pam_cache/
Ignored: analysis/time-varying-cumulative-effect_cache/
Untracked files:
Untracked: analysis/analysis.bib
Untracked: analysis/fit-vs-truth-ped.gif
Untracked: analysis/math_definitions.tex
Untracked: analysis/time-varying-cumulative-effect.Rmd
Untracked: output/sim-conf-int-registry/
Untracked: output/sim-lag-lead-registry/
Untracked: output/sim-pem-vs-pam-registry/
Untracked: output/tve-cumulative-registry/
Untracked: sandbox/
Untracked: simpamm.Rproj
Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.
These are the previous versions of the R Markdown and HTML files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view them.
File | Version | Author | Date | Message |
---|---|---|---|---|
Rmd | 9c4d35c | Andreas Bender | 2019-11-25 | Add math |
html | 0969c5d | Andreas Bender | 2019-11-25 | Build site. |
html | 8dc2ecc | Andreas Bender | 2019-11-25 | Build site. |
html | f266198 | Andreas Bender | 2019-11-25 | Build site. |
Rmd | a900481 | Andreas Bender | 2019-11-25 | wflow_publish(“analysis/confidence-intervals.Rmd”) |
html | 45c6520 | Andreas Bender | 2019-11-25 | Build site. |
Rmd | 4ea7082 | Andreas Bender | 2019-11-25 | Add sim study on confidence intervals |
library(dplyr)
library(ggplot2)
theme_set(theme_bw())
library(patchwork)
library(survival)
library(mgcv)
library(pammtools)
library(batchtools)
knitr::opts_chunk$set(autodep = TRUE)
# bibliography
library(RefManageR)
BibOptions(check.entries = FALSE, hyperlink = FALSE, style = "markdown",
  max.names = 1)
bib <- ReadBib("analysis/analysis.bib", check = FALSE)
\[ \newcommand{\ra}{\rightarrow} \newcommand{\bs}[1]{\boldsymbol{#1}} \newcommand{\tn}[1]{\textnormal{#1}} \newcommand{\mbf}[1]{\mathbf{#1}} \newcommand{\nn}{\nonumber} \newcommand{\ub}{\underbrace} \newcommand{\tbf}[1]{\textbf{#1}} \newcommand{\E}{\mathbb{E}} \newcommand{\R}{\mathbb{R}} \newcommand{\Prob}{\mathbb{P}} \newcommand{\bfx}{\mathbf{x}} \newcommand{\bfX}{\mathbf{X}} \newcommand{\bfV}{\mathbf{V}} \newcommand{\bfB}{\mathbf{B}} \newcommand{\bfy}{\mathbf{y}} \newcommand{\bff}{\mathbf{f}} \newcommand{\bsbeta}{\boldsymbol{\beta}} \newcommand{\bsgamma}{\boldsymbol{\gamma}} \newcommand{\bslambda}{\boldsymbol{\lambda}} \newcommand{\bskappa}{\boldsymbol{\kappa}} \newcommand{\bsnu}{\boldsymbol{\nu}} \newcommand{\bfS}{\mathbf{S}} \newcommand{\bfz}{\mathbf{z}} \newcommand{\bfZ}{\mathbf{Z}} \newcommand{\drm}{\mathrm{d}} \newcommand{\tz}{\ensuremath{t_z}} \newcommand{\tlag}{t_{\text{lag}}} \newcommand{\tlead}{t_{\text{lead}}} \newcommand{\mcZ}{\mathcal{Z}} \newcommand{\tw}{\mathcal{T}_e(j)} \newcommand{\Tw}[1]{\mathcal{T}^{#1}} \newcommand{\tilt}{\tilde{t}} \newcommand{\Zi}{\ensuremath{\mathcal{Z}_i(t)}} \newcommand{\CI}{\ensuremath{C1}} \newcommand{\CII}{\ensuremath{C2}} \newcommand{\CIII}{\ensuremath{C3}} \newcommand{\gCII}{\ensuremath{g_{_{\CII}}}} \newcommand{\gCIII}{\ensuremath{g_{_{\CIII}}}} \newcommand{\gammaEst}{\hat{\gamma}_g^r} \newcommand{\hatEj}{\hat{e}_{j, r}} \newcommand{\diag}{\operatorname{diag}} \newcommand{\rpexp}{\operatorname{rpexp}} \newcommand{\Var}{\operatorname{Var}}\]
Here we consider three different ways to calculate confidence intervals for hazard rates estimated by PAMMs and for quantities derived from them, i.e., the cumulative hazard and the survival probability. The three methods compared here are

- the delta method ("delta"),
- simulation from the (Bayesian) posterior distribution of the coefficients ("simulation"),
- direct transformation of the confidence interval limits of the linear predictor ("direct").
PAMMs model the (log-)hazard, for which standard errors can be obtained straightforwardly from the theory of GAMMs (for details see Wood (2017)). Since \(\hat{\bsgamma} \sim N(\bsgamma, \bfV_{\bsgamma})\), it follows that linear transformations of \(\hat{\bsgamma}\) also follow a normal distribution. For example, the (approximate) distribution of the log-hazard for covariate specification \(\bfx\) (row vector) is given by
\[\label{eq:linear-trafo-nv} \bfx'\hat{\bsgamma} \sim N(\bfx'\bsgamma, \bfx' \bfV_{\bsgamma}\bfx). \]
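This quantity can be evaluated directly from a fitted model object. Below is a minimal sketch, assuming a fitted PAMM mod (an mgcv::gam object, as fitted in the simulation further down) and a prediction data set nd; both names are placeholders for illustration:

X      <- predict(mod, newdata = nd, type = "lpmatrix") # design matrix for nd
V      <- vcov(mod)                                     # V_gamma
eta    <- drop(X %*% coef(mod))                         # log-hazard(s) x'gamma
se_eta <- sqrt(rowSums((X %*% V) * X))                  # pointwise sqrt(x' V x)
# pointwise 95% CI on the log-hazard scale
cbind(lower = eta - qnorm(0.975) * se_eta,
      upper = eta + qnorm(0.975) * se_eta)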
However, often we are also interested in quantities derived from the (log-)hazard, most notably the cumulative hazard \(\Lambda(t|\bfx)\) and the survival probability \(S(t|\bfx)\), which are non-linear transformations of \(\bsgamma\). Three approaches to derive standard errors or confidence intervals for such transformations are common in practice: the delta method, simulation from the posterior distribution of \(\bsgamma\), and direct transformation of the confidence interval limits of the linear predictor.
In the following, the three approaches are described in more detail. Results of a simulation study on the coverage of the CIs obtained by the different approaches are presented in the section “Evaluation”.
For a transformation \(h(\bsgamma)\) of the estimated coefficients, the delta method approximates the variance by
\[\bfV_{h(\bsgamma)} = \nabla h(\bsgamma)\,\bfV_{\bsgamma}\,\nabla h(\bsgamma)',\]
where \(\nabla h(\bsgamma)\) is the Jacobi matrix of \(h\) with respect to \(\bsgamma\) (evaluated at \(\hat{\bsgamma}\)).
Below, the variances \(\bfV_{h(\bsgamma)}\) are derived for the transformations \(h(\bsgamma) := \Lambda(t|\bfX, \bsgamma)\) and \(h(\bsgamma) := S(t|\bfX, \bsgamma) = \exp(-\Lambda(t|\bfX, \bsgamma))\).
In the following, let \(t \equiv \kappa_J\) (the end point of the last interval) without loss of generality, and let \(\bfX\) be the \(J\times P\) design matrix, such that the log-hazards in intervals \(j=1,\ldots,J\) are given by \(\bs{\eta} = (\bfx_1'\bsgamma,\ldots, \bfx_J'\bsgamma)' = (\eta_1,\ldots, \eta_J)'\). Further, let \(\mbf{e}^{\mbf{v}}:=(\exp(v_1), \ldots,\exp(v_J))'\) and \(\mbf{E}^{\mbf{v}}\) the respective diagonal matrix with elements \(\mbf{E}^{\mbf{v}}_{j,j}=\exp(v_j),\ j=1,\ldots,J\), and \(\mbf{E}^{\mbf{v}}_{j,j'}=0\ \forall\ j'\neq j\).
With \(\Delta_j = \kappa_j - \kappa_{j-1}\) the interval lengths and \(\bs{\Delta}\) the lower-triangular matrix with \(\bs{\Delta}_{j,j'} = \Delta_{j'}\) for \(j' \leq j\) (and \(0\) otherwise), the vector of cumulative hazards at the interval end points can be written as \(\bs{\Delta}\mbf{e}^{\bs{\eta}}\). The interval lengths are known constants, thus, by the chain rule, it suffices to derive the Jacobi matrix of \(\mbf{e}^{\bs{\eta}}\):
\[\begin{align}
\nabla \mbf{e}^{\bs{\eta}} &= \begin{pmatrix}
\frac{\partial \exp(\bfx_1'\bsgamma)}{\partial \gamma_1} & \cdots & \frac{\partial \exp(\bfx_1'\bsgamma)}{\partial \gamma_{P}}\\
\vdots & \ddots & \vdots\\
\frac{\partial \exp(\bfx_J'\bsgamma)}{\partial \gamma_1} & \cdots & \frac{\partial \exp(\bfx_J'\bsgamma)}{\partial \gamma_{P}}
\end{pmatrix}\label{eq:jacobi-hazard2}\\
&= \begin{pmatrix}
\exp(\eta_1)\cdot x_{1,1} & \cdots & \exp(\eta_1)\cdot x_{1,P}\\
\vdots & \ddots & \vdots \\
\exp(\eta_J)\cdot x_{J,1} & \cdots & \exp(\eta_J)\cdot x_{J,P}
\end{pmatrix}\\
&= \mbf{E}^{\bs{\eta}}\bfX\label{eq:jacobi-hazard}
\end{align}\]
Thus, the variance of the cumulative hazard is given by
\[\begin{align} \Var(\Lambda(t|\bfX, \bsgamma)) & = (\bs{\Delta} \mbf{E}^{\bs{\eta}}\bfX) \bfV_{\bsgamma} (\bs{\Delta}\mbf{E}^{\bs{\eta}}\bfX)'\label{eq:V-delta-cumu}\\ & = (\bs{\Delta} \mbf{E}^{\bs{\eta}}) (\bfX \bfV_{\bsgamma}\bfX') (\bs{\Delta}\mbf{E}^{\bs{\eta}})'\label{eq:V-delta-cumu2} \end{align}\]
This result was also stated by Carstensen (2005) in a slightly less general form. When \(t \neq \kappa_J\), \(\bfX\) is a \(j(t) \times P\) matrix and \(\bs{\Delta}\) the \(j(t)\times j(t)\) lower-triangular matrix constructed from \(\Delta_1,\ldots, \Delta_{j(t)}\) as above, where \(j(t)\) denotes the index of the interval containing \(t\), i.e., \(t\in (\kappa_{j(t)-1},\kappa_{j(t)}]\), and the last interval is truncated at \(t\), i.e., \(\Delta_{j(t)}=t-\kappa_{j(t)-1}\).
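A minimal sketch of this computation, assuming a fitted PAMM mod and a prediction data set nd with one row per interval (ordered by tend) and interval lengths in nd$intlen (column names as produced by pammtools::make_newdata, but hypothetical here):

X     <- predict(mod, newdata = nd, type = "lpmatrix")  # J x P design matrix
V     <- vcov(mod)                                      # V_gamma
eta   <- drop(X %*% coef(mod))                          # log-hazards eta_1, ..., eta_J
J     <- length(eta)
E_eta <- diag(exp(eta))                                 # E^eta
# lower-triangular Delta with Delta[j, j'] = intlen[j'] for j' <= j
Delta <- outer(seq_len(J), seq_len(J), ">=") *
  matrix(nd$intlen, nrow = J, ncol = J, byrow = TRUE)
grad_cumu <- Delta %*% E_eta %*% X            # Jacobi matrix (Delta E^eta X)
V_cumu    <- grad_cumu %*% V %*% t(grad_cumu) # Var(Lambda)
se_cumu   <- sqrt(diag(V_cumu))               # pointwise standard errors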
Results for the survival function can be obtained analogously to the cumulative hazard by defining \(h(\bsgamma) := S(t|\bfX, \bsgamma)=\exp(-\Lambda(t|\bfX,\bsgamma))\). The Jacobi matrix is then given by
\[\begin{align}
\nabla h(\bsgamma) &= \begin{pmatrix}
\frac{\partial e^{-\Delta_1e^{\eta_1}}}{\partial \gamma_1} & \cdots & \frac{\partial e^{-\Delta_1e^{\eta_1}}}{\partial \gamma_P}\\
\vdots & \ddots & \vdots\\
\frac{\partial e^{-\sum_{j=1}^J \Delta_je^{\eta_j}}}{\partial \gamma_1} & \cdots & \frac{\partial e^{-\sum_{j=1}^J \Delta_je^{\eta_j}}}{\partial \gamma_P}
\end{pmatrix}\nn\\
&= \begin{pmatrix}
e^{-\Delta_1e^{\eta_1}}\cdot(-\Delta_1e^{\eta_1}\cdot x_{1,1}) & \cdots & e^{-\Delta_1e^{\eta_1}}\cdot(-\Delta_1e^{\eta_1}\cdot x_{1,P})\\
\vdots & \ddots & \vdots\\
e^{-\sum_{j=1}^J \Delta_je^{\eta_j}}\cdot(-\sum_{j=1}^{J}\Delta_j e^{\eta_j}\cdot x_{j,1}) & \cdots & e^{-\sum_{j=1}^J \Delta_je^{\eta_j}}\cdot(-\sum_{j=1}^{J}\Delta_j e^{\eta_j}\cdot x_{j,P})
\end{pmatrix}\\
&= -\mbf{E}^{-\bs{\Delta}\mbf{e}^{\bs{\eta}}}\bs{\Delta}\mbf{E}^{\bs{\eta}}\bfX\label{eq:jacobi-surv}
\end{align}\]
The variance of the survival probability is given below:
\[ \begin{align} \Var(S(t|\bfX, \bsgamma)) & = (-\mbf{E}^{-\bs{\Delta}\mbf{e}^{\bs{\eta}}}\bs{\Delta}\mbf{E}^{\bs{\eta}}\bfX)\bfV_{\bsgamma} (-\mbf{E}^{-\bs{\Delta}\mbf{e}^{\bs{\eta}}}\bs{\Delta}\mbf{E}^{\bs{\eta}}\bfX)'\label{eq:V-delta-surv}\\ & = (-\mbf{E}^{-\bs{\Delta}\mbf{e}^{\bs{\eta}}}\bs{\Delta}\mbf{E}^{\bs{\eta}})(\bfX\bfV_{\bsgamma}\bfX') (-\mbf{E}^{-\bs{\Delta}\mbf{e}^{\bs{\eta}}}\bs{\Delta}\mbf{E}^{\bs{\eta}})'\label{eq:V-delta-surv3}\\ & = \mbf{E}^{-\bs{\Delta}\mbf{e}^{\bs{\eta}}} \Var(\Lambda(t|\bfX,\bsgamma))(\mbf{E}^{-\bs{\Delta}\mbf{e}^{\bs{\eta}}})'.\label{eq:V-delta-surv2} \end{align} \]
The second formulation can be used when the covariance of the linear predictor \(\bfX\bsgamma\) is already available. The last expression is useful when the variance of the cumulative hazard was obtained in a previous calculation.
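Continuing the sketch from above, the last expression translates into a few lines of R (reusing the hypothetical Delta, eta and V_cumu):

Lambda  <- drop(Delta %*% exp(eta))          # cumulative hazards Delta e^eta
E_surv  <- diag(exp(-Lambda))                # E^{-Delta e^eta}
V_surv  <- E_surv %*% V_cumu %*% t(E_surv)   # Var(S(t))
se_surv <- sqrt(diag(V_surv))                # pointwise standard errors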
When the estimation process of a GAMM is viewed from an empirical Bayes point of view, which, from a computational perspective, is equivalent to the REML-based approach, it can be shown (e.g., Fahrmeir, Kneib, Lang, and Marx (2013, Ch. 7.6.1), Wood (2017, Ch. 4.2.4, 5.8, 6.10)) that the posterior distribution of the regression parameters \(\bsgamma\) is given by
\[ \begin{equation}\label{eq:posterior-gamma} \bsgamma|\bfy,\bsnu \sim N(\hat{\bsgamma}, \bfV_{\bsgamma}) \end{equation} \]
with \(\hat{\bsgamma}\) and \(\bfV_{\bsgamma} = (\bfX'\tbf{W}\bfX + \tbf{S}_{\nu})^{-1}\) as before. This result can be used to compute Bayesian credible intervals for any quantity of interest that is a function of the regression coefficients \(\bsgamma\). In the context of PAMMs, this approach was described by Argyropoulos and Unruh (2015). For example, 95% CIs for \(\hat{S}(t|\bfx_j)\) are obtained by drawing samples \(\bsgamma_r,\ r=1,\ldots,R\), from the posterior and calculating \(\hat{S}_r(t|\bfx_j)\) for each draw. The lower and upper bounds of the CI are then obtained as the \(2.5\%\) and \(97.5\%\) quantiles of the \(R\) survival probabilities, respectively.
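A minimal sketch of this procedure, reusing the hypothetical X and Delta from the delta-method sketch above (MASS::mvrnorm draws from the multivariate normal):

R       <- 500
# draw R coefficient vectors from the approximate posterior N(gamma_hat, V_gamma)
gamma_r <- MASS::mvrnorm(R, mu = coef(mod), Sigma = vcov(mod))
eta_r   <- X %*% t(gamma_r)              # J x R matrix of log-hazards
surv_r  <- exp(-(Delta %*% exp(eta_r)))  # J x R matrix of survival probabilities
# pointwise 95% credible intervals from the empirical quantiles
ci_surv <- apply(surv_r, 1, quantile, probs = c(0.025, 0.975))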
One simple approach, often used in practice to calculate confidence intervals (CIs) for monotone transformations of the linear predictor, is to apply the transformation to the lower and upper bounds of the CI for the linear predictor. Thus, when \(\hat{\eta}_j=\bfx_j'\hat{\bsgamma}\) is the log-hazard in the \(j\)-th interval with CI
\[ \left[\hat{l}_j = \hat{\eta}_j-\zeta_{1-\alpha/2}\hat{\sigma}_{\hat{\eta}_j},\ \hat{u}_j = \hat{\eta}_j+\zeta_{1-\alpha/2}\hat{\sigma}_{\hat{\eta}_j}\right], \] with \(\zeta_{1-\alpha/2}\) the \(1-\alpha/2\) quantile of the standard normal distribution, the CI for \(h(\hat{\eta}_j)\) is obtained as \([h(\hat{l}_j), h(\hat{u}_j)]\) (for monotonically increasing \(h\)).
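For the hazard, i.e. \(h(\cdot) = \exp(\cdot)\), this corresponds to the following sketch (again assuming the hypothetical mod and nd from above):

pred  <- predict(mod, newdata = nd, se.fit = TRUE)   # eta_hat and se on the log-hazard scale
lower <- exp(pred$fit - qnorm(0.975) * pred$se.fit)  # h(l_j)
upper <- exp(pred$fit + qnorm(0.975) * pred$se.fit)  # h(u_j)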
We simulate data with log-hazard rate \[ \log(\lambda(t|x)) = -3.5 + f(8,2) \cdot 6 - 0.5\cdot x_1 + \sqrt{x_2}, \] where \(f(8,2)\) denotes the density of a Gamma distribution with shape \(8\) and rate \(2\), evaluated at \(t\).
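For later reference, the true hazard, cumulative hazard and survival probability implied by this specification can be evaluated directly, e.g., at the covariate combination \(x_1 = 0,\ x_2 = 3\) used in the evaluation below (a small sketch; the step width 0.05 matches the simulation grid):

t_grid      <- seq(0.05, 10, by = 0.05)
f0          <- function(t) dgamma(t, 8, 2) * 6
true_hazard <- exp(-3.5 + f0(t_grid) - 0.5 * 0 + sqrt(3))
true_cumu   <- cumsum(0.05 * true_hazard)  # piece-wise constant approximation
true_surv   <- exp(-true_cumu)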
The wrapper below creates a data set with \(n = 500\) survival times based on the above hazard. The simulation of the survival times is performed using the function pammtools::sim_pexp.
sim_wrapper <- function(data, job, n = 500, time_grid = seq(0, 10, by = 0.05)) {
  # create data set with covariates
  df <- tibble::tibble(x1 = runif(n, -3, 3), x2 = runif(n, 0, 6))
  # baseline hazard
  f0 <- function(t) {dgamma(t, 8, 2) * 6}
  ndf <- sim_pexp(
    formula = ~ -3.5 + f0(t) - 0.5 * x1 + sqrt(x2),
    data    = df,
    cut     = time_grid)
  ndf
}
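The data and job arguments are part of the batchtools problem interface used below and are never referenced in the function body, so the wrapper can also be called interactively, e.g.:

set.seed(1)                # hypothetical seed for an interactive test run
instance <- sim_wrapper()  # data and job can be omitted, they are never evaluated
head(instance)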
The wrapper below

- transforms the simulated survival data into the PED format,
- fits the PAMM,
- returns a data frame that contains the coverage for each method.

Two types of splines are considered for the estimation of the baseline hazard: thin plate regression splines (tp) and P-splines (ps).
ci_wrapper <- function(
  data,
  job,
  instance,
  bs      = "ps",
  k       = 10,
  ci_type = "default") {

  # instance <- sim_wrapper()
  ped <- as_ped(
    data    = instance,
    formula = Surv(time, status) ~ x1 + x2,
    id      = "id")
  form <- paste0("ped_status ~ s(tend, bs='", bs, "', k=", k, ") + s(x1) + s(x2)")
  mod <- gam(
    formula = as.formula(form),
    data    = ped,
    family  = poisson(),
    offset  = offset,
    method  = "REML")
  f0 <- function(t) {dgamma(t, 8, 2) * 6}
  # create new data set
  nd <- make_newdata(ped, tend = unique(tend), x1 = c(0), x2 = c(3)) %>%
    mutate(
      true_hazard = exp(-3.5 + f0(tend) - 0.5 * x1 + sqrt(x2)),
      true_cumu   = cumsum(intlen * true_hazard),
      true_surv   = exp(-true_cumu))
  # add hazard, cumulative hazard, survival probability with confidence intervals
  # using the 3 different methods
  nd_default <- nd %>%
    add_hazard(mod, se_mult = qnorm(0.975), ci_type = "default") %>%
    add_cumu_hazard(mod, se_mult = qnorm(0.975), ci_type = "default") %>%
    add_surv_prob(mod, se_mult = qnorm(0.975), ci_type = "default") %>%
    mutate(
      hazard = (true_hazard >= ci_lower) & (true_hazard <= ci_upper),
      cumu   = (true_cumu >= cumu_lower) & (true_cumu <= cumu_upper),
      surv   = (true_surv >= surv_lower) & (true_surv <= surv_upper)) %>%
    select(hazard, cumu, surv) %>%
    summarize_all(mean) %>%
    mutate(method = "direct")
  nd_delta <- nd %>%
    add_hazard(mod, se_mult = qnorm(0.975), ci_type = "delta", overwrite = TRUE) %>%
    add_cumu_hazard(mod, se_mult = qnorm(0.975), ci_type = "delta", overwrite = TRUE) %>%
    add_surv_prob(mod, se_mult = qnorm(0.975), ci_type = "delta", overwrite = TRUE) %>%
    mutate(
      hazard = (true_hazard >= ci_lower) & (true_hazard <= ci_upper),
      cumu   = (true_cumu >= cumu_lower) & (true_cumu <= cumu_upper),
      surv   = (true_surv >= surv_lower) & (true_surv <= surv_upper)) %>%
    select(hazard, cumu, surv) %>%
    summarize_all(mean) %>%
    mutate(method = "delta")
  nd_sim <- nd %>%
    add_hazard(mod, se_mult = qnorm(0.975), ci_type = "sim", nsim = 500, overwrite = TRUE) %>%
    add_cumu_hazard(mod, se_mult = qnorm(0.975), ci_type = "sim", nsim = 500, overwrite = TRUE) %>%
    add_surv_prob(mod, se_mult = qnorm(0.975), ci_type = "sim", nsim = 500, overwrite = TRUE) %>%
    mutate(
      hazard = (true_hazard >= ci_lower) & (true_hazard <= ci_upper),
      cumu   = (true_cumu >= cumu_lower) & (true_cumu <= cumu_upper),
      surv   = (true_surv >= surv_lower) & (true_surv <= surv_upper)) %>%
    select(hazard, cumu, surv) %>%
    summarize_all(mean) %>%
    mutate(method = "simulation")
  rbind(nd_default, nd_delta, nd_sim)
}
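A single replication can be run interactively, e.g.:

instance <- sim_wrapper()
ci_wrapper(instance = instance, bs = "ps", k = 10)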
We replicate the simulation 300 times for each spline basis and obtain the respective estimates using the package batchtools:
if (!checkmate::test_directory_exists("output/sim-conf-int-registry")) {
  reg <- makeExperimentRegistry(
    "output/sim-conf-int-registry",
    packages = c("mgcv", "dplyr", "tidyr", "pammtools"),
    seed     = 20052018)
  reg <- loadRegistry("output/sim-conf-int-registry", writeable = TRUE)
  # reg$cluster.functions = makeClusterFunctionsInteractive()
  addProblem(name = "ci", fun = sim_wrapper)
  addAlgorithm(name = "ci", fun = ci_wrapper)
  algo_df <- data.frame(bs = c("tp", "ps"), stringsAsFactors = FALSE)
  addExperiments(algo.design = list(ci = algo_df), repls = 300)
  submitJobs(findNotDone())
  # waitForJobs()
}
Below, the coverage of the confidence intervals obtained by the different methods is calculated:
reg     <- loadRegistry("output/sim-conf-int-registry", writeable = TRUE)
ids_res <- findExperiments(prob.name = "ci", algo.name = "ci")
pars    <- unwrap(getJobPars()) %>% as_tibble()
res <- reduceResultsDataTable(ids = findDone(ids_res)) %>%
  as_tibble() %>%
  tidyr::unnest(cols = c(result)) %>%
  left_join(pars)
# coverage of hazard, cumulative hazard and survival probability
res %>%
  group_by(bs, method) %>%
  summarize(
    "coverage hazard"               = mean(hazard),
    "coverage cumulative hazard"    = mean(cumu),
    "coverage survival probability" = mean(surv)) %>%
  ungroup() %>%
  mutate_if(is.numeric, ~round(., 3)) %>%
  rename("basis" = "bs") %>%
  knitr::kable()
basis | method | coverage hazard | coverage cumulative hazard | coverage survival probability |
---|---|---|---|---|
ps | delta | 0.911 | 0.904 | 0.907 |
ps | direct | 0.914 | 0.965 | 0.965 |
ps | simulation | 0.891 | 0.904 | 0.907 |
tp | delta | 0.936 | 0.925 | 0.933 |
tp | direct | 0.938 | 0.978 | 0.978 |
tp | simulation | 0.918 | 0.933 | 0.933 |
sessionInfo()
R version 3.6.1 (2019-07-05)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 18.04.3 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.7.1
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.7.1
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] tidyr_1.0.0 RefManageR_1.2.12 batchtools_0.9.11
[4] data.table_1.12.6 pammtools_0.1.14 mgcv_1.8-31
[7] nlme_3.1-142 survival_3.1-7 patchwork_0.0.1.9000
[10] ggplot2_3.2.1 dplyr_0.8.3
loaded via a namespace (and not attached):
[1] progress_1.2.2 tidyselect_0.2.5 xfun_0.11 purrr_0.3.3
[5] splines_3.6.1 lattice_0.20-38 colorspace_1.4-1 vctrs_0.2.0
[9] expm_0.999-4 htmltools_0.4.0 yaml_2.2.0 rlang_0.4.2
[13] later_1.0.0 pillar_1.4.2 glue_1.3.1 withr_2.1.2
[17] rappdirs_0.3.1 plyr_1.8.4 lifecycle_0.1.0 stringr_1.4.0
[21] munsell_0.5.0 gtable_0.3.0 workflowr_1.5.0 mvtnorm_1.0-11
[25] evaluate_0.14 knitr_1.26 httpuv_1.5.2 highr_0.8
[29] Rcpp_1.0.3 promises_1.1.0 scales_1.1.0 backports_1.1.5
[33] checkmate_1.9.4 jsonlite_1.6 fs_1.3.1 brew_1.0-6
[37] hms_0.5.2 digest_0.6.23 stringi_1.4.3 msm_1.6.7
[41] bibtex_0.4.2 grid_3.6.1 rprojroot_1.3-2 tools_3.6.1
[45] magrittr_1.5 base64url_1.4 lazyeval_0.2.2 tibble_2.1.3
[49] Formula_1.2-3 crayon_1.3.4 whisker_0.4 pkgconfig_2.0.3
[53] zeallot_0.1.0 Matrix_1.2-17 xml2_1.2.2 prettyunits_1.0.2
[57] lubridate_1.7.4 httr_1.4.1 assertthat_0.2.1 rmarkdown_1.17
[61] R6_2.4.1 git2r_0.26.1 compiler_3.6.1