--- title: "Benchmarking the IncidencePrevalence R package" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{a07_benchmark} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>", fig.width = 7, fig.height = 5 ) ``` To check the performance of the IncidencePrevalence package we can use the benchmarkIncidencePrevalence(). This function generates some hypothetical study cohorts and the estimates incidence and prevalence using various settings and times how long these analyses take. We can start for example by benchmarking our example mock data which uses duckdb. ```{r, message=FALSE, warning=FALSE} library(IncidencePrevalence) library(visOmopResults) library(dplyr) library(ggplot2) library(stringr) cdm <- mockIncidencePrevalence( sampleSize = 100, earliestObservationStartDate = as.Date("2010-01-01"), latestObservationStartDate = as.Date("2010-01-01"), minDaysToObservationEnd = 364, maxDaysToObservationEnd = 364, outPre = 0.1 ) timings <- benchmarkIncidencePrevalence(cdm) timings |> glimpse() ``` We can see our results like so: ```{r} visOmopTable(timings, hide = c( "variable_name", "variable_level", "strata_name", "strata_level" ), groupColumn = "task" ) ``` # Results from test databases Here we can see the results from the running the benchmark on test datasets on different databases management systems. These benchmarks have already been run so we'll start by loading the results. ```{r} test_db <- IncidencePrevalenceBenchmarkResults |> filter(str_detect(cdm_name, "CPRD", negate = TRUE)) test_db |> glimpse() ``` ```{r} visOmopTable(bind(timings, test_db), settingsColumn = "package_version", hide = c( "variable_name", "variable_level", "strata_name", "strata_level" ), groupColumn = "task" ) ``` # Results from real databases Above we've seen performance on small test databases. However, more interesting is to know how the package performs on our actual patient-level data, which is often much larger. Below our results from running our benchmarking tasks against real patient datasets. ```{r} real_db <- IncidencePrevalenceBenchmarkResults |> filter(str_detect(cdm_name, "CPRD")) visOmopTable(real_db, settingsColumn = "package_version", hide = c( "variable_name", "variable_level", "strata_name", "strata_level" ), groupColumn = "task" ) ``` # Sharing your benchmarking results Sharing your benchmark results will help us improve the package. To run the benchmark, connect to your database and create your cdm reference. Then run the benchmark like below and export the results as a csv. ```{r, eval = FALSE} library(CDMConnector) library(IncidencePrevalence) cdm <- cdmFromCon("....") timings <- benchmarkIncidencePrevalence(cdm) exportSummarisedResult( timings, minCellCount = 5, fileName = "results_{cdm_name}_{date}.csv", path = getwd() ) ```