Title: | Uncertainty in Multiplex Panel Testing |
---|---|
Description: | Provides methods to support the estimation of epidemiological parameters based on the results of multiplex panel tests. |
Authors: | Robert Challen [aut, cre] |
Maintainer: | Robert Challen <[email protected]> |
License: | MIT + file LICENSE |
Version: | 0.1.0 |
Built: | 2024-12-28 06:06:15 UTC |
Source: | https://github.com/bristol-vaccine-centre/testerror |
A dataframe containing the following columns:
id (character) - the patient identifier
test (factor) - the test type
result (logical) - the test result
Ungrouped.
No default value.
.input_data
An object of class iface (inherits from tbl_df, tbl, data.frame) with 3 rows and 3 columns.
A dataframe containing the following columns:
id (character) - the patient identifier
result (logical) - the panel result
Ungrouped.
No default value.
.input_panel_data
An object of class iface (inherits from tbl_df, tbl, data.frame) with 2 rows and 3 columns.
A dataframe containing the following columns:
test (character) - the name of the test or panel
prevalence.lower (numeric) - the lower estimate
prevalence.median (numeric) - the median estimate
prevalence.upper (numeric) - the upper estimate
prevalence.method (character) - the method of estimation
prevalence.label (character) - a formatted label of the true prevalence estimate with CI
Ungrouped.
No default value.
.output_data
An object of class iface (inherits from tbl_df, tbl, data.frame) with 6 rows and 3 columns.
The observed count of disease is binomially distributed, but with the apparent prevalence as the probability. This will never be less than (1-specificity) of the test (and never more than the sensitivity). When either of those quantities is uncertain, the shape of the distribution of observed counts is not clear cut.
apparent_prevalence(p, sens, spec)
p |
the true value of the prevalence |
sens |
the sensitivity of the test |
spec |
the specificity of the test |
the expected value of apparent prevalence
apparent_prevalence(0, 0.75, 0.97)
apparent_prevalence(1, 0.75, 0.97)
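For orientation, the returned value is presumably the standard mixture of the sensitivity and the false positive rate; the hand calculation below is an assumption for illustration, not taken from the package source, and simply reproduces the boundary behaviour described above.
# sketch: apparent prevalence as p*sens + (1-p)*(1-spec); an assumption, not package code
manual_apparent_prevalence = function(p, sens, spec) {
  p * sens + (1 - p) * (1 - spec)
}
manual_apparent_prevalence(0, 0.75, 0.97) # 0.03, i.e. 1 - specificity
manual_apparent_prevalence(1, 0.75, 0.97) # 0.75, i.e. the sensitivity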
convert a beta distribution to a tibble
## S3 method for class 'beta_dist'
as_tibble(x, prefix = NULL, confint = 0.95, ...)
x |
the beta distribution |
prefix |
name to output columns prefix.lower, prefix.upper etc |
confint |
confidence intervals |
... |
not used |
convert a list of betas to a tibble
## S3 method for class 'beta_dist_list'
as_tibble(x, ...)
x |
a beta dist list |
... |
Arguments passed on to the as_tibble method for beta_dist |
a tibble
Bayesian logit model true prevalence for component
bayesian_component_logit_model(
  pos_obs, n_obs,
  false_pos_controls = NULL, n_controls = NULL,
  false_neg_diseased = NULL, n_diseased = NULL,
  ...,
  sens = sens_prior(), spec = spec_prior(),
  confint = 0.95, fmt = "%1.2f%% [%1.2f%% — %1.2f%%]",
  chains = 4, warmup = 1000, iter = 2000,
  cache_result = TRUE
)
pos_obs |
the number of positive observations for a given test |
n_obs |
the number of observations for a given test |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
false_neg_diseased |
the number of negatives that appeared in the sensitivity confirmed disease group. These are by definition false negatives. This is (1-sensitivity)*n_diseased |
n_diseased |
the number of confirmed disease cases in the sensitivity control group. |
... |
not used |
sens |
the prior sensitivity of the test as a beta_dist |
spec |
the prior specificity of the test as a beta_dist |
confint |
confidence interval limits |
fmt |
a sprintf fmt string for the formatted prevalence label |
chains |
A positive integer specifying the number of Markov chains. The default is 4. |
warmup |
A positive integer specifying the number of warmup (aka burnin) iterations per chain. If step-size adaptation is on (which it is by default), this also controls the number of iterations for which adaptation is run (and hence these warmup samples should not be used for inference). The number of warmup iterations should be smaller than iter. |
iter |
A positive integer specifying the number of iterations for each chain (including warmup). The default is 2000. |
cache_result |
save the result of the sampling in memory for the current session |
a list of dataframes containing the prevalence, sensitivity, and
specificity estimates, and a stanfit
object with the raw fit data
Bayesian simpler model true prevalence for component
bayesian_component_simpler_model(
  pos_obs, n_obs,
  false_pos_controls = NULL, n_controls = NULL,
  false_neg_diseased = NULL, n_diseased = NULL,
  ...,
  sens = uniform_prior(), spec = uniform_prior(),
  confint = 0.95, fmt = "%1.2f%% [%1.2f%% — %1.2f%%]",
  chains = 4, warmup = 1000, iter = 2000,
  cache_result = TRUE
)
pos_obs |
the number of positive observations for a given test |
n_obs |
the number of observations for a given test |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
false_neg_diseased |
the number of negatives that appeared in the sensitivity confirmed disease group. These are by definition false negatives. This is (1-sensitivity)*n_diseased |
n_diseased |
the number of confirmed disease cases in the sensitivity control group. |
... |
not used |
sens |
the prior sensitivity of the test as a beta_dist |
spec |
the prior specificity of the test as a beta_dist |
confint |
confidence interval limits |
fmt |
a sprintf fmt string for the formatted prevalence label |
chains |
A positive integer specifying the number of Markov chains. The default is 4. |
warmup |
A positive integer specifying the number of warmup (aka burnin) iterations per chain. If step-size adaptation is on (which it is by default), this also controls the number of iterations for which adaptation is run (and hence these warmup samples should not be used for inference). The number of warmup iterations should be smaller than iter. |
iter |
A positive integer specifying the number of iterations for each chain (including warmup). The default is 2000. |
cache_result |
save the result of the sampling in memory for the current session |
a list of dataframes containing the prevalence, sensitivity, and
specificity estimates, and a stanfit
object with the raw fit data
Uses resampling to incorporate uncertainty of sensitivity and specificity into an estimate of true prevalence from a given value of apparent prevalence.
bayesian_panel_complex_model(
  test_results = testerror::.input_data,
  false_pos_controls = NULL, n_controls = NULL,
  false_neg_diseased = NULL, n_diseased = NULL,
  ...,
  sens = uniform_prior(), spec = uniform_prior(),
  panel_sens = uniform_prior(), panel_spec = uniform_prior(),
  panel_name = "Panel",
  confint = 0.95, fmt = "%1.2f%% [%1.2f%% — %1.2f%%]",
  chains = 4, warmup = 1000, iter = 2000,
  cache_result = TRUE
)
test_results |
A dataframe containing the following columns: id (character) - the patient identifier; test (factor) - the test type; result (logical) - the test result. Ungrouped. No default value. |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
false_neg_diseased |
the number of negatives that appeared in the sensitivity confirmed disease group. These are by definition false negatives. This is (1-sensitivity)*n_diseased |
n_diseased |
the number of confirmed disease cases in the sensitivity control group. |
... |
not used |
sens |
the prior sensitivity of the test as a beta_dist |
spec |
the prior specificity of the test as a beta_dist |
panel_sens |
the prior sensitivity of the panel as a beta_dist |
panel_spec |
the prior specificity of the panel as a beta_dist |
panel_name |
the name of the panel for combined result |
confint |
confidence interval limits |
fmt |
a sprintf fmt string for the formatted prevalence label |
chains |
A positive integer specifying the number of Markov chains. The default is 4. |
warmup |
A positive integer specifying the number of warmup (aka burnin) iterations per chain. If step-size adaptation is on (which it is by default), this also controls the number of iterations for which adaptation is run (and hence these warmup samples should not be used for inference). The number of warmup iterations should be smaller than iter. |
iter |
A positive integer specifying the number of iterations for each chain (including warmup). The default is 2000. |
cache_result |
save the result of the sampling in memory for the current session |
This is not vectorised
a list of dataframes containing the prevalence, sensitivity, and
specificity estimates, and a stanfit
object with the raw fit data
The beta distribution priors in this model will actually be converted to logit_normal distributions
bayesian_panel_logit_model(
  panel_pos_obs, panel_n_obs, pos_obs, n_obs, test_names,
  false_pos_controls = NULL, n_controls = NULL,
  false_neg_diseased = NULL, n_diseased = NULL,
  ...,
  sens = sens_prior(), spec = spec_prior(),
  panel_sens = sens_prior(), panel_spec = spec_prior(),
  panel_name = "Panel",
  confint = 0.95, fmt = "%1.2f%% [%1.2f%% — %1.2f%%]",
  chains = 4, warmup = 1000, iter = 2000,
  cache_result = TRUE
)
panel_pos_obs |
the number of positive observations for a given panel of tests |
panel_n_obs |
the number of observations for each component test |
pos_obs |
the number of positive observations for a given test |
n_obs |
the number of observations for a given test |
test_names |
a vector of the component test names in desired order |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
false_neg_diseased |
the number of negatives that appeared in the sensitivity confirmed disease group. These are by definition false negatives. This is (1-sensitivity)*n_diseased |
n_diseased |
the number of confirmed disease cases in the sensitivity control group. |
... |
not used |
sens |
the prior sensitivity of the test as a beta_dist |
spec |
the prior specificity of the test as a beta_dist |
panel_sens |
the prior sensitivity of the panel as a beta_dist |
panel_spec |
the prior specificity of the panel as a beta_dist |
panel_name |
the name of the panel for combined result |
confint |
confidence interval limits |
fmt |
a sprintf fmt string for the formatted prevalence label |
chains |
A positive integer specifying the number of Markov chains. The default is 4. |
warmup |
A positive integer specifying the number of warmup (aka burnin) iterations per chain. If step-size adaptation is on (which it is by default), this also controls the number of iterations for which adaptation is run (and hence these warmup samples should not be used for inference). The number of warmup iterations should be smaller than iter. |
iter |
A positive integer specifying the number of iterations for each chain (including warmup). The default is 2000. |
cache_result |
save the result of the sampling in memory for the current session |
a list of dataframes containing the prevalence, sensitivity, and
specificity estimates, and a stanfit
object with the raw fit data
Bayesian simpler model true prevalence for panel
bayesian_panel_simpler_model(
  panel_pos_obs, panel_n_obs, pos_obs, n_obs, test_names,
  false_pos_controls = NULL, n_controls = NULL,
  false_neg_diseased = NULL, n_diseased = NULL,
  ...,
  sens = uniform_prior(), spec = uniform_prior(),
  panel_sens = uniform_prior(), panel_spec = uniform_prior(),
  panel_name = "Panel",
  confint = 0.95, fmt = "%1.2f%% [%1.2f%% — %1.2f%%]",
  chains = 4, warmup = 1000, iter = 2000,
  cache_result = TRUE
)
panel_pos_obs |
the number of positive observations for a given panel of tests |
panel_n_obs |
the number of observations for each component test |
pos_obs |
the number of positive observations for a given test |
n_obs |
the number of observations for a given test |
test_names |
a vector of the component test names in desired order |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
false_neg_diseased |
the number of negatives that appeared in the sensitivity confirmed disease group. These are by definition false negatives. This is (1-sensitivity)*n_diseased |
n_diseased |
the number of confirmed disease cases in the sensitivity control group. |
... |
not used |
sens |
the prior sensitivity of the test as a beta_dist |
spec |
the prior specificity of the test as a beta_dist |
panel_sens |
the prior sensitivity of the panel as a beta_dist |
panel_spec |
the prior specificity of the panel as a beta_dist |
panel_name |
the name of the panel for combined result |
confint |
confidence interval limits |
fmt |
a sprintf fmt string for the formatted prevalence label |
chains |
A positive integer specifying the number of Markov chains. The default is 4. |
warmup |
A positive integer specifying the number of warmup (aka burnin) iterations per chain. If step-size adaptation is on (which it is by default), this also controls the number of iterations for which adaptation is run (and hence these warmup samples should not be used for inference). The number of warmup iterations should be smaller than iter. |
iter |
A positive integer specifying the number of iterations for each chain (including warmup). The default is 2000. |
cache_result |
save the result of the sampling in memory for the current session |
a list of dataframes containing the prevalence, sensitivity, and
specificity estimates, and a stanfit
object with the raw fit data
Execute one of a set of Bayesian models
bayesian_panel_true_prevalence_model(..., model_type = c("logit", "simpler", "complex"))
... |
Arguments passed on to the selected model (bayesian_panel_logit_model, bayesian_panel_simpler_model or bayesian_panel_complex_model) |
model_type |
The Bayesian model to use - one of "logit", "simpler" or "complex" |
A dataframe containing the following columns:
test (character) - the name of the test or panel
prevalence.lower (numeric) - the lower estimate
prevalence.median (numeric) - the median estimate
prevalence.upper (numeric) - the upper estimate
prevalence.method (character) - the method of estimation
prevalence.label (character) - a formatted label of the true prevalence estimate with CI
Ungrouped.
No default value.
Execute one of a set of Bayesian models
bayesian_true_prevalence_model(..., model_type = c("logit", "simpler"))
... |
Arguments passed on to the selected model (bayesian_component_logit_model or bayesian_component_simpler_model) |
model_type |
The Bayesian model to use - one of "logit" or "simpler" |
A dataframe containing the following columns:
test (character) - the name of the test or panel
prevalence.lower (numeric) - the lower estimate
prevalence.median (numeric) - the median estimate
prevalence.upper (numeric) - the upper estimate
prevalence.method (character) - the method of estimation
prevalence.label (character) - a formatted label of the true prevalence estimate with CI
Ungrouped.
No default value.
Generate a beta distribution out of probabilities, or positive and negative counts
beta_dist(..., p = NULL, q = NULL, n = NULL, shape1 = NULL, shape2 = NULL)
... |
not used |
p |
the first shape / the probability or count of success |
q |
(optional) the second shape / the probability or count of failure |
n |
(optional) the number of trials. |
shape1 |
the first shape parameter (use this to force interpretation as shape) |
shape2 |
the second shape parameter (use this to force interpretation as shape) |
either a single beta_dist object or a list of beta_dist objects
beta_dist(shape1 = c(1,2,3), shape2 = c(3,2,1))
beta_dist(p = 0.7, n = 2)
Fit a beta distribution to data using method of moments
beta_fit(samples, na.rm = FALSE)
samples |
a set of probabilities |
na.rm |
should we ignore NA values |
a beta_dist
S3 object fitted to the data.
beta_fit(stats::rbeta(10000,40,60))
beta_fit(stats::rbeta(10000,1,99))
Generate concave beta distribution parameters from a median and confidence intervals
beta_params(median, lower, upper, confint = 0.95, widen = 1, limit = 1, ...)
median |
the median of the probability given |
lower |
the lower ci of the probability given |
upper |
the upper ci of the probability given |
confint |
the ci limits |
widen |
widen the spread of the final beta by this factor |
limit |
the lowest possible value for the shape parameters of the resulting beta distribution |
... |
not used |
a list with shape1, shape2 values, and d, p, q and r functions
beta = beta_params(0.25, 0.1, 0.3)
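The returned list is documented to contain shape1, shape2 and d, p, q, r functions; the sketch below assumes those elements mirror the stats::dbeta family. That is an assumption for illustration only, so check the returned object before relying on it.
beta = beta_params(0.25, 0.1, 0.3)
beta$shape1; beta$shape2         # fitted shape parameters
# beta$q(c(0.025, 0.5, 0.975))   # assumed: quantiles of the fitted distribution
# beta$r(10)                     # assumed: 10 random draws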
The resulting logitnorm distribution will have the given median. The confidence intervals will not match those provided, as they are used as an inter-quartile range.
ci_to_logitnorm(median, lower, upper, ci = 0.95, fix_median = TRUE, ...)
median |
the median of the logitnorm distribution |
lower |
the lower CI |
upper |
the upper CI |
ci |
the confidence limits |
fix_median |
make the median of the logitnorm be the same as the median given. This can cause issues when very skewed distributions are used |
... |
not used |
a tibble with mu and sigma columns
Format a beta distribution
## S3 method for class 'beta_dist'
format(x, glue = .default_beta_dist_format(), ...)
x |
the beta distribution |
glue |
a glue spec which may refer to any of the distribution's summary values (e.g. mean) |
... |
not used |
nothing
format(beta_dist(shape1=3,shape2=6), "{format(mean*100, digits=3)}%")
Format a beta distribution list
## S3 method for class 'beta_dist_list'
format(x, ...)
x |
the beta distribution list |
... |
Arguments passed on to the format method for beta_dist |
nothing
Calculates a p-value for a count of positive test results based on false positive (specificity) controls. The null hypothesis is that the prevalence of the disease is zero.
fp_p_value(
  pos_obs, n_obs, false_pos_controls, n_controls,
  format = "%1.3g", lim = 1e-04, bonferroni = NULL, ...
)
pos_obs |
the number of positive observations for a given test |
n_obs |
the number of observations for a given test |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
format |
a sprintf fmt string for the p-value |
lim |
a lower value to display |
bonferroni |
the number of simultaneous hypotheses that are being tested |
... |
not used |
This p-value does not tell you whether the count can be trusted; it only tells you whether the prevalence of the disease is significantly greater than zero given this observation.
a vector of p-values for the count
# calculate p-values for counts derived from 300 samples
# 10 observations is within noise of test
# 20 observations is unlikely on 1200 observations
fp_p_value(c(10,2,4,3,10,20), 1200, c(0,0,2,0,2,0)+2, 800)
# if the same observations are made against a smaller group then we get
# a positive result for 10
fp_p_value(c(10,2,4,3,10,20), 1000, c(2,2,4,2,4,2), 800)
tibble::tibble(
  x = c(1,2,5,10,20,40,20,20,20,20,20),
  n = 1000,
  fp_controls = c(0,0,0,0,0,0,0,1,2,3,4)+2,
  n_controls = 800
) %>%
  dplyr::mutate(
    p_value = fp_p_value(x, n, fp_controls, n_controls)
  ) %>%
  dplyr::glimpse()
Identify the minimum number of positive test result observations needed to be confident the disease has a non-zero prevalence.
fp_signif_level(
  n_obs, false_pos_controls, n_controls,
  bonferroni = NULL, ..., spec = NULL
)
n_obs |
the number of tests performed. |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
bonferroni |
the number of simultaneous tests considered. |
... |
not used |
spec |
a prior value for specificity as a beta_dist |
a vector of test positive counts which are the lowest significant value that could be regarded as not due to chance.
# lowest significant count of positives in 1000 tests
fp_signif_level(1000, false_pos_controls = 0:5, n_controls = 800)
fp_signif_level(c(1000,800,600,400), false_pos_controls = 1:4, n_controls = 800)
Get a parameter of the beta_dist
get_beta_shape(x, type = c("shape1", "shape2", "conc"))
x |
a beta_dist or beta_dist_list |
type |
the parameter to extract - one of "shape1", "shape2" or "conc" |
a vector of doubles
get_beta_shape(beta_dist(shape1=1, shape2=1))
get_beta_shape(beta_dist(shape1=2:5, shape2=1:4))
Get a parameter of the beta_dist
## S3 method for class 'beta_dist'
get_beta_shape(x, type = c("shape1", "shape2", "conc"))
x |
a beta_dist |
type |
the parameter to extract - one of "shape1", "shape2" or "conc" |
a vector of doubles
get_beta_shape(beta_dist(shape1=1, shape2=1))
get_beta_shape(beta_dist(shape1=2:5, shape2=1:4))
Get a parameter of the beta_dist
## S3 method for class 'beta_dist_list'
get_beta_shape(x, type = c("shape1", "shape2", "conc"))
x |
a beta_dist_list |
type |
the parameter to extract - one of "shape1", "shape2" or "conc" |
a vector of doubles
get_beta_shape(beta_dist(shape1=1, shape2=1))
get_beta_shape(beta_dist(shape1=2:5, shape2=1:4))
The inverse logit function
inv_logit(y)
y |
a number between -Inf and Inf |
a number between 0 and 1
Detect the length of a beta distribution
## S3 method for class 'beta_dist'
length(x, ...)
x |
the beta distribution |
... |
not used |
always 1
Detect the length of a beta distribution list
## S3 method for class 'beta_dist_list'
length(x, ...)
x |
the beta distribution list |
... |
not used |
the length of the list
The logit function
logit(x)
x |
a number between 0 and 1 |
a number between -Inf and Inf
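As a quick check of the pair of transforms (these are the standard logistic-transform identities, not values taken from the package source):
logit(0.5)            # 0
inv_logit(0)          # 0.5
inv_logit(logit(0.9)) # 0.9, up to floating point error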
This assumes that OR ~ RR, which is only true if controls >> cases. The OR method can be used in test-negative designs where disease positive relates to vaccine-treatable disease and disease negative relates to non-vaccine-treatable disease.
odds_ratio_ve(
  vaccinatedCase, unvaccinatedCase,
  vaccinatedControl, unvaccinatedControl,
  confint = c(0.025, 0.975)
)
vaccinatedCase |
count of disease positive vaccine positive |
unvaccinatedCase |
count of disease positive vaccine negative |
vaccinatedControl |
count of disease negative vaccine positive |
unvaccinatedControl |
count of disease negative vaccine negative |
confint |
the confidence intervals |
a dataframe
tibble::tibble(
  N_vacc = 42240, N_unvacc = 42256,
  N_vacc_pn_pos = 49, N_unvacc_pn_pos = 90
) %>%
  dplyr::mutate(
    odds_ratio_ve(N_vacc_pn_pos, N_unvacc_pn_pos,
      N_vacc - N_vacc_pn_pos, N_unvacc - N_unvacc_pn_pos)
  )
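For intuition, vaccine effectiveness from a 2x2 table is conventionally 1 - OR; the hand calculation below is only an illustrative check of the point estimate for the documented counts, not the package implementation.
or = (49 / (42240 - 49)) / (90 / (42256 - 90))
1 - or # roughly 0.46, i.e. about 46% effectiveness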
For a given combination of prevalence, sensitivity and specificity this gives the critical threshold at which true prevalence equals apparent prevalence
optimal_performance(p = NULL, sens = NULL, spec = NULL)
p |
the prevalence or apparent prevalence |
sens |
the sensitivity of the test |
spec |
the specificity of the test |
the combination of sensitivity and specificity where apparent prevalence equals true prevalence
optimal_performance(p=0.1, sens=0.75)
optimal_performance(p=0.005, spec=0.9975)
Expected test panel prevalence assuming independence
panel_prevalence(p, na.rm = FALSE)
p |
a vector of prevalences of the component tests |
na.rm |
remove NA values? |
a single value for the expected prevalence of a positive result from the combination of the tests
panel_prevalence(p = rep(0.01,24))
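Under the stated independence assumption the panel-positive prevalence is presumably 1 - prod(1 - p); the hand calculation below is an assumption for comparison with the documented example, not package code.
p = rep(0.01, 24)
1 - prod(1 - p)          # approximately 0.214
# panel_prevalence(p = p) # should agree if the assumption above holds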
Calculate the sensitivity of a combination of tests, where the tests are testing for different conditions and positive results are combined into a panel using a logical OR. Because false negatives from each component of a panel can be cancelled out by true positives, or by false positives, from other components of the test, depending on the prevalence of the underlying conditions, the combined false negative rate is lower the more cases there are. At the same time, because false positives from each component combine, the false positive rate for the panel is higher than that of the individual components (and hence the true negative rate, a.k.a. specificity, is lower).
panel_sens(p, sens, spec, na.rm = FALSE)
p |
the true prevalence (one of p or ap must be given) |
sens |
a vector of sensitivities of the component tests |
spec |
a vector of specificity of the component tests |
na.rm |
remove NA values? |
an effective sensitivity for the combination of the tests
#TODO
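The example above is left as a TODO in the help page; a minimal illustrative call, following the documented signature with invented values for 24 component tests, might look like:
# illustrative values only - 24 components with identical assumed performance
panel_sens(p = rep(0.01, 24), sens = rep(0.75, 24), spec = rep(0.9975, 24))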
Estimate the sensitivity of a combination of tests, where the tests are testing for different conditions and positive results are combined into a panel using a logical OR. Because false negatives from each component of a panel can be cancelled out by true positives, or by false positives, from other components of the test, depending on the prevalence of the underlying conditions, the combined false negative rate is lower the more cases there are. At the same time, because false positives from each component combine, the false positive rate for the panel is higher than that of the individual components (and hence the true negative rate, a.k.a. specificity, is lower).
panel_sens_estimator(ap, sens, spec, na.rm = FALSE)
ap |
the apparent prevalence or test positivity (one of p or ap must be given) |
sens |
a vector of sensitivities of the component tests |
spec |
a vector of specificity of the component tests |
na.rm |
remove NA values? |
an effective sensitivity for the combination of the tests
#TODO
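As above, the example is left as a TODO; an illustrative call following the documented signature, with invented apparent prevalences, might look like:
# illustrative values only
panel_sens_estimator(ap = rep(0.03, 24), sens = rep(0.75, 24), spec = rep(0.9975, 24))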
Calculate the specificity of a combination of tests, where the tests are testing for different conditions and positive results are combined into a panel using a logical OR. Because false positives from each component of a panel combine, the false positive rate for the panel is higher than that of the individual components (and hence the true negative rate, a.k.a. specificity, is lower).
panel_spec(spec, na.rm = FALSE)
spec |
a vector of specificity of the component tests |
na.rm |
remove NA values? |
a single value for the effective specificity of the combination of the tests
panel_spec(spec = rep(0.9975,24))
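Because a panel is negative only when every component is negative, the panel specificity under independence is presumably prod(spec); the hand calculation below is an assumption for comparison with the documented example, not package code.
spec = rep(0.9975, 24)
prod(spec)               # approximately 0.942
# panel_spec(spec = spec) # should agree if the assumption above holds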
Uses Lang-Reiczigel estimators to incorporate uncertainty of sensitivity and specificity into an estimate of true prevalence from a given value of apparent prevalence.
prevalence_lang_reiczigel(
  pos_obs, n_obs,
  false_pos_controls = NULL, n_controls = NULL,
  false_neg_diseased = NULL, n_diseased = NULL,
  ...,
  spec = spec_prior(), sens = sens_prior(),
  confint = 0.95, fmt = "%1.2f%% [%1.2f%% — %1.2f%%]"
)
pos_obs |
the number of positive observations for a given test |
n_obs |
the number of observations for a given test |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
false_neg_diseased |
the number of negatives that appeared in the sensitivity confirmed disease group. These are by definition false negatives. This is (1-sensitivity)*n_diseased |
n_diseased |
the number of confirmed disease cases in the sensitivity control group. |
... |
not used |
spec |
the prior specificity of the test as a beta_dist |
sens |
the prior sensitivity of the test as a beta_dist |
confint |
confidence interval limits |
fmt |
a sprintf fmt string for the formatted prevalence label |
an estimate of the true prevalence
Uses resampling to incorporate uncertainty of sensitivity and specificity into an estimate of true prevalence from a given value of apparent prevalence.
prevalence_panel_lang_reiczigel(
  panel_pos_obs, panel_n_obs, pos_obs, n_obs,
  false_pos_controls = NULL, n_controls = NULL,
  false_neg_diseased = NULL, n_diseased = NULL,
  ...,
  spec = spec_prior(), sens = sens_prior(),
  confint = 0.95, fmt = "%1.2f%% [%1.2f%% — %1.2f%%]",
  samples = 1000
)
panel_pos_obs |
the number of positive observations for a given panel of tests |
panel_n_obs |
a vector of the number of observations for each component test |
pos_obs |
the number of positive observations for a given test |
n_obs |
the number of observations for a given test |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
false_neg_diseased |
the number of negatives that appeared in the sensitivity confirmed disease group. These are by definition false negatives. This is (1-sensitivity)*n_diseased |
n_diseased |
the number of confirmed disease cases in the sensitivity control group. |
... |
Arguments passed on to the underlying Lang-Reiczigel estimator |
spec |
the prior specificity of the test as a beta_dist |
sens |
the prior sensitivity of the test as a beta_dist |
confint |
confidence interval limits |
fmt |
a sprintf fmt string for the formatted prevalence label |
samples |
number of random draws of sensitivity and specificity (optional - default 1000) |
This is not vectorised
an estimate of the true prevalence
#TODO
Print a beta distribution
## S3 method for class 'beta_dist'
print(x, ...)
x |
the beta distribution |
... |
not used |
nothing
Print a beta distribution
## S3 method for class 'beta_dist_list'
print(x, ...)
x |
the beta distribution |
... |
not used |
nothing
The RR method cannot be used in test negative designs where disease positive relates to vaccine treatable disease and disease negative relates to non vaccine treatable disease. It is only relevant in prospective designs with a vaccinated and unvaccinated group.
relative_risk_ve(
  vaccinatedCase, unvaccinatedCase,
  vaccinatedControl, unvaccinatedControl,
  confint = c(0.025, 0.975)
)
vaccinatedCase |
count of disease positive vaccine positive |
unvaccinatedCase |
count of disease positive vaccine negative |
vaccinatedControl |
count of disease negative vaccine positive |
unvaccinatedControl |
count of disease negative vaccine negative |
confint |
the confidence intervals |
a dataframe
tibble::tibble(
  N_vacc = 42240, N_unvacc = 42256,
  N_vacc_pn_pos = 49, N_unvacc_pn_pos = 90
) %>%
  dplyr::mutate(
    relative_risk_ve(N_vacc_pn_pos, N_unvacc_pn_pos,
      N_vacc - N_vacc_pn_pos, N_unvacc - N_unvacc_pn_pos)
  )
# dplyr::bind_rows(lapply(
#   c("katz.log", "adj.log", "bailey", "koopman", "noether", "sinh-1", "boot"),
#   function(m) {tibble::as_tibble(
#     1-DescTools::BinomRatioCI(N_vacc_pn_pos, N_vacc, N_unvacc_pn_pos, N_unvacc, method = m)
#   ) %>% dplyr::mutate(
#     method = m
#   )}
# ))
Repeat a beta_dist
## S3 method for class 'beta_dist'
rep(x, times, ...)
x |
a beta_dist |
times |
the number of times to repeat it |
... |
not used |
a beta_dist_list
This estimator runs into problems with small AP, as the Rogan-Gladen conversion really uses the expected apparent prevalence. Getting the expected value of the AP distribution is complex, and the expected value given a single observation is not in general the ratio of positives to count. The expected apparent prevalence is never less than (1-specificity) but the observed value often is. To deal with this the R-G estimator truncates at zero.
rogan_gladen(ap, sens, spec)
ap |
the expected apparent prevalence. |
sens |
the sensitivity of the test |
spec |
the specificity of the test |
the estimate of 'true prevalence'
rogan_gladen(50/200, 0.75, 0.97)
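The Rogan-Gladen correction is conventionally (ap + spec - 1) / (sens + spec - 1), truncated at zero; the hand calculation below mirrors the documented example as a sketch of that conventional formula, which may not be exactly what the package implements.
ap = 50/200; sens = 0.75; spec = 0.97
max(0, (ap + spec - 1) / (sens + spec - 1)) # approximately 0.306
# rogan_gladen(ap, sens, spec)              # documented call for comparison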
If undefined this is 0.70 (0.11 - 1.00). This can be set with options(testerror.sens_prior = beta_dist(p=??, n=??))
sens_prior()
a beta_dist
If undefined this is 0.98 (0.71 - 1.00). This can be set with options(testerror.spec_prior = beta_dist(p=??, n=??))
spec_prior()
a beta_dist
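Both priors can be overridden for the current session via options(); the p and n values below are arbitrary placeholders for illustration, not recommendations.
options(
  testerror.sens_prior = beta_dist(p = 0.8, n = 50),
  testerror.spec_prior = beta_dist(p = 0.99, n = 400)
)
sens_prior()
spec_prior()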
Uses apparent prevalence, and uncertain estimates of test sensitivity and test specificity, for the 3 methods described in Supplementary 2. This function works for a single panel per dataframe; multiple panels will need to call this function multiple times in a group_modify.
true_panel_prevalence(
  test_results = testerror::.input_data,
  false_pos_controls = NULL, n_controls = NULL,
  false_neg_diseased = NULL, n_diseased = NULL,
  ...,
  sens = NULL, spec = NULL,
  panel_name = "Panel",
  confint = 0.95,
  method = c("rogan-gladen", "lang-reiczigel", "bayes"),
  na.rm = TRUE
)
test_results |
A dataframe containing the following columns: id (character) - the patient identifier; test (factor) - the test type; result (logical) - the test result. Ungrouped. No default value. |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
false_neg_diseased |
the number of negatives that appeared in the sensitivity confirmed disease group. These are by definition false negatives. This is (1-sensitivity)*n_diseased |
n_diseased |
the number of confirmed disease cases in the sensitivity control group. |
... |
Arguments passed on to the selected estimation method |
sens |
the prior sensitivity of the test as a beta_dist |
spec |
the prior specificity of the test as a beta_dist |
panel_name |
the name of the panel for combined result |
confint |
confidence interval limits |
method |
one of "rogan-gladen", "lang-reiczigel" or "bayes" |
na.rm |
exclude patients with missing results |
A dataframe containing the following columns:
test (character) - the name of the test or panel
prevalence.lower (numeric) - the lower estimate
prevalence.median (numeric) - the median estimate
prevalence.upper (numeric) - the upper estimate
prevalence.method (character) - the method of estimation
prevalence.label (character) - a formatted label of the true prevalence estimate with CI
Ungrouped.
No default value.
tmp = testerror:::panel_example()
true_panel_prevalence(
  test_results = tmp$samples %>% dplyr::select(id, test, result = observed),
  false_pos_controls = tmp$performance$false_pos_controls,
  n_controls = tmp$performance$n_controls,
  false_neg_diseased = tmp$performance$false_neg_diseased,
  n_diseased = tmp$performance$n_diseased,
  method = "rogan-gladen"
)
Calculate an estimate of true prevalence from apparent prevalence, and uncertain estimates of test sensitivity and test specificity, using one of 3 methods.
true_prevalence(
  pos_obs, n_obs,
  false_pos_controls = NULL, n_controls = NULL,
  false_neg_diseased = NULL, n_diseased = NULL,
  confint = 0.95,
  method = c("lang-reiczigel", "rogan-gladen", "bayes"),
  ...,
  spec = NULL, sens = NULL
)
pos_obs |
the number of positive observations for a given test |
n_obs |
the number of observations for a given test |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
false_neg_diseased |
the number of negatives that appeared in the sensitivity confirmed disease group. These are by definition false negatives. This is (1-sensitivity)*n_diseased |
n_diseased |
the number of confirmed disease cases in the sensitivity control group. |
confint |
confidence interval limits |
method |
one of "lang-reiczigel", "rogan-gladen" or "bayes" |
... |
Arguments passed on to the selected estimation method |
spec |
the prior specificity of the test as a beta_dist |
sens |
the prior sensitivity of the test as a beta_dist |
A dataframe containing the following columns:
test (character) - the name of the test or panel
prevalence.lower (numeric) - the lower estimate
prevalence.median (numeric) - the median estimate
prevalence.upper (numeric) - the upper estimate
prevalence.method (character) - the method of estimation
prevalence.label (character) - a formatted label of the true prevalence estimate with CI
Ungrouped.
No default value.
true_prevalence(c(1:50), 200, 2, 800, 25, 75)
true_prevalence(c(1:10)*2, 200, 25, 800, 1, 6, method = "rogan-gladen")
true_prevalence(c(1:10)*2, 200, 5, 800, 1, 6, method = "bayes")
Uses resampling to incorporate uncertainty of sensitivity and specificity into an estimate of true prevalence from a given value of apparent prevalence.
uncertain_panel_rogan_gladen(
  panel_pos_obs, panel_n_obs, pos_obs, n_obs,
  false_pos_controls = NULL, n_controls = NULL,
  false_neg_diseased = NULL, n_diseased = NULL,
  ...,
  sens = sens_prior(), spec = spec_prior(),
  confint = 0.95, fmt = "%1.2f%% [%1.2f%% — %1.2f%%]",
  samples = 1000
)
panel_pos_obs |
the number of positive observations for a given panel of tests |
panel_n_obs |
the number of observations for each component test |
pos_obs |
the number of positive observations for a given test |
n_obs |
the number of observations for a given test |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
false_neg_diseased |
the number of negatives that appeared in the sensitivity confirmed disease group. These are by definition false negatives. This is (1-sensitivity)*n_diseased |
n_diseased |
the number of confirmed disease cases in the sensitivity control group. |
... |
Arguments passed on to
|
sens |
the prior sensitivity of the test as a beta_dist |
spec |
the prior specificity of the test as a beta_dist |
confint |
confidence interval limits |
fmt |
a sprintf fmt string for the formatted prevalence label |
samples |
number of random draws of sensitivity and specificity |
This is not vectorised
an estimate of the true prevalence
Propagate component test sensitivity and specificity into panel specificity assuming a known set of observations of component apparent prevalence
uncertain_panel_sens_estimator(
  pos_obs, n_obs,
  false_pos_controls = NULL, n_controls = NULL,
  false_neg_diseased = NULL, n_diseased = NULL,
  ...,
  sens = sens_prior(), spec = spec_prior(),
  samples = 1000, fit_beta = FALSE
)
pos_obs |
the number of positive observations for a given test |
n_obs |
the number of observations for a given test |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
false_neg_diseased |
the number of negatives that appeared in the sensitivity confirmed disease group. These are by definition false negatives. This is (1-sensitivity)*n_diseased |
n_diseased |
the number of confirmed disease cases in the sensitivity control group. |
... |
not used |
sens |
the prior sensitivity of the test as a beta_dist |
spec |
the prior specificity of the test as a beta_dist |
samples |
number of random draws of sensitivity and specificity |
fit_beta |
return the result as a fitted beta_dist |
a vector of possible sensitivity values
uncertain_panel_sens_estimator(
  pos_obs = c(30,10,20,10,5),
  n_obs = 1000,
  false_pos_controls = c(20,15,15,15,15),
  n_controls = c(800,800,800,800,800),
  false_neg_diseased = c(20,25,20,20,15),
  n_diseased = c(100,100,100,100,100),
  fit_beta = TRUE
)
Propagate component test specificity into panel specificity
uncertain_panel_spec(
  false_pos_controls = NULL, n_controls = NULL,
  ...,
  spec = spec_prior(), samples = 1000,
  na.rm = FALSE, fit_beta = FALSE
)
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
... |
not used |
spec |
the prior specificity of the test as a beta_dist |
samples |
number of random draws of sensitivity and specificity |
na.rm |
remove missing values |
fit_beta |
return the result as a fitted beta_dist |
a vector of possible specificities for the panel or a fitted beta_dist
uncertain_panel_spec(c(2,3,4,2,2), c(800,800,800,800,800), fit_beta=TRUE)
Uses resampling to incorporate uncertainty of sensitivity and specificity into an estimate of true prevalence from a given value of apparent prevalence.
uncertain_rogan_gladen(
  pos_obs, n_obs,
  false_pos_controls = NULL, n_controls = NULL,
  false_neg_diseased = NULL, n_diseased = NULL,
  ...,
  spec = spec_prior(), sens = sens_prior(),
  samples = 1000, confint = 0.95,
  fmt = "%1.2f%% [%1.2f%% — %1.2f%%]",
  seed = NA
)
pos_obs |
the number of positive observations for a given test |
n_obs |
the number of observations for a given test |
false_pos_controls |
the number of positives that appeared in the specificity disease-free control group. These are by definition false positives. This is (1-specificity)*n_controls |
n_controls |
the number of controls in the specificity disease-free control group. |
false_neg_diseased |
the number of negatives that appeared in the sensitivity confirmed disease group. These are by definition false negatives. This is (1-sensitivity)*n_diseased |
n_diseased |
the number of confirmed disease cases in the sensitivity control group. |
... |
not used |
spec |
the prior specificity of the test as a beta_dist |
sens |
the prior sensitivity of the test as a beta_dist |
samples |
number of random draws of sensitivity and specificity |
confint |
confidence interval limits |
fmt |
a sprintf fmt string for the formatted prevalence label |
seed |
set seed for reproducibility |
an estimate of the true prevalence
uncertain_rogan_gladen(
  pos_obs = 20, n_obs = 1000,
  false_pos_controls = 10, n_controls = 800,
  false_neg_diseased = 20, n_diseased = 100)
uncertain_rogan_gladen(
  pos_obs = 5, n_obs = 1000,
  sens = beta_dist(p = 0.75, n = 200),
  spec = beta_dist(p = 0.9975, n = 800))
uncertain_rogan_gladen(
  pos_obs = c(5,10), n_obs = c(1000,1000),
  false_pos_controls = c(2,1), n_controls = c(800,800),
  false_neg_diseased = c(25,20), n_diseased = c(100,100))
For a given sensitivity and specificity this gives the critical threshold beyond which test error introduces underestimation rather than overestimation
underestimation_threshold(sens, spec)
sens |
the sensitivity of the test |
spec |
the specificity of the test |
the value where apparent prevalence equals true prevalence
tmp1 = underestimation_threshold(0.75, 0.97)
tmp2 = rogan_gladen(tmp1, 0.75, 0.97)
if (abs(tmp1 - tmp2) > 0.0000001) stop("error")
Uninformative prior
uninformed_prior()
a beta_dist
Update the posterior of a beta_dist
update_posterior(x, ..., pos = NULL, neg = NULL, n = NULL)
x |
a beta_dist or beta_dist_list |
... |
not used |
pos |
positive observation(s) |
neg |
negative observation(s) |
n |
number of observations |
a new beta_dist or beta_dist_list
update_posterior(beta_dist(shape1=1,shape2=1), neg=10, n=30)
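Assuming the standard Beta-binomial conjugate update (shape1 plus positives, shape2 plus negatives), the documented call updates a flat Beta(1,1) prior with 20 positives and 10 negatives:
update_posterior(beta_dist(shape1 = 1, shape2 = 1), neg = 10, n = 30)
# under that assumption this should be equivalent to beta_dist(shape1 = 21, shape2 = 11)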
Update the posterior of a beta_dist
## S3 method for class 'beta_dist'
update_posterior(x, ..., pos = NULL, neg = NULL, n = NULL)
x |
a beta_dist |
... |
not used |
pos |
positive observation(s) |
neg |
negative observation(s) |
n |
number of observations |
a new beta_dist or beta_dist_list
update_posterior(beta_dist(shape1=1,shape2=1), neg=10, n=30)
Update the posterior of a beta_dist
## S3 method for class 'beta_dist_list'
update_posterior(x, ..., pos = NULL, neg = NULL, n = NULL)
x |
a beta_dist_list |
... |
not used |
pos |
positive observation(s) |
neg |
negative observation(s) |
n |
number of observations |
a new beta_dist or beta_dist_list
update_posterior(beta_dist(shape1=1,shape2=1), neg=10, n=30)