
weitrix is a jack of all trades. This vignette demonstrates the use of weitrix with proportion data. One difficulty is that when a proportion is exactly zero its variance should be exactly zero as well, leading to an infinite weight. To get around this, we slightly inflate the estimate of the variance for proportions near zero. This is not perfect, but calibration plots allow us to do this with our eyes open, and provide reassurance that it will not greatly interfere with downstream analysis.

We look at GSE99970, a SLAM-Seq experiment. In SLAM-Seq a portion of uracils are replaced with 4-thiouridine (s4U) during transcription, which by some clever chemistry leads to “T”s becoming “C”s in the resulting RNA-Seq reads. The proportion of converted “T”s indicates the proportion of transcripts newly produced while s4U was applied. In this experiment mouse embryonic stem cells were exposed to s4U for 24 hours; the s4U was then washed out and cells were sampled at a series of time points. The experiment lets us track the decay rates of transcripts.

library(tidyverse)
library(ComplexHeatmap)
library(weitrix)

# BiocParallel supports multiple backends. 
# If the default hangs or errors, try others.
# The most reliable backend is serial processing.
BiocParallel::register( BiocParallel::SerialParam() )
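For example, on a multi-core, Unix-like machine a parallel backend could be registered instead (a hypothetical alternative, not required for this vignette):

# Possible alternative (hypothetical, Unix-like systems): parallel processing
# BiocParallel::register( BiocParallel::MulticoreParam(workers=4) )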

1 Load the data

The quantity of interest here is the proportion of “T”s converted to “C”s. We load the coverage and conversions, and calculate this ratio.

As an initial weighting, we use the coverage. Notionally, each proportion is an average of this many 1s and 0s. The more values averaged, the more accurate this is.
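As a rough numeric illustration (made-up numbers, not from this dataset), under a simple binomial model the standard error of a proportion shrinks as coverage grows, which is why higher-coverage proportions deserve larger weights:

# Toy illustration (made-up numbers): the same underlying proportion
# is estimated more precisely when more "T"s are observed.
p <- 0.02
sqrt(p*(1-p)/10)    # standard error from 10 observed "T"s, about 0.044
sqrt(p*(1-p)/1000)  # standard error from 1000 observed "T"s, about 0.0044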

coverage <- system.file("GSE99970", "GSE99970_T_coverage.csv.gz", package="weitrix") %>%
    read_csv() %>%
    column_to_rownames("gene") %>%
    as.matrix()

conversions <- system.file("GSE99970", "GSE99970_T_C_conversions.csv.gz", package="weitrix") %>%
    read_csv() %>%
    column_to_rownames("gene") %>%
    as.matrix()

# Calculate proportions, create weitrix
wei <- as_weitrix( conversions/coverage, coverage )
dim(wei)
## [1] 22281    27
# We will only use genes where at least 30 conversions were observed
good <- rowSums(conversions) >= 30
wei <- wei[good,]

# Add some column data from the names
parts <- str_match(colnames(wei), "(.*)_(Rep_.*)")
colData(wei)$group <- fct_inorder(parts[,2])
colData(wei)$rep <- fct_inorder(parts[,3])
rowData(wei)$mean_coverage <- rowMeans(weitrix_weights(wei))

wei
## class: SummarizedExperiment 
## dim: 11059 27 
## metadata(1): weitrix
## assays(2): x weights
## rownames(11059): 0610005C13Rik 0610007P14Rik ... Zzef1 Zzz3
## rowData names(1): mean_coverage
## colnames(27): no_s4U_Rep_1 no_s4U_Rep_2 ... 24h_chase_Rep_2
##   24h_chase_Rep_3
## colData names(2): group rep
colMeans(weitrix_x(wei), na.rm=TRUE)
##     no_s4U_Rep_1     no_s4U_Rep_2     no_s4U_Rep_3    24h_s4U_Rep_1 
##     0.0009467780     0.0008692730     0.0009657405     0.0228616995 
##    24h_s4U_Rep_2    24h_s4U_Rep_3   0h_chase_Rep_1   0h_chase_Rep_2 
##     0.0227623930     0.0224932745     0.0238126807     0.0233169898 
##   0h_chase_Rep_3 0.5h_chase_Rep_1 0.5h_chase_Rep_2 0.5h_chase_Rep_3 
##     0.0232719043     0.0223200231     0.0235324380     0.0231107497 
##   1h_chase_Rep_1   1h_chase_Rep_2   1h_chase_Rep_3   3h_chase_Rep_1 
##     0.0211553204     0.0216421689     0.0212003785     0.0138988066 
##   3h_chase_Rep_2   3h_chase_Rep_3   6h_chase_Rep_1   6h_chase_Rep_2 
##     0.0150091659     0.0149480630     0.0068880708     0.0072561156 
##   6h_chase_Rep_3  12h_chase_Rep_1  12h_chase_Rep_2  12h_chase_Rep_3 
##     0.0072943737     0.0022597908     0.0021795891     0.0021219205 
##  24h_chase_Rep_1  24h_chase_Rep_2  24h_chase_Rep_3 
##     0.0012122873     0.0010844372     0.0010793906

2 Calibrate

We want to estimate the variance of each observation. We could model this exactly: each observed “T” is encoded as 0 for unconverted or 1 for converted, a Bernoulli-distributed value with mean \(\mu\) and variance \(\mu(1-\mu)\). Each observed proportion is then an average of such values. For \(n\) such values, the variance of this average would be

\[ \sigma^2 = \frac{\mu(1-\mu)}{n} \]
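Since weights are inverse variances, under this model each observation would receive a weight of (writing \(w\) for the weight, a symbol introduced here for illustration):

\[ w = \frac{1}{\sigma^2} = \frac{n}{\mu(1-\mu)} \]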

However, if our estimate of \(\mu\) is exactly zero, the variance becomes zero and the weight infinite. To avoid infinite weights, we clip \(\mu\) away from zero (and, symmetrically, away from one) when estimating the variance.

This is achieved using the mu_min and mu_max arguments to weitrix_calibrate_all. A natural minimum to clip at is 0.001, the background rate of apparent T to C conversions due to sequencing errors.
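In code terms the clipping amounts to something like the following (a sketch of the idea only, not the package's internal code):

# Sketch only: bound mu away from 0 and 1 before computing the
# Bernoulli variance, so that weights stay finite.
# mu_clipped <- pmin(pmax(mu, mu_min), mu_max)
# variance   <- mu_clipped * (1 - mu_clipped) / n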

A further possible problem is that biological variation does not shrink with larger and larger \(n\), so dividing by \(n\) may be over-optimistic. We therefore fit a gamma GLM with log link to the squared residuals, with \(\log n\) (the coverage, stored in weights) as a predictor and the Bernoulli variance \(\mu(1-\mu)\) as an offset. This GLM is then used to assign calibrated weights.
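To make the idea concrete, here is a rough, self-contained sketch of the same kind of model fitted to simulated data (names such as sim_n and glm_fit are introduced here; this is not weitrix's internal code):

# Rough sketch of the calibration idea on simulated data (not weitrix internals)
set.seed(1)
sim_n   <- rpois(1000, 50) + 1              # coverage per observation ("weight")
sim_mu  <- runif(1000, 0.001, 0.05)         # fitted proportions
sim_var <- sim_mu*(1-sim_mu)/sim_n          # variance under the Bernoulli model
resid2  <- rnorm(1000, 0, sqrt(sim_var))^2  # squared residuals
glm_fit <- glm(resid2 ~ log(sim_n) + offset(log(sim_mu*(1-sim_mu))),
    family=Gamma(link="log"))
coef(glm_fit)  # the log(sim_n) coefficient should come out close to -1
# Calibrated weights would then be 1/fitted(glm_fit).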

# Compute an initial fit to provide residuals
fit <- weitrix_components(wei, design=~group)

cal <- weitrix_calibrate_all(wei, 
    design = fit,
    trend_formula = 
        ~ log(weight) + offset(log(mu*(1-mu))), 
    mu_min=0.001, mu_max=0.999)

metadata(cal)$weitrix$all_coef
## (Intercept) log(weight) 
##   0.3391974  -0.9154304
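Reading these coefficients back into the trend formula gives, approximately (an interpretation, writing \(n\) for the original weight, i.e. the coverage of “T”s):

\[ \hat\sigma^2 \approx e^{0.339} \, n^{-0.915} \, \mu(1-\mu) \]

The exponent on \(n\) is close to, but not exactly, \(-1\), so the simple Bernoulli \(1/n\) scaling is only approximately right for this data.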

This trend formula was validated as adequate (although not perfect) by examining calibration plots, as demonstrated below.

The amount of conversion differs a great deal between timepoints, so we examine them individually.

weitrix_calplot(wei, fit, cat=group, covar=mu, guides=FALSE) + 
    coord_cartesian(xlim=c(0,0.1)) + labs(title="Before calibration")

weitrix_calplot(cal, fit, cat=group, covar=mu) + 
    coord_cartesian(xlim=c(0,0.1)) + labs(title="After calibration")

Ideally the red lines would all be horizontal. This isn’t possible for very small proportions, where the weighted residuals amount to multiplying zero by infinity.

We can also examine the weighted residuals vs the original weights (the coverage of “T”s).

weitrix_calplot(wei, fit, cat=group, covar=log(weitrix_weights(wei)), guides=FALSE) + 
    labs(title="Before calibration")