systemPipeR 2.16.3
This workflow template is for analyzing VAR-Seq data. It is provided by
systemPipeRdata,
a companion package to systemPipeR (H Backman and Girke 2016).
Similar to other systemPipeR workflow templates, a single command generates
the necessary working environment. This includes the expected directory
structure for executing systemPipeR workflows and parameter files for running
command-line (CL) software utilized in specific analysis steps. For learning
and testing purposes, the template now ships with eight paired-end human
whole-exome FASTQ files curated from the 1000 Genomes Project (HG00152,
HG00234, HG00188, HG00190, HG00183, HG00099, HG00102, HG00359) together with
the hg38 reference sequence and annotations.
This enables users to seamlessly run the numerous analysis steps of
this workflow from start to finish if all required data is downloaded.
After testing the workflow, users have the flexibility to employ the
template as is with their own data or modify it to suit their specific needs.
For more comprehensive information on designing and executing workflows, users
may want to refer to the main vignettes of
systemPipeR
and
systemPipeRdata.
The Rmd file (systemPipeVARseq.Rmd) associated with this vignette serves a dual purpose. It acts
both as a template for executing the workflow and as a template for generating
a reproducible scientific analysis report. Thus, users may want to customize the text
(and/or code) of this vignette to describe their experimental design and
analysis results. This typically involves deleting the instructions on how to work
with this workflow, and customizing the text describing experimental designs,
other metadata and analysis results.
Typically, the user will want to describe here the sources and versions of the
reference genome sequence along with the corresponding annotations. The standard
directory structure of systemPipeR (see here)
expects the input data in a subdirectory named data,
while all results are written to a separate results directory. The Rmd source file
for executing the workflow and rendering its report (here systemPipeVARseq.Rmd) is
expected to be located in the parent directory.
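For orientation, this layout can be sketched as follows (the param directory for CL parameter files is referenced throughout the steps below):
varseq/                   # workflow (parent) directory
  systemPipeVARseq.Rmd    # Rmd source for workflow execution and report
  data/                   # input FASTQ, reference genome and annotation files
  results/                # all analysis outputs
  param/                  # parameter files for CL software (e.g. param/cwl)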
The dataset used by this template is derived from the 1000 Genomes Project (Phase 3) and contains eight paired-end human whole-exome libraries (four male and four female samples). Using paired-end reads maintains flexibility because the same files can exercise both single- and paired-end analysis routines.
Data and resources can be downloaded with the following commands.
Run the following R code chunk and copy the output commands into your terminal to download the data files, reference genome, annotations and other resources needed to run the workflow.
source(system.file("extdata", "workflows", "varseq", "VARseq_helper.R",
package = "systemPipeRdata"))
varseq_example_dataset_download_cmd()
To use one’s own VAR-Seq and reference genome data, users should move or link the
data to the designated data directory and execute the workflow from the parent directory
using their customized Rmd file. When starting with this template, users should delete the provided test
data and move or link their custom data to the designated locations.
Alternatively, users can create an environment skeleton (named new here) or
build one from scratch. To perform a VAR-Seq analysis with new FASTQ files
from the same reference genome, users only need to provide the FASTQ files and
a ‘targets’ file that outlines the experimental
design. The structure and utility of targets files are described in systemPipeR's
main vignette here.
The default analysis steps included in this VAR-Seq workflow template are listed below. Users can modify the existing steps, add new ones or remove steps as needed.
Default analysis steps
1. BWA (or any other DNA aligner)
2. GATK, substeps 1-10
3. GATK (variant filtering)
4. SnpEff (variant annotation)
The environment for this VAR-Seq workflow is auto-generated below with the
genWorkenvir function (selected under workflow="varseq"). It is fully populated
with a small test data set, including FASTQ files, reference genome and annotation data. The name of the
resulting workflow directory can be specified under the mydirname argument.
The default NULL uses the name of the chosen workflow. An error is issued if
a directory of the same name and path exists already. After this, the user’s R
session needs to be directed into the resulting varseq directory (here with
setwd).
library(systemPipeRdata)
genWorkenvir(workflow = "varseq", mydirname = "varseq")
setwd("varseq")
targets file
The targets file defines the input files (e.g. FASTQ or BAM) and sample
comparisons used in a data analysis workflow. It can also store any number of
additional descriptive information for each sample. The following shows the first
four lines of the targets file used in this workflow template.
targetspath <- system.file("extdata", "workflows", "varseq",
"targetsPE_varseq.txt", package = "systemPipeRdata")
targets <- read.delim(targetspath, comment.char = "#")
targets[1:4, -(5:8)]
## FileName1 FileName2
## 1 ./data/SRR769545_1.fastq.gz ./data/SRR769545_2.fastq.gz
## 2 ./data/SRR768527_1.fastq.gz ./data/SRR768527_2.fastq.gz
## 3 ./data/SRR070795_1.fastq.gz ./data/SRR070795_2.fastq.gz
## 4 ./data/SRR070790_1.fastq.gz ./data/SRR070790_2.fastq.gz
## SampleName Factor
## 1 HG00152 male
## 2 HG00234 male
## 3 HG00188 male
## 4 HG00190 male
To work with custom data, users need to generate a targets file containing
the paths to their own FASTQ files. Here is a detailed description of the structure and
utility of targets files.
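As a minimal sketch, a paired-end targets file could be assembled in R as shown below; the file patterns, output file name and the Factor value are placeholders, and real targets files may also contain header comment lines defining sample comparisons.
# Hypothetical sketch: build a PE targets file from FASTQ files in ./data
fq1 <- sort(list.files("data", pattern = "_1\\.fastq\\.gz$", full.names = TRUE))
fq2 <- sort(list.files("data", pattern = "_2\\.fastq\\.gz$", full.names = TRUE))
targets <- data.frame(FileName1 = fq1, FileName2 = fq2,
    SampleName = sub("_1\\.fastq\\.gz$", "", basename(fq1)),
    Factor = "group1")  # placeholder factor; edit to match the experiment
write.table(targets, "targetsPE_custom.txt", sep = "\t",
    quote = FALSE, row.names = FALSE)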
After a workflow environment has been created with the above genWorkenvir
function call and the corresponding R session directed into the resulting directory (here varseq),
the SPRproject function is used to initialize a new workflow project instance. The latter
creates an empty SAL workflow container (below sal) and at the same time a
linked project log directory (default name .SPRproject) that acts as a
flat-file database of a workflow. Additional details about this process and
the SAL workflow control class are provided in systemPipeR's main vignette
here
and here.
Next, the importWF function imports all the workflow steps outlined in the
source Rmd file of this vignette (here systemPipeVARseq.Rmd) into the SAL workflow container.
An overview of the workflow steps and their status information can be returned
at any stage of the loading or run process by typing sal.
library(systemPipeR)
sal <- SPRproject()
sal <- importWF(sal, file_path = "systemPipeVARseq.Rmd", verbose = FALSE)
sal
After loading the workflow into sal, it can be executed from start to finish
(or partially) with the runWF command. Running the workflow will only be
possible if all dependent CL software is installed on a user’s system. Their
names and availability on a system can be listed with listCmdTools(sal, check_path=TRUE). For more information about the runWF command, refer to the
help file and the corresponding section in the main vignette
here.
Running workflows in parallel mode on computer clusters is a straightforward
process in systemPipeR. Users can simply append the resource parameters (such
as the number of CPUs) for a cluster run to the sal object after importing
the workflow steps with importWF using the addResources function. More
information about parallelization can be found in the corresponding section at
the end of this vignette here and in the main vignette
here.
sal <- runWF(sal)
Workflows can be visualized as topology graphs using the plotWF function.
plotWF(sal)
Figure 1: Topology graph of VAR-Seq workflow
Scientific and technical reports can be generated with the renderReport and
renderLogs functions, respectively. Scientific reports can also be generated
with the render function of the rmarkdown package. The technical reports are
based on log information that systemPipeR collects during workflow runs.
# Scientific report
sal <- renderReport(sal)
rmarkdown::render("systemPipeVARseq.Rmd", clean = TRUE, output_format = "BiocStyle::html_document")
# Technical (log) report
sal <- renderLogs(sal)
The statusWF function returns a status summary for each step in a SAL workflow instance.
statusWF(sal)
The data analysis steps of this workflow are defined by the following workflow code chunks.
They can be loaded into SAL interactively, by executing the code of each step in the
R console, or all at once with the importWF function used under the Quick start section.
R and CL workflow steps are declared in the code chunks of Rmd files with the
LineWise and SYSargsList functions, respectively, and then added to the SAL workflow
container with appendStep<-. Their syntax and usage is described
here.
The first step loads the systemPipeR package.
cat(crayon::blue$bold("To use this workflow, the following R packages are expected:\n"))
cat(c("'ggplot2', 'dplyr'\n"), sep = "', '")
### pre-end
appendStep(sal) <- LineWise(code = {
library(systemPipeR)
}, step_name = "load_SPR")
systemPipeR supports wrapping the canonical FastQC
command-line utility so the familiar per-sample HTML and ZIP outputs are generated
alongside the R-based plots. Running FastQC captures quality diagnostics (adapter content, sequence
duplication, over-represented sequences) of the reads.
appendStep(sal) <- SYSargsList(step_name = "fastqc", targets = "targetsPE_varseq.txt",
wf_file = "fastqc/workflow_fastqc.cwl", input_file = "fastqc/fastqc.yml",
dir_path = "param/cwl", inputvars = c(FileName1 = "_FASTQ_PATH1_",
FileName2 = "_FASTQ_PATH2_"), dependency = "load_SPR")
FastQC writes one HTML and one zipped report per input FASTQ into
results/fastqc (configurable in fastqc.yml). Review these outputs together
with the seeFastq plots to decide whether adapter trimming or additional
filtering is required.
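A quick way to confirm that the per-sample reports were written (output directory assumed from the default fastqc.yml settings):
# List the FastQC HTML and ZIP reports in the assumed output directory
list.files("results/fastqc", pattern = "\\.(html|zip)$")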
The following seeFastq and seeFastqPlot functions generate and plot a
series of useful quality statistics for a set of FASTQ files, including per
cycle quality box plots, base proportions, base-level quality trends, relative
k-mer diversity, length, and occurrence distribution of reads, number of reads
above quality cutoffs and mean quality distribution. The results can be
exported to different graphics formats, such as a PNG file, here named
fastqReport_varseq.png. Detailed information about the usage and visual components
in the quality plots can be found in the corresponding help file (see
?seeFastq or ?seeFastqPlot).
appendStep(sal) <- LineWise(code = {
fastq1 <- getColumn(sal, step = "fastqc", "targetsWF", column = 1)
fastq2 <- getColumn(sal, step = "fastqc", "targetsWF", column = 2)
fastq <- setNames(c(rbind(fastq1, fastq2)), c(rbind(names(fastq1),
names(fastq2))))
fqlist <- seeFastq(fastq = fastq, batchsize = 1000, klength = 8)
png("./results/fastqReport_varseq.png", height = 650, width = 288 *
length(fqlist))
seeFastqPlot(fqlist)
dev.off()
}, step_name = "fastq_report", dependency = "fastqc")
Next, we need to populate the workflow container created with the first step.
Here is an example of how to perform this task, using parameter template
files for trimming FASTQ files with the Trimmomatic
software (Bolger, Lohse, and Usadel 2014).
For this step, the SYSargsList function is used to build the command-line call
and append the step to the sal object. For more details on all the features and utilities,
please consult the main vignette.
If GATK (the default) is used for variant calling, any type of FASTQ trimming is strongly discouraged, since GATK has internal functions to handle low-quality positions.
appendStep(sal) <- SYSargsList(step_name = "trimmomatic", targets = "targetsPE_varseq.txt",
wf_file = "trimmomatic/trimmomatic-pe.cwl", input_file = "trimmomatic/trimmomatic-pe.yml",
dir_path = "param/cwl", inputvars = c(FileName1 = "_FASTQ_PATH1_",
FileName2 = "_FASTQ_PATH2_", SampleName = "_SampleName_"),
dependency = c("load_SPR"), run_step = "optional")
preprocessReads function
The function preprocessReads makes it possible to apply predefined or custom
read preprocessing functions to all FASTQ files referenced in a
SYSargsList container, such as quality filtering or adaptor trimming
routines. Internally, preprocessReads uses the FastqStreamer function from
the ShortRead package to stream through large FASTQ files in a
memory-efficient manner. The following example performs adaptor trimming with
the trimLRPatterns function from the Biostrings package.
Here, we are appending this step to the SYSargsList object created previously.
All the parameters are defined in the preprocessReads/preprocessReads-pe.yml file.
appendStep(sal) <- SYSargsList(step_name = "preprocessing", targets = "targetsPE_varseq.txt",
dir = TRUE, wf_file = "preprocessReads/preprocessReads-pe.cwl",
input_file = "preprocessReads/preprocessReads-pe.yml", dir_path = "param/cwl",
inputvars = c(FileName1 = "_FASTQ_PATH1_", FileName2 = "_FASTQ_PATH2_",
SampleName = "_SampleName_"), dependency = c("load_SPR"),
run_step = "optional")
After the trimming step, the output files can be used to generate new
targets information containing the paths to the trimmed FASTQ files. The new targets
information can be used for the next workflow step instance, e.g. running the
NGS alignments with the trimmed FASTQ files.
The following example shows how one can design a custom read preprocessing
function using utilities provided by the ShortRead package, and then run it
in batch mode with the preprocessReads function. Here, it is possible to
replace the function used in the preprocessing step and modify the sal object.
Because it is a custom function, it needs to be saved to an R object file,
which preprocessReads.doc.R loads internally. If the R object
is saved under a different name (here "param/customFCT.RData"), please adjust
that accordingly in preprocessReads.doc.R.
Please note that this step is not added to the workflow; it is shown here for demonstration only.
First, we define the function in the workflow:
appendStep(sal) <- LineWise(code = {
filterFct <- function(fq, cutoff = 20, Nexceptions = 0) {
qcount <- rowSums(as(quality(fq), "matrix") <= cutoff,
na.rm = TRUE)
# Retains reads where Phred scores are >= cutoff
# with N exceptions
fq[qcount <= Nexceptions]
}
save(list = ls(), file = "param/customFCT.RData")
}, step_name = "custom_preprocessing_function", dependency = "preprocessing")
Afterwards, we can edit the input parameter:
yamlinput(sal, "preprocessing")$Fct
yamlinput(sal, "preprocessing", "Fct") <- "'filterFct(fq, cutoff=20, Nexceptions=0)'"
yamlinput(sal, "preprocessing")$Fct ## check the new function
cmdlist(sal, "preprocessing", targets = 1) ## check if the command line was updated with success
BWA-MEM
The NGS reads of this project are aligned against the reference genome
sequence using the highly variant tolerant short read aligner BWA-MEM
(Li 2013; Li and Durbin 2009). The parameter settings of the aligner are
defined in param/cwl/gatk/bwa-pe.cwl.
This test code uses untrimmed FASTQ files since the demo data is minimal and
limited. For real data, however, it is best to first verify read quality with the
FASTQ quality report functions provided above.
Build the index and dictionary files for BWA and GATK to run.
appendStep(sal) <- SYSargsList(step_name = "bwa_index", dir = FALSE,
targets = NULL, wf_file = "gatk/workflow_bwa-index.cwl",
input_file = "gatk/gatk.yaml", dir_path = "param/cwl", dependency = "load_SPR")
Create reference fasta dictionary.
appendStep(sal) <- SYSargsList(step_name = "fasta_index", dir = FALSE,
targets = NULL, wf_file = "gatk/workflow_fasta_dict.cwl",
input_file = "gatk/gatk.yaml", dir_path = "param/cwl", dependency = "bwa_index")
Create the fasta index (faidx).
appendStep(sal) <- SYSargsList(step_name = "faidx_index", dir = FALSE,
targets = NULL, wf_file = "gatk/workflow_fasta_faidx.cwl",
input_file = "gatk/gatk.yaml", dir_path = "param/cwl", dependency = "fasta_index")
appendStep(sal) <- SYSargsList(step_name = "bwa_alignment", targets = "targetsPE_varseq.txt",
wf_file = "gatk/workflow_bwa-pe.cwl", input_file = "gatk/gatk.yaml",
dir_path = "param/cwl", inputvars = c(FileName1 = "_FASTQ_PATH1_",
FileName2 = "_FASTQ_PATH2_", SampleName = "_SampleName_"),
dependency = c("faidx_index"))
The following provides an overview of the number of reads in each sample and how many of them aligned to the reference.
appendStep(sal) <- LineWise(code = {
bampaths <- getColumn(sal, step = "bwa_alignment", "outfiles",
column = "samtools_sort_bam")
fqpaths <- getColumn(sal, step = "bwa_alignment", "targetsWF",
column = "FileName1")
read_statsDF <- alignStats(args = bampaths, fqpaths = fqpaths,
pairEnd = TRUE)
write.table(read_statsDF, "results/alignStats.xls", row.names = FALSE,
quote = FALSE, sep = "\t")
}, step_name = "align_stats", dependency = "bwa_alignment", run_step = "optional")
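The resulting alignment statistics table can be read back into R for a quick look (file path as written by the step above):
# Inspect the tab-delimited alignment statistics written by align_stats
read.delim("results/alignStats.xls")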
The symLink2bam function creates symbolic links to view the BAM alignment files in a
genome browser such as IGV. The corresponding URLs are written to a file
with a path specified under urlfile in the results directory.
appendStep(sal) <- LineWise(code = {
bampaths <- getColumn(sal, step = "bwa_alignment", "outfiles",
column = "samtools_sort_bam")
symLink2bam(sysargs = bampaths, htmldir = c("~/.html/", "somedir/"),
urlbase = "http://cluster.hpcc.ucr.edu/~tgirke/", urlfile = "./results/IGVurl.txt")
}, step_name = "bam_urls", dependency = "bwa_alignment", run_step = "optional")
The following performs variant calling with GATK on a single
machine, where the runWF function processes each sample sequentially. If a compute cluster
is available, the analysis can be run in parallel mode with
runWF by providing the required resources and choosing run_session = "compute".
Not all users have access to a cluster system, so to demonstrate an example of a variant calling workflow, only single-machine commands are shown here. For cluster jobs, please refer to our main vignette.
In addition, a user would typically choose only one variant caller here rather than running several. However, the workflow manager allows keeping multiple options available for running the analysis.
GATK
The following steps are based on the GATK 4.1.1.0 Best Practices.
There are 10 individual steps, and users can choose where to jump in and which steps to skip.
All scripts are located at param/cwl/gatk. BQSR (Base Quality Score Recalibration)
and VQSR (Variant Quality Score Recalibration) are only applicable to a limited
number of species, such as human, so this workflow does not include these steps.
fastq to ubam
Convert fastq files to bam files to prepare for the following step. It is very
important to specify your sequencing platform; the default is illumina. Users need
to change param/cwl/gatk/gatk_fastq2ubam.cwl if the platform is different. Platform information
is needed by the variant caller in later steps to correct the calling parameters.
appendStep(sal) <- SYSargsList(step_name = "fastq2ubam", targets = "targetsPE_varseq.txt",
wf_file = "gatk/workflow_gatk_fastq2ubam.cwl", input_file = "gatk/gatk.yaml",
dir_path = "param/cwl", inputvars = c(FileName1 = "_FASTQ_PATH1_",
FileName2 = "_FASTQ_PATH2_", SampleName = "_SampleName_"),
dependency = c("faidx_index"))
bam and ubam
This step merges a bam and ubam and creates a third bam file that contains
the alignment information plus the metadata that was removed by aligners such as BWA.
The removed information is essential for variant statistics calculation. The previous steps are
recommended, but variant calling can still be performed without them.
appendStep(sal) <- SYSargsList(step_name = "merge_bam", targets = c("bwa_alignment",
"fastq2ubam"), wf_file = "gatk/workflow_gatk_mergebams.cwl",
input_file = "gatk/gatk.yaml", dir_path = "param/cwl", inputvars = c(bwa_men_sam = "_bwasam_",
ubam = "_ubam_", SampleName = "_SampleName_"), rm_targets_col = c("preprocessReads_1",
"preprocessReads_2"), dependency = c("bwa_alignment",
"fastq2ubam"))
Sort bam files by genomic coordinates
Sort bam files by genomic coordinates.
appendStep(sal) <- SYSargsList(step_name = "sort", targets = "merge_bam",
wf_file = "gatk/workflow_gatk_sort.cwl", input_file = "gatk/gatk.yaml",
dir_path = "param/cwl", inputvars = c(merge_bam = "_mergebam_",
SampleName = "_SampleName_"), rm_targets_col = c("bwa_men_sam",
"ubam", "SampleName_fastq2ubam", "Factor_fastq2ubam",
"SampleLong_fastq2ubam", "Experiment_fastq2ubam", "Date_fastq2ubam"),
dependency = c("merge_bam"))
Mark PCR artifacts (duplicates) in the sequencing reads. A duplicate_metrics file will also be produced
by this step, but it is not used by the next step. This file simply allows the user
to check a summary of the duplication status.
appendStep(sal) <- SYSargsList(step_name = "mark_dup", targets = "sort",
wf_file = "gatk/workflow_gatk_markduplicates.cwl", input_file = "gatk/gatk.yaml",
dir_path = "param/cwl", inputvars = c(sort_bam = "_sort_",
SampleName = "_SampleName_"), rm_targets_col = c("merge_bam"),
dependency = c("sort"))
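The duplicate_metrics file mentioned above can be given a quick look in R; the exact file name in results/ is sample-dependent and the one used here is hypothetical.
# Read the single metrics row of a Picard/GATK duplicate metrics file
# (file name is a hypothetical example)
dup_metrics <- read.delim("results/HG00152_duplicate_metrics.txt",
    comment.char = "#", nrows = 1)
dup_metrics$PERCENT_DUPLICATION
Fix tags
The hap_caller step below consumes the output of a fix_tag step (GATK substep 5), which fixes bam tags such as NM and MD after duplicate marking and produces the analysis-ready bam. A sketch of such a step declaration is shown here; the CWL file name and the inputvars/rm_targets_col mappings are assumptions inferred from the neighboring steps.
appendStep(sal) <- SYSargsList(step_name = "fix_tag", targets = "mark_dup",
    wf_file = "gatk/workflow_gatk_fixtag.cwl", input_file = "gatk/gatk.yaml",
    dir_path = "param/cwl", inputvars = c(mark_bam = "_mark_",
        SampleName = "_SampleName_"), rm_targets_col = c("sort_bam"),
    dependency = c("mark_dup"))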
gvcf
In this step, HaplotypeCaller is run in gvcf mode; the ‘g’ stands for ‘genomic’.
The resulting file contains not only variant sites but also non-variant sites;
thus, in the following step, the cohort caller can use this information to validate the true variants.
appendStep(sal) <- SYSargsList(step_name = "hap_caller", targets = "fix_tag",
wf_file = "gatk/workflow_gatk_haplotypecaller.cwl", input_file = "gatk/gatk.yaml",
dir_path = "param/cwl", inputvars = c(fixtag_bam = "_fixed_",
SampleName = "_SampleName_"), rm_targets_col = c("mark_bam"),
dependency = c("fix_tag"))
gvcfs
It is recommended to import all gvcfs into a
TileDB database for fast cohort
variant calling in the following step. Note: if you are working with non-diploid data,
use the CombineGVCFs function from GATK and change the gvcf_db_folder parameter
in param/cwl/gatk/gatk.yaml to point to your combined gvcf file.
Important: Make sure all samples’ *.g.vcf.gz files are in the results folder,
and that the corresponding .tbi index files are there as well.
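A quick check that the expected files are present:
# List per-sample gvcf files and their tabix indexes in results/
list.files("results", pattern = "\\.g\\.vcf\\.gz(\\.tbi)?$")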
appendStep(sal) <- SYSargsList(step_name = "import", targets = NULL,
dir = FALSE, wf_file = "gatk/workflow_gatk_genomicsDBImport.cwl",
input_file = "gatk/gatk.yaml", dir_path = "param/cwl", dependency = c("hap_caller"))
gvcf
Assess variants using the information from all gvcfs. A cohort vcf called
samples.vcf.gz is created with the default naming.
appendStep(sal) <- SYSargsList(step_name = "call_variants", targets = NULL,
dir = FALSE, wf_file = "gatk/workflow_gatk_genotypeGVCFs.cwl",
input_file = "gatk/gatk.yaml", dir_path = "param/cwl", dependency = c("import"))
Variant Quality Score Recalibration (VQSR) is not included in this workflow;
variants are hard filtered together.
See this post for hard-filtering parameters. Change these settings in param/cwl/gatk/gatk_variantFiltration.sh if needed. VQSR requires a large number of
samples as training data before filtering can be done. Read this
post for more information.
appendStep(sal) <- SYSargsList(step_name = "filter", targets = NULL,
dir = FALSE, wf_file = "gatk/workflow_gatk_variantFiltration.cwl",
input_file = "gatk/gatk.yaml", dir_path = "param/cwl", dependency = c("call_variants"))
After cohort calling and filtering, all variants for all samples are stored in one big file. Extract the variants for each sample and save them separately (only variants that have passed the filters are kept).
appendStep(sal) <- SYSargsList(step_name = "create_vcf", targets = "hap_caller",
wf_file = "gatk/workflow_gatk_select_variant.cwl", input_file = "gatk/gatk.yaml",
dir_path = "param/cwl", inputvars = c(SampleName = "_SampleName_"),
dependency = c("hap_caller", "filter"))
Variant calling ends here. Downstream analysis starts from the next section.
Scripts of downstream analysis are stored in param/cwl/varseq_downstream.
optional: This step is not included in the default workflow. After successfully executing the entire workflow, users may load individual vcf files into R for further analyses like the ones below.
VCF files can be imported into R with the readVcf function.
Both VCF and VRanges objects provide convenient data structures for
working with variant data (e.g. SNP quality filtering).
This step is not included in the default workflow steps, but can be useful for inspecting an individual sample’s raw variants.
library(VariantAnnotation)
vcf_raw <- getColumn(sal, "create_vcf")
vcf <- readVcf(vcf_raw[1], "Homo sapiens")
vcf
vr <- as(vcf, "VRanges")
vr
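For example, the depth accessors used by the filter in the next section can be inspected directly on the VRanges object:
# Per-variant read depths from VariantAnnotation's VRanges accessors
totalDepth(vr)[1:4]
altDepth(vr)[1:4]
refDepth(vr)[1:4]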
The function filterVars filters VCF files based on user definable
quality parameters. It sequentially imports each VCF file into R, applies the
filtering on an internally generated VRanges object and then writes
the results to a new subsetted VCF file. The filter parameters are passed on to
the corresponding argument as a character string. The function applies this
filter to the internally generated VRanges object using the standard
subsetting syntax for two dimensional objects such as: vr[filter, ].
GATK
The example below filters for variants that are supported by >=20
reads, of which >=80% support the called variant. In addition, the
soft filters recorded in the VCF files generated by GATK can be
incorporated into the filter string.
There is already some cohort filtering in GATK step 10; some additional hard filtering is provided here. This step is included here, but in a real analysis, you may skip it.
appendStep(sal) <- LineWise(code = {
vcf_raw <- getColumn(sal, "create_vcf")
library(VariantAnnotation)
filter <- "totalDepth(vr) >= 20 & (altDepth(vr) / totalDepth(vr) >= 0.8)"
vcf_filter <- suppressWarnings(filterVars(vcf_raw, filter,
organism = "Homo sapiens", out_dir = "results/vcf_filter"))
}, step_name = "filter_vcf", dependency = "create_vcf", run_step = "optional")
Check filtering outcome for one sample
This mini step can be used to compare vcf files before and after filtering.
It can be run once the workflow has finished; make sure the “filter_vcf” step has
completed, since it is an optional step.
copyEnvir(sal, "vcf_raw", globalenv())
copyEnvir(sal, "vcf_filter", globalenv())
length(as(readVcf(vcf_raw[1], genome = "Homo sapiens"), "VRanges")[,
1])
length(as(readVcf(vcf_filter[1], genome = "Homo sapiens"), "VRanges")[,
1])
Optional but included in the default
After variant filtering, raw variants and filtered variants can be compared. This step uses the filtered variants from GATK step 9 to generate summary statistics.
appendStep(sal) <- LineWise(code = {
# read in the cohort VCF file
vcf_all <- suppressWarnings(VariantAnnotation::readVcf("./results/samples_filter.vcf.gz",
"Homo sapiens"))
filter_values <- VariantAnnotation::filt(vcf_all)
filter_values[is.na(filter_values)] <- "" # ensure character comparisons work
overall_counts <- table(filter_values) |>
dplyr::as_tibble() |>
dplyr::arrange(dplyr::desc(n))
vcf_all_ft <- VariantAnnotation::geno(vcf_all)$FT
passes_per_sample <- apply(vcf_all_ft, 2, function(x) sum(x ==
"PASS", na.rm = TRUE))
fails_per_sample <- apply(vcf_all_ft, 2, function(x) sum(x !=
"PASS", na.rm = TRUE))
sample_filter_summary <- dplyr::tibble(sample = names(passes_per_sample),
passed = passes_per_sample, filtered = fails_per_sample)
write.table(overall_counts, file = "results/summary_filter_overall.tsv",
sep = "\t", quote = FALSE, row.names = FALSE)
write.table(sample_filter_summary, file = "results/summary_filter_per_sample.tsv",
sep = "\t", quote = FALSE, row.names = FALSE)
library(ggplot2)
source("VARseq_help.R")
png("results/summary_filter_plot.png", width = 800, height = 600)
p_filter <- plot_summary_filter_plot(sample_filter_summary)
print(p_filter)
dev.off()
# clean up RAM
try(rm(vcf_all, vcf_all_ft, filter_values, passes_per_sample,
fails_per_sample), silent = TRUE)
invisible(gc())
}, step_name = "summary_filter", dependency = "filter")
GATK
In this step, we use SnpEff to annotate the filtered variants. This tool provides a powerful way to predict the effects of genetic variants on genes and proteins.
required
appendStep(sal) <- SYSargsList(step_name = "annotate_vcf", targets = "create_vcf",
dir = TRUE, wf_file = "gatk/snpeff.cwl", input_file = "gatk/gatk.yaml",
dir_path = "param/cwl", inputvars = c(SampleName = "_SampleName_",
vcf_raw = "_vcf_raw_"), dependency = c("create_vcf"))
Once the annotation is done for each vcf file, all annotated vcfs can be imported into a single list of VRanges objects.
required
appendStep(sal) <- LineWise(code = {
vcf_anno <- getColumn(sal, "annotate_vcf", position = "outfiles",
column = "ann_vcf")
clean_vcf_file <- function(path) {
lines <- readLines(path, warn = FALSE)
if (!length(lines))
return(path)
header_start <- which(grepl("^##fileformat=", lines,
perl = TRUE))[1]
if (is.na(header_start) || header_start <= 1)
return(path)
writeLines(lines[header_start:length(lines)], path)
path
}
vcf_anno <- vapply(vcf_anno, clean_vcf_file, FUN.VALUE = character(1))
vr_from_vcf <- function(path) {
message("Importing annotated VCF: ", path)
vcf <- suppressWarnings(VariantAnnotation::readVcf(path,
"Homo sapiens"))
ft <- VariantAnnotation::geno(vcf)$FT
if (!is.null(ft) && !isTRUE(is.character(ft))) {
ft_vec <- as.character(ft)
ft_clean <- matrix(ft_vec, ncol = 1)
if (!is.null(dim(ft))) {
ft_clean <- matrix(ft_vec, nrow = nrow(ft), dimnames = dimnames(ft))
}
VariantAnnotation::geno(vcf)$FT <- ft_clean
}
suppressWarnings(as(vcf, "VRanges"))
}
vcf_vranges <- lapply(vcf_anno, vr_from_vcf)
}, step_name = "combine_var", dependency = "annotate_vcf")
To condense the vranges objects into more human-readable summaries, we extract key annotation fields along with other key variant information into a long-format table, where each row corresponds to a single variant annotation of a sample.
required
appendStep(sal) <- LineWise(code = {
source("VARseq_help.R")
summary_var <- extract_var_table(vcf_vranges)
utils::write.table(summary_var, file = "results/variant_summary_long.tsv",
sep = "\t", quote = FALSE, row.names = FALSE)
}, step_name = "summary_var", dependency = "combine_var")
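The long-format table written above can be inspected directly; a quick sketch, with column names taken from the downstream plotting steps:
# Read back the long-format variant table and glance at its contents
summary_var <- read.delim("results/variant_summary_long.tsv")
head(summary_var)
table(summary_var$effect)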
Optional but included in the default
The following step plots the number of variants by consequence type for each sample.
appendStep(sal) <- LineWise(code = {
library(ggplot2)
effect_counts <- summary_var |>
dplyr::count(sample, effect, name = "n")
png("./results/var_consequence_log.png", width = 1000, height = 600)
p_var_consequence_log <- ggplot(effect_counts, aes(sample,
n, fill = effect)) + geom_col(position = "dodge", alpha = 0.8) +
geom_text(aes(label = scales::comma(n)), position = position_dodge(width = 0.9),
vjust = -0.2, size = 3) + scale_y_continuous(trans = "log10",
labels = scales::comma_format()) + labs(y = "Variant count (log10 scale)",
title = "Variant consequences (log scale)") + theme_minimal() +
theme(axis.text.x = element_text(angle = 45, hjust = 1))
print(p_var_consequence_log)
dev.off()
}, step_name = "plot_var_consequence", dependency = "summary_var")
Optional but included in the default
After the summary table is created, we can plot some statistics of the variants. Here this step plots the number of nonsynonymous and HIGH impact variants for each sample.
appendStep(sal) <- LineWise(code = {
library(ggplot2)
plot_summary_data <- summary_var |>
dplyr::filter(consequence %in% c("nonsynonymous_variant",
"stop_gained", "frameshift_variant", "splice_acceptor_variant",
"splice_donor_variant", "start_lost", "stop_lost"),
effect == "HIGH") |>
dplyr::mutate(sample = factor(sample, levels = getColumn(sal,
step = "annotate_vcf", position = "targetsWF", column = "SampleName")),
sex = getColumn(sal, step = "annotate_vcf", position = "targetsWF",
column = "Factor")[sample])
png("./results/var_summary.png")
p_var_summary <- ggplot(plot_summary_data) + geom_bar(aes(x = sample,
fill = sex), alpha = 0.75) + scale_fill_brewer(palette = "Set2") +
theme_minimal()
print(p_var_summary)
dev.off()
}, step_name = "plot_var_stats", dependency = "summary_var")
Optional but included in the default
To visualize whether male and female samples carry different burdens of
high-impact variants, the following step reuses the plot_summary_data
object, produces a simple boxplot across the two groups, and overlays a
Wilcoxon rank-sum p-value for a quick statistical comparison.
appendStep(sal) <- LineWise(code = {
source("VARseq_help.R")
# change the sample and group columns as needed
p_summary_boxplot <- plot_summary_boxplot(plot_summary_data,
sample_col = "sample", group_col = "sex")
library(ggplot2)
png("./results/var_summary_boxplot.png")
print(p_summary_boxplot)
dev.off()
}, step_name = "plot_var_boxplot", dependency = "plot_var_stats")
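The plot_summary_boxplot helper is defined in VARseq_help.R. For orientation, a minimal stand-in, assuming high-impact variants are counted per sample and compared between the two sex groups, could look like this sketch:
# Hypothetical minimal version of the boxplot helper
library(ggplot2)
counts <- dplyr::count(plot_summary_data, sample, sex, name = "n")
pval <- wilcox.test(n ~ sex, data = counts)$p.value
ggplot(counts, aes(x = sex, y = n)) + geom_boxplot() +
    labs(subtitle = paste("Wilcoxon rank-sum p =", signif(pval, 3)))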
Optional but included in the default
The Venn diagram utilities defined by the systemPipeR package can be used to
identify common and unique variants reported for different samples
and/or variant callers. The code below generates a 3-way Venn diagram
comparing 3 samples.
appendStep(sal) <- LineWise(code = {
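    # NOTE: 'variant_tables' is assumed here to be a named list of per-sample
    # variant data frames (e.g., 'summary_var' split by sample), with columns
    # seqnames, start, ref and alt used below to build variant identifiers.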
top_n <- min(3, length(variant_tables))
selected_tables <- variant_tables[seq_len(top_n)]
if (is.null(names(selected_tables)) || any(!nzchar(names(selected_tables)))) {
names(selected_tables) <- paste0("Sample_", seq_along(selected_tables))
}
variant_sets <- lapply(selected_tables, function(df) {
if (!nrow(df))
return(character(0))
unique(paste0(df$seqnames, ":", df$start, "_", df$ref,
"/", df$alt))
})
vennset <- overLapper(variant_sets, type = "vennsets")
png("./results/vennplot_var.png")
vennPlot(vennset, mymain = "Venn Plot of First 3 Samples",
mysub = "", colmode = 2, ccol = c("red", "blue"))
dev.off()
}, step_name = "venn_diagram", dependency = "summary_var")
Optional but included in the default
The following plots a selected variant with ggbio.
In this example, the input BAM file comes from GATK step 5, the analysis-ready bam.
You can use other aligned BAMs as well, but make sure they are indexed. The VCF
file is taken from the create_vcf step, or you can load your own vcf.
appendStep(sal) <- LineWise(code = {
source("VARseq_help.R")
first_high <- summary_var |>
dplyr::filter(effect == "HIGH") |>
head(n = 1)
library(ggbio)
library(VariantAnnotation)
mychr <- as.character(first_high$seqnames)
mystart <- as.numeric(first_high$start) - 500
myend <- as.numeric(first_high$end) + 500
bam_path <- getColumn(sal, "fix_tag")[first_high$sample]
vcf_path <- getColumn(sal, step = "create_vcf")[first_high$sample]
vcf <- suppressWarnings(readVcf(vcf_path, "Homo sapiens"))
ga <- readGAlignments(bam_path, use.names = TRUE, param = ScanBamParam(which = GRanges(mychr,
IRanges(mystart, myend))))
vcf_chr <- normalize_ft(simplify_info(vcf[seqnames(vcf) ==
mychr]))
vr <- suppressWarnings(as(vcf_chr, "VRanges"))
vr_region <- vr[start(vr) >= mystart & end(vr) <= myend]
if (!length(vr_region)) {
vr_region <- vr
}
p1 <- autoplot(ga, geom = "rect")
p2 <- autoplot(ga, geom = "line", stat = "coverage")
p3 <- autoplot(vr_region, type = "fixed") + xlim(mystart,
myend) + theme(legend.position = "none", axis.text.y = element_blank(),
axis.ticks.y = element_blank())
p1_3 <- tracks(place_holder = ggplot2::ggplot(), Reads = p1,
Coverage = p2, Variant = p3, heights = c(0, 0.3, 0.2,
0.1)) + ylab("")
ggbio::ggsave(p1_3, filename = "./results/plot_variant.png",
units = "in")
}, step_name = "plot_variant", dependency = "summary_var")
appendStep(sal) <- LineWise(code = {
sessionInfo()
}, step_name = "sessionInfo", dependency = "plot_variant")
To run the workflow, the runWF function will execute all the steps stored in
the workflow container. The execution will be on a single machine without
submitting to the queuing system of a computer cluster.
sal <- runWF(sal)
Alternatively, the computation can be greatly accelerated by processing many files in parallel using several compute nodes of a cluster, where a scheduling/queuing system is used for load balancing.
The resources list object provides the number of independent parallel cluster
processes defined under the Njobs element in the list. The following example
will run 8 processes in parallel using 4 CPU cores per sample.
If the resources available on a cluster allow running all 8 processes at the
same time, then the shown sample submission will utilize a total of 32 CPU cores.
Note, runWF can be used with most queueing systems as it is based on utilities
from the batchtools package, which supports the use of template files (*.tmpl)
for defining the run parameters of different schedulers. To run the following
code, one needs to have both a conffile (see .batchtools.conf.R samples here)
and a template file (see *.tmpl samples here)
for the queueing system available on the user's machine. The following example uses the sample
conffile and template files for the Slurm scheduler provided by this package.
The resources can be appended when the step is generated, or they can be
added later, as in the following example using the addResources
function:
# wall time in mins, memory in MB
resources <- list(conffile = ".batchtools.conf.R", template = "batchtools.slurm.tmpl",
Njobs = 8, walltime = 120, ntasks = 1, ncpus = 4, memory = 1024,
partition = "short")
sal <- addResources(sal, c("bwa_alignment"), resources = resources)
sal <- runWF(sal)
systemPipeR workflows instances can be visualized with the plotWF function.
plotWF(sal, rstudio = TRUE)
To check the summary of the workflow, we can use:
sal
statusWF(sal)
systemPipeR compiles all the workflow execution logs in one central location,
making it easier to check any standard output (stdout) or standard error
(stderr) of the command-line tools used in the workflow, as well as the stdout of the R code steps.
sal <- renderLogs(sal)
To check the command-line tools used in this workflow, use listCmdTools; use listCmdModules
to check the modules if you are on a system with a module system.
The following code will print out the tools required by your custom SPR project in the report. In case you are running the workflow for the first time and do not have a project yet, or you just want to browse this workflow, the following code displays the tools required by default.
if (file.exists(file.path(".SPRproject", "SYSargsList.yml"))) {
local({
sal <- systemPipeR::SPRproject(resume = TRUE)
systemPipeR::listCmdTools(sal)
systemPipeR::listCmdModules(sal)
})
} else {
cat(crayon::blue$bold("Tools and modules required by this workflow are:\n"))
cat(c("trimmomatic/0.39", "samtools/1.14", "gatk/4.2.0.0",
"bcftools/1.15", "bwa/0.7.17", "snpEff/5.3"), sep = "\n")
}
## Tools and modules required by this workflow are:
## trimmomatic/0.39
## samtools/1.14
## gatk/4.2.0.0
## bcftools/1.15
## bwa/0.7.17
## snpEff/5.3
This is the session information for rendering this report. To access the session information
of the workflow run, check the HTML report generated by renderLogs.
sessionInfo()
## R version 4.5.2 (2025-10-31)
## Platform: x86_64-pc-linux-gnu
## Running under: Ubuntu 24.04.3 LTS
##
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.22-bioc/R/lib/libRblas.so
## LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.12.0 LAPACK version 3.12.0
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_GB LC_COLLATE=C
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## time zone: America/New_York
## tzcode source: system (glibc)
##
## attached base packages:
## [1] stats4 stats graphics grDevices utils
## [6] datasets methods base
##
## other attached packages:
## [1] magrittr_2.0.4 systemPipeRdata_2.14.1
## [3] systemPipeR_2.16.3 ShortRead_1.68.0
## [5] GenomicAlignments_1.46.0 SummarizedExperiment_1.40.0
## [7] Biobase_2.70.0 MatrixGenerics_1.22.0
## [9] matrixStats_1.5.0 BiocParallel_1.44.0
## [11] Rsamtools_2.26.0 Biostrings_2.78.0
## [13] XVector_0.50.0 GenomicRanges_1.62.1
## [15] IRanges_2.44.0 S4Vectors_0.48.0
## [17] Seqinfo_1.0.0 BiocGenerics_0.56.0
## [19] generics_0.1.4 BiocStyle_2.38.0
##
## loaded via a namespace (and not attached):
## [1] gtable_0.3.6 xfun_0.55
## [3] bslib_0.9.0 hwriter_1.3.2.1
## [5] ggplot2_4.0.1 remotes_2.5.0
## [7] htmlwidgets_1.6.4 latticeExtra_0.6-31
## [9] lattice_0.22-7 vctrs_0.6.5
## [11] tools_4.5.2 bitops_1.0-9
## [13] parallel_4.5.2 tibble_3.3.0
## [15] pkgconfig_2.0.3 Matrix_1.7-4
## [17] RColorBrewer_1.1-3 S7_0.2.1
## [19] cigarillo_1.0.0 lifecycle_1.0.4
## [21] stringr_1.6.0 compiler_4.5.2
## [23] farver_2.1.2 deldir_2.0-4
## [25] textshaping_1.0.4 codetools_0.2-20
## [27] htmltools_0.5.9 sass_0.4.10
## [29] yaml_2.3.12 pillar_1.11.1
## [31] crayon_1.5.3 jquerylib_0.1.4
## [33] DelayedArray_0.36.0 cachem_1.1.0
## [35] abind_1.4-8 tidyselect_1.2.1
## [37] digest_0.6.39 stringi_1.8.7
## [39] dplyr_1.1.4 bookdown_0.46
## [41] fastmap_1.2.0 grid_4.5.2
## [43] cli_3.6.5 SparseArray_1.10.8
## [45] S4Arrays_1.10.1 dichromat_2.0-0.1
## [47] scales_1.4.0 rmarkdown_2.30
## [49] pwalign_1.6.0 jpeg_0.1-11
## [51] interp_1.1-6 otel_0.2.0
## [53] png_0.1-8 kableExtra_1.4.0
## [55] evaluate_1.0.5 knitr_1.51
## [57] viridisLite_0.4.2 rlang_1.1.6
## [59] Rcpp_1.1.0 glue_1.8.0
## [61] xml2_1.5.1 BiocManager_1.30.27
## [63] formatR_1.14 svglite_2.2.2
## [65] rstudioapi_0.17.1 jsonlite_2.0.0
## [67] R6_2.6.1 systemfonts_1.3.1
This project was supported by funds from the National Institutes of Health (NIH) and the National Science Foundation (NSF).
Bolger, Anthony M, Marc Lohse, and Bjoern Usadel. 2014. “Trimmomatic: A Flexible Trimmer for Illumina Sequence Data.” Bioinformatics 30 (15): 2114–20.
H Backman, Tyler W, and Thomas Girke. 2016. “systemPipeR: NGS workflow and report generation environment.” BMC Bioinformatics 17 (1): 388. https://doi.org/10.1186/s12859-016-1241-0.
Li, H, and R Durbin. 2009. “Fast and Accurate Short Read Alignment with Burrows-Wheeler Transform.” Bioinformatics 25 (14): 1754–60. https://doi.org/10.1093/bioinformatics/btp324.
Li, Heng. 2013. “Aligning Sequence Reads, Clone Sequences and Assembly Contigs with BWA-MEM.” arXiv [Q-bio.GN], March. http://arxiv.org/abs/1303.3997.