%\VignetteIndexEntry{Analyzing RNA-seq data with the "DESeq2" package}
%\VignettePackage{DESeq2}
%\VignetteEngine{knitr::knitr}

% To compile this document
% library('knitr'); rm(list=ls()); knit('DESeq2.Rnw')

\documentclass{article}

<<style, eval=TRUE, echo=FALSE, results="asis">>=
BiocStyle::latex2()
@

\usepackage{subfig}% for combining multiple plots in one figure

\newcommand{\deseqtwo}{\textit{DESeq2}}
\newcommand{\lowtilde}{\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}

<<knitr, echo=FALSE, results="hide">>=
library("knitr")
opts_chunk$set(
  tidy=FALSE,
  dev="png",
  fig.show="hide",
  fig.width=4, fig.height=4.5,
  fig.pos="tbh",
  cache=TRUE,
  message=FALSE)
@ 

<<loadDESeq2, echo=FALSE>>=
library("DESeq2")
@

\author{Michael I.~Love}
\affil{Department of Biostatistics, Dana-Farber Cancer Institute and Harvard TH Chan School of Public Health, Boston, US;}

\author{Simon Anders}
\affil{Institute for Molecular Medicine Finland (FIMM), Helsinki, Finland;}

\author{Wolfgang Huber}
\affil{European Molecular Biology Laboratory (EMBL), Heidelberg, Germany}

\title{Differential analysis of count data -- the DESeq2 package}

\begin{document}

\maketitle

\begin{abstract}
  A basic task in the analysis of count data from RNA-seq is the detection of
  differentially expressed genes. The count data are presented as a table which reports,
  for each sample, the number of sequence fragments that have been assigned to each
  gene. Analogous data also arise for other assay types, including comparative ChIP-Seq,
  HiC, shRNA screening, and mass spectrometry.  An important analysis question is the
  quantification and statistical inference of systematic changes between conditions, as
  compared to within-condition variability. The package \deseqtwo{} provides
  methods to test for differential expression by use of negative binomial generalized
  linear models; the estimates of dispersion and logarithmic fold changes 
  incorporate data-driven prior distributions\footnote{Other \Bioconductor{} packages 
  with similar aims are \Biocpkg{edgeR}, \Biocpkg{limma},
  \Biocpkg{DSS}, \Biocpkg{EBSeq} and \Biocpkg{baySeq}.}. 
  This vignette explains the use of the package and demonstrates typical workflows.  
  An RNA-seq workflow\footnote{\url{http://www.bioconductor.org/help/workflows/rnaseqGene/}} 
  on the Bioconductor website covers similar material to this vignette
  but at a slower pace, including the generation of count matrices
  from FASTQ files.
\end{abstract}

\packageVersion{\Sexpr{BiocStyle::pkg_ver("DESeq2")}}

 \vspace{5mm}
  
  \begin{table}
    \begin{tabular}{ | l | }
      \hline 
      If you use \deseqtwo{} in published research, please cite:  \\
      \\
      M. I. Love, W. Huber, S. Anders: \textbf{Moderated estimation of} \\
      \textbf{fold change and dispersion for RNA-seq data with DESeq2}. \\
      \emph{Genome Biology} 2014, \textbf{15}:550. \\
      \url{http://dx.doi.org/10.1186/s13059-014-0550-8}  \\
      \hline 
    \end{tabular}
  \end{table}


<<options, results="hide", echo=FALSE>>=
options(digits=3, prompt=" ", continue=" ")
@

\newpage

\tableofcontents

\newpage

\section{Standard workflow}

\subsection{Quick start}

Here we show the most basic steps for a differential expression analysis.
These steps require that you have a \Rclass{RangedSummarizedExperiment} object
\Robject{se} which contains the counts and information about the samples.
The \Robject{design} indicates that we want to
measure the effect of condition, controlling for batch differences.
The two factor variables \Robject{batch} and \Robject{condition} 
should be columns of \Robject{colData(se)}.

<<quick, eval=FALSE>>=
dds <- DESeqDataSet(se, design = ~ batch + condition)
dds <- DESeq(dds)
res <- results(dds, contrast=c("condition","trt","con"))
@

If you have a count matrix and sample information table, the first
line would use \Rfunction{DESeqDataSetFromMatrix} instead of 
\Rfunction{DESeqDataSet}, as shown in Section~\ref{sec:countmat}.

\subsection{How to get help}

All \deseqtwo{} questions should be posted to the Bioconductor support
site: \url{https://support.bioconductor.org}, which serves as a
repository of questions and answers. See the first question in the
list of Frequently Asked Questions (Section \ref{sec:faq})
for more information about how to construct an informative post.

\subsection{Input data} \label{sec:prep}

\subsubsection{Why un-normalized counts?}

As input, the \deseqtwo{} package expects count data as obtained, e.\,g.,
from RNA-seq or another high-throughput sequencing experiment, in the form of a
matrix of integer values. The value in the $i$-th row and the $j$-th column of
the matrix tells how many reads can be assigned to gene $i$ in sample $j$.
Analogously, for other types of assays, the rows of the matrix might correspond
e.\,g.\ to binding regions (with ChIP-Seq) or peptide sequences (with
quantitative mass spectrometry). We will list methods for obtaining count matrices
in the sections below.

The values in the matrix should be un-normalized counts of sequencing reads (for
single-end RNA-seq) or fragments (for paired-end RNA-seq). 
The \href{http://www.bioconductor.org/help/workflows/rnaseqGene/}{RNA-seq workflow}
describes multiple techniques for preparing such count matrices.
It is important to provide count matrices as input, as \deseqtwo{}'s
statistical model \cite{Love2014} only holds for raw counts, which allow
assessing the measurement precision correctly. The \deseqtwo{}
model internally corrects for library size, so transformed or
normalized values such as counts scaled by library size should not
be used as input.
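
As a quick sanity check (a sketch, not evaluated here, assuming a count
matrix \Robject{countData} as constructed below), one can verify that
the input values are indeed un-normalized, non-negative integer counts:

<<checkRawCounts, eval=FALSE>>=
# all values should be non-negative integers
all(countData == floor(countData) & countData >= 0)
@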

\subsubsection{\Rclass{SummarizedExperiment} input} \label{sec:sumExpInput}

The class used by the \deseqtwo{} package to store the read counts 
is \Rclass{DESeqDataSet} which extends the \Rclass{RangedSummarizedExperiment} 
class of the \Biocpkg{SummarizedExperiment} package. 
This facilitates preparation steps and also downstream exploration of results. 
For counting aligned reads in genes, the \Rfunction{summarizeOverlaps} function of
\Biocpkg{GenomicAlignments} with \Robject{mode="Union"} is
encouraged, resulting in a \Rclass{RangedSummarizedExperiment} object.
Other methods for obtaining count matrices are described in the next section.

An example of the steps to produce a \Rclass{RangedSummarizedExperiment} can
be found in an RNA-seq workflow on the Bioconductor 
website: \url{http://www.bioconductor.org/help/workflows/rnaseqGene/}
and in the vignette for the data package \Biocexptpkg{airway}.
Here we load the \Rclass{RangedSummarizedExperiment} from that package in
order to build a \Rclass{DESeqDataSet}.

<<loadSumExp>>=
library("airway")
data("airway")
se <- airway
@

A \Rclass{DESeqDataSet} object must have an associated design formula.  
The design formula expresses the variables which will be
used in modeling. The formula should be a tilde ($\sim$) followed by the
variables with plus signs between them (it will be coerced into a
\Rclass{formula} if it is not already).  An intercept is included,
representing the base mean of counts. The design can be changed later,
but then all differential analysis steps should be repeated,
as the design formula is used to estimate the dispersions and
the log2 fold changes of the model.
The constructor function below shows the generation of a
\Rclass{DESeqDataSet} from a \Rclass{RangedSummarizedExperiment} \Robject{se}. 

\emph{Note}: In order to benefit from the default settings of the
package, you should put the variable of interest at the end of the
formula and make sure the control level is the first level.

<<sumExpInput>>=
library("DESeq2")
ddsSE <- DESeqDataSet(se, design = ~ cell + dex)
ddsSE
@

\subsubsection{Count matrix input} \label{sec:countmat}

Alternatively, the function \Rfunction{DESeqDataSetFromMatrix} can be
used if you already have a matrix of read counts prepared from another
source. Another method for quickly producing count matrices 
from alignment files is the \Rfunction{featureCounts} function
in the \Biocpkg{Rsubread} package.
To use \Rfunction{DESeqDataSetFromMatrix}, the user should provide 
the counts matrix, the information about the samples (the columns of the 
count matrix) as a \Rclass{DataFrame} or \Rclass{data.frame}, 
and the design formula.

To demonstrate the use of \Rfunction{DESeqDataSetFromMatrix}, 
we will read in count data from the \Biocexptpkg{pasilla} package.
We read in a count matrix, which we will name \Robject{countData}, 
and the sample information table, which we will name \Robject{colData}. 
Further below we describe how to extract 
these objects from, e.g. \Rfunction{featureCounts} output.

<<loadPasilla>>=
library("pasilla")
pasCts <- system.file("extdata", "pasilla_gene_counts.tsv",
                 package="pasilla", mustWork=TRUE)
pasAnno <- system.file("extdata", "pasilla_sample_annotation.csv",
                       package="pasilla", mustWork=TRUE)
countData <- as.matrix(read.csv(pasCts,sep="\t",row.names="gene_id"))
colData <- read.csv(pasAnno, row.names=1)
colData <- colData[,c("condition","type")]
@ 

We examine the count matrix and column data to see if they are consistent:

<<showPasilla>>=
head(countData)
head(colData)
@ 

Note that these are not in the same order with respect to samples! 
It is critical that the columns of the count matrix and the rows of
the column data (information about samples) are in the same order.
We should re-arrange one or the other so that they are consistent in
terms of sample order (if we do not, later functions would produce
an error). We additionally need to chop off the \Robject{"fb"} suffix from
the row names of \Robject{colData}, so the naming is consistent.

<<reorderPasilla>>=
rownames(colData) <- sub("fb","",rownames(colData))
all(rownames(colData) %in% colnames(countData))
countData <- countData[, rownames(colData)]
all(rownames(colData) == colnames(countData))
@ 

If you have used the \Rfunction{featureCounts} function in the 
\Biocpkg{Rsubread} package, the matrix of read counts can be directly 
provided from the \Robject{"counts"} element in the list output.
The count matrix and column data can typically be read into R 
from flat files using base R functions such as \Rfunction{read.csv} 
or \Rfunction{read.delim}.
For \textit{HTSeq} count files, see the dedicated input function below.
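
As a minimal sketch of the \Rfunction{featureCounts} route (not
evaluated here; \Robject{bamFiles} and \texttt{genes.gtf} are
hypothetical inputs), the \Robject{counts} element can be passed
directly to \Rfunction{DESeqDataSetFromMatrix}:

<<featureCountsSketch, eval=FALSE>>=
library("Rsubread")
# count reads over genes in a GTF annotation (hypothetical file paths)
fc <- featureCounts(files=bamFiles, annot.ext="genes.gtf",
                    isGTFAnnotationFile=TRUE)
dds <- DESeqDataSetFromMatrix(countData = fc$counts,
                              colData = colData,
                              design = ~ condition)
@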

With the count matrix, \Robject{countData}, and the sample
information, \Robject{colData}, we can construct a \Rclass{DESeqDataSet}:

<<matrixInput>>=
dds <- DESeqDataSetFromMatrix(countData = countData,
                              colData = colData,
                              design = ~ condition)
dds
@

If you have additional feature data, it can be added to the
\Rclass{DESeqDataSet} by adding to the metadata columns of a newly
constructed object. (Here we add redundant data just for demonstration, as
the gene names are already the rownames of the \Robject{dds}.)

<<addFeatureData>>=
featureData <- data.frame(gene=rownames(countData))
(mcols(dds) <- DataFrame(mcols(dds), featureData))
@ 

\subsubsection{tximport: transcript abundance summarized to gene-level}

Users can create gene-level count matrices for use with \deseqtwo{}
by importing information using the \Biocpkg{tximport} package.
This workflow allows users to import transcript abundance estimates
from a variety of external software, including the following methods:

\begin{itemize}
\item \href{http://www.cs.cmu.edu/~ckingsf/software/sailfish/}{Sailfish} 
  \cite{Patro2014Sailfish}
\item \href{http://combine-lab.github.io/salmon/}{Salmon} 
  \cite{Patro2015Salmon}
\item
  \href{https://pachterlab.github.io/kallisto/about.html}{kallisto} 
  \cite{Bray2015Near}
\item \href{http://deweylab.github.io/RSEM/}{RSEM} 
  \cite{Li2011RSEM}
\end{itemize}

Some advantages of using the above methods for transcript abundance
estimation are: (i) this approach corrects for potential changes
in gene length across samples 
(e.g. from differential isoform usage) \cite{Trapnell2013Differential},
(ii) some of these methods (\textit{Sailfish, Salmon, kallisto}) 
are substantially faster and require less memory
and disk usage compared to alignment-based methods that require
creation and storage of BAM files, and
(iii) it is possible to avoid discarding those fragments that can
align to multiple genes with homologous sequence, thus increasing
sensitivity \cite{Robert2015Errors}.

Full details on the motivation and methods for importing transcript
level abundance and count estimates, summarizing to gene-level count matrices 
and producing an offset which corrects for potential changes in average
transcript length across samples are described in \cite{Soneson2015}.
The \textit{tximport}$\rightarrow$\deseqtwo{} approach uses estimated
gene counts (rounded, but not normalized) instead of the raw count of fragments
which can be unambiguously assigned to a gene.

Here, we demonstrate how to import transcript abundances
and construct a gene-level \Rclass{DESeqDataSet} object
from \textit{Salmon} \texttt{quant.sf} files, which are
stored in the \Biocexptpkg{tximportData} package.
Note that, instead of locating \Robject{dir} using \Rfunction{system.file},
a user would typically just provide a path, e.g. \texttt{/path/to/quant/files}.
For further details on use of \Rfunction{tximport}, 
including the construction of the \Robject{tx2gene} table for linking
transcripts to genes, please refer to the \Biocpkg{tximport} package vignette. 

<<tximport>>=
library("tximport")
library("readr")
library("tximportData")
dir <- system.file("extdata", package="tximportData")
samples <- read.table(file.path(dir,"samples.txt"), header=TRUE)
files <- file.path(dir,"salmon", samples$run, "quant.sf")
names(files) <- paste0("sample",1:6)
tx2gene <- read.csv(file.path(dir, "tx2gene.csv"))
txi <- tximport(files, type="salmon", tx2gene=tx2gene, reader=read_tsv)
@ 

Next we create a condition vector to demonstrate building a
\Rclass{DESeqDataSet}. For a typical use, this information would already
be present as a column of the \Robject{samples} table.
The best practice is to read \Robject{colData} from a CSV or TSV file, 
and to construct \Robject{files} 
from a column of \Robject{colData}, as shown in the code chunk above.

<<txi2dds>>=
coldata <- data.frame(condition=factor(rep(c("A","B"),each=3)))
rownames(coldata) <- colnames(txi$counts)
ddsTxi <- DESeqDataSetFromTximport(txi, colData=coldata,
                                   design=~ condition)
@

The \Robject{ddsTxi} object can then be used as \Robject{dds} in the
following analysis steps.

\subsubsection{\textit{HTSeq} input}

You can use the function \Rfunction{DESeqDataSetFromHTSeqCount} if you
have used \texttt{htseq-count} from the \textit{HTSeq} python  
package\footnote{available from \url{http://www-huber.embl.de/users/anders/HTSeq}, described in \cite{Anders:2014:htseq}}.  
For an example of using the python scripts, see the
\Biocexptpkg{pasilla} data package. First you will want to specify a
variable which points to the directory in which the \textit{HTSeq}
output files are located. 

<<htseqDirI, eval=FALSE>>=
directory <- "/path/to/your/files/"
@ 

However, for demonstration purposes only, the following line of
code points to the directory containing the demo \textit{HTSeq} output
files packaged with the \Biocexptpkg{pasilla} package.

<<htseqDirII>>=
directory <- system.file("extdata", package="pasilla", mustWork=TRUE)
@ 

We specify which files to read in using \Rfunction{list.files},
and select those files which contain the string \Robject{"treated"} 
using \Rfunction{grep}. The \Rfunction{sub} function is used to 
chop up the sample filename to obtain the condition status, or 
you might alternatively read in a phenotypic table 
using \Rfunction{read.table}.

<<htseqInput>>=
sampleFiles <- grep("treated",list.files(directory),value=TRUE)
sampleCondition <- sub("(.*treated).*","\\1",sampleFiles)
sampleTable <- data.frame(sampleName = sampleFiles,
                          fileName = sampleFiles,
                          condition = sampleCondition)
ddsHTSeq <- DESeqDataSetFromHTSeqCount(sampleTable = sampleTable,
                                       directory = directory,
                                       design= ~ condition)
ddsHTSeq
@

\subsubsection{Pre-filtering}

While it is not necessary to pre-filter low count genes before running the \deseqtwo{}
functions, there are two reasons which make pre-filtering useful:
by removing rows in which there are no reads or nearly no reads,
we reduce the memory size of the \Robject{dds} data object and 
we increase the speed of the transformation
and testing functions within \deseqtwo{}. Here we perform a minimal
pre-filtering to remove rows that have only 0 or 1 read. Note that stricter
filtering to increase power is \textit{automatically} applied via independent filtering
on the mean of normalized counts within the \Rfunction{results}
function, which will be discussed in Section~\ref{sec:autoFilt}.

<<prefilter>>=
dds <- dds[ rowSums(counts(dds)) > 1, ]
@ 

\subsubsection{Note on factor levels} \label{sec:factorLevels}

By default, R will choose a \textit{reference level} for factors based
on alphabetical order. Then, if you never tell the \deseqtwo{} functions
which level you want to compare against (e.g. which level represents
the control group), the comparisons will be based on the alphabetical
order of the levels. There are two solutions: you can either
explicitly tell \Rfunction{results} which comparison to make using the
\Robject{contrast} argument (this will be shown later), or you can
explicitly set the factor levels. Setting the factor levels can be done in two ways,
either using \Rfunction{factor}:

<<factorlvl>>=
dds$condition <- factor(dds$condition, levels=c("untreated","treated"))
@ 

...or using \Rfunction{relevel}, just specifying the reference level:

<<relevel>>=
dds$condition <- relevel(dds$condition, ref="untreated")
@ 

If you need to subset the columns of a \Rclass{DESeqDataSet},
i.e., when removing certain samples from the analysis, it is possible
that all the samples for one or more levels of a variable in the design
formula would be removed. In this case, the \Rfunction{droplevels} function can be used
to remove those levels which do not have samples in the current \Rclass{DESeqDataSet}:

<<droplevels>>=
dds$condition <- droplevels(dds$condition)
@ 

\subsubsection{Collapsing technical replicates}

\deseqtwo{} provides a function \Rfunction{collapseReplicates} which can
assist in combining the counts from technical replicates into single
columns of the count matrix. The term ``technical replicate'' 
implies multiple sequencing runs of the same library. 
You should not collapse biological replicates using this function.
See the manual page for an example of the use of
\Rfunction{collapseReplicates}. 
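
A minimal sketch of its use (not evaluated here; the columns
\Robject{sample} and \Robject{run} of \Robject{colData(dds)} are
hypothetical) might look like:

<<collapseSketch, eval=FALSE>>=
# combine counts from multiple sequencing runs of the same library;
# 'sample' identifies the biological sample, 'run' the sequencing run
ddsColl <- collapseReplicates(dds, groupby=dds$sample, run=dds$run)
@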

\subsubsection{About the pasilla dataset}

We continue with the \Biocexptpkg{pasilla} data constructed from the
count matrix method above. This data set is from an experiment on
\emph{Drosophila melanogaster} cell cultures which investigated the
effect of RNAi knock-down of the splicing factor \emph{pasilla}
\cite{Brooks2010}.  A detailed account of the production of
the \Biocexptpkg{pasilla} data is provided in the vignette of the 
data package \Biocexptpkg{pasilla}.

\subsection{Differential expression analysis} \label{sec:de}

The standard differential expression analysis steps are wrapped
into a single function, \Rfunction{DESeq}. The estimation steps performed
by this function are described in Section~\ref{sec:glm}, in the manual page for
\Robject{?DESeq} and in the Methods section of the \deseqtwo{} publication \cite{Love2014}. 
The individual sub-functions which are called by \Rfunction{DESeq}
are still available, described in Section~\ref{sec:steps}. 

Results tables are generated using the function \Rfunction{results}, which
extracts a results table with log2 fold changes, $p$ values and adjusted
$p$ values. With no arguments to \Rfunction{results}, the results will be for
the last variable in the design formula, and if this is a factor, 
the comparison will be the last level of this variable over the first level. 
Details about the comparison are printed to the console. The text, \texttt{condition}
\texttt{treated vs untreated}, tells you that the estimates are of the logarithmic
fold change $\log_2 ( \textrm{treated} / \textrm{untreated} )$.

<<deseq>>=
dds <- DESeq(dds)
res <- results(dds)
res
@ 

These steps should take less than 30 seconds for most analyses. For
experiments with many samples (e.g. 100 samples), one can take
advantage of parallelized computation.  Both of the above functions
have an argument \Robject{parallel} which if set to \Robject{TRUE} can
be used to distribute computation across cores specified by the
\Rfunction{register} function of \Biocpkg{BiocParallel}. For example,
the following chunk (not evaluated here), would register 4 cores, and
then the two functions above, with \Robject{parallel=TRUE}, would
split computation over these cores. 

<<parallel, eval=FALSE>>=
library("BiocParallel")
register(MulticoreParam(4))
@

We can order our results table by the smallest adjusted $p$ value:

<<resOrder>>=
resOrdered <- res[order(res$padj),]
@

We can summarize some basic tallies using the
\Rfunction{summary} function.

<<sumRes>>=
summary(res)
@ 

How many adjusted $p$ values were less than 0.1?

<<sumRes01>>=
sum(res$padj < 0.1, na.rm=TRUE)
@ 

The \Rfunction{results} function contains a number of arguments to
customize the results table which is generated.  Note that the
\Rfunction{results} function automatically performs independent
filtering based on the mean of normalized counts for each gene,
optimizing the number of genes which will have an adjusted $p$ value
below a given FDR cutoff, \Robject{alpha}.
Independent filtering is further discussed in Section~\ref{sec:autoFilt}.
By default the argument
\Robject{alpha} is set to $0.1$.  If the adjusted $p$ value cutoff
will be a value other than $0.1$, \Robject{alpha} should be set to
that value:

<<resAlpha05>>=
res05 <- results(dds, alpha=0.05)
summary(res05)
sum(res05$padj < 0.05, na.rm=TRUE)
@ 

A generalization of the idea of $p$ value filtering is to \textit{weight} hypotheses
to optimize power. A new Bioconductor package, \Biocpkg{IHW}, is now available
that implements the method of \textit{Independent Hypothesis Weighting} \cite{Ignatiadis2015}.
Here we show the use of \textit{IHW} for $p$ value adjustment of \deseqtwo{} results.
For more details, please see the vignette of the \Biocpkg{IHW} package.
Note that the \textit{IHW} result object is stored in the metadata.

<<IHW>>=
library("IHW")
resIHW <- results(dds, filterFun=ihw)
summary(resIHW)
sum(resIHW$padj < 0.1, na.rm=TRUE)
metadata(resIHW)$ihwResult
@ 

If a multi-factor design is used, or if the variable in the design
formula has more than two levels, the \Robject{contrast} argument of
\Rfunction{results} can be used to extract different comparisons from
the \Rclass{DESeqDataSet} returned by \Rfunction{DESeq}.
Multi-factor designs are discussed further in Section~\ref{sec:multifactor},
and the use of the \Robject{contrast} argument is discussed in Section~\ref{sec:contrasts}.

For advanced users, note that all the values calculated by the \deseqtwo{} 
package are stored in the \Rclass{DESeqDataSet} object, and access 
to these values is discussed in Section~\ref{sec:access}.

\subsection{Exploring and exporting results}

\subsubsection{MA-plot}

\begin{figure}[tb]
\includegraphics[width=.49\textwidth]{figure/MANoPrior-1}
\includegraphics[width=.49\textwidth]{figure/MA-1}
\caption{
  MA-plot.
  These plots show the log2 fold changes from the treatment over
  the mean of normalized counts, i.e. the average of counts normalized by
  size factors. The left plot shows the ``unshrunken'' log2 fold changes, 
  while the right plot, produced by the first \Rfunction{plotMA} call in
  this section, shows the shrinkage 
  of log2 fold changes resulting from the incorporation of a zero-centered
  normal prior. The shrinkage is greater for the log2 fold change
  estimates from genes with low counts and high dispersion, 
  as can be seen by the narrowing of the spread of the leftmost points 
  in the right plot.}
\label{fig:MA}
\end{figure}

In \deseqtwo{}, the function \Rfunction{plotMA} shows the log2
fold changes attributable to a given variable over the mean of normalized counts.
Points will be colored red if the adjusted $p$ value is less than 0.1.  
Points which fall out of the window are plotted as open triangles pointing 
either up or down.

<<MA, fig.width=4.5, fig.height=4.5>>=
plotMA(res, main="DESeq2", ylim=c(-2,2))
@

After calling \Rfunction{plotMA}, one can use the function
\Rfunction{identify} to interactively detect the row number of
individual genes by clicking on the plot. One can then recover
the gene identifiers by saving the resulting indices:

<<MAidentify, eval=FALSE>>=
idx <- identify(res$baseMean, res$log2FoldChange)
rownames(res)[idx]
@ 

The MA-plot of log2 fold changes returned by \deseqtwo{} allows us to
see how the shrinkage of fold changes works for genes with low
counts. You can still obtain results tables which include the
``unshrunken'' log2 fold changes (for a simple comparison, the ratio
of the mean normalized counts in the two groups). A column
\Robject{lfcMLE} with the unshrunken maximum likelihood estimate (MLE)
for the log2 fold change will be added with an additional argument to
\Rfunction{results}:

<<resMLE>>=
resMLE <- results(dds, addMLE=TRUE)
head(resMLE, 4)
@ 

One can make an MA-plot of the unshrunken estimates like so:

<<MANoPrior, fig.width=4.5, fig.height=4.5>>=
plotMA(resMLE, MLE=TRUE, main="unshrunken LFC", ylim=c(-2,2))
@

\subsubsection{Plot counts} \label{sec:plotcounts}

It can also be useful to examine the counts of reads for a single gene
across the groups. A simple function for making this
plot is \Rfunction{plotCounts}, which normalizes counts by sequencing depth
and adds a pseudocount of $\frac{1}{2}$ to allow for log scale plotting.
The counts are grouped by the variables in \Robject{intgroup}, where
more than one variable can be specified. Here we specify the gene
which had the smallest $p$ value from the results table created
above. You can select the gene to plot by rowname or by numeric index.

<<plotCounts, dev="pdf", fig.width=4.5, fig.height=5>>=
plotCounts(dds, gene=which.min(res$padj), intgroup="condition")
@ 

For customized plotting, an argument \Robject{returnData} specifies
that the function should only return a \Rclass{data.frame} for
plotting with \Rfunction{ggplot}.

<<plotCountsAdv, dev="pdf", fig.width=3.5, fig.height=3.5>>=
d <- plotCounts(dds, gene=which.min(res$padj), intgroup="condition", 
                returnData=TRUE)
library("ggplot2")
ggplot(d, aes(x=condition, y=count)) + 
  geom_point(position=position_jitter(w=0.1,h=0)) + 
  scale_y_log10(breaks=c(25,100,400))
@ 

\begin{figure}
\includegraphics[width=.49\textwidth]{figure/plotCounts-1}
\includegraphics[width=.49\textwidth]{figure/plotCountsAdv-1}
\caption{
  Plot of counts for one gene.
  The plot of normalized counts (plus a pseudocount of $\frac{1}{2}$)
  either made using the \Rfunction{plotCounts} function (left)
  or using another plotting library (right, using \CRANpkg{ggplot2}).}
\label{fig:plotcounts}
\end{figure}

\subsubsection{More information on results columns} \label{sec:moreInfo}

Information about which variables and tests were used can be found by calling
the function \Rfunction{mcols} on the results object.

<<metadata>>=
mcols(res)$description
@

For a particular gene, a log2 fold change of $-1$ for
\Robject{condition treated vs untreated} means that the treatment
induces a multiplicative change in observed gene expression level of
$2^{-1} = 0.5$ compared to the untreated condition. If the variable of
interest is continuous-valued, then the reported log2 fold change is
per unit of change of that variable.

\textbf{Note on $p$ values set to NA}: some values in the results table
can be set to \Robject{NA} for one of the following reasons:

\begin{enumerate} 
  \item If within a row, all samples have zero counts, 
    the \Robject{baseMean} column will be zero, and the
    log2 fold change estimates, $p$ value and adjusted $p$ value
    will all be set to \texttt{NA}.
  \item If a row contains a sample with an extreme count outlier
    then the $p$ value and adjusted $p$ value will be set to \texttt{NA}.
    These outlier counts are detected by Cook's distance. Customization
    of this outlier filtering and description of functionality for 
    replacement of outlier counts and refitting is described in 
    Section~\ref{sec:outlierApproach}.
  \item If a row is filtered by automatic independent filtering, 
    for having a low mean normalized count, then only the adjusted $p$
    value will be set to \texttt{NA}. 
    Description and customization of independent filtering is 
    described in Section~\ref{sec:autoFilt}.
\end{enumerate}
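
As a quick check (not evaluated here), one can tally how many $p$ values
and adjusted $p$ values were set to \Robject{NA} for these reasons:

<<tallyNA, eval=FALSE>>=
# NA p values arise from all-zero rows or outlier filtering;
# additional NA adjusted p values arise from independent filtering
sum(is.na(res$pvalue))
sum(is.na(res$padj))
@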

\subsubsection{Rich visualization and reporting of results}

\textbf{ReportingTools.} An HTML report of the results with plots and sortable/filterable columns
can be generated using the \Biocpkg{ReportingTools} package
on a \Rclass{DESeqDataSet} that has been processed by the \Rfunction{DESeq} function.
For a code example, see the ``RNA-seq differential expression'' vignette at
the \Biocpkg{ReportingTools} page, or the manual page for the 
\Rfunction{publish} method for the \Rclass{DESeqDataSet} class.

\textbf{regionReport.} An HTML and PDF summary of the results with plots
can also be generated using the \Biocpkg{regionReport} package.
The \Rfunction{DESeq2Report} function should be run on a 
\Rclass{DESeqDataSet} that has been processed by the \Rfunction{DESeq} function.
For more details see the manual page for \Rfunction{DESeq2Report} 
and an example vignette in the \Biocpkg{regionReport} package.

\textbf{Glimma.} Interactive visualization of \deseqtwo{} output, 
including MA-plots (also called MD-plots), can be generated using the
\Biocpkg{Glimma} package. See the manual page for \Rfunction{glMDPlot.DESeqResults}.

\textbf{pcaExplorer.} Interactive visualization of \deseqtwo{} output,
including PCA plots, boxplots of counts and other useful summaries can be
generated using the \Biocpkg{pcaExplorer} package.
See the ``Launching the application'' section of the package vignette.

\subsubsection{Exporting results to CSV files}

A plain-text file of the results can be exported using the 
base \R{} functions \Rfunction{write.csv} or \Rfunction{write.table}. 
We suggest using a descriptive file name indicating the variable
and levels which were tested.

<<export, eval=FALSE>>=
write.csv(as.data.frame(resOrdered), 
          file="condition_treated_results.csv")
@

Exporting only the results which pass an adjusted $p$ value
threshold can be accomplished with the \Rfunction{subset} function,
followed by the \Rfunction{write.csv} function.

<<subset>>=
resSig <- subset(resOrdered, padj < 0.1)
resSig
@ 
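
The filtered results can then be written out as before (not evaluated
here; the file name is just an example):

<<exportSubset, eval=FALSE>>=
write.csv(as.data.frame(resSig), 
          file="condition_treated_sig_results.csv")
@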

\subsection{Multi-factor designs} \label{sec:multifactor}

Experiments with more than one factor influencing the counts can be
analyzed using design formulas that include the additional variables.  
By adding these to the design, one can control for additional variation
in the counts. For example, if the condition samples are balanced
across experimental batches, including the \Robject{batch} factor in the
design can increase the sensitivity for finding differences due
to \Robject{condition}. There are multiple ways to analyze experiments when the
additional variables are of interest and not just controlling factors 
(see Section \ref{sec:interactions} on interactions).

The data in the \Biocexptpkg{pasilla} package have a condition of interest 
(the column \Robject{condition}), as well as information on the type of sequencing 
which was performed (the column \Robject{type}), as we can see below:

<<multifactor>>=
colData(dds)
@

We create a copy of the \Rclass{DESeqDataSet}, so that we can rerun
the analysis using a multi-factor design.

<<copyMultifactor>>=
ddsMF <- dds
@

We can account for the different types of sequencing, and get a clearer picture
of the differences attributable to the treatment.  As \Robject{condition} is the
variable of interest, we put it at the end of the formula. Thus the \Rfunction{results}
function will by default pull the \Robject{condition} results unless 
\Robject{contrast} or \Robject{name} arguments are specified. 
Then we can re-run \Rfunction{DESeq}:

<<replaceDesign>>=
design(ddsMF) <- formula(~ type + condition)
ddsMF <- DESeq(ddsMF)
@

Again, we access the results using the \Rfunction{results} function.

<<multiResults>>=
resMF <- results(ddsMF)
head(resMF)
@

It is also possible to retrieve the log2 fold changes, $p$ values and adjusted
$p$ values of the \Robject{type} variable. The \Robject{contrast} argument of 
the function \Rfunction{results} takes a character vector of length three:
the name of the variable, the name of the factor level for the numerator
of the log2 ratio, and the name of the factor level for the denominator.
The \Robject{contrast} argument can also take other forms, as
described in the help page for \Rfunction{results} and in Section~\ref{sec:contrasts}.

<<multiTypeResults>>=
resMFType <- results(ddsMF,
                     contrast=c("type", "single-read", "paired-end"))
head(resMFType)
@

If the variable is continuous or an interaction term (see Section~\ref{sec:interactions})
then the results can be extracted using the \Robject{name} argument to \Rfunction{results},
where the name is one of the elements returned by \Robject{resultsNames(dds)}.
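
As a sketch (not evaluated here, assuming a hypothetical continuous
covariate \Robject{age} in the design), such results would be extracted
like so:

<<resultsByName, eval=FALSE>>=
resultsNames(dds)
# extract the per-unit effect of the continuous covariate by name
results(dds, name="age")
@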

\newpage

%---------------------------------------------------
\section{Data transformations and visualization} \label{sec:transf}
%---------------------------------------------------
\subsection{Count data transformations}
%---------------------------------------------------

In order to test for differential expression, we operate on raw counts
and use discrete distributions as described in the previous Section~\ref{sec:de}.
However for other downstream analyses -- 
e.g. for visualization or clustering -- it might be useful 
to work with transformed versions of the count data. 

Perhaps the most obvious choice of transformation is the logarithm.
Since count values for a gene can be zero in some
conditions (and non-zero in others), some advocate the use of
\emph{pseudocounts}, i.\,e.\ transformations of the form
%
\begin{equation}\label{eq:shiftedlog}
  y = \log_2(n + 1)\quad\mbox{or more generally,}\quad y = \log_2(n + n_0),
\end{equation}
%
where $n$ represents the count values and $n_0$ is a positive constant.

In this section, we discuss two alternative
approaches that offer more theoretical justification and a rational way
of choosing the parameter equivalent to $n_0$ above.
The \emph{regularized logarithm} or \emph{rlog} incorporates a prior on
the sample differences \cite{Love2014}, 
and the other uses the concept of variance stabilizing
transformations (VST) \cite{Tibshirani1988,sagmb2003,Anders:2010:GB}.
Both transformations produce transformed data on the $\log_2$ scale
which has been normalized with respect to library size.

The point of these two transformations, the \emph{rlog} and the VST,
is to remove the dependence of the variance on the mean,
particularly the high variance of the logarithm of count data when the
mean is low. Both \emph{rlog} and VST use the experiment-wide trend
of variance over mean, in order to transform the data to remove the
experiment-wide trend. Note that we do not require or
desire that all the genes have \emph{exactly} the same variance after
transformation. Indeed, in Figure~\ref{fig:meansd} below, you will see
that after the transformations the genes with the same mean do not
have exactly the same standard deviations, but that the
experiment-wide trend has flattened. It is those genes with row
variance above the trend which will allow us to cluster samples into
interesting groups.

\textbf{Note on running time:} if you have many samples (e.g. 100s),
the \Rfunction{rlog} function might take too long, and the 
\Rfunction{varianceStabilizingTransformation} is a faster choice.  
The rlog and VST have similar properties, but the rlog requires fitting a shrinkage
term for each sample and each gene which takes time.  See the
\deseqtwo{} paper for more discussion on the differences
\cite{Love2014}. In addition, a new function \Rfunction{vst} provides
an even faster version of the \Rfunction{varianceStabilizingTransformation},
by calculating the global dispersion trend on a subset of the genes
(default 1000). \Rfunction{vst} may be attractive for interactive EDA.

\subsubsection{Blind dispersion estimation}

The two functions, \Rfunction{rlog} and
\Rfunction{varianceStabilizingTransformation}, have an argument
\Robject{blind}, for whether the transformation should be blind to the
sample information specified by the design formula. When
\Robject{blind} equals \Robject{TRUE} (the default), the functions
will re-estimate the dispersions using only an intercept (design
formula $\sim 1$). This setting should be used in order to compare
samples in a manner wholly unbiased by the information about
experimental groups, for example to perform sample QA (quality
assurance) as demonstrated below.
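
For example (not evaluated here), a transformation fully blind to the
design, e.g. for sample QA, can be obtained with:

<<blindRlog, eval=FALSE>>=
# re-estimates dispersions using only an intercept (design ~ 1),
# ignoring information about experimental groups
rldBlind <- rlog(dds, blind=TRUE)
@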

However, blind dispersion estimation is not the appropriate choice if
one expects that many or the majority of genes (rows) will have large
differences in counts which are explainable by the experimental design,
and one wishes to transform the data for downstream analysis. In this
case, using blind dispersion estimation will lead to large estimates
of dispersion, as it attributes differences due to experimental design
as unwanted ``noise'', and will result in overly shrinking the transformed
values towards each other. 
By setting \Robject{blind} to \Robject{FALSE}, the dispersions
already estimated will be used to perform transformations, or if not
present, they will be estimated using the current design formula. Note
that only the fitted dispersion estimates from the mean-dispersion trend
line are used in the transformation (the global dependence of
dispersion on mean for the entire experiment).
Hence, even with \Robject{blind} set to \Robject{FALSE}, the
transformation for the most part does not use the information about
which samples were in which experimental group.

\subsubsection{Extracting transformed values}

These functions return an object of class \Rclass{DESeqTransform}
which is a subclass of \Rclass{RangedSummarizedExperiment}. 
For $\sim 20$ samples, running on a newly created \Robject{DESeqDataSet},
\Rfunction{rlog} may take 30 seconds, 
\Rfunction{varianceStabilizingTransformation} may take 5 seconds, and
\Rfunction{vst} less than 1 second (by subsetting to 1000 genes for
calculating the global dispersion trend).
However, the running times are shorter and more similar with \Rcode{blind=FALSE} and
if the function \Rfunction{DESeq} has already been run, because then
it is not necessary to re-estimate the dispersion values.
The \Rfunction{assay} function is used to extract the matrix of normalized values.

<<rlogAndVST>>=
rld <- rlog(dds, blind=FALSE)
vsd <- varianceStabilizingTransformation(dds, blind=FALSE)
vsd.fast <- vst(dds, blind=FALSE)
head(assay(rld), 3)
@

\subsubsection{Regularized log transformation}

The function \Rfunction{rlog} stands for \emph{regularized log};
it transforms the original count data to the log2 scale by fitting a
model with a term for each sample and a prior distribution on the
coefficients which is estimated from the data. This is the same kind
of shrinkage (sometimes referred to as regularization, or moderation)
of log fold changes used by \Rfunction{DESeq} and
\Rfunction{nbinomWaldTest}, as seen in Figure \ref{fig:MA}. The
resulting data contains elements defined as:

$$ \log_2(q_{ij}) = \beta_{i0} + \beta_{ij} $$

where $q_{ij}$ is a parameter proportional to the expected true
concentration of fragments for gene $i$ and sample $j$ (see
Section~\ref{sec:glm}), $\beta_{i0}$ is an intercept which does not
undergo shrinkage, and $\beta_{ij}$ is the sample-specific effect
which is shrunk toward zero based on the dispersion-mean trend over
the entire dataset. The trend typically captures high dispersions for
low counts, and therefore these genes exhibit higher shrinkage from
the \Rfunction{rlog}.

Note that, as $q_{ij}$ represents the part of the mean value
$\mu_{ij}$ after the size factor $s_j$ has been divided out, it is
clear that the rlog transformation inherently accounts for differences
in sequencing depth.  Without priors, this design matrix would lead to
a non-unique solution, however the addition of a prior on
non-intercept betas allows for a unique solution to be found.  The
regularized log transformation is preferable to the variance
stabilizing transformation if the size factors vary widely.
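
As a rough guide (not evaluated here), one can inspect the spread of
the size factors to help choose between the two transformations:

<<sizeFactorRange, eval=FALSE>>=
# widely varying size factors favor the rlog over the VST
range(sizeFactors(dds))
@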

\subsubsection{Variance stabilizing transformation}

Above, we used a parametric fit for the dispersion. In this case, the
closed-form expression for the variance stabilizing transformation is
used by \Rfunction{varianceStabilizingTransformation}, which is
derived in the file \texttt{vst.pdf}, that is distributed in the
package alongside this vignette. If a local fit is used (option
\Robject{fitType="locfit"} to \Rfunction{estimateDispersions}) a
numerical integration is used instead.

<<vsd1, echo=FALSE, fig.width=4.5, fig.height=4.5, fig.show="asis", fig.small=TRUE, fig.pos="!bt", fig.cap="VST and log2. Graphs of the variance stabilizing transformation for sample 1, in blue, and of the transformation $f(n) = \\log_2(n/s_1)$, in black. $n$ are the counts and $s_1$ is the size factor for the first sample.\\label{figure/vsd1-1}">>=
px     <- counts(dds)[,1] / sizeFactors(dds)[1]
ord    <- order(px)
ord    <- ord[px[ord] < 150]
ord    <- ord[seq(1, length(ord), length=50)]
last   <- ord[length(ord)]
vstcol <- c("blue", "black")
matplot(px[ord],
        cbind(assay(vsd)[, 1], log2(px))[ord, ],
        type="l", lty=1, col=vstcol, xlab="n", ylab="f(n)")
legend("bottomright",
       legend = c(
        expression("variance stabilizing transformation"),
        expression(log[2](n/s[1]))),
       fill=vstcol)
@

The resulting variance stabilizing transformation is shown in Figure
\ref{figure/vsd1-1}.  The code that produces the figure is hidden from
this vignette for the sake of brevity, but can be seen in the
\texttt{.Rnw} or \texttt{.R} source file. Note that the vertical axis
in such plots is the square root of the variance over all samples, and
thus includes the variance due to the experimental conditions.  While a
flat curve of the square root of variance over the mean may seem like
the goal of such transformations, this may be unreasonable in the case
of datasets with many true differences due to the experimental
conditions.

\subsubsection{Effects of transformations on the variance}

Figure~\ref{fig:meansd} plots the standard deviation of the transformed
data, across samples, against the mean, using the shifted
logarithm transformation \eqref{eq:shiftedlog}, the
regularized log transformation and the variance stabilizing transformation.
The shifted logarithm has elevated standard deviation in the lower
count range, and the regularized log to a lesser extent, while for
the variance stabilized data the standard deviation is roughly constant
along the whole dynamic range.

<<meansd, fig.width=4, fig.height=3, fig.show="asis", fig.wide=TRUE, fig.pos="tb", out.width=".32\\linewidth", fig.cap="Per-gene standard deviation (taken across samples), against the rank of the mean. {\\bfhelvet(a)} for the shifted logarithm $\\log_2(n+1)$, the regularized log transformation {\\bfhelvet(b)} and the variance stabilizing transformation {\\bfhelvet(c)}.\\label{fig:meansd}", fig.subcap="">>=
library("vsn")
notAllZero <- (rowSums(counts(dds))>0)
meanSdPlot(log2(counts(dds,normalized=TRUE)[notAllZero,] + 1))
meanSdPlot(assay(rld[notAllZero,]))
meanSdPlot(assay(vsd[notAllZero,]))
@

%---------------------------------------------------------------
\subsection{Data quality assessment by sample clustering and visualization}\label{sec:quality}
%---------------------------------------------------------------

Data quality assessment and quality control (i.\,e.\ the removal of
insufficiently good data) are essential steps of any data
analysis. These steps should typically be performed 
very early in the analysis of a new data set,
preceding or in parallel to the differential expression testing.

We define the term \emph{quality} as 
\emph{fitness for purpose}\footnote{\url{http://en.wikipedia.org/wiki/Quality_\%28business\%29}}.
Our purpose is the detection of differentially expressed genes, and we
are looking in particular for samples whose experimental treatment
suffered from an abnormality that renders the data points obtained from
these particular samples detrimental to our purpose.

\subsubsection{Heatmap of the count matrix}\label{sec:hmc}
To explore a count matrix, it is often instructive to look at it as a
heatmap.  Below we show how to produce such a heatmap 
for various transformations of the data.

<<heatmap, dev="pdf", fig.width=5, fig.height=7>>=
library("pheatmap")
select <- order(rowMeans(counts(dds,normalized=TRUE)),
                decreasing=TRUE)[1:20]

nt <- normTransform(dds) # defaults to log2(x+1)
log2.norm.counts <- assay(nt)[select,]
df <- as.data.frame(colData(dds)[,c("condition","type")])
pheatmap(log2.norm.counts, cluster_rows=FALSE, show_rownames=FALSE,
         cluster_cols=FALSE, annotation_col=df)

pheatmap(assay(rld)[select,], cluster_rows=FALSE, show_rownames=FALSE,
         cluster_cols=FALSE, annotation_col=df)

pheatmap(assay(vsd)[select,], cluster_rows=FALSE, show_rownames=FALSE,
         cluster_cols=FALSE, annotation_col=df)
@

\begin{figure*}
\includegraphics[width=.32\textwidth]{figure/heatmap-1}
\includegraphics[width=.32\textwidth]{figure/heatmap-2}
\includegraphics[width=.32\textwidth]{figure/heatmap-3}
\caption{Heatmaps showing the expression data of the \Sexpr{length(select)}
  most highly expressed genes, shown as log2 of normalized counts (left),
  after the regularized log transformation (center) and after the variance
  stabilizing transformation (right).}
\label{fig:heatmap2}
\end{figure*}

\subsubsection{Heatmap of the sample-to-sample distances}\label{sec:dists}

Another use of the transformed data is sample clustering. Here, we apply the
\Rfunction{dist} function to the transpose of the transformed count matrix to get
sample-to-sample distances. We could alternatively use the variance stabilizing
transformation here.

<<sampleClust>>=
sampleDists <- dist(t(assay(rld)))
@

A heatmap of this distance matrix gives us an overview over similarities
and dissimilarities between samples (Figure \ref{figure/figHeatmapSamples-1}):
We have to provide the sample distances to the
\Robject{clustering\_distance} arguments of the \Rfunction{pheatmap}
function; otherwise the function would calculate a clustering based on
the distances between the rows/columns of the distance matrix itself.

<<figHeatmapSamples, dev="pdf", fig.width=7, fig.height=7, fig.show="asis", fig.small=TRUE, fig.pos="tb", fig.cap="Sample-to-sample distances.  Heatmap showing the Euclidean distances between the samples as calculated from the regularized log transformation.\\label{figure/figHeatmapSamples-1}">>=
library("RColorBrewer")
sampleDistMatrix <- as.matrix(sampleDists)
rownames(sampleDistMatrix) <- paste(rld$condition, rld$type, sep="-")
colnames(sampleDistMatrix) <- NULL
colors <- colorRampPalette( rev(brewer.pal(9, "Blues")) )(255)
pheatmap(sampleDistMatrix,
         clustering_distance_rows=sampleDists,
         clustering_distance_cols=sampleDists,
         col=colors)
@

\subsubsection{Principal component plot of the samples}\label{sec:pca}

Related to the distance matrix of Section~\ref{sec:dists} is the PCA
plot of the samples, which we obtain as follows (Figure \ref{figure/figPCA-1}).

<<figPCA, dev="pdf", fig.width=5, fig.height=3>>=
plotPCA(rld, intgroup=c("condition", "type"))
@

\incfig[tbh]{figure/figPCA-1}{\textwidth}{PCA plot.}{
  PCA plot. The \Sexpr{ncol(rld)} samples shown in the 2D
  plane spanned by their first two principal components. This type of
  plot is useful for visualizing the overall effect of experimental
  covariates and batch effects.
}

It is also possible to customize the PCA plot using the
\Rfunction{ggplot} function.

<<figPCA2, dev="pdf", fig.width=5, fig.height=3>>=
data <- plotPCA(rld, intgroup=c("condition", "type"), returnData=TRUE)
percentVar <- round(100 * attr(data, "percentVar"))
ggplot(data, aes(PC1, PC2, color=condition, shape=type)) +
  geom_point(size=3) +
  xlab(paste0("PC1: ",percentVar[1],"% variance")) +
  ylab(paste0("PC2: ",percentVar[2],"% variance")) + 
  coord_fixed()
@

\incfig[tbh]{figure/figPCA2-1}{\textwidth}{PCA plot.}{
  PCA plot customized using the \CRANpkg{ggplot2} library.
}


\newpage

%--------------------------------------------------
\section{Variations to the standard workflow}
%--------------------------------------------------

\subsection{Wald test individual steps} \label{sec:steps}

The function \Rfunction{DESeq} runs the following functions in order:

<<WaldTest, eval=FALSE>>=
dds <- estimateSizeFactors(dds)
dds <- estimateDispersions(dds)
dds <- nbinomWaldTest(dds)
@

\subsection{Contrasts} \label{sec:contrasts}

A contrast is a linear combination of estimated log2 fold changes,
which can be used to test if differences between groups are equal to
zero.  The simplest use case for contrasts is an experimental design
containing a factor with three levels, say A, B and C.  Contrasts
enable the user to generate results for all 3 possible differences:
log2 fold change of B vs A, of C vs A, and of C vs B.
The \Robject{contrast} argument of the \Rfunction{results} function is
used to extract test results of log2 fold changes of interest, for example:

<<simpleContrast, eval=FALSE>>=
results(dds, contrast=c("condition","C","B"))
@ 

Log2 fold changes can also be added and subtracted by providing a
\Robject{list} to the \Robject{contrast} argument which has two elements:
the names of the log2 fold changes to add, and the names of the log2
fold changes to subtract. The names used in the list should come from
\Robject{resultsNames(dds)}.
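
For example (a sketch, not evaluated here; the coefficient names are
hypothetical and should be checked against \Robject{resultsNames(dds)}),
with the three-level factor above, the C vs B comparison can be
extracted as the difference of two log2 fold changes:

<<contrastListSketch, eval=FALSE>>=
# log2(C/A) - log2(B/A) = log2(C/B)
results(dds, contrast=list("conditionC", "conditionB"))
@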

Alternatively, a numeric vector of the
length of \Robject{resultsNames(dds)} can be provided, for manually
specifying the linear combination of terms.  Demonstrations of the use
of contrasts for various designs can be found in the examples section
of the help page for the \Rfunction{results} function. The
mathematical formula that is used to generate the contrasts can be found in
Section~\ref{sec:ctrstTheory}.
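
A sketch of the numeric form (not evaluated here; the coefficient order
is hypothetical and must match \Robject{resultsNames(dds)}):

<<numericContrastSketch, eval=FALSE>>=
# e.g. with resultsNames(dds) of
# c("Intercept","conditionA","conditionB","conditionC"),
# this numeric contrast also extracts C vs B
results(dds, contrast=c(0, 0, -1, 1))
@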

\subsection{Interactions} \label{sec:interactions}

Interaction terms can be added to the design formula, in order to
test, for example, if the log2 fold change attributable to a given
condition is \textit{different} based on another factor, for example if the
condition effect differs across genotype.

Many users begin by adding interaction terms to the design formula, when
in fact a much simpler approach would give all the results tables that
are desired. We will explain this approach first, because it is
simpler to perform.
If the comparisons of interest are, for example, the effect
of a condition for different sets of samples, a simpler approach than
adding interaction terms explicitly to the design formula is to
perform the following steps:

\begin{enumerate}
\item combine the factors of interest into a single factor with all
  combinations of the original factors 
\item change the design to include just this factor, e.g. \Robject{\lowtilde{} group}
\end{enumerate}

Using this design is similar to adding an interaction term, 
in that it models multiple condition effects which
can be easily extracted with \Rfunction{results}.
Suppose we have two factors \Robject{genotype} (with values I, II, and III) 
and \Robject{condition} (with values A and B), and we want to extract 
the condition effect specifically for each genotype. We could use the
following approach to obtain, e.g. the condition effect for genotype I: 

<<combineFactors, eval=FALSE>>=
dds$group <- factor(paste0(dds$genotype, dds$condition))
design(dds) <- ~ group
dds <- DESeq(dds)
resultsNames(dds)
results(dds, contrast=c("group", "IB", "IA"))
@

<<interFig, dev="pdf", fig.width=4, fig.height=3, echo=FALSE, results="hide">>=
npg <- 20
mu <- 2^c(8,10,9,11,10,12)
cond <- rep(rep(c("A","B"),each=npg),3)
geno <- rep(c("I","II","III"),each=2*npg)
table(cond, geno)
counts <- rnbinom(6*npg, mu=rep(mu,each=npg), size=1/.01)
d <- data.frame(log2c=log2(counts+1), cond, geno)
library(ggplot2)
plotit <- function(d, title) {
  ggplot(d, aes(x=cond, y=log2c, group=geno)) + 
    geom_jitter(size=1.5, position = position_jitter(width=.15)) +
    facet_wrap(~ geno) + 
    stat_summary(fun.y=mean, geom="line", colour="red", size=0.8) + 
    xlab("condition") + ylab("log2(counts+1)") + ggtitle(title)
}
plotit(d, "Gene 1") + ylim(7,13)
lm(log2c ~ cond + geno + geno:cond, data=d)
@ 

<<interFig2, dev="pdf", fig.width=4, fig.height=3,  echo=FALSE, results="hide">>=
mu[4] <- 2^12
mu[6] <- 2^8
counts <- rnbinom(6*npg, mu=rep(mu,each=npg), size=1/.01)
d2 <- data.frame(log2c=log2(counts + 1), cond, geno)
plotit(d2, "Gene 2") + ylim(7,13)
lm(log2c ~ cond + geno + geno:cond, data=d2)
@ 

\begin{figure*}
\includegraphics[width=.49\textwidth]{figure/interFig-1}
\includegraphics[width=.49\textwidth]{figure/interFig2-1}
\caption{
  Genotype-specific condition effects.
  Here, the y-axis represents $\log_2(\textrm{counts}+1)$, and each
  group has 20 samples (black dots). A red line connects the mean of
  the groups within each genotype.
  On the left side (Gene 1), note that the condition effect is consistent
  across genotypes. Although condition A has a different baseline for
  I, II, and III, the condition effect is a log2 fold change of about 2
  for each genotype.
  Using a model with an interaction term \Robject{genotype:condition},
  the interaction terms for genotype II and genotype III will be nearly 0.
  On the right side (Gene 2), we can see that the condition effect is
  not consistent across genotype. Here the main condition effect (the
  effect for the reference genotype I) is again 2. However, this time
  the interaction terms will be around 1 for genotype II and
  -4 for genotype III. This is 
  because the condition effect is higher by 1 for genotype II compared to
  genotype I, and lower by 4 for genotype III compared to genotype I.
  The condition effect for genotype II (or III) is obtained by adding the
  main condition effect and the interaction term for that genotype.
  Such a plot can be made using the \Rfunction{plotCounts} function
  (Section~\ref{sec:plotcounts}).
}
\label{fig:inter}
\end{figure*}

Now we will continue to explain the use of interactions in order to
test for \textit{differences} in condition effects. We continue with
the example of condition effects across three genotypes (I, II, and III).
For a diagram of how interactions might look across genotypes 
please refer to Figure \ref{fig:inter}. 

The key point to remember about designs with interaction terms is
that, unlike for a design \Robject{\lowtilde{} 
  genotype + condition}, where the condition effect represents the
\textit{overall} effect controlling for differences due to genotype, by adding
\Robject{genotype:condition}, the main condition effect only
represents the effect of condition for the \textit{reference level} of
genotype (I, or whichever level was defined by the user as the
reference level). The interaction terms \Robject{genotypeII.conditionB}
and \Robject{genotypeIII.conditionB} give the \textit{difference}
between the condition effect for a given genotype and the condition
effect for the reference genotype. 

This genotype-condition interaction example is examined in further
detail in Example 3 in the help page for \Rfunction{results}, which
can be found by typing \Rcode{?results}. In particular, we show how to
test for differences in the condition effect across genotype, and we
show how to obtain the condition effect for non-reference genotypes.
Note that in \deseqtwo{} version 1.10, the \Rfunction{DESeq} function will turn
off log fold change shrinkage (setting \Robject{betaPrior=FALSE}),
for designs which contain an interaction term. Turning off the log
fold change shrinkage allows the software to use standard model
matrices (as would be produced by \Rfunction{model.matrix}), where the
interaction coefficients are easier to interpret.
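
A brief sketch of these extractions (not evaluated here; the
coefficient names are hypothetical and should be checked with
\Robject{resultsNames(dds)}):

<<interactionSketch, eval=FALSE>>=
# design: ~ genotype + condition + genotype:condition
# the condition effect for the reference genotype I:
results(dds, name="condition_B_vs_A")
# the condition effect for genotype III: main effect plus interaction
results(dds, contrast=list(c("condition_B_vs_A","genotypeIII.conditionB")))
# test if the condition effect differs between genotype III and I:
results(dds, name="genotypeIII.conditionB")
@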

\subsection{Time-series experiments}

There are a number of ways to analyze time-series experiments,
depending on the biological question of interest. In order to test for
any differences over multiple time points, one can use a design
including the time factor, and then test using the likelihood ratio
test as described in Section~\ref{sec:LRT}, where the time factor is
removed in the reduced formula. For a control and treatment time
series, one can use a design formula containing the condition factor,
the time factor, and the interaction of the two. In this case, using
the likelihood ratio test with a reduced model which does not contain
the interaction terms will test whether the condition induces a change
in gene expression at any time point after the reference level time point
(time 0). An example of the latter analysis is provided in an RNA-seq
workflow on the Bioconductor
website: \url{http://www.bioconductor.org/help/workflows/rnaseqGene/}.
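
For instance, a minimal sketch of the latter analysis, assuming
\Robject{colData(dds)} contains the factors \Robject{condition} and
\Robject{time}:

<<timeSeriesLRT, eval=FALSE>>=
# a sketch, assuming factors 'condition' and 'time' in colData(dds)
design(dds) <- ~ condition + time + condition:time
dds <- DESeq(dds, test="LRT", reduced = ~ condition + time)
res <- results(dds)
@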

\subsection{Likelihood ratio test} \label{sec:LRT}

\deseqtwo{} offers two kinds of hypothesis tests: the Wald test, where
we use the estimated standard error of a log2 fold change to test if it is
equal to zero, and the likelihood ratio test (LRT). The LRT examines
two models for the counts, a \emph{full} model with a certain number
of terms and a \emph{reduced} model, in which some of the terms of the
\emph{full} model are removed. The test determines if the increased
likelihood of the data using the extra terms in the \emph{full} model
is more than expected if those extra terms are truly zero.

The LRT is therefore useful for testing multiple
terms at once, for example testing 3 or more levels of a factor,
or all interactions between two variables. 
The LRT for count data is conceptually similar to an analysis of variance (ANOVA)
calculation in linear regression, except that in the case of the Negative
Binomial GLM, we use an analysis of deviance (ANODEV), where the
\emph{deviance} captures the difference in likelihood between a full
and a reduced model.

The likelihood ratio test can be performed by specifying \Rcode{test="LRT"}
when using the \Rfunction{DESeq} function, and
providing a reduced design formula, e.g. one in which a
number of terms from \Robject{design(dds)} are removed.
The degrees of freedom for the test are obtained from the difference
between the number of parameters in the two models. 
A simple likelihood ratio test, if the full design was
\Robject{\lowtilde{} condition}, would look like:

<<simpleLRT, eval=FALSE>>=
dds <- DESeq(dds, test="LRT", reduced=~1)
res <- results(dds)
@ 

If the full design contained other variables, 
such as a batch variable,
then the likelihood ratio test would look like:

<<simpleLRT2, eval=FALSE>>=
dds <- DESeq(dds, test="LRT", reduced=~batch)
res <- results(dds)
@ 

\subsection{Approach to count outliers} \label{sec:outlierApproach}

RNA-seq data sometimes contain isolated instances of very large counts that are apparently
unrelated to the experimental or study design, and which may be 
considered outliers. There are many reasons why outliers can arise, including rare
technical or experimental artifacts, read mapping problems in the case of genetically
differing samples, and genuine, but rare biological events. In many cases, users appear
primarily interested in genes that show a consistent behavior, and this is the reason why
by default, genes that are affected by such outliers are set aside by \deseqtwo{}, 
or if there are sufficient samples, outlier counts are replaced for model fitting. 
These two behaviors are described below.

The \Rfunction{DESeq} function calculates, for every gene and for every sample,
a diagnostic test for outliers called \emph{Cook's distance}. Cook's distance 
is a measure of how much a single sample is influencing the fitted 
coefficients for a gene, and a large value of Cook's distance is 
intended to indicate an outlier count. 
The Cook's distances are stored as a matrix available in 
\Robject{assays(dds)[["cooks"]]}.

The \Rfunction{results} function automatically flags genes which contain a 
Cook's distance above a cutoff for samples which have 3 or more replicates. 
The $p$ values and adjusted $p$ values for these genes are set to \Robject{NA}. 
At least 3 replicates are required for flagging, as it is difficult to judge
which sample might be an outlier with only 2 replicates.
This filtering can be turned off with \Rcode{results(dds, cooksCutoff=FALSE)}.

With many degrees of freedom -- i.\,e., many more samples than the number of parameters to 
be estimated -- it is undesirable to remove entire genes from the analysis
just because their data include a single count outlier. When there
are 7 or more replicates for a given sample, the \Rfunction{DESeq}
function will automatically replace counts with large Cook's distance 
with the trimmed mean over all samples, scaled up by the size factor or 
normalization factor for that sample. This approach is conservative:
it will not lead to false positives, as it replaces
the outlier value with the value predicted by the null hypothesis.
This outlier replacement only occurs when there are 7 or more
replicates, and can be turned off with 
\Rcode{DESeq(dds, minReplicatesForReplace=Inf)}.

The default Cook's distance cutoff for the two behaviors described above
depends on the sample size and number of parameters
to be estimated. The default is to use the $99\%$ quantile of the 
$F(p,m-p)$ distribution (with $p$ the number of parameters including the 
intercept, and $m$ the number of samples).
The default for gene flagging can be modified using the \Robject{cooksCutoff} 
argument to the \Rfunction{results} function. 
For outlier replacement, \Rfunction{DESeq} preserves the original counts in
\Robject{counts(dds)} saving the replacement counts as a matrix named
\Robject{replaceCounts} in \Robject{assays(dds)}.
Note that with continuous variables in the design, outlier detection
and replacement is not automatically performed, as our 
current methods involve a robust estimation of within-group variance
which does not extend easily to continuous covariates. However, users
can examine the Cook's distances in \Rcode{assays(dds)[["cooks"]]}, in
order to perform manual visualization and filtering if necessary.
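
As an illustration, the default cutoff could be recomputed manually as
in the following sketch, which assumes a standard model matrix so that
the number of parameters equals the length of
\Rfunction{resultsNames(dds)}:

<<cooksCutoffSketch, eval=FALSE>>=
# a sketch of the default cutoff calculation
m <- ncol(dds)                  # number of samples
p <- length(resultsNames(dds))  # number of parameters incl. intercept
cutoff <- qf(.99, p, m - p)
# genes with at least one sample exceeding the cutoff:
head(which(apply(assays(dds)[["cooks"]], 1, max) > cutoff))
@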

\textbf{Note on many outliers:} if there are very many outliers 
(e.g. many hundreds or thousands) reported by
\Rcode{summary(res)}, one might consider further exploration to see if
a single sample or a few samples should be removed due to low quality. 
The automatic outlier filtering/replacement is most useful in situations in which the number
of outliers is limited. When there are thousands of reported outliers, 
it might make more sense to turn off the outlier filtering/replacement
(\Rfunction{DESeq} with \Robject{minReplicatesForReplace=Inf} and
\Rfunction{results} with \Robject{cooksCutoff=FALSE})
and perform manual inspection: First it would be
advantageous to make a PCA plot using the code example in Section
\ref{sec:pca} to spot individual sample outliers; Second, one can make
a boxplot of the Cook's distances to see if one sample is consistently
higher than others: 

<<boxplotCooks, fig.show="asis", fig.small=TRUE, fig.cap="Boxplot of Cook's distances.  Here we can look to see if one sample has much higher Cook's distances than the other samples. In this case, the samples all have a comparable range of Cook's distances.\\label{figure/boxplotCooks-1}">>=
par(mar=c(8,5,2,2))
boxplot(log10(assays(dds)[["cooks"]]), range=0, las=2)
@ 

\subsection{Dispersion plot and fitting alternatives}

Plotting the dispersion estimates is a useful diagnostic. The dispersion
plot in Figure \ref{figure/dispFit-1} is typical, with the final estimates shrunk
from the gene-wise estimates towards the fitted estimates. Some gene-wise
estimates are flagged as outliers and not shrunk towards the fitted value
(this outlier detection is described in the manual page for \Rfunction{estimateDispersionsMAP}).
The amount of shrinkage can be more or less than seen here, depending 
on the sample size, the number of coefficients, the row mean
and the variability of the gene-wise estimates.

<<dispFit, fig.show="asis", fig.small=TRUE, fig.cap="Dispersion plot.  The dispersion estimate plot shows the gene-wise estimates (black), the fitted values (red), and the final maximum \\textit{a posteriori} estimates used in testing (blue).\\label{figure/dispFit-1}">>=
plotDispEsts(dds)
@

\subsubsection{Local or mean dispersion fit}

A local smoothed dispersion fit is automatically substituted in the case that
the parametric curve does not fit the observed dispersion-mean relationship.
This can be prespecified by providing the argument
\Robject{fitType="local"} to either \Rfunction{DESeq} or \Rfunction{estimateDispersions}.
Additionally, using the mean of gene-wise dispersion estimates as the
fitted value can be specified by providing the argument \Robject{fitType="mean"}. 

\subsubsection{Supply a custom dispersion fit}

Any fitted values can be provided during dispersion estimation, using
the lower-level functions described in the manual page for
\Rfunction{estimateDispersionsGeneEst}. In the code chunk below, we
store the gene-wise estimates which were already calculated and saved 
in the metadata column \Robject{dispGeneEst}. Then we calculate the
median value of the dispersion estimates above a threshold, and save
these values as the fitted dispersions, using the replacement function
for \Rfunction{dispersionFunction}. In the last line, the function
\Rfunction{estimateDispersionsMAP} uses the 
fitted dispersions to generate maximum \textit{a posteriori} (MAP)
estimates of dispersion. 

<<dispFitCustom>>=
ddsCustom <- dds
useForMedian <- mcols(ddsCustom)$dispGeneEst > 1e-7
medianDisp <- median(mcols(ddsCustom)$dispGeneEst[useForMedian],
                     na.rm=TRUE)
dispersionFunction(ddsCustom) <- function(mu) medianDisp
ddsCustom <- estimateDispersionsMAP(ddsCustom)
@


\subsection{Independent filtering of results}\label{sec:autoFilt}

The \Rfunction{results} function of the \deseqtwo{} package 
performs independent filtering by default using 
the mean of normalized counts as a filter statistic. 
A threshold on the filter statistic is found which optimizes the number
of adjusted $p$ values lower than a significance level \Robject{alpha}
(we use the standard variable name for significance level, 
though it is unrelated to the dispersion parameter $\alpha$). 
The theory behind independent filtering is discussed in greater detail
in Section~\ref{sec:indepfilt}. The adjusted $p$ values for the genes
which do not pass the filter threshold are set to \Robject{NA}.

The independent filtering is performed using the \Rfunction{filtered\_p} function 
of the \Biocpkg{genefilter} package, and all of the arguments of \Rfunction{filtered\_p}
can be passed to the \Rfunction{results} function. 
The filter threshold value and the number of rejections at each quantile
of the filter statistic are available as metadata of the object 
returned by \Rfunction{results}. For example, we can visualize
the optimization by plotting the \Robject{filterNumRej} attribute of 
the results object, as seen in Figure \ref{figure/filtByMean-1}.

<<filtByMean, dev="pdf", fig.show="asis", fig.small=TRUE, fig.cap="Independent filtering.  The \\Rfunction{results} function maximizes the number of rejections (adjusted $p$ value less than a significance level), over the quantiles of a filter statistic (the mean of normalized counts). The threshold chosen (vertical line) is the lowest quantile of the filter for which the number of rejections is within 1 residual standard deviation to the peak of a curve fit to the number of rejections over the filter quantiles.\\label{figure/filtByMean-1}">>=
metadata(res)$alpha
metadata(res)$filterThreshold
plot(metadata(res)$filterNumRej, 
     type="b", ylab="number of rejections",
     xlab="quantiles of filter")
lines(metadata(res)$lo.fit, col="red")
abline(v=metadata(res)$filterTheta)
@

Independent filtering can be turned off by setting 
\Robject{independentFiltering} to \Robject{FALSE}.

<<noFilt>>=
resNoFilt <- results(dds, independentFiltering=FALSE)
addmargins(table(filtering=(res$padj < .1),
                 noFiltering=(resNoFilt$padj < .1)))
@ 

\subsection{Tests of log2 fold change above or below a threshold}

It is also possible to provide thresholds for constructing
Wald tests of significance. Two arguments to the \Rfunction{results}
function allow for threshold-based Wald tests: \Robject{lfcThreshold},
which takes a non-negative numeric threshold value, 
and \Robject{altHypothesis}, which specifies the kind of test.
Note that the \textit{alternative hypothesis} is specified by the user, 
i.e. those genes which the user is interested in finding, and the test 
provides $p$ values for the null hypothesis, the complement of the set 
defined by the alternative. The \Robject{altHypothesis} argument can take one 
of the following four values, where $\beta$ is the log2 fold change
specified by the \Robject{name} argument:

\begin{itemize}
 \item \Robject{greaterAbs} - $|\beta| > \textrm{lfcThreshold}$ - tests are two-tailed
 \item \Robject{lessAbs} - $|\beta| < \textrm{lfcThreshold}$ - $p$ values are the maximum of the upper and lower tests
 \item \Robject{greater} - $\beta > \textrm{lfcThreshold} $
 \item \Robject{less} - $\beta < -\textrm{lfcThreshold} $
\end{itemize}

The test \Robject{altHypothesis="lessAbs"} requires that the user have
run \Rfunction{DESeq} with the argument \Robject{betaPrior=FALSE}.  To
understand the reason for this requirement, consider that during
hypothesis testing, the null hypothesis is favored unless the data
provide strong evidence to reject the null.  For this test, including
a zero-centered prior on log fold change would favor the alternative
hypothesis, shrinking log fold changes toward zero.  Removing the
prior on log fold changes for tests of small log fold change allows
for detection of only those genes where the data alone provides
evidence against the null.

The four possible values of \Robject{altHypothesis} are demonstrated
in the following code and visually by MA-plots in Figure~\ref{figure/lfcThresh-1}. 
First we run \Rfunction{DESeq} and specify \Robject{betaPrior=FALSE} in order 
to demonstrate \Robject{altHypothesis="lessAbs"}.

<<ddsNoPrior>>=
ddsNoPrior <- DESeq(dds, betaPrior=FALSE)
@

In order to produce results tables for the following tests, the same arguments
(except \Robject{ylim}) would be provided to the \Rfunction{results} function. 

<<lfcThresh, fig.show="asis", fig.cap='MA-plots of tests of log2 fold change with respect to a threshold value.  Going left to right across rows, the tests are for \\Robject{altHypothesis = "greaterAbs"}, \\Robject{"lessAbs"}, \\Robject{"greater"}, and \\Robject{"less"}.\\label{figure/lfcThresh-1}'>>=
par(mfrow=c(2,2),mar=c(2,2,1,1))
yl <- c(-2.5,2.5)

resGA <- results(dds, lfcThreshold=.5, altHypothesis="greaterAbs")
resLA <- results(ddsNoPrior, lfcThreshold=.5, altHypothesis="lessAbs")
resG <- results(dds, lfcThreshold=.5, altHypothesis="greater")
resL <- results(dds, lfcThreshold=.5, altHypothesis="less")

plotMA(resGA, ylim=yl)
abline(h=c(-.5,.5),col="dodgerblue",lwd=2)
plotMA(resLA, ylim=yl)
abline(h=c(-.5,.5),col="dodgerblue",lwd=2)
plotMA(resG, ylim=yl)
abline(h=.5,col="dodgerblue",lwd=2)
plotMA(resL, ylim=yl)
abline(h=-.5,col="dodgerblue",lwd=2)
@ 

\subsection{Access to all calculated values}\label{sec:access}

All row-wise calculated values (intermediate dispersion calculations,
coefficients, standard errors, etc.) are stored in the \Rclass{DESeqDataSet} 
object, e.g. \Robject{dds} in this vignette. These values are accessible 
by calling \Rfunction{mcols} on \Robject{dds}. 
Descriptions of the columns are accessible by two calls to 
\Rfunction{mcols}.

<<mcols>>=
mcols(dds,use.names=TRUE)[1:4,1:4]
# here using substr() only for display purposes
substr(names(mcols(dds)),1,10) 
mcols(mcols(dds), use.names=TRUE)[1:4,]
@

The mean values $\mu_{ij} = s_j q_{ij}$ and the Cook's distances for each gene and
sample are stored as matrices in the assays slot:

<<muAndCooks>>=
head(assays(dds)[["mu"]])
head(assays(dds)[["cooks"]])
@ 

The dispersions $\alpha_i$ can be accessed with the
\Rfunction{dispersions} function.

<<dispersions>>=
head(dispersions(dds))
# which is the same as 
head(mcols(dds)$dispersion)
@ 

The size factors $s_j$ are accessible via \Rfunction{sizeFactors}:

<<sizefactors>>=
sizeFactors(dds)
@ 

For advanced users, we also include a convenience function \Rfunction{coef} for 
extracting the matrix of coefficients $[\beta_{ir}]$ for all genes $i$ and
parameters $r$, as in the formula in Section~\ref{sec:glm}.
This function can also return a matrix of standard errors, see \Robject{?coef}.
The columns of this matrix correspond to the effects returned by \Rfunction{resultsNames}.
Note that the \Rfunction{results} function is best for building 
results tables with $p$ values and adjusted $p$ values.

<<coef>>=
head(coef(dds))
@ 

The beta prior variance $\sigma_r^2$ is stored as an attribute of the
\Rclass{DESeqDataSet}: 

<<betaPriorVar>>=
attr(dds, "betaPriorVar")
@ 

The dispersion prior variance $\sigma_d^2$ is stored as an
attribute of the dispersion function:

<<dispPriorVar>>=
dispersionFunction(dds)
attr(dispersionFunction(dds), "dispPriorVar")
@ 

The version of \deseqtwo{} which was used to construct the
\Rclass{DESeqDataSet} object, or the version used when
\Rfunction{DESeq} was run, is stored here:

<<versionNum>>=
metadata(dds)[["version"]]
@ 

\subsection{Sample-/gene-dependent normalization factors} \label{sec:normfactors}

In some experiments, there might be gene-specific biases
which vary across samples. For instance, GC-content bias or length
bias might vary across samples coming from different labs or
processed at different times. We use the terms ``normalization factors''
for a gene $\times$ sample matrix, and ``size factors'' for a
single number per sample.  Incorporating normalization factors,
the mean parameter $\mu_{ij}$ from Section~\ref{sec:glm} becomes:

$$ \mu_{ij} = NF_{ij} q_{ij} $$

with normalization factor matrix $NF$ having the same dimensions
as the counts matrix $K$. This matrix can be incorporated as shown
below. We recommend providing a matrix with row-wise geometric means of $1$, 
so that the mean of normalized counts for a gene is close to the mean
of the unnormalized counts.
This can be accomplished by dividing out the current row geometric means.

<<normFactors, eval=FALSE>>=
normFactors <- normFactors / exp(rowMeans(log(normFactors)))
normalizationFactors(dds) <- normFactors
@

These steps then replace \Rfunction{estimateSizeFactors} in the steps
described in Section~\ref{sec:steps}. Normalization factors, if present,
will always be used in the place of size factors.

The methods provided by the \Biocpkg{cqn} or \Biocpkg{EDASeq} packages
can help correct for GC or length biases. They both describe in their
vignettes how to create matrices which can be used by \deseqtwo{}.
From the formula above, we see that normalization factors should be on
the scale of the counts, like size factors, and unlike offsets which
are typically on the scale of the predictors (i.e. the logarithmic scale for
the negative binomial GLM). At the time of writing, the transformation
from the matrices provided by these packages should be:

<<offsetTransform, eval=FALSE>>=
cqnOffset <- cqnObject$glm.offset
cqnNormFactors <- exp(cqnOffset)
EDASeqNormFactors <- exp(-1 * EDASeqOffset)
@

\subsection{``Model matrix not full rank''}

While most experimental designs run easily using a design formula, some
design formulas can cause problems and result in the \Rfunction{DESeq}
function returning an error with the text: ``the model matrix is not
full rank, so the model cannot be fit as specified.''  There are two
main reasons for this problem: either one or more columns in the model
matrix are linear combinations of other columns, or there are levels
of factors or combinations of levels of multiple factors which are
missing samples. We address these two problems below and discuss
possible solutions:

\subsubsection{Linear combinations}

The simplest case is the linear combination, or linear dependency
problem, when two variables contain exactly the same information, such
as in the following sample table. The software cannot fit an effect
for \Robject{batch} and \Robject{condition}, because they produce
identical columns in the model matrix. This is also referred to as
``perfect confounding''. A unique solution of coefficients (the $\beta_i$ in
the formula in Section~\ref{sec:glm}) is not possible.

<<lineardep, echo=FALSE>>=
data.frame(batch=factor(c(1,1,2,2)), condition=factor(c("A","A","B","B")))
@ 

Another situation which will cause problems is when the variables are
not identical, but one variable can be formed by the combination of
other factor levels. In the following example, the effect of batch 2
vs 1 cannot be fit because it is identical to a column in the model
matrix which represents the condition C vs A effect.

<<lineardep2, echo=FALSE>>=
data.frame(batch=factor(c(1,1,1,1,2,2)), condition=factor(c("A","A","B","B","C","C")))
@ 

In both of these cases above, the batch effect cannot be fit and must
be removed from the model formula. There is just no way to tell apart
the condition effects and the batch effects. The options are either to assume
there is no batch effect (which we know is highly unlikely given the
literature on batch effects in sequencing datasets) or to repeat the
experiment and properly balance the conditions across batches.
A balanced design would look like:

<<lineardep3, echo=FALSE>>=
data.frame(batch=factor(c(1,1,1,2,2,2)), condition=factor(c("A","B","C","A","B","C")))
@ 

Finally, there is a case where we can in fact perform inference.
Consider an experiment with grouped individuals,
where we seek to test the group-specific effect of a treatment, while
controlling for individual effects. A simple example of such a design is:

<<groupeffect>>=
(coldata <- data.frame(grp=factor(rep(c("X","Y"),each=4)),
                       ind=factor(rep(1:4,each=2)),
                       cnd=factor(rep(c("A","B"),4))))
@


This design can be analyzed by \deseqtwo{} but requires a bit of
refactoring in order to fit the model terms. Here we will use a trick
described in the \Biocpkg{edgeR} user guide, from the section
``Comparisons Both Between and Within Subjects''.  If we try to
analyze with a formula such as \Rcode{$\sim$ ind + grp*cnd}, we will
obtain an error, because the effect for group is a linear combination
of the individuals.

However, the following steps allow for an analysis of group-specific
condition effects, while controlling for individual differences.
For object construction, use a dummy design, such as \Rcode{$\sim$
  1}. Then add a column \Robject{ind.n} which distinguishes the
individuals ``nested'' within a group. Here, we add this column to
coldata, but in practice you would add this column to \Rcode{dds}.

<<groupeffect2>>=
coldata$ind.n <- factor(rep(rep(1:2,each=2),2))
coldata
@ 

Now we can reassign our \Rclass{DESeqDataSet} a design of
\Rcode{$\sim$ grp + grp:ind.n + grp:cnd}, before we call
\Rfunction{DESeq}. This new design will result in the following model
matrix: 

<<groupeffect3>>=
model.matrix(~ grp + grp:ind.n + grp:cnd, coldata)
@ 

where the terms \Robject{grpX.cndB} and \Robject{grpY.cndB} give the
group-specific condition effects. These can be extracted using
\Rfunction{results} with the \Robject{name} argument.
Furthermore, \Robject{grpX.cndB} and
\Robject{grpY.cndB} can be contrasted using the \Robject{contrast}
argument, in order to test if the condition effect is different across groups:

<<groupeffect4, eval=FALSE>>=
results(dds, contrast=list("grpY.cndB","grpX.cndB"))
@ 

\subsubsection{Levels without samples}

The base R function for creating model matrices will produce a column
of zeros if a level is missing from a factor or a combination of
levels is missing from an interaction of factors. The solution to the
first case is to call \Rfunction{droplevels} on the column, which will
remove levels without samples. This was shown in the beginning of this
vignette.

The second case is also solvable, by manually editing the model
matrix, and then providing this to \Rfunction{DESeq}. Here we
construct an example dataset to illustrate:

<<missingcombo>>=
group <- factor(rep(1:3,each=6))
condition <- factor(rep(rep(c("A","B","C"),each=2),3))
(d <- data.frame(group, condition)[-c(17,18),])
@ 

Note that if we try to estimate all interaction terms, we introduce a
column with all zeros, as there are no condition C samples for group
3. (Here, \Rfunction{unname} is used to display the matrix concisely.)

<<missingcombo2>>=
m1 <- model.matrix(~ condition*group, d)
colnames(m1)
unname(m1)
@ 

We can remove this column like so:

<<missingcombo3>>=
m1 <- m1[,-9]
unname(m1)
@ 

Now this matrix \Robject{m1} can be provided to the \Robject{full}
argument of \Rfunction{DESeq}.  For a likelihood ratio test of
interactions, a model matrix using a reduced design such as
\Rcode{$\sim$ condition + group} can be given to the \Robject{reduced}
argument. Wald tests can also be generated instead of the likelihood
ratio test, but for user-supplied model matrices, the argument
\Robject{betaPrior} must be set to \Robject{FALSE}.
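
A sketch of such a call, continuing with the matrix \Robject{m1} from
above and assuming \Robject{dds} was constructed from the same samples
as \Robject{d}:

<<matrixLRT, eval=FALSE>>=
# user-supplied model matrices require betaPrior=FALSE
m2 <- model.matrix(~ condition + group, d)
dds <- DESeq(dds, full=m1, reduced=m2, test="LRT", betaPrior=FALSE)
res <- results(dds)
@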

\newpage

%--------------------------------------------------
\section{Theory behind DESeq2}
%--------------------------------------------------
  
\subsection{The DESeq2 model} \label{sec:glm}

The \deseqtwo{} model and all the steps taken in the software
are described in detail in our publication \cite{Love2014},
and we include the formula and descriptions in this section as well.
The differential expression analysis in \deseqtwo{} uses a generalized
linear model of the form:

$$ K_{ij} \sim \textrm{NB}(\mu_{ij}, \alpha_i) $$
$$ \mu_{ij} = s_j q_{ij} $$
$$ \log_2(q_{ij}) = x_{j.} \beta_i $$

where counts $K_{ij}$ for gene $i$, sample $j$ are modeled using
a negative binomial distribution with fitted mean $\mu_{ij}$
and a gene-specific dispersion parameter $\alpha_i$.
The fitted mean is composed of a sample-specific size factor
$s_j$\footnote{The model can be generalized to use sample- 
\textbf{and} gene-dependent normalization factors, see
Section~\ref{sec:normfactors}.} and a parameter $q_{ij}$ 
proportional to the expected true concentration of fragments for sample $j$.
The coefficients $\beta_i$ give the log2 fold changes for gene $i$ for each 
column of the model matrix $X$. 

The dispersion parameter $\alpha_i$ defines the relationship between
the variance of the observed count and its mean value. In other
words, it describes how far we expect the observed count to be from the
mean value, which depends both on the size factor $s_j$ and the
covariate-dependent part $q_{ij}$ as defined above.

$$ \textrm{Var}(K_{ij}) = E[ (K_{ij} - \mu_{ij})^2 ] = \mu_{ij} + \alpha_i \mu_{ij}^2 $$
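
This mean-variance relationship can be checked empirically with a
small simulation; the values for $\mu$ and $\alpha$ below are
arbitrary:

<<nbVarianceSketch, eval=FALSE>>=
# simulate NB counts with mu=100 and dispersion alpha=0.05;
# rnbinom parameterizes size = 1/alpha
mu <- 100; alpha <- 0.05
k <- rnbinom(1e5, mu=mu, size=1/alpha)
var(k)            # should be close to...
mu + alpha*mu^2   # ... the theoretical value, 600
@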

The log2 fold changes in $\beta_i$ are the maximum \emph{a posteriori}
estimates after incorporating a 
zero-centered Normal prior -- in the software referred to as a $\beta$-prior -- hence \deseqtwo{}
provides ``moderated'' log2 fold change estimates.  Dispersions are estimated using expected mean
values from the maximum likelihood estimate of log2 fold changes, and optimizing the Cox-Reid
adjusted profile likelihood, as first implemented for RNA-seq data in \Biocpkg{edgeR}
\cite{CR,edgeR_GLM}. The steps performed by the \Rfunction{DESeq} function are documented in its
manual page; briefly, they are:

\begin{enumerate}
\item estimation of size factors $s_j$ by \Rfunction{estimateSizeFactors}
\item estimation of dispersion $\alpha_i$ by \Rfunction{estimateDispersions}
\item negative binomial GLM fitting for $\beta_i$ and Wald statistics by 
\Rfunction{nbinomWaldTest}
\end{enumerate}

For access to all the values calculated during these steps,
see Section~\ref{sec:access}.

\subsection{Changes compared to the  \Biocpkg{DESeq} package}

The main changes in the package \deseqtwo{}, compared to the (older)
version \Biocpkg{DESeq}, are as follows:

\begin{itemize}
\item \Rclass{RangedSummarizedExperiment} is used as the superclass for storage of input data,
  intermediate calculations and results.
\item Maximum \textit{a posteriori} estimation of GLM coefficients
  incorporating a zero-centered
  Normal prior with variance estimated from data (equivalent to Tikhonov/ridge
  regularization). This adjustment has little effect on genes with high counts, yet it
  helps to moderate the otherwise large variance in log2 fold change estimates
  for genes with low counts or highly variable counts.
\item Maximum \textit{a posteriori} estimation of dispersion replaces the
  \Robject{sharingMode} options \Robject{fit-only} or \Robject{maximum} of the previous version
  of the package. This is similar to the dispersion estimation methods of DSS \cite{Wu2012New}.
\item All estimation and inference is based on the generalized linear model, which
  includes the two condition case (previously the \textit{exact test} was used).
\item The Wald test for significance of GLM coefficients is provided as the default
  inference method, with the likelihood ratio test of the previous version still available.
\item It is possible to provide a matrix of sample-/gene-dependent
  normalization factors (Section \ref{sec:normfactors}).
\item Automatic independent filtering on the mean of normalized counts
  (Section \ref{sec:indepfilt}).
\item Automatic outlier detection and handling (Section \ref{sec:cooks}).
\end{itemize}

\subsection{Methods changes since the 2014 DESeq2 paper}

\begin{itemize}
  \item For the calculation of the beta prior variance, instead of
    matching the empirical quantile to the quantile of a Normal
    distribution, \deseqtwo{} now uses the weighted quantile function
    of the \CRANpkg{Hmisc} package. The weighting is described in the
    manual page for \Rfunction{nbinomWaldTest}.  The weights are the
    inverse of the expected variance of log counts (as used in the
    diagonals of the matrix $W$ in the GLM). The effect of the change
    is that the estimated prior variance is robust against noisy
    estimates of log fold change from genes with very small
    counts. This change was introduced in version 1.6 (October 2014).
  \item For designs with interaction terms, the solution described in
    the paper is no longer used (log fold change shrinkage only
    applied to interaction terms). Instead, \deseqtwo{} now turns off
    log fold change shrinkage for all terms if an interaction term is
    present (\Robject{betaPrior=FALSE}).  While the inference on
    interaction terms was correct with \Robject{betaPrior=TRUE}, the
    interpretation of the individual terms and the extraction of
    contrasts was too confusing.  This change was introduced in version 1.10
    (October 2015).
  \item A small change to the independent filtering routine: instead
    of taking the quantile of the filter (the mean of normalized counts) which
    directly \textit{maximizes} the number of rejections, the threshold chosen is 
    the lowest quantile of the filter for which the
    number of rejections is close to the peak of a curve fit
    to the number of rejections over the filter quantiles.
    ``Close to'' is defined as within 1 residual standard deviation.
    This change was introduced in version 1.10 (October 2015).
\end{itemize}

For a list of all changes since version 1.0.0, see the NEWS file
included in the package.

\subsection{Count outlier detection} \label{sec:cooks}

\deseqtwo{} relies on the negative binomial distribution to make
estimates and perform statistical inference on differences.  While the
negative binomial is versatile in having a mean and dispersion
parameter, extreme counts in individual samples might not fit well to
the negative binomial. For this reason, we perform automatic detection
of count outliers. We use Cook's distance, which is a measure of how
much the fitted coefficients would change if an individual sample were
removed \cite{Cook1977Detection}. For more on the implementation of 
Cook's distance see Section~\ref{sec:outlierApproach} and the manual page
for the \Rfunction{results} function. Below we plot the maximum value of
Cook's distance for each row over the rank of the test statistic 
to justify its use as a filtering criterion.

<<cooksPlot, fig.show="asis", fig.small=TRUE, fig.cap="Cook's distance.  Plot of the maximum Cook's distance per gene over the rank of the Wald statistics for the condition. The two regions with small Cook's distances are genes with a single count in one sample. The horizontal line is the default cutoff used for 7 samples and 3 estimated parameters.\\label{figure/cooksPlot-1}">>=
W <- res$stat
maxCooks <- apply(assays(dds)[["cooks"]],1,max)
idx <- !is.na(W)
plot(rank(W[idx]), maxCooks[idx], xlab="rank of Wald statistic", 
     ylab="maximum Cook's distance per gene",
     ylim=c(0,5), cex=.4, col=rgb(0,0,0,.3))
m <- ncol(dds)
p <- 3
abline(h=qf(.99, p, m - p))
@ 

\subsection{Contrasts} \label{sec:ctrstTheory}

Contrasts can be calculated for a \Rclass{DESeqDataSet} object for which
the GLM coefficients have already been fit using the Wald test steps
(\Rfunction{DESeq} with \texttt{test="Wald"} or using \Rfunction{nbinomWaldTest}).
The vector of coefficients $\beta$ is left multiplied by the contrast vector $c$
to form the numerator of the test statistic. The denominator is formed by multiplying
the covariance matrix $\Sigma$ for the coefficients on either side by the 
contrast vector $c$. The square root of this product is an estimate
of the standard error for the contrast. The contrast statistic is then compared
to a normal distribution as are the Wald statistics for the \deseqtwo{}
package.

$$ W = \frac{c^t \beta}{\sqrt{c^t \Sigma c}} $$
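
The following sketch illustrates the formula with toy values; these
are not \deseqtwo{} internals, only an illustration of the
calculation:

<<contrastStatSketch, eval=FALSE>>=
beta <- c(2.1, 0.5, -0.3)           # toy coefficient vector
Sigma <- diag(c(.04, .02, .03))     # toy covariance matrix
cvec <- c(0, 1, -1)                 # contrast: 2nd vs 3rd coefficient
W <- sum(cvec * beta) / sqrt(drop(t(cvec) %*% Sigma %*% cvec))
2 * pnorm(abs(W), lower.tail=FALSE) # two-tailed p value
@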

\subsection{Expanded model matrices} \label{sec:expanded}

\deseqtwo{} uses ``expanded model matrices'' with the log2 fold change prior, 
in order to produce shrunken log2 fold change estimates and test 
results which are independent of the choice of reference level. 
Another way of saying this is that the shrinkage is \textit{symmetric}
with respect to all the levels of the factors in the design.
The expanded model matrices differ from the standard model matrices, in that
they have an indicator column (and therefore a coefficient) for
each level of factors in the design formula in addition to an intercept. 
Note that in version 1.10 and onward, standard model matrices are used for
designs with interaction terms, as the shrinkage of log2 fold changes
is not recommended for these designs.

The expanded model matrices are not full rank, but a coefficient
vector $\beta_i$ can still be found due to the zero-centered prior on
non-intercept coefficients. The prior variance for the log2 fold
changes is calculated by first generating maximum likelihood estimates
for a standard model matrix. The prior variance for each level of a
factor is then set as the average of the mean squared maximum
likelihood estimates for each level and every possible contrast, such
that this prior value will be reference-level-independent. The
\Robject{contrast} argument of the \Rfunction{results} function is
used in order to generate comparisons of interest.

%--------------------------------------------------
\subsection{Independent filtering and multiple testing} \label{sec:indepfilt}
\subsubsection{Filtering criteria} \label{sec:filtbycount}
%--------------------------------------------------

The goal of independent filtering is to filter out those tests from the procedure 
that have little or no chance of showing significant evidence, without even
looking at their test statistic. Typically, this results in increased detection
power at the same experiment-wide type I error. Here, we measure experiment-wide
type I error in terms of the false discovery rate.

A good choice for a filtering criterion is one that
\begin{enumerate}
  \item\label{it:indp} is statistically independent from the test statistic under the null hypothesis,
  \item\label{it:corr} is correlated with the test statistic under the alternative, and
  \item\label{it:joint} does not notably change the dependence structure -- if there is any --
    between the tests that pass the filter, compared to the dependence structure between the tests before filtering.
\end{enumerate}

The benefit from filtering relies on property \ref{it:corr}, and we will explore
it further in Section~\ref{sec:whyitworks}. Its statistical validity relies on
property \ref{it:indp} -- which is simple to prove formally for many combinations
of filter criteria with test statistics -- and \ref{it:joint}, which is less
easy to derive theoretically from first principles, but rarely a problem in practice.
We refer to \cite{Bourgon:2010:PNAS} for further discussion of this topic.

A simple filtering criterion readily available in the results object is the
mean of normalized counts irrespective of biological condition (Figure \ref{figure/indFilt-1}),
and so this is the criterion which is used automatically by the
\Rfunction{results} function to perform independent filtering.
Genes with very low counts are not likely to 
show significant differences, typically due to high
dispersion. For example, we can plot the $-\log_{10}$ $p$ values from all genes
over the normalized mean counts.

<<indFilt, fig.show="asis", fig.small=TRUE, fig.cap="Mean counts as a filter statistic.  The mean of normalized counts provides an independent statistic for filtering the tests. It is independent because the information about the variables in the design formula is not used. By filtering out genes which fall on the left side of the plot, the majority of the low $p$ values are kept.\\label{figure/indFilt-1}">>=
plot(res$baseMean+1, -log10(res$pvalue),
     log="x", xlab="mean of normalized counts",
     ylab=expression(-log[10](pvalue)),
     ylim=c(0,30),
     cex=.4, col=rgb(0,0,0,.3))
@

%--------------------------------------------------
\subsubsection{Why does it work?}\label{sec:whyitworks}
%--------------------------------------------------

Consider the $p$ value histogram in Figure \ref{figure/fighistindepfilt-1}.
It shows how the filtering ameliorates the multiple testing problem
-- and thus the severity of a multiple testing adjustment -- by
removing a background set of hypotheses whose $p$ values are distributed
more or less uniformly in $[0,1]$.

<<histindepfilt, dev="pdf", fig.width=7, fig.height=5>>=
use <- res$baseMean > metadata(res)$filterThreshold
h1 <- hist(res$pvalue[!use], breaks=0:50/50, plot=FALSE)
h2 <- hist(res$pvalue[use], breaks=0:50/50, plot=FALSE)
colori <- c(`do not pass`="khaki", `pass`="powderblue")
@ 

<<fighistindepfilt, fig.show="asis", fig.small=TRUE, fig.cap="Histogram of p values for all tests.  The area shaded in blue indicates the subset of those that pass the filtering, the area in khaki those that do not pass.\\label{figure/fighistindepfilt-1}">>=
barplot(height = rbind(h1$counts, h2$counts), beside = FALSE,
        col = colori, space = 0, main = "", ylab="frequency")
text(x = c(0, length(h1$counts)), y = 0, label = paste(c(0,1)),
     adj = c(0.5,1.7), xpd=NA)
legend("topright", fill=rev(colori), legend=rev(names(colori)))
@

\section{Frequently asked questions} \label{sec:faq}

\subsection{How can I get support for DESeq2?}

We welcome questions about our software, and want to
ensure that we eliminate issues if and when they appear. We have a few
requests to optimize the process:

\begin{itemize}
\item all questions should take place on the Bioconductor support
  site: \url{https://support.bioconductor.org}, which serves as a
  repository of questions and answers. This helps to save the
  developers' time in responding to similar questions. Make sure to
  tag your post with ``deseq2''. In addition, it is often very helpful
  to describe the aim of your experiment.
\item before posting, first search the Bioconductor support site
  mentioned above for past threads which might have answered your
  question.
\item if you have a question about the behavior of a function, read
  the sections of the manual page for this function by typing a
  question mark and the function name, e.g. \Robject{?results}.  We
  spend a lot of time documenting individual functions and the exact
  steps that the software is performing.
\item include all of your R code, especially the creation of the
  \Rclass{DESeqDataSet} and the design formula.  Include complete
  warning or error messages, and conclude your message with the full
  output of \Robject{sessionInfo()}.
\item if possible, include the output of
  \Robject{as.data.frame(colData(dds))}, so that we can have a sense
  of the experimental setup. If this contains confidential
  information, you can replace the levels of those factors using
  \Rfunction{levels()}.
\end{itemize}

\subsection{Why are some $p$ values set to \texttt{NA}?}
  
See the details in Section~\ref{sec:moreInfo}.  

\subsection{How can I get unfiltered DESeq results?}

Users can obtain unfiltered GLM results, i.e. without outlier removal
or independent filtering, with the following call:

<<vanillaDESeq, eval=FALSE>>=
dds <- DESeq(dds, minReplicatesForReplace=Inf)
res <- results(dds, cooksCutoff=FALSE, independentFiltering=FALSE)
@

In this case, the only $p$ values set to \Robject{NA} are those from
genes with all counts equal to zero.

\subsection{How do I use the variance stabilized or rlog 
  transformed data for differential testing?}
  
  The variance stabilizing and rlog transformations are provided for
  applications other than differential testing, for example clustering
  of samples or other machine learning applications. For differential
  testing we recommend the \Rfunction{DESeq} function applied to raw
  counts as outlined in Section~\ref{sec:de}.
      
  
\subsection{Can I use DESeq2 to analyze paired samples?}

Yes, you should use a multi-factor design which includes the sample
information as a term in the design formula. This will account for 
differences between the samples while estimating the effect due to 
the condition. The condition of interest should go at the end of the 
design formula. See Section~\ref{sec:multifactor}.
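
For instance, a sketch of a paired analysis, assuming a hypothetical
factor \Robject{subject} in \Robject{colData(dds)} which pairs the
samples:

<<pairedSketch, eval=FALSE>>=
design(dds) <- ~ subject + condition
dds <- DESeq(dds)
res <- results(dds)
@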

\subsection{If I have multiple groups, should I run all together or split into pairs of groups?}

Typically, we recommend users to run samples from all groups together, and then
use the \Rcode{contrast} argument of the \Rfunction{results} function
to extract comparisons of interest after fitting the model using \Rfunction{DESeq}.
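
For instance, a sketch of this approach, assuming a factor
\Robject{condition} with levels A, B and C:

<<multiGroupSketch, eval=FALSE>>=
dds <- DESeq(dds)
resBvsA <- results(dds, contrast=c("condition","B","A"))
resCvsB <- results(dds, contrast=c("condition","C","B"))
@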

The model fit by \Rfunction{DESeq} estimates a single dispersion
parameter for each gene, which defines how far we expect the observed
count for a sample to be from the mean value from the model 
given its size factor and its condition group. See Section~\ref{sec:glm} 
and the \deseqtwo{} paper for full details.
Having a single dispersion parameter for each gene is usually
sufficient for analyzing multi-group data, as the final dispersion value will
incorporate the within-group variability across all groups. 

However, for some datasets, exploratory data analysis (EDA) plots as outlined
in Section~\ref{sec:pca} could reveal that one or more groups has much
higher within-group variability than the others. A simulated example
of such a set of samples is shown in Figure~\ref{figure/varGroup-1}.
This is a case where comparing groups A and B separately --
subsetting a \Rclass{DESeqDataSet} to only the samples from those two
groups and then running \Rfunction{DESeq} on this subset -- will be
more sensitive than a model including all samples together.
It should be noted that such an extreme range of within-group
variability is not common, although it could arise if certain
treatments produce an extreme reaction (e.g. cell death).
Again, this can be easily detected from the EDA plots such as PCA
described in this vignette.

<<varGroup, echo=FALSE, fig.width=5, fig.height=5, fig.show="asis", fig.small=TRUE, fig.cap="Extreme range of within-group variability.  Typically, it is recommended to run \\Rfunction{DESeq} across samples from all groups, for datasets with multiple groups. However, this simulated dataset shows a case where it would be preferable to compare groups A and B by creating a smaller dataset without the C samples. Group C has much higher within-group variability, which would inflate the per-gene dispersion estimate for groups A and B as well.\\label{figure/varGroup-1}">>=
set.seed(3)
dds1 <- makeExampleDESeqDataSet(n=1000,m=12,betaSD=.3,dispMeanRel=function(x) 0.01)
dds2 <- makeExampleDESeqDataSet(n=1000,m=12,
                                betaSD=.3,
                                interceptMean=mcols(dds1)$trueIntercept,
                                interceptSD=0,
                                dispMeanRel=function(x) 0.2)
dds2 <- dds2[,7:12]
dds2$condition <- rep("C",6)
mcols(dds2) <- NULL
dds <- cbind(dds1, dds2)
rld <- rlog(dds, blind=FALSE, fitType="mean")
plotPCA(rld)
@ 

\subsection{Can I run DESeq2 to contrast the levels of 100 groups?}

\deseqtwo{} will work with any kind of design specified using the R
formula. We encourage users to consider exploratory data analysis such
as principal components analysis as described in Section~\ref{sec:pca}, 
rather than performing statistical testing of all combinations of
dozens of groups. 

Regarding the speed of fitting very large models, 
note that each additional level of a factor in the
design formula adds another parameter to the GLM which is fit by
\deseqtwo. Users might consider first removing genes with very few
reads, e.g.\ genes with row sum of 1, as this will speed up the
fitting procedure.
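
A sketch of such pre-filtering:

<<preFilterSketch, eval=FALSE>>=
# remove genes with a row sum of 1 or less before running DESeq
dds <- dds[ rowSums(counts(dds)) > 1, ]
@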

\subsection{Can I use DESeq2 to analyze a dataset without replicates?}

If a \Rclass{DESeqDataSet} is provided with an experimental design without replicates,
a warning is printed that the samples will be treated as replicates
for estimation of dispersion. This kind of analysis is
only useful for exploring the data, but will not provide
proper statistical inference on differences between groups.
Without biological replicates, it is not possible to estimate the biological
variability of each gene. 
More details can be found in the manual page, accessible by typing \Rcode{?DESeq}.

\subsection{How can I include a continuous covariate in the design formula?}

Continuous covariates can be included in the design formula in the
same manner as factorial covariates. Continuous covariates might make
sense in certain experiments, where a constant fold change might be
expected for each unit of the covariate.  However, in many cases, more
meaningful results can be obtained by cutting continuous covariates
into a factor defined over a small number of bins (e.g. 3-5).  In this
way, the average effect of each group is controlled for, regardless of
the trend over the continuous covariates.  In R, \Rclass{numeric}
vectors can be converted into \Rclass{factors} using the function
\Rfunction{cut}.
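
For instance, a sketch of this binning, assuming a hypothetical
numeric covariate \Robject{age} in \Robject{colData(dds)}:

<<cutSketch, eval=FALSE>>=
dds$ageBin <- cut(dds$age, breaks=3) # 3 bins of equal width
design(dds) <- ~ ageBin + condition
@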

\subsection{Will the log fold change shrinkage ``overshrink'' large differences?}

For most datasets, the application of a prior to the log fold changes
is a good choice, providing log fold change estimates that are
more stable across the entire range of mean counts than the maximum
likelihood estimates (see Figure~\ref{fig:MA} and the \deseqtwo{} paper).
One situation in which the prior on log fold changes might
``overshrink'' the estimates is 
if nearly all genes show no difference across condition, a very
small set of genes have extremely large differences, and no genes in between.
A simulated example of such a dataset is shown in Figure~\ref{figure/overShrink-1}.
This is not likely to be the case for most experiments, where typically
there is a range of differences by size: some genes with medium-to-large
differences across treatment, and some with small differences.

<<overShrink, echo=FALSE, fig.width=5, fig.height=5, fig.show="asis", fig.small=TRUE, fig.cap="Example of a dataset where the log fold change prior should be turned off.  Here we show a simulated MA-plot, where nearly all of the log fold changes fall near the x-axis, with three genes that have very large log fold changes (note the y-axis is from -10 to 10 on the log2 scale). This would indicate a dataset where the log fold change prior would ``overshrink'' the large fold changes, and so \\Robject{betaPrior=FALSE} should be used.\\label{figure/overShrink-1}">>=
plot(c(10^rnorm(1000, 3, 2),300,2000,5000), 
     c(rnorm(1000, 0, .15), -5.5, -8.5, 7.5),
     ylim=c(-10,10), log="x", cex=.4,
     xlab="mean of normalized counts", 
     ylab="log2 fold change")
abline(h=0, col=rgb(1,0,0,.7))
@ 

There could be experiments in which only a few genes have
very large log fold changes, and the rest of the genes are
nearly constant across treatment.
Or, there could be artificially constructed libraries fitting this description,
e.g. technical replicates where the only difference across libraries 
is the concentration of a few spiked-in genes.
``Overshrinking'' of a few large log fold changes
can be assessed by running \Rfunction{results} with \Rcode{addMLE=TRUE},
which will print a results table with columns for the shrunken and
unshrunken (MLE) log fold changes.
The two estimates can be visually compared by running \Rfunction{plotMA} with
\Rcode{MLE=TRUE} and \Rcode{MLE=FALSE}. 
If ``overshrinking'' very large log fold changes is a concern,
it is better to turn off the log fold change prior by
running \Rfunction{DESeq} with \Rcode{betaPrior=FALSE}.
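
A sketch of this diagnostic, assuming \Robject{dds} was run with the
default \Robject{betaPrior=TRUE}:

<<overshrinkCheck, eval=FALSE>>=
res <- results(dds, addMLE=TRUE)   # adds an unshrunken lfcMLE column
plotMA(res, MLE=TRUE)              # unshrunken (MLE) estimates
plotMA(res, MLE=FALSE)             # shrunken estimates
@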

Even more detail: how do we avoid overshrinking on typical datasets?
The answer is that we estimate the width of the log fold change prior in a
robust way to accommodate the very largest log fold changes, and so to
avoid overshrinking. 
The details of the prior estimation are described in the manual page for
\Rfunction{nbinomWaldTest}. Briefly, a weighted upper quantile
is used to match the width of the log fold change prior to the upper
5\% of the MLE log fold changes, weighting by the expected sampling
variability of the estimated log fold changes given the mean count for
each gene. Note that this does not equal an assumption that 5\% of genes are
differentially expressed, but that a reasonable width of a log fold
change distribution can be obtained from the upper 5\% of MLE log fold
changes. 

\subsection{I ran a likelihood ratio test, but \texttt{results()} only gives me one comparison.}

``\dots How do I get the $p$ values for all of the variables/levels 
that were removed in the reduced design?''

This is explained in the help page for \texttt{?results} in the
section about likelihood ratio test p-values, but we will restate the
answer here. When one performs a likelihood ratio test, the $p$ values and
the test statistic (the \Robject{stat} column) are values for the test
that removes all of the variables which are present in the full
design and not in the reduced design. This tests the null hypothesis
that all the coefficients from these variables and levels of these factors
are equal to zero.

The likelihood ratio test $p$ values therefore
represent a test of \textit{all the variables and all the levels of factors}
which are among these variables. However, the results table only has space for
one column of log fold change, so a single variable and a single
comparison is shown (among the potentially multiple log fold changes
which were tested in the likelihood ratio test). 
This is indicated at the top of the results table
with the text, e.g.: ``log2 fold change (MLE): condition C vs A'' followed
by ``LRT p-value: '\lowtilde{} batch + condition' vs '\lowtilde{} batch' ''.
This indicates that the $p$ value is for the likelihood ratio test of
\textit{all the variables and all the levels}, while the log fold change is a single
comparison from among those variables and levels.
See the help page for \Rfunction{results} for more details.
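
For instance, a sketch with a full design of \Rcode{\lowtilde{} batch
  + condition}, assuming \Rfunction{resultsNames(dds)} contains the
name used below:

<<lrtNameSketch, eval=FALSE>>=
dds <- DESeq(dds, test="LRT", reduced = ~ batch)
resultsNames(dds)
# same stat and p values, different log fold change column:
results(dds, name="condition_C_vs_A")
@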

\subsection{What are the exact steps performed by \Rfunction{DESeq()}?}

See the manual page for \Rfunction{DESeq}, which links to the 
subfunctions which are called in order, where complete details are listed.

\subsection{Is there an official Galaxy tool for DESeq2?}

Yes. The repository for the \deseqtwo{} tool is
\url{https://github.com/galaxyproject/tools-iuc/tree/master/tools/deseq2} 
and a link to its location in the Tool Shed is 
\url{https://toolshed.g2.bx.psu.edu/view/iuc/deseq2/d983d19fbbab}.

\subsection{I want to benchmark DESeq2 comparing to other DE tools.}

One aspect which can cause problems for comparison is that, by default,
\deseqtwo{} outputs \Rcode{NA} values for adjusted $p$ values based on 
independent filtering of genes which have low counts.
This is a way for \deseqtwo{} to give extra
information on why the adjusted $p$ value for this gene is not small.
Additionally, $p$ values can be set to \Rcode{NA} based on extreme 
count outlier detection (see Section~\ref{sec:moreInfo} for full details). 
These \Rcode{NA} values should be considered
negatives for purposes of estimating sensitivity and specificity. The
easiest way to work with the adjusted $p$ values in a benchmarking
context is probably to convert these \Rcode{NA} values to 1:

<<convertNA, eval=FALSE>>=
res$padj <- ifelse(is.na(res$padj), 1, res$padj)
@ 

\section{Acknowledgments}

We have benefited in the development of \deseqtwo{} from the help and
feedback of many individuals, including but not limited to: 
The Bioconductor Core Team,
Alejandro Reyes, Andrzej Ole\'s, Aleksandra Pekowska, Felix Klein,
Nikolaos Ignatiadis,
Vince Carey,
Owen Solberg,
Ruping Sun,
Devon Ryan, 
Steve Lianoglou, Jessica Larson, Christina Chaivorapol, Pan Du, Richard Bourgon,
Willem Talloen, 
Elin Videvall, Hanneke van Deutekom,
Todd Burwell, 
Jesse Rowley,
Igor Dolgalev,
Stephen Turner,
Ryan C Thompson,
Tyr Wiesner-Hanks,
Konrad Rudolph,
David Robinson,
Mingxiang Teng,
Mathias Lesche,
Sonali Arora,
Jordan Ramilowski,
Ian Dworkin,
Bj\"orn Gr\"uning,
Ryan McMinds,
Paul Gordon,
Leonardo Collado Torres,
Enrico Ferrero.

\section{Session Info}

<<sessInfo, results="asis", echo=FALSE>>=
toLatex(sessionInfo())
@

<<resetOptions, results="hide", echo=FALSE>>=
options(prompt="> ", continue="+ ")
@ 

\bibliography{library}

\end{document}