---
title: "Machine learning techniques available in pRoloc"
author:
- name: Laurent Gatto
  affiliation: de Duve Institute, UCLouvain, Belgium
package: pRoloc
abstract: >
  This vignette provides a general background about machine learning
  (ML) methods and concepts, and their application to the analysis of
  spatial proteomics data in the *pRoloc* package. See the
  `pRoloc-tutorial` vignette for details about the package itself.
output:
  BiocStyle::html_document:
   toc_float: true
bibliography: pRoloc.bib
vignette: >
  %\VignetteIndexEntry{Machine learning techniques available in pRoloc}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteKeywords{Bioinformatics, Machine learning, Organelle, Spatial Proteomics}
  %\VignetteEncoding{UTF-8}
---

```{r style, echo = FALSE, results = 'asis'}
BiocStyle::markdown()
```


```{r env, include=FALSE, echo=FALSE, cache=FALSE}
library("knitr")
opts_chunk$set(error = FALSE)
suppressPackageStartupMessages(library("MSnbase"))
suppressWarnings(suppressPackageStartupMessages(library("pRoloc")))
suppressPackageStartupMessages(library("pRolocdata"))
```

# Introduction {#sec:intro}

For a general practical introduction to `r Biocpkg("pRoloc")`, readers
are referred to the tutorial, available using
`vignette("pRoloc-tutorial", package = "pRoloc")`. The following
document provides an overview of the algorithms available in the
package. The respective sections describe unsupervised machine
learning (USML), supervised machine learning (SML), semi-supervised
machine learning (SSML) as implemented in the novelty detection
algorithm, and transfer learning.

# Data sets

We provide `r nrow(pRolocdata()$results)` test data sets in the
`r Biocexptpkg("pRolocdata")` package that can be readily used with
`r Biocpkg("pRoloc")`. The data set can be listed with *pRolocdata*
and loaded with the *data* function. Each data set, including its
origin, is individually documented.

The data sets are distributed as *MSnSet* instances. Briefly,
these are dedicated containers for quantitation data as well as
feature and sample meta-data. More details about *MSnSet*s are
available in the `r Biocpkg("pRoloc")` tutorial and in the
`r Biocpkg("MSnbase")` package, which defines the class.

```{r pRolocdata}
library("pRolocdata")
data(tan2009r1)
tan2009r1
```

## Other omics data {-}

While our primary biological domain is quantitative proteomics, with
special emphasis on spatial proteomics, the underlying class
infrastructure on which `r Biocpkg("pRoloc")` relies, implemented in
the Bioconductor `r Biocpkg("MSnbase")` package, enables the
conversion from/to transcriptomics data, in particular microarray
data available as *ExpressionSet* objects, using the *as* coercion
methods (see the *MSnSet* section in the `MSnbase-development`
vignette). As a result, it is straightforward to apply the methods
summarised here and detailed in the other `r Biocpkg("pRoloc")`
vignettes to these other data structures.

# Unsupervised machine learning {#sec:usml}

Unsupervised machine learning refers to clustering, i.e. finding
structure in a quantitative, generally multi-dimensional data set of
unlabelled data.

Currently, unsupervised clustering facilities are available through
the *plot2D* function and the `r Biocpkg("MLInterfaces")`
package [@MLInterfaces]. The former takes an *MSnSet*
instance and represents the data on a scatter plot along the first two
principal components. Arbitrary feature meta-data can be represented
using different colours and point characters. The reader is referred
to the manual page available through *?plot2D* for more
details and examples.
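For example, the `tan2009r1` data loaded above can be visualised
along its first two principal components, colour-coded by the
`markers` feature variable:

```{r plot2Dex, eval=FALSE}
## PCA representation of the data, coloured by marker class;
## addLegend maps the colours to the class names.
plot2D(tan2009r1, fcol = "markers")
addLegend(tan2009r1, fcol = "markers", where = "bottomleft")
```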

`r Biocpkg("pRoloc")` also implements a *MLean* method for
*MSnSet* instances, allowing to use the relevant
infrastructure with the organelle proteomics framework. Although
provides a common interface to unsupervised and numerous supervised
algorithms, we refer to the `r Biocpkg("pRoloc")` tutorial for its usage
to several clustering algorithms.
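As a short sketch, and assuming the *MLearn* interface for *MSnSet*
instances described in the tutorial, a k-means clustering could be
run as follows (the number of clusters, 5, is an arbitrary
illustrative value):

```{r mlearnex, eval=FALSE}
library("MLInterfaces")
## k-means clustering of the tan2009r1 profiles through the common
## MLearn interface; kmeansI is the MLInterfaces k-means schema.
kcl <- MLearn(~ ., tan2009r1, kmeansI, centers = 5)
```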

**Note** Current development efforts in terms of clustering are
described on the *Clustering infrastructure* wiki page
(<https://github.com/lgatto/pRoloc/wiki/Clustering-infrastructure>)
and will be incorporated in future versions of the package.

# Supervised machine learning {#sec:sml}

Supervised machine learning refers to a broad family of
classification algorithms. The algorithms learn from a modest set of
labelled data points called the training data. Each training data
example consists of a pair of inputs: the actual data, generally
represented as a vector of numbers, and a class label, representing
the membership to exactly one of multiple possible classes. When
there are only two possible classes, one refers to binary
classification. The training data is used to construct a model that
can be used to classify new, unlabelled examples. The model takes the
numeric vectors of the unlabelled data points and returns, for each
of these inputs, the corresponding mapped class.

## Algorithms used {#sec:algo}

**k-nearest neighbour (KNN)** Function *knn* from package
`r CRANpkg("class")`. For each row of the test set, the *k* nearest
(in Euclidean distance) training set vectors are found, and the
classification is decided by majority vote over the *k* classes, with
ties broken at random. If there are ties for the *k*th nearest
vector, all candidates are included in the vote. This is a simple
algorithm that is often used as a baseline classifier.
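As a minimal, self-contained sketch (the train/test split below is
arbitrary and for illustration only), *knn* can be applied directly
to marker profiles:

```{r knnex, eval=FALSE}
library("class")
## Keep the labelled marker proteins only, hold out the first 30 as a
## (hypothetical) test set and predict their classes with k = 5.
m <- markerMSnSet(tan2009r1)
test <- exprs(m)[1:30, ]
train <- exprs(m)[-(1:30), ]
cl <- factor(fData(m)$markers[-(1:30)])
knn(train, test, cl, k = 5)
```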

**Partial least square DA (PLS-DA)** Function *plsda* from package
`r CRANpkg("caret")`. Partial least square discriminant analysis is
used to fit a standard PLS model for classification.

**Support vector machine (SVM)** A support vector machine constructs
a hyperplane (or set of hyperplanes for multiple-class problems),
which is then used for classification. The best separation is defined
as the hyperplane that has the largest distance (the margin) to the
nearest data points in any class, which also reduces the
classification generalisation error. When the classes are not
linearly separable, the data are transformed using a *kernel
function* into a high-dimensional space in which linear separation
becomes possible.

`r Biocpkg("pRoloc")` makes use of the functions *svm* from
package `r CRANpkg("e1071")` and *ksvm* from `r CRANpkg("kernlab")`.
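Below is a sketch of a direct *svm* fit on the marker profiles; the
`gamma` and `cost` values are arbitrary and would need to be
optimised, as described in the next section:

```{r svmex, eval=FALSE}
library("e1071")
## Fit a radial basis function SVM on the marker profiles and
## tabulate the fitted classes against the true marker classes.
m <- markerMSnSet(tan2009r1)
fit <- svm(exprs(m), factor(fData(m)$markers),
           kernel = "radial", gamma = 0.1, cost = 1)
table(fitted(fit), fData(m)$markers)
```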

**Artificial neural network (ANN)** Function *nnet* from package
`r CRANpkg("nnet")`. Fits a single-hidden-layer neural network,
possibly with skip-layer connections.

**Naive Bayes (NB)** Function *naiveBayes* from package
`r CRANpkg("e1071")`. A naive Bayes classifier computes the
conditional a posteriori probabilities of a categorical class
variable given independent predictor variables using the Bayes
rule. It assumes independence of the predictor variables, and a
Gaussian distribution (given the target class) of metric predictors.

**Random Forest (RF)** Function *randomForest* from package
`r CRANpkg("randomForest")`. A random forest is an ensemble of
decision trees grown on bootstrap samples of the training data;
classification of new instances is decided by majority vote over the
trees.

**Chi-square ($\chi^2$)** Assignment based on squared differences
between a labelled marker profile and the profile of a new feature to
be classified, as in the canonical protein correlation profiling
(PCP) method. In [@Andersen2003], $\chi^2$ is defined as *the
[summed] squared deviation of the normalised profile [from the
marker] for all peptides divided by the number of data points*,
i.e. $\chi^{2} = \frac{\sum_{i=1}^{n} (x_i - m_i)^{2}}{n}$, whereas
[@Wiese2007] divide the squared deviation by the value of the
reference feature in each fraction, i.e. $\chi^{2} =
\sum_{i=1}^{n}\frac{(x_i - m_i)^{2}}{m_i}$, where $x_i$ is the
normalised intensity of feature *x* in fraction *i*, $m_i$ is the
normalised intensity of marker *m* in fraction *i* and *n* is the
number of fractions available. We use the former definition.
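The former definition translates directly into a few lines of R; the
helper below is purely illustrative and not part of the package API:

```{r chi2ex}
## chi^2 as in Andersen et al.: summed squared deviations between a
## normalised profile x and a marker profile m, divided by the number
## of fractions.
chi2 <- function(x, m) sum((x - m)^2) / length(x)
chi2(c(0.1, 0.4, 0.5), c(0.2, 0.3, 0.5))
```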

**PerTurbo** From [@perturbo]:

> PerTurbo, an original, non-parametric and efficient classification
> method is presented here. In our framework, the manifold of each
> class is characterised by its Laplace-Beltrami operator, which is
> evaluated with classical methods involving the graph Laplacian. The
> classification criterion is established thanks to a measure of the
> magnitude of the spectrum perturbation of this operator. The first
> experiments show good performances against classical algorithms of
> the state-of-the-art. Moreover, from this measure is derived an
> efficient policy to design sampling queries in a context of active
> learning. Performances collected over toy examples and real world
> datasets assess the qualities of this strategy.

The PerTurbo implementation comes from the `r Biocpkg("pRoloc")`
package.

## Estimating algorithm parameters

When applying any of the above classification algorithms, it is
essential to set the algorithm parameters wisely, as these can have
an important effect on the classification. Such parameters are for
example the width *sigma* of the Radial Basis Function (Gaussian
kernel) $\exp(-\sigma \| x - x' \|^2)$ and the *cost* (slack)
parameter (controlling the tolerance to mis-classification) of our
SVM classifier. The number of neighbours *k* of the KNN classifier is
equally important, as will be discussed in this section.

The [next figure](#fig:knnboundaries) illustrates the effect of
different choices of $k$ using organelle proteomics data from
[@Dunkley2006] (*dunkley2006* from `r Biocexptpkg("pRolocdata")`). As
highlighted in the squared region, we can see that using a low $k$
(*k = 1* on the left) will result in very specific classification
boundaries that precisely follow the contour of our marker set, as
opposed to a higher number of neighbours (*k = 8* on the
right). While one could be tempted to believe that such *optimised*
classification boundaries are preferable, it is essential to remember
that these boundaries are specific to the marker set used to
construct them, and that there is no reason to expect these regions
to faithfully separate any new data points, in particular proteins
that we wish to classify to an organelle. In other words, the highly
specific *k = 1* classification boundaries are *over-fitted* to the
marker set and lack generalisation to new instances. We will
demonstrate this using simulated data taken from [@ISLwR] and show
what detrimental effect *over-fitting* has on new data.

![Classification boundaries using $k=1$ or $k=8$ on the `dunkley2006` data.](./Figures/knnboundaries.png){#fig:knnboundaries}

The [figure below](#fig:ISL1) uses 2 × 100 simulated data points
belonging to either the orange or the blue class. The genuine classes
of all the points are known (solid lines) and the KNN algorithm has
been applied using *k = 1* (left) and *k = 100* (right) respectively
(purple dashed lines). As in our organelle proteomics example, we
observe that when *k = 1*, the decision boundaries are overly
flexible and identify patterns in the data that do not reflect the
correct boundaries (in such cases, the classifier is said to have low
bias but very high variance). When a large *k* is used, the
classifier becomes inflexible and produces approximate and nearly
linear separation boundaries (such a classifier is said to have low
variance but high bias). On this simulated data set, neither *k = 1*
nor *k = 100* give good predictions, with test error rates (i.e. the
proportion of wrongly classified points) of 0.1695 and 0.1925,
respectively.


![The KNN classifier using $k = 1$ (left, solid classification boundaries) and $k = 100$ (right, solid classification boundaries) compared to the Bayes decision boundaries (see original material for details). Reproduced with permission from [@ISLwR].](./Figures/ISL-2_16.png){#fig:ISL1}

To quantify the effect of flexibility, or lack thereof, in defining
the classification boundaries, [@ISLwR] calculate the classification
error rates using training data (training error rate) and testing
data (testing error rate). The latter is completely new data that was
not used when defining the algorithm parameters; one often says that
the model used for classification has not *seen* this data. If the
model performs well on new data, it is said to generalise well. This
is a quality that is required in most cases, and in particular in our
organelle proteomics experiments, where the training data corresponds
to our marker sets. The [figure below](#fig:ISL2) plots the
respective training and testing error rates as a function of *1/k*,
which is a reflection of the flexibility/complexity of our model;
when *1/k = 1*, i.e. *k = 1* (far right), we have a very flexible
model with the risk of over-fitting. Greater values of *k* (towards
the left) characterise less flexible models. As can be seen, high
values of *k* produce poor performance for both training and testing
data. However, while the training error steadily decreases when the
model complexity increases (smaller *k*), the testing error rate
displays a typical U-shape: at a value around *k = 10*, the testing
error rate reaches a minimum and then starts to increase due to
over-fitting to the training data and lack of generalisation of the
model.


![Train and test error as a function of model complexity. The former steadily decreases for smaller values of $k$, while the test error reaches a minimum around $k = 10$ before increasing again. Reproduced with permission from [@ISLwR].](./Figures/ISL-2_17.png){#fig:ISL2}

These results show that adequate optimisation of the model parameters
is essential to avoid either overly flexible models (that do not
generalise well to new data) or models that do not describe the
decision boundaries adequately. Such parameter selection is achieved
by cross-validation, where the initial marker proteins are separated
into training data, used to build classification models, and
independent testing data, used to assess the model on new data.

We recommend the book *An Introduction to Statistical Learning*
(<http://www-bcf.usc.edu/~gareth/ISL/>) by [@ISLwR] for a more
detailed introduction to machine learning.


## Default analysis scheme

Below, we present a typical classification analysis using
`r Biocpkg("pRoloc")`. The analysis typically consists of two
steps. The first one is to optimise the classifier parameters to be
used for training and testing (see above). A range of parameters are
tested using the labelled data, for which the labels are known. For
each set of parameters, we hide the labels of a subset of the
labelled data, use the other part to train a model and apply it on
the testing data with hidden labels. The comparison of the estimated
and expected labels makes it possible to assess the validity of the
model and hence the adequacy of the parameters. Once adequate
parameters have been identified, they are used to infer a model on
the complete organelle marker set, which is then used to infer the
sub-cellular location of the unlabelled examples.

## Parameter optimisation {-}

Algorithmic performance is estimated using a stratified 20/80
partitioning. The 80% partitions are subjected to 5-fold
cross-validation in order to optimise free parameters via a grid
search, and these parameters are then applied to the remaining
20%. The procedure is repeated *n = 100* times (the `times` argument)
to sample *n* accuracy metric (see below) values using *n*, possibly
different, optimised parameters for evaluation.

Model accuracy is evaluated using the F1 score,
$F1 = 2 ~ \frac{precision \times recall}{precision + recall}$,
calculated as the harmonic mean of the precision ($precision =
\frac{tp}{tp+fp}$, a measure of *exactness* -- returned output is a
relevant result) and recall ($recall=\frac{tp}{tp+fn}$, a measure of
*completeness* -- indicating how much was missed from the
output). What we aim for is high generalisation accuracy, i.e. a high
$F1$, indicating that the marker proteins in the test data set are
consistently correctly assigned by the algorithms.
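The computation of the $F1$ score from the true positive (tp), false
positive (fp) and false negative (fn) counts is straightforward; the
helper and the counts below are purely illustrative:

```{r f1ex}
## F1 as the harmonic mean of precision and recall.
f1score <- function(tp, fp, fn) {
    precision <- tp / (tp + fp)
    recall <- tp / (tp + fn)
    2 * (precision * recall) / (precision + recall)
}
f1score(tp = 90, fp = 10, fn = 20)
```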

The results of the optimisation procedure are stored in a
*GenRegRes* object that can be inspected and plotted, and from which
the best parameter pairs can be extracted.

For a given algorithm `alg`, the corresponding parameter optimisation
function is named *algOptimisation* or, equivalently,
*algOptimization*. See the table below for details. A description of
each of the respective model parameters is provided in the
optimisation function manuals, available through *?algOptimisation*.

```{r svmParamOptim, cache = TRUE, warning = FALSE, message = FALSE}
params <- svmOptimisation(tan2009r1, times = 10,
						  xval = 5, verbose = FALSE)
params
```
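The optimisation results can then be explored; assuming the
*GenRegRes* accessors documented in the package, one would typically
do:

```{r inspectparams, eval=FALSE}
plot(params)      ## distribution of the F1 scores
levelPlot(params) ## averaged F1 scores per parameter pair
getParams(params) ## best parameter pair(s)
```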

## Classification {-}

```{r svmRes, warning=FALSE, tidy=FALSE, eval=TRUE}
tan2009r1 <- svmClassification(tan2009r1, params)
tan2009r1
```
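The predicted classes and the classification scores are stored as
feature variables (here `svm` and `svm.scores`); as a sketch, and
assuming the *getPredictions* accessor, they can be retrieved and
filtered by a score threshold:

```{r svmpreds, eval=FALSE}
## Inspect the new feature variables and extract predictions with a
## (stringent, illustrative) score threshold of 1.
head(fData(tan2009r1)$svm)
preds <- getPredictions(tan2009r1, fcol = "svm", t = 1)
```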

## Customising model parameters

Below we illustrate how to weight the different classes according to
the number of labelled instances, where large sets are down-weighted.
This strategy can help with imbalanced designs.

```{r weigths, eval=FALSE}
w <- table(fData(markerMSnSet(dunkley2006))$markers)
w <- 1/w ## large classes get small weights
wpar <- svmOptimisation(dunkley2006, class.weights = w)
wres <- svmClassification(dunkley2006, wpar, class.weights = w)
```


```{r getmlfunction, echo=FALSE}
## Add chi^2.
tab <- data.frame('parameter optimisation' =
					  grep("Optimisation",
						   ls("package:pRoloc"), value = TRUE),
				  'classification' =
					  grep("Classification",
						   ls("package:pRoloc"), value = TRUE))

tab$algorithm <- c("nearest neighbour",
				   "nearest neighbour transfer learning",
				   "support vector machine",
				   "naive bayes",
				   "neural networks",
				   "PerTurbo",
				   "partial least square",
				   "random forest",
				   "support vector machine")

tab$package <- c("class", "pRoloc", "kernlab", "e1071",
				 "nnet", "pRoloc", "caret",
				 "randomForest", "e1071")

colnames(tab)[1] <- c("parameter optimisation")
```


```{r comptab, echo=FALSE}
kable(tab)
```

# Comparison of different classifiers

Several supervised machine learning algorithms have already been
applied to organelle proteomics data classification: partial least
square discriminant analysis in [@Dunkley2006; @Tan2009], support
vector machines (SVMs) in [@Trotter2010], random forest in
[@Ohta2010], neural networks in [@Tardif2012] and naive Bayes in
[@Nikolovski2012]. In our HUPO 2011
poster^[Gatto, Laurent; Breckels, Lisa M.; Trotter, Matthew W.B.; Lilley, Kathryn S. (2011): `pRoloc` - A unifying bioinformatics framework for organelle proteomics. https://doi.org/10.6084/m9.figshare.5042965.v1],
we showed that different classification algorithms provide very
similar performance. We have extended this comparison to various
datasets distributed in the `r Biocexptpkg("pRolocdata")` package. The
[figure below](#fig:f1box) illustrates how different algorithms reach
very similar performances on most of our test datasets.


![Comparison of classification performances of several contemporary classification algorithms on data from the `r Biocexptpkg("pRolocdata")` package.](./Figures/F1boxplots.png){#fig:f1box}

# Bayesian generative models {#sec:bayes}

We also offer generative models that, as opposed to the
discriminative classifiers presented above, explicitly model the
spatial proteomics data. In `pRoloc`, we propose two models using
T-augmented Gaussian mixtures, using respectively an
Expectation-Maximisation approach to *maximum a posteriori*
estimation of the model parameters (TAGM-MAP), and an MCMC approach
(TAGM-MCMC) that enables proteome-wide uncertainty
quantitation. These methods are described in the *pRoloc-bayesian*
vignette.

For a detailed description of the methods and their validation,
please refer to [@Crook:2018]:

> A Bayesian Mixture Modelling Approach For Spatial Proteomics.
> Oliver M Crook, Claire M Mulvey, Paul D. W. Kirk, Kathryn S Lilley,
> Laurent Gatto. bioRxiv 282269; doi: https://doi.org/10.1101/282269

# Semi-supervised machine learning {#sec:ssml}

The *phenoDisco* algorithm is a semi-supervised novelty detection
method by [@Breckels2013] ([figure below](#fig:pd)). It uses the
labelled (i.e. markers, noted $D_L$) and unlabelled (i.e. proteins of
unknown localisation, noted $D_U$) sets of the input data. The
algorithm is repeated $N$ times (the `times` argument in the
*phenoDisco* function). At each iteration, each organelle class
$D_{L}^{i}$ and the unlabelled complement are clustered using
Gaussian mixture modelling. While unlabelled members that
systematically cluster with $D_{L}^{i}$ and pass outlier detection
are labelled as new putative members of class $i$, any examples of
$D_U$ that are not merged with any of the $D_{L}^{i}$ and are
consistently clustered together throughout the $N$ iterations are
considered members of a new phenotype.


![The PhenoDisco iterative algorithm.](./Figures/phenodisco.png){#fig:pd}
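As a hedged sketch of the interface (the parameter values are
illustrative only; see *?phenoDisco* for details), a run could look
like:

```{r pdex, eval=FALSE}
## Run phenoDisco on the markers feature variable; GS is the minimum
## size for a new phenotype cluster. Results are stored in a new 'pd'
## feature variable.
pdres <- phenoDisco(tan2009r1, fcol = "markers",
                    times = 100, GS = 10)
getMarkers(pdres, fcol = "pd")
```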

# Transfer learning {#sec:tl}

When multiple sources of data are available, it is often beneficial
to take all or several of them into account, with the aim of
increasing the information available to tackle a problem of
interest. While it is at times possible to simply combine these
different sources of data, doing so can substantially harm the
performance of the analysis when the data sources are of variable
signal-to-noise ratio or when the data are drawn from different
domains and recorded with different encodings (quantitative and
binary, for example). If we define the following two data sources


1. *primary* data, of high signal-to-noise ratio, but generally
   available in limited amounts;
2. *auxiliary* data, of lower signal-to-noise ratio, but available in
   large amounts;

then, a *transfer learning* algorithm will efficiently
support/complement the primary target domain with auxiliary data
features without compromising the integrity of our primary data.

We have developed a transfer learning framework [@Breckels:2016] and
applied it to the analysis of spatial proteomics data, as described
in the `pRoloc-transfer-learning` vignette.
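As a hedged sketch only (see the `pRoloc-transfer-learning` vignette
for the actual workflow), and assuming `primary` and `auxiliary` are
*MSnSet* instances annotated with the same marker classes, the kNN
transfer learning functions listed in the table above would be called
along these lines:

```{r tlex, eval=FALSE}
## Optimise the class-specific weights (theta) given to the primary
## and auxiliary data, then classify using the best weights.
topt <- knntlOptimisation(primary, auxiliary, times = 50)
res <- knntlClassification(primary, auxiliary, getParams(topt))
```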

# Session information

All software and respective versions used to produce this document are listed below.

```{r sessioninfo, echo=FALSE}
sessionInfo()
```

# References