Type: | Package |
Title: | Tools for the Analysis of Air Pollution Data |
Version: | 2.19.0 |
Description: | Tools to analyse, interpret and understand air pollution data. Data are typically regular time series, and air quality measurements, meteorological data and dispersion model output can all be analysed. The package is described in Carslaw and Ropkins (2012, <doi:10.1016/j.envsoft.2011.09.008>) and subsequent papers. |
License: | MIT + file LICENSE |
URL: | https://openair-project.github.io/openair/, https://github.com/openair-project/openair |
BugReports: | https://github.com/openair-project/openair/issues |
Depends: | R (≥ 3.2.0) |
Imports: | cli, cluster, dplyr (≥ 1.1), graphics, grDevices, grid, hexbin, lattice, latticeExtra, lubridate, mapproj, MASS, methods, mgcv, purrr (≥ 1.0.0), Rcpp, readr, rlang, stats, tibble, tidyr, utils |
Suggests: | KernSmooth, knitr, mapdata, maps, quantreg, rmarkdown, spelling, testthat (≥ 3.0.0) |
LinkingTo: | Rcpp |
ByteCompile: | true |
Config/Needs/website: | openair-project/openairpkgdown |
Config/testthat/edition: | 3 |
Encoding: | UTF-8 |
Language: | en-GB |
LazyData: | yes |
LazyLoad: | yes |
RoxygenNote: | 7.3.2 |
NeedsCompilation: | yes |
Packaged: | 2025-08-28 08:35:48 UTC; JD38 |
Author: | David Carslaw |
Maintainer: | David Carslaw <david.carslaw@york.ac.uk> |
Repository: | CRAN |
Date/Publication: | 2025-08-28 12:20:02 UTC |
openair: Tools for the Analysis of Air Pollution Data
Description
Tools to analyse, interpret and understand air pollution data. Data are typically regular time series, and air quality measurements, meteorological data and dispersion model output can all be analysed. The package is described in Carslaw and Ropkins (2012, doi:10.1016/j.envsoft.2011.09.008) and subsequent papers.
Details
This is a UK Natural Environment Research Council (NERC) funded knowledge exchange project that aims to make available innovative analysis tools for air pollution data; with additional support from Defra. The tools have generally been developed to analyse data of hourly resolution (or at least a regular time series) both for air pollution monitoring and dispersion modelling. The availability of meteorological data at the same time resolution greatly enhances the capabilities of these tools.
openair
contains a collection of functions to analyse air pollution data. Typically it is expected that data are hourly means, although most functions can consider other time periods. The principal aim is to make available analysis techniques that most users of air quality data and model output would not normally have access to. The functions consist of those developed by the authors and a growing number from other researchers.
The package also provides access to a wide range of data sources including the UK Automatic Urban and Rural Network (AURN), networks run by Imperial College London (e.g., the LAQN) and the Scottish Air Quality Network (SAQN).
The package has a number of requirements for input data and these are discussed in the manual (available in the openair book at https://openair-project.github.io/openair/). The key requirements are that a date or date-time field must have the name date (and can be of Date or POSIXct format), that wind speed is represented as ws and that wind direction is wd.
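As a hypothetical illustration (the raw column names below are invented), an arbitrary data frame can be renamed to meet these conventions:

```r
raw <- data.frame(
  datetime   = as.POSIXct("2024-01-01 00:00", tz = "UTC") + 3600 * 0:2,
  wind_speed = c(2.1, 3.4, 1.8),
  wind_dir   = c(270, 280, 265)
)

# rename to the date/ws/wd fields that openair expects
dat <- data.frame(date = raw$datetime, ws = raw$wind_speed, wd = raw$wind_dir)
str(dat)
```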
Most functions work in a very straightforward way, but offer many options for finer control and perhaps more in-depth analysis.
NOTE: openair assumes that data are not expressed in local time where 'Daylight Saving Time' is used. All functions check whether this is the case and issue a warning if it is. It is recommended that data are expressed in UTC/GMT (or a fixed offset from it) to avoid potential problems with R and openair functions. The openair book provides advice on these issues (available on the website).
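A minimal sketch of this advice (the example time and zones are invented): re-express local clock times as UTC before analysis.

```r
# a local time in a zone observing Daylight Saving Time
local_time <- as.POSIXct("2019-06-01 12:00", tz = "Europe/London")

# the same instant re-expressed in UTC, as recommended above
utc_time <- as.POSIXct(format(local_time, tz = "UTC"), tz = "UTC")
format(utc_time, "%Y-%m-%d %H:%M %Z")
```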
To check that openair has been correctly installed, try some of the examples below.
The openair class
As well as generating the plots themselves, openair plotting functions also return an object of class "openair". The object includes three main components:
- call, the command used to generate the plot.
- data, the data frame of summarised information used to make the plot.
- plot, the plot itself.
If retained, e.g., using output <- polarPlot(mydata, "nox"), this output can be used to recover the data, reproduce or rework the original plot, or undertake further analysis.
An openair output can be manipulated using a number of generic operations, including print, plot and summary. The examples below show some uses of an openair object.
Author(s)
Maintainer: David Carslaw david.carslaw@york.ac.uk (ORCID)
Authors:
Jack Davison jack.davison@ricardo.com (ORCID)
Karl Ropkins K.Ropkins@its.leeds.ac.uk (ORCID)
References
Most reference details are given under the specific functions. The principal reference is below.
Carslaw, D.C. and K. Ropkins (2012). openair — an R package for air quality data analysis. Environmental Modelling & Software, Volume 27-28, 52-61.
See Also
See https://openair-project.github.io/openair/ for up to date information on the project, and the openair book (https://openair-project.github.io/book/) for thorough documentation and examples.
Examples
## Not run:
# load package
library(openair)
# summarise data in a compact way
summaryPlot(mydata)
# traditional wind rose
windRose(mydata)
# polar plot
polar_nox <- polarPlot(mydata, pollutant = "nox")
# see call
polar_nox$call
# get data
polar_nox$data
# could, e.g., re-plot in {ggplot2}
library(ggplot2)
ggplot(polar_nox$data, aes(u, v, fill = z)) +
geom_tile() +
coord_equal() +
scale_fill_gradientn(colours = openair::openColours(), na.value = NA)
## End(Not run)
Taylor Diagram for model evaluation with conditioning
Description
Function to draw Taylor Diagrams for model evaluation. The function allows conditioning by any categorical or numeric variables, which makes the function very flexible.
Usage
TaylorDiagram(
mydata,
obs = "obs",
mod = "mod",
group = NULL,
type = "default",
normalise = FALSE,
cols = "brewer1",
rms.col = "darkgoldenrod",
cor.col = "black",
arrow.lwd = 3,
annotate = "centred\nRMS error",
text.obs = "observed",
key = TRUE,
key.title = group,
key.columns = 1,
key.pos = "right",
strip = TRUE,
auto.text = TRUE,
...
)
Arguments
mydata |
A data frame minimally containing a column of observations and a column of predictions. |
obs |
A column of observations with which the predictions (mod) will be compared. |
mod |
A column of model predictions. Note that mod can be of length two, e.g., mod = c("mod", "mod2"), in which case the two sets of predictions are compared (shown by an arrow). |
group |
The grouping column; typically a column giving the model name when several models are compared.
|
type |
Determines how the data are split into panels, e.g., type = "season". Type can be up to length two, e.g., type = c("season", "weekday"). Note that often it will make sense to use type together with group. |
normalise |
Should the data be normalised by dividing by the standard
deviation of the observations? The statistics can be normalised (and
non-dimensionalised) by dividing both the RMS difference and the standard
deviation of the mod values by the standard deviation of the observations. |
cols |
Colours to be used for plotting. Useful options for categorical
data are available from |
rms.col |
Colour for centred-RMS lines and text. |
cor.col |
Colour for correlation coefficient lines and text. |
arrow.lwd |
Width of the arrow used when comparing two model outputs. |
annotate |
Annotation shown for RMS error. |
text.obs |
The plot annotation for observed values; default is "observed". |
key |
Should the key be shown? |
key.title |
Title for the key. |
key.columns |
Number of columns to be used in the key. With many
pollutants a single column can make the key too wide. The user can thus
choose to use several columns by setting |
key.pos |
Position of the key e.g. “top”, “bottom”,
“left” and “right”. See details in |
strip |
Should a strip be shown? |
auto.text |
Either |
... |
Other graphical parameters are passed onto |
Details
The Taylor Diagram is a very useful model evaluation tool. The diagram provides a way of showing how three complementary model performance statistics vary simultaneously. These statistics are the correlation coefficient R, the standard deviation (sigma) and the (centred) root-mean-square error. These three statistics can be plotted on one (2D) graph because of the way they are related to one another, which can be represented through the Law of Cosines.
The openair version of the Taylor Diagram has several enhancements that increase its flexibility. In particular, the straightforward way of producing conditioning plots should prove valuable under many circumstances (using the type option). Many examples of Taylor Diagrams focus on model-observation comparisons for several models using all the available data. However, more insight can be gained into model performance by partitioning the data in various ways, e.g., by season, daylight/nighttime, day of the week, by levels of a numeric variable such as wind speed, or by land-use type.
To consider several pollutants on one plot, a column identifying the pollutant name can be used, e.g., pollutant. Then the Taylor Diagram can be plotted as (assuming a data frame thedata):
TaylorDiagram(thedata, obs = "obs", mod = "mod", group = "model", type = "pollutant")
which will give the model performance by pollutant in each panel.
Note that it is important that each panel represents data with the same mean
observed data across different groups. Therefore TaylorDiagram(mydata, group = "model", type = "season")
is OK, whereas TaylorDiagram(mydata, group = "season", type = "model")
is not because each panel (representing a
model) will have four different mean values — one for each season.
Generally, the option group is either missing (one model being evaluated) or represents a column giving the model name. However, the data can be normalised using the normalise option. Normalisation is carried out on a per group/type basis, making it possible to compare data on different scales, e.g., TaylorDiagram(mydata, group = "season", type = "model", normalise = TRUE). In this way it is possible to compare different pollutants, sites etc. in the same panel.
Also note that if multiple sites are present it makes sense to use type = "site" to ensure that each panel represents an individual site with its own specific standard deviation etc. If this is not the case then select a single site from the data first, e.g., subset(mydata, site == "Harwell").
Value
an openair object. If retained, e.g., using output <- TaylorDiagram(thedata, obs = "nox", mod = "mod"), this output can be used to recover the data, reproduce or rework the original plot or undertake further analysis. For example, output$data will be a data frame consisting of the group, type, correlation coefficient (R), and the standard deviation of the observations and measurements.
Author(s)
David Carslaw
References
Taylor, K.E.: Summarizing multiple aspects of model performance in a single diagram. J. Geophys. Res., 106, 7183-7192, 2001 (also see PCMDI Report 55).
See Also
taylor.diagram from the plotrix package, from which some of the annotation code was used.
Other model evaluation functions:
conditionalEval()
,
conditionalQuantile()
,
modStats()
Examples
## in the examples below, most effort goes into making some artificial data
## the function itself can be run very simply
## Not run:
## dummy model data for 2003
dat <- selectByDate(mydata, year = 2003)
dat <- data.frame(date = dat$date, obs = dat$nox, mod = dat$nox)
## now make mod worse by adding bias and noise according to the month
## do this for 3 different models
dat <- transform(dat, month = as.numeric(format(date, "%m")))
mod1 <- transform(dat,
mod = mod + 10 * month + 10 * month * rnorm(nrow(dat)),
model = "model 1"
)
## lag the results for mod1 to make the correlation coefficient worse
## without affecting the sd
mod1 <- transform(mod1, mod = c(mod[5:length(mod)], mod[(length(mod) - 3):
length(mod)]))
## model 2
mod2 <- transform(dat,
mod = mod + 7 * month + 7 * month * rnorm(nrow(dat)),
model = "model 2"
)
## model 3
mod3 <- transform(dat,
mod = mod + 3 * month + 3 * month * rnorm(nrow(dat)),
model = "model 3"
)
mod.dat <- rbind(mod1, mod2, mod3)
## basic Taylor plot
TaylorDiagram(mod.dat, obs = "obs", mod = "mod", group = "model")
## Taylor plot by season
TaylorDiagram(mod.dat, obs = "obs", mod = "mod", group = "model", type = "season")
## now show how to evaluate model improvement (or otherwise)
mod1a <- transform(dat,
mod = mod + 2 * month + 2 * month * rnorm(nrow(dat)),
model = "model 1"
)
mod2a <- transform(mod2, mod = mod * 1.3)
mod3a <- transform(dat,
mod = mod + 10 * month + 10 * month * rnorm(nrow(dat)),
model = "model 3"
)
mod.dat2 <- rbind(mod1a, mod2a, mod3a)
mod.dat$mod2 <- mod.dat2$mod
## now we have a data frame with 3 models, 1 set of observations
## and TWO sets of model predictions (mod and mod2)
## do for all models
TaylorDiagram(mod.dat, obs = "obs", mod = c("mod", "mod2"), group = "model")
## End(Not run)
## Not run:
## all models, by season
TaylorDiagram(mod.dat,
obs = "obs", mod = c("mod", "mod2"), group = "model",
type = "season"
)
## consider two groups (model/month). In this case all months are shown by model
## but are only differentiated by model.
TaylorDiagram(mod.dat, obs = "obs", mod = "mod", group = c("model", "month"))
## End(Not run)
Tests for trends using Theil-Sen estimates
Description
Theil-Sen slope estimates and tests for trend.
Usage
TheilSen(
mydata,
pollutant = "nox",
deseason = FALSE,
type = "default",
avg.time = "month",
statistic = "mean",
percentile = NA,
data.thresh = 0,
alpha = 0.05,
dec.place = 2,
xlab = "year",
lab.frac = 0.99,
lab.cex = 0.8,
x.relation = "same",
y.relation = "same",
data.col = "cornflowerblue",
trend = list(lty = c(1, 5), lwd = c(2, 1), col = c("red", "red")),
text.col = "darkgreen",
slope.text = NULL,
cols = NULL,
shade = "grey95",
auto.text = TRUE,
autocor = FALSE,
slope.percent = FALSE,
date.breaks = 7,
date.format = NULL,
plot = TRUE,
silent = FALSE,
...
)
Arguments
mydata |
A data frame containing the field |
pollutant |
The parameter for which a trend test is required. Mandatory. |
deseason |
Should the data be deseasonalised first? If |
type |
Determines how the data are split into panels. Type can be up to length two, e.g., type = c("season", "weekday"). |
avg.time |
Can be “month” (the default), “season” or “year”. Determines the time over which data should be averaged. Note that for “year”, six or more years are required. For “season” the data are split up into spring: March, April, May etc. Note that December is considered as belonging to winter of the following year. |
statistic |
Statistic used for calculating monthly values. Default is
“mean”, but can also be “percentile”. See |
percentile |
Single percentile value to use if |
data.thresh |
The data capture threshold to use (%) when aggregating
the data using |
alpha |
For the confidence interval calculations of the slope. The default is 0.05. To show 99% confidence intervals in the trend, choose alpha = 0.01 etc. |
dec.place |
The number of decimal places to display the trend estimate at. The default is 2. |
xlab |
x-axis label, by default |
lab.frac |
Fraction along the y-axis that the trend information should be printed at, default 0.99. |
lab.cex |
Size of text for trend information. |
x.relation |
This determines how the x-axis scale is plotted. “same” ensures all panels use the same scale and “free” will use panel-specific scales. The latter is a useful setting when plotting data with very different values. |
y.relation |
This determines how the y-axis scale is plotted. “same” ensures all panels use the same scale and “free” will use panel-specific scales. The latter is a useful setting when plotting data with very different values. |
data.col |
Colour name for the data |
trend |
list containing information on the line width, line type and line colour for the main trend line and confidence intervals respectively. |
text.col |
Colour name for the slope/uncertainty numeric estimates |
slope.text |
The text shown for the slope (default is ‘units/year’). |
cols |
Predefined colour scheme, currently only enabled for
|
shade |
The colour used for marking alternate years. Use “white” or “transparent” to remove shading. |
auto.text |
Either |
autocor |
Should autocorrelation be considered in the trend uncertainty
estimates? The default is |
slope.percent |
Should the slope and the slope uncertainties be
expressed as a percentage change per year? The default is For |
date.breaks |
Number of major x-axis intervals to use. The function
will try and choose a sensible number of dates/times as well as formatting
the date/time appropriately to the range being considered. This does not
always work as desired automatically. The user can therefore increase or
decrease the number of intervals by adjusting the value of
|
date.format |
This option controls the date format on the
x-axis. While |
plot |
Should a plot be produced? |
silent |
When |
... |
Other graphical parameters passed onto |
Details
The TheilSen
function provides a collection of functions to
analyse trends in air pollution data. The TheilSen
function
is flexible in the sense that it can be applied to data in many
ways e.g. by day of the week, hour of day and wind direction. This
flexibility makes it much easier to draw inferences from data
e.g. why is there a strong downward trend in concentration from
one wind sector and not another, or why trends on one day of the
week or a certain time of day are unexpected.
For data that are strongly seasonal, perhaps from a background site, or a pollutant such as ozone, it will be important to deseasonalise the data (using the option deseason = TRUE). Similarly, for data that increase, then decrease, or show sharp changes, it may be better to use smoothTrend().
A minimum of 6 points are required for trend estimates to be made.
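As an illustration of the core estimator, the Theil-Sen slope is the median of the slopes between all pairs of points. A minimal base-R sketch (not openair's implementation, which also derives uncertainties by bootstrap simulation):

```r
theil_sen_slope <- function(x, y) {
  # all index pairs (i, j) with i < j
  pairs <- utils::combn(length(x), 2)
  slopes <- (y[pairs[2, ]] - y[pairs[1, ]]) /
    (x[pairs[2, ]] - x[pairs[1, ]])
  stats::median(slopes, na.rm = TRUE)
}

# a noiseless linear series recovers the true slope exactly,
# and the median makes the estimate robust to outliers
theil_sen_slope(1:10, 2 * (1:10) + 5)
```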
Note that since version 0.5-11 openair uses Theil-Sen to derive the p values for the slope as well. This is to ensure there is consistency between the calculated p value and other trend parameters, i.e., slope estimates and uncertainties. The p value and all uncertainties are calculated through bootstrap simulations.
Note that the symbols shown next to each trend estimate relate to how statistically significant the trend estimate is: p < 0.001 = ***, p < 0.01 = **, p < 0.05 = * and p < 0.1 = +.
Some of the code used in TheilSen is based on that from Rand Wilcox. This mostly relates to the Theil-Sen slope estimates and uncertainties. Further modifications have been made to take account of correlated data based on Kunsch (1989). The basic function has been adapted to take account of auto-correlated data using block bootstrap simulations if autocor = TRUE (Kunsch, 1989). We follow the suggestion of Kunsch (1989) of setting the block length to n^(1/3), where n is the length of the time series.
The slope estimate and confidence intervals in the slope are plotted and numerical information presented.
Value
an openair object. The data component of the TheilSen output includes two subsets: main.data, the monthly data, and res2, the trend statistics. For output <- TheilSen(mydata, "nox"), these can be extracted as output$data$main.data and output$data$res2, respectively. Note: in the case of the intercept, it is assumed the y-axis crosses the x-axis on 1/1/1970.
Author(s)
David Carslaw with some trend code from Rand Wilcox
References
Helsel, D., Hirsch, R., 2002. Statistical methods in water resources. US Geological Survey. Note that this is a very good resource for statistics as applied to environmental data.
Hirsch, R. M., Slack, J. R., Smith, R. A., 1982. Techniques of trend analysis for monthly water-quality data. Water Resources Research 18 (1), 107-121.
Kunsch, H. R., 1989. The jackknife and the bootstrap for general stationary observations. Annals of Statistics 17 (3), 1217-1241.
Sen, P. K., 1968. Estimates of regression coefficient based on Kendall's tau. Journal of the American Statistical Association 63(324).
Theil, H., 1950. A rank invariant method of linear and polynomial regression analysis, i, ii, iii. Proceedings of the Koninklijke Nederlandse Akademie Wetenschappen, Series A - Mathematical Sciences 53, 386-392, 521-525, 1397-1412.
... see also several of the Air Quality Expert Group (AQEG) reports for the use of similar tests applied to UK/European air quality data.
See Also
Other time series and trend functions:
calendarPlot()
,
runRegression()
,
smoothTrend()
,
timePlot()
,
timeProp()
,
timeVariation()
,
trendLevel()
Examples
# trend plot for nox
TheilSen(mydata, pollutant = "nox")
# trend plot for ozone with p=0.01 i.e. uncertainty in slope shown at
# 99 % confidence interval
## Not run:
TheilSen(mydata, pollutant = "o3", ylab = "o3 (ppb)", alpha = 0.01)
## End(Not run)
# trend plot by each of 8 wind sectors
## Not run:
TheilSen(mydata, pollutant = "o3", type = "wd", ylab = "o3 (ppb)")
## End(Not run)
# and for a subset of data (from year 2000 onwards)
## Not run:
TheilSen(selectByDate(mydata, year = 2000:2005), pollutant = "o3", ylab = "o3 (ppb)")
## End(Not run)
Calculate summary statistics for air pollution data by year
Description
This function calculates a range of common and air pollution-specific
statistics from a data frame. The statistics are calculated on an annual
basis and the input is assumed to be hourly data. The function can cope with
several sites and years, e.g., using type = "site"
. The user can control
the output by setting transpose
appropriately. Note that the input data is
assumed to be in mass units, e.g., ug/m3 for all species except CO (mg/m3).
Usage
aqStats(
mydata,
pollutant = "no2",
type = "default",
data.thresh = 0,
percentile = c(95, 99),
transpose = FALSE,
progress = TRUE,
...
)
Arguments
mydata |
A data frame containing a |
pollutant |
The name of a pollutant e.g. |
type |
|
data.thresh |
The data capture threshold to use (%). A value of zero
means that all available data will be used in a particular period
regardless of the number of values available. Conversely, a value of 100
will mean that all data will need to be present for the average to be
calculated, else it is recorded as |
percentile |
Percentile values to calculate for each pollutant. |
transpose |
The default is to return a data frame with columns
representing the statistics. If |
progress |
Show a progress bar when many groups make up |
... |
Other arguments, currently unused. |
Details
The following statistics are calculated:
For all pollutants:
- data.capture — percentage data capture over a full year.
- mean — annual mean.
- minimum — minimum hourly value.
- maximum — maximum hourly value.
- median — median value.
- max.daily — maximum daily mean.
- max.rolling.8 — maximum 8-hour rolling mean.
- max.rolling.24 — maximum 24-hour rolling mean.
- percentile.95 — 95th percentile. Note that several percentiles can be calculated.
When pollutant == "o3"
:
- roll.8.O3.gt.100 — number of days when the daily maximum rolling 8-hour mean ozone concentration is >100 ug/m3. This is the target value.
- roll.8.O3.gt.120 — number of days when the daily maximum rolling 8-hour mean ozone concentration is >120 ug/m3. This is the Limit Value, not to be exceeded on more than 10 days a year.
- AOT40 — the accumulated amount of ozone over the threshold value of 40 ppb for daylight hours in the growing season (April to September). Note that latitude and longitude can also be passed to this calculation.
When pollutant == "no2"
:
- hours — number of hours NO2 is more than 200 ug/m3.
When pollutant == "pm10"
:
- days — number of days PM10 is more than 50 ug/m3.
For the rolling means, the user can supply the option align
, which can be
"centre" (default), "left" or "right". See rollingMean()
for more details.
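As an illustrative base-R sketch of a centred rolling mean (openair's rollingMean() additionally handles alignment and data capture):

```r
roll_mean <- function(x, width = 8) {
  # centred moving average; NA where the window is incomplete
  as.numeric(stats::filter(x, rep(1 / width, width), sides = 2))
}

roll_mean(1:10, width = 3)
```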
There can be small discrepancies with the AURN due to the rounding of data. The aqStats() function does not round, whereas AURN data can be rounded at several stages during the calculations.
Author(s)
David Carslaw
Examples
# Statistics for 2004. NOTE! these data are in ppb/ppm so the
# example is for illustrative purposes only
aqStats(selectByDate(mydata, year = 2004), pollutant = "no2")
Bin data, calculate mean and bootstrap confidence interval in the mean
Description
binData()
summarises data by intervals and calculates the mean and
bootstrap confidence intervals (by default 95% CI) in the mean of a chosen
variable in a data frame. Any other numeric variables are summarised by their
mean intervals. This occurs via bootMeanDF()
, which calculates the
uncertainty intervals in the mean of a vector.
Usage
binData(
mydata,
bin = "nox",
uncer = "no2",
type = "default",
n = 40,
interval = NA,
breaks = NA,
conf.int = 0.95,
B = 250,
...
)
bootMeanDF(x, conf.int = 0.95, B = 1000)
Arguments
mydata |
Name of the data frame to process. |
bin |
The name of the column to divide into intervals. |
uncer |
The name of the column for which the mean, lower and upper
uncertainties should be calculated for each interval of |
type |
Used for splitting the data further. Passed to |
n |
The number of intervals to split |
interval |
The interval to be used for binning the data. |
breaks |
User specified breaks to use for binning. |
conf.int |
The confidence interval, defaulting to |
B |
The number of bootstrap simulations. |
... |
Other parameters that are passed on to |
x |
A vector from which the mean and bootstrap confidence intervals in the mean are to be calculated |
Details
There are three options for binning. The default is to split bin into 40 intervals. Second, the user can choose a binning interval, e.g., interval = 5. Third, the user can supply their own breaks to use as binning intervals. Note that intervals are calculated on the whole dataset before the data are cut into categories using type.
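The bootstrap used by bootMeanDF() can be sketched in base R as follows (an illustrative version, not the package's implementation; the mean/min/max names mirror the columns used in the examples below):

```r
boot_mean_ci <- function(x, conf.int = 0.95, B = 250) {
  # resample with replacement; the spread of the resampled means
  # gives the uncertainty interval in the mean
  boot_means <- replicate(B, mean(sample(x, replace = TRUE)))
  alpha <- (1 - conf.int) / 2
  c(
    mean = mean(x),
    min  = unname(stats::quantile(boot_means, alpha)),
    max  = unname(stats::quantile(boot_means, 1 - alpha))
  )
}

set.seed(42)
boot_mean_ci(rnorm(100, mean = 10))
```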
Value
Returns a summarised data frame with new columns for the mean and upper / lower confidence intervals in the mean.
Examples
# work with vectors
test <- rnorm(20, mean = 10)
bootMeanDF(test)
# how does nox vary by intervals of wind speed?
results <- binData(mydata, bin = "ws", uncer = "nox")
## Not run:
library(ggplot2)
ggplot(results, aes(x = ws, y = mean, ymin = min, ymax = max)) +
geom_pointrange()
## End(Not run)
# what about weekend vs weekday?
results2 <- binData(mydata, bin = "ws", uncer = "nox", type = "weekend")
## Not run:
ggplot(results2, aes(x = ws, y = mean, ymin = min, ymax = max)) +
geom_pointrange() +
facet_wrap(vars(weekend))
## End(Not run)
Estimate NO2/NOX emission ratios from monitoring data
Description
Given hourly NOX and NO2 from a roadside site and hourly NOX, NO2 and O3 from a background site, the function will estimate the emissions ratio of NO2/NOX — the level of primary NO2.
Usage
calcFno2(input, tau = 60, user.fno2, main = "", xlab = "year", ...)
Arguments
input |
A data frame with the following fields. |
tau |
Mixing time scale. It is unlikely the user will need to adjust this. See details below. |
user.fno2 |
User-supplied f-NO2 fraction e.g. 0.1 is a NO2/NOX ratio of
10% by volume. |
main |
Title of plot if required. |
xlab |
x-axis label. |
... |
Arguments passed on to
|
Details
The principal purpose of this function is to estimate the level of primary (or direct) NO2 from road vehicles. When hourly data of NOX, NO2 and O3 are available, the total oxidant method of Clapp and Jenkin (2001) can be used. If roadside O3 measurements are available, see linearRelation() for details of how to estimate the primary NO2 fraction.
In the absence of roadside O3 measurements, it is rather more problematic to calculate the fraction of primary NO2. Carslaw and Beevers (2005c) developed an approach based on the analysis of roadside and background measurements. The increment in roadside NO2 concentrations is primarily determined by direct emissions of NO2 and the availability of O3 to react with NO to form NO2. The method aims to quantify the amount of NO2 formed through these two processes by seeking the optimum level of primary NO2 that gives the least error.
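The total oxidant idea referred to above can be sketched as a simple regression (illustrative only: the column names and synthetic data are assumptions, and calcFno2() itself uses a more involved optimisation):

```r
# OX = O3 + NO2. Regressing OX on NOX splits the oxidant into a
# NOX-independent (regional) intercept and a NOX-dependent (local)
# slope, which approximates the primary NO2 fraction
# (after Clapp and Jenkin, 2001).
estimate_fno2 <- function(dat) {
  dat$ox <- dat$o3 + dat$no2
  unname(coef(lm(ox ~ nox, data = dat))["nox"])
}

# synthetic check: data constructed with a 10% primary NO2 fraction
set.seed(1)
nox <- runif(200, 0, 150)
dat <- data.frame(nox = nox, no2 = 0.1 * nox + 5, o3 = rep(25, 200))
estimate_fno2(dat)
```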
Value
an openair object
Author(s)
David Carslaw
References
Clapp, L.J., Jenkin, M.E., 2001. Analysis of the relationship between ambient levels of O3, NO2 and NO as a function of NOX in the UK. Atmospheric Environment 35 (36), 6391-6405.
Carslaw, D.C. and N Carslaw (2007). Detecting and characterising small changes in urban nitrogen dioxide concentrations. Atmospheric Environment. Vol. 41, 4723-4733.
Carslaw, D.C., Beevers, S.D. and M.C. Bell (2007). Risks of exceeding the hourly EU limit value for nitrogen dioxide resulting from increased road transport emissions of primary nitrogen dioxide. Atmospheric Environment 41 2073-2082.
Carslaw, D.C. (2005a). Evidence of an increasing NO2/NOX emissions ratio from road traffic emissions. Atmospheric Environment, 39(26) 4793-4802.
Carslaw, D.C. and Beevers, S.D. (2005b). Development of an urban inventory for road transport emissions of NO2 and comparison with estimates derived from ambient measurements. Atmospheric Environment, (39): 2049-2059.
Carslaw, D.C. and Beevers, S.D. (2005c). Estimations of road vehicle primary NO2 exhaust emission fractions using monitoring data in London. Atmospheric Environment, 39(1): 167-177.
Carslaw, D. C. and S. D. Beevers (2004). Investigating the Potential Importance of Primary NO2 Emissions in a Street Canyon. Atmospheric Environment 38(22): 3585-3594.
Carslaw, D. C. and S. D. Beevers (2004). New Directions: Should road vehicle emissions legislation consider primary NO2? Atmospheric Environment 38(8): 1233-1234.
See Also
linearRelation()
if you have roadside ozone
measurements.
Calculate percentile values from a time series
Description
Calculates multiple percentile values from a time series, with flexible time
aggregation. This function is a wrapper for timeAverage()
, making it easier
to calculate several percentiles at once. Like timeAverage()
, it requires a
data frame with a date
field and one other numeric variable.
Usage
calcPercentile(
mydata,
pollutant = "o3",
avg.time = "month",
percentile = 50,
type = "default",
data.thresh = 0,
start.date = NA,
end.date = NA,
prefix = "percentile."
)
Arguments
mydata |
A data frame containing a |
pollutant |
Name of column containing variable to summarise, likely a
pollutant (e.g., |
avg.time |
This defines the time period to average to. Can be Note that |
percentile |
A vector of percentile values; for example, |
type |
|
data.thresh |
The data capture threshold to use (%). A value of zero
means that all available data will be used in a particular period
regardless of the number of values available. Conversely, a value of 100
will mean that all data will need to be present for the average to be
calculated, else it is recorded as |
start.date |
A string giving a start date to use. This is sometimes
useful if a time series starts between obvious intervals. For example, for
a 1-minute time series that starts |
end.date |
A string giving an end date to use. This is sometimes useful
to make sure a time series extends to a known end point and is useful when
|
prefix |
Each new column is named by appending a |
Value
Returns a data.frame
with a date
column plus an additional
column for each given percentile
.
Author(s)
David Carslaw
Examples
# 95th percentile monthly o3 concentrations
percentiles <- calcPercentile(mydata,
pollutant = "o3",
avg.time = "month", percentile = 95
)
head(percentiles)
# 5, 50, 95th percentile monthly o3 concentrations
## Not run:
percentiles <- calcPercentile(mydata,
pollutant = "o3",
avg.time = "month", percentile = c(5, 50, 95)
)
head(percentiles)
## End(Not run)
Plot time series values in a conventional calendar format
Description
This function will plot data by month laid out in a conventional calendar format. The main purpose is to help rapidly visualise potentially complex data in a familiar way. Users can also choose to show daily mean wind vectors if wind speed and direction are available.
Usage
calendarPlot(
mydata,
pollutant = "nox",
year = 2003,
month = 1:12,
type = "default",
annotate = "date",
statistic = "mean",
cols = "heat",
limits = c(0, 100),
lim = NULL,
col.lim = c("grey30", "black"),
col.arrow = "black",
font.lim = c(1, 2),
cex.lim = c(0.6, 1),
cex.date = 0.6,
digits = 0,
data.thresh = 0,
labels = NA,
breaks = NA,
w.shift = 0,
w.abbr.len = 1,
remove.empty = TRUE,
main = NULL,
key.header = "",
key.footer = "",
key.position = "right",
key = TRUE,
auto.text = TRUE,
plot = TRUE,
...
)
Arguments
mydata |
A data frame minimally containing |
pollutant |
Mandatory. A pollutant name corresponding to a variable in a
data frame should be supplied e.g. |
year |
Year to plot e.g. |
month |
If only certain months are required. By default the function will
plot an entire year even if months are missing. To only plot certain months
use the |
type |
Not yet implemented. |
annotate |
This option controls what appears on each day of the calendar. Can be: "date", which shows the day of the month; "wd", which shows the vector-averaged wind direction; "ws", which shows the vector-averaged wind direction scaled by wind speed; or "value", which shows the daily mean value. |
statistic |
Statistic passed to |
cols |
Colours to be used for plotting. See |
limits |
Use this option to manually set the colour scale limits. This
is useful in the case when there is a need for two or more plots and a
consistent scale is needed on each. Set the limits to cover the maximum
range of the data for all plots of interest. For example, if one plot had
data covering 0–60 and another 0–100, then set |
lim |
A threshold value to help differentiate values above and below
|
col.lim |
For the annotation of concentration labels on each day. The
first sets the colour of the text below |
col.arrow |
The colour of the annotated wind direction / wind speed arrows. |
font.lim |
For the annotation of concentration labels on each day. The
first sets the font of the text below |
cex.lim |
For the annotation of concentration labels on each day. The
first sets the size of the text below |
cex.date |
The base size of the annotation text for the date. |
digits |
The number of digits used to display concentration values when
|
data.thresh |
Data capture threshold passed to |
labels |
If a categorical scale is defined using |
breaks |
If a categorical scale is required then these breaks will be
used. For example, |
w.shift |
Controls the order of the days of the week. By default the
plot shows Saturday first ( |
w.abbr.len |
The default ( |
remove.empty |
Should months with no data present be removed? Default is
|
main |
The plot title; default is pollutant and year. |
key.header |
Adds additional text/labels to the scale key. For example,
passing |
key.footer |
see |
key.position |
Location where the scale key is to be plotted. Allowed
arguments currently include |
key |
Fine control of the scale key via |
auto.text |
Either |
plot |
Should a plot be produced? |
... |
Other graphical parameters are passed onto the |
Details
calendarPlot()
will plot data in a conventional calendar format, i.e., by
month and day of the week. Daily statistics are calculated using
timeAverage()
, which by default will calculate the daily mean
concentration.
If wind direction is available it is then possible to plot the wind direction
vector on each day. This is very useful for getting a feel for the
meteorological conditions that affect pollutant concentrations. Note that if
hourly or higher time resolution data are supplied, then calendarPlot()
will
calculate daily averages using timeAverage()
, which ensures that wind
directions are vector-averaged.
If wind speed is also available, then setting the option annotate = "ws"
will plot the wind vectors whose length is scaled to the wind speed. Thus
information on the daily mean wind speed and direction is available.
It is also possible to plot categorical scales. This is useful where, for
example, an air quality index defines concentrations as bands, e.g., "good",
"poor". In these cases users must supply labels
and corresponding breaks
.
Note that it is possible to pre-calculate concentrations in some way before
passing the data to calendarPlot()
. For example rollingMean()
could be
used to calculate rolling 8-hour mean concentrations. The data can then be
passed to calendarPlot()
and statistic = "max"
chosen, which will plot
maximum daily 8-hour mean concentrations.
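A minimal sketch of this approach, assuming the openair example data mydata are available; the width and new.name arguments shown for rollingMean() are indicative, so check its help page for the exact argument names:

```r
library(openair)

# Sketch: pre-calculate a rolling 8-hour mean ozone series, then plot
# the daily maximum of that series in calendar format. The column name
# "rolling8o3" is chosen here purely for illustration.
o3_8h <- rollingMean(mydata,
  pollutant = "o3", width = 8,
  new.name = "rolling8o3"
)
calendarPlot(o3_8h,
  pollutant = "rolling8o3", year = 2003,
  statistic = "max"
)
```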
Value
an openair object
Author(s)
David Carslaw
See Also
Other time series and trend functions:
TheilSen()
,
runRegression()
,
smoothTrend()
,
timePlot()
,
timeProp()
,
timeVariation()
,
trendLevel()
Examples
# basic plot
calendarPlot(mydata, pollutant = "o3", year = 2003)
# show wind vectors
calendarPlot(mydata, pollutant = "o3", year = 2003, annotate = "wd")
## Not run:
# show wind vectors scaled by wind speed and different colours
calendarPlot(mydata,
pollutant = "o3", year = 2003, annotate = "ws",
cols = "heat"
)
# show only specific months with selectByDate
calendarPlot(selectByDate(mydata, month = c(3, 6, 10), year = 2003),
pollutant = "o3", year = 2003, annotate = "ws", cols = "heat"
)
# categorical scale example
calendarPlot(mydata,
pollutant = "no2", breaks = c(0, 50, 100, 150, 1000),
labels = c("Very low", "Low", "High", "Very High"),
cols = c("lightblue", "green", "yellow", "red"), statistic = "max"
)
# UK daily air quality index
pm10.breaks <- c(0, 17, 34, 50, 59, 67, 75, 84, 92, 100, 1000)
calendarPlot(mydata, "pm10",
year = 1999, breaks = pm10.breaks,
labels = c(1:10), cols = "daqi", statistic = "mean", key.header = "DAQI"
)
## End(Not run)
Conditional quantile estimates with additional variables for model evaluation
Description
This function enhances conditionalQuantile()
by also considering how other
variables vary over the same intervals. Conditional quantiles are very useful
on their own for model evaluation, but provide no direct information on how
other variables change at the same time. For example, a conditional quantile
plot of ozone concentrations may show that low concentrations of ozone tend
to be under-predicted. However, the cause of the under-prediction can be
difficult to determine. By considering how well the model predicts
other variables over the same intervals, more insight can be gained into the
underlying reasons why model performance is poor.
Usage
conditionalEval(
mydata,
obs = "obs",
mod = "mod",
var.obs = "var.obs",
var.mod = "var.mod",
type = "default",
bins = 31,
statistic = "MB",
xlab = "predicted value",
ylab = "statistic",
col = brewer.pal(5, "YlOrRd"),
col.var = "Set1",
var.names = NULL,
auto.text = TRUE,
...
)
Arguments
mydata |
A data frame containing the field |
obs |
The name of the observations in |
mod |
The name of the predictions (modelled values) in |
var.obs |
Other variable observations for which statistics should be
calculated. Can be more than length one e.g. |
var.mod |
Other variable predictions for which statistics should be
calculated. Can be more than length one e.g. |
type |
It is also possible to choose |
bins |
Number of bins to be used in calculating the different quantile levels. |
statistic |
Statistic(s) to be plotted. Can be “MB”,
“NMB”, “r”, “COE”, “MGE”, “NMGE”,
“RMSE” and “FAC2”, as described in
|
xlab |
label for the x-axis, by default “predicted value”. |
ylab |
label for the y-axis, by default “statistic”. |
col |
Colours to be used for plotting the uncertainty bands and median line. Must be of length 5 or more. |
col.var |
Colours for the additional variables to be compared. See
|
var.names |
Variable names to be shown on plot for plotting
|
auto.text |
Either |
... |
Other graphical parameters passed onto |
Details
The conditionalEval
function provides information on how other
variables vary across the same intervals as shown on the conditional quantile
plot. There are two types of variable that can be considered by setting the
value of statistic
. First, statistic
can be another variable in
the data frame. In this case the plot will show the different proportions of
statistic
across the range of predictions. For example statistic = "season"
will show for each interval of mod
the proportion of
predictions that were spring, summer, autumn or winter. This is useful
because, if model performance is worse at high concentrations of
mod
, then knowing that these tend to occur during a particular season
can be very helpful when trying to understand why a model fails.
See cutData()
for more details on the types of variable that can
be statistic
. Another example would be statistic = "ws"
(if
wind speed were available in the data frame), which would then split wind
speed into four quantiles and plot the proportions of each.
Second, conditionalEval
can simultaneously plot the model performance
of other observed/predicted variable pairs according to different
model evaluation statistics. These statistics derive from the
modStats()
function and include “MB”, “NMB”,
“r”, “COE”, “MGE”, “NMGE”, “RMSE” and
“FAC2”. More than one statistic can be supplied e.g. statistic = c("NMB", "COE")
. Bootstrap samples are taken from the corresponding values
of other variables to be plotted and their statistics with 95%
confidence intervals calculated. In this case, the model performance of other
variables is shown across the same intervals of mod
, rather than just
the values of single variables. In this second case the model would need to
provide observed/predicted pairs of other variables.
For example, a model may provide predictions of NOx and wind speed (for which
there are also observations available). The conditionalEval
function
will show how well these other variables are predicted for the same intervals
of the main variables assessed in the conditional quantile e.g. ozone. In
this case, values are supplied to var.obs
(observed values for other
variables) and var.mod
(modelled values for other variables). For
example, to consider how well the model predicts NOx and wind speed
var.obs = c("nox.obs", "ws.obs")
and var.mod = c("nox.mod", "ws.mod")
would be supplied (assuming nox.obs, nox.mod, ws.obs, ws.mod
are present in the data frame). The analysis could show, for example, that
when ozone concentrations are under-predicted, the model also
over-predicts concentrations of NOx at the same time, or under-predicts wind
speeds. Such information can thus help identify the underlying causes of poor
model performance. For example, an under-prediction in wind speed could
result in higher surface NOx concentrations and lower ozone concentrations.
Similarly, if wind speed predictions were good and NOx was over-predicted it
might suggest an over-estimate of NOx emissions. One or more additional
variables can be plotted.
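Putting the above together, a call might look like the following sketch; the data frame modTest and its paired observed/modelled column names are hypothetical:

```r
library(openair)

# Hypothetical data frame 'modTest' with paired columns:
# o3.obs/o3.mod (the main variable) plus nox.obs/nox.mod and
# ws.obs/ws.mod (additional variables evaluated over the same
# intervals of the main prediction).
conditionalEval(modTest,
  obs = "o3.obs", mod = "o3.mod",
  var.obs = c("nox.obs", "ws.obs"),
  var.mod = c("nox.mod", "ws.mod"),
  statistic = c("NMB", "COE")
)
```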
A special case is statistic = "cluster"
. In this case a data frame is
provided that contains the cluster calculated by trajCluster()
and importTraj()
. Alternatively users could supply their own
pre-calculated clusters. These calculations can be very useful in showing
whether certain back trajectory clusters are associated with poor (or good)
model performance. Note that in the case of statistic = "cluster"
there will be fewer data points used in the analysis compared with the
ordinary statistics above because the trajectories are only available every
three hours. Also note that statistic = "cluster"
cannot be used
together with the ordinary model evaluation statistics such as MB. The output
will be a bar chart showing the proportion of each interval of mod
by
cluster number.
Far more insight can be gained into model performance through conditioning
using type
. For example, type = "season"
will plot conditional
quantiles and the associated model performance statistics of other variables
by each season. type
can also be a factor or character field e.g.
representing different models used.
See Wilks (2005) for more details of conditional quantile plots.
Author(s)
David Carslaw
References
Wilks, D. S., 2005. Statistical Methods in the Atmospheric Sciences, Volume 91, Second Edition (International Geophysics), 2nd Edition. Academic Press.
See Also
The verification
package for comprehensive functions for
forecast verification.
Other model evaluation functions:
TaylorDiagram()
,
conditionalQuantile()
,
modStats()
Conditional quantile estimates for model evaluation
Description
Function to calculate conditional quantiles with flexible conditioning. The function is for use in model evaluation and more generally to help better understand forecast predictions and how well they agree with observations.
Usage
conditionalQuantile(
mydata,
obs = "obs",
mod = "mod",
type = "default",
bins = 31,
min.bin = c(10, 20),
xlab = "predicted value",
ylab = "observed value",
col = brewer.pal(5, "YlOrRd"),
key.columns = 2,
key.position = "bottom",
auto.text = TRUE,
...
)
Arguments
mydata |
A data frame containing the field |
obs |
The name of the observations in |
mod |
The name of the predictions (modelled values) in |
type |
It is also possible to choose type. Type can be up to length two e.g. |
bins |
Number of bins to be used in calculating the different quantile levels. |
min.bin |
The minimum number of points required for the estimates of the 25/75th and 10/90th percentiles. |
xlab |
label for the x-axis, by default “predicted value”. |
ylab |
label for the y-axis, by default “observed value”. |
col |
Colours to be used for plotting the uncertainty bands and median line. Must be of length 5 or more. |
key.columns |
Number of columns to be used in the key. |
key.position |
Location of the key e.g. “top”, “bottom”,
“right”, “left”. See |
auto.text |
Either |
... |
Other graphical parameters passed onto |
Details
Conditional quantiles are a very useful way of considering model performance against observations for continuous measurements (Wilks, 2005). The conditional quantile plot splits the data into evenly spaced bins. For each predicted value bin, e.g. from 0 to 10 ppb, the corresponding values of the observations are identified and the median, 25/75th and 10/90th percentiles (quantiles) calculated for that bin. The data are plotted to show how these values vary across all bins. For a time series of observations and predictions that agree precisely, the median value of the predictions will equal that for the observations for each bin.
The conditional quantile plot differs from the quantile-quantile plot (Q-Q plot) that is often used to compare observations and predictions. A Q-Q plot separately considers the distributions of observations and predictions, whereas the conditional quantile uses the corresponding observations for a particular interval in the predictions. Take as an example two time series: the first a series of real observations and the second a lagged time series of the same observations representing the predictions. These two time series will have identical (or very nearly identical) distributions (e.g. same median, minimum and maximum). A Q-Q plot would show a straight line indicating perfect agreement, whereas the conditional quantile will not. This is because in any interval of the predictions the corresponding observations now have different values.
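The lagged time series example can be sketched as follows, assuming the openair example data mydata are loaded and dplyr is available:

```r
library(openair)

# Treat a 24-hour lagged copy of nox as the "predictions". The two
# series have almost identical distributions, so a Q-Q plot looks
# near-perfect, but the conditional quantile plot does not, because
# the observations within each prediction interval now differ.
dummy <- data.frame(
  date = mydata$date,
  obs = mydata$nox,
  mod = dplyr::lag(mydata$nox, 24)
)
qqplot(dummy$mod, dummy$obs) # distributions match: near-straight line
conditionalQuantile(dummy, obs = "obs", mod = "mod")
```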
Plotting the data in this way shows how well predictions agree with
observations and can help reveal many useful characteristics of how well
model predictions agree with observations — across the full distribution of
values. A single plot can therefore convey a considerable amount of
information concerning model performance. The conditionalQuantile
function in openair allows conditional quantiles to be considered in a
flexible way e.g. by considering how they vary by season.
The function requires a data frame consisting of a column of observations and
a column of predictions. The observations are split up into bins
according to values of the predictions. The median prediction line together
with the 25/75th and 10/90th quantile values are plotted together with a line
showing a “perfect” model. Also shown is a histogram of predicted
values (shaded grey) and a histogram of observed values (shown as a blue
line).
Far more insight can be gained into model performance through conditioning
using type
. For example, type = "season"
will plot conditional
quantiles by each season. type
can also be a factor or character field
e.g. representing different models used.
See Wilks (2005) for more details and the examples below.
Author(s)
David Carslaw
References
Murphy, A. H., B.G. Brown and Y. Chen. (1989) Diagnostic Verification of Temperature Forecasts, Weather and Forecasting, Volume: 4, Issue: 4, Pages: 485-501.
Wilks, D. S., 2005. Statistical Methods in the Atmospheric Sciences, Volume 91, Second Edition (International Geophysics), 2nd Edition. Academic Press.
See Also
The verification
package for comprehensive functions for
forecast verification.
Other model evaluation functions:
TaylorDiagram()
,
conditionalEval()
,
modStats()
Examples
## make some dummy prediction data based on 'nox'
mydata$mod <- mydata$nox * 1.1 + mydata$nox * runif(nrow(mydata))
# basic conditional quantile plot
## A "perfect" model is shown by the blue line
## predictions tend to be increasingly positively biased at high nox,
## shown by departure of median line from the blue one.
## The widening uncertainty bands with increasing NOx shows that
## hourly predictions are worse for higher NOx concentrations.
## Also, the red (median) line extends beyond the data (blue line),
## which shows in this case some predictions are much higher than
## the corresponding measurements. Note that the uncertainty bands
## do not extend as far as the median line because there is insufficient
## data to calculate them
conditionalQuantile(mydata, obs = "nox", mod = "mod")
## can split by season to show seasonal performance (not very
## enlightening in this case - try some real data and it will be!)
## Not run:
conditionalQuantile(mydata, obs = "nox", mod = "mod", type = "season")
## End(Not run)
Correlation matrices with conditioning
Description
Function to draw and visualise correlation matrices using lattice. The primary purpose is as a tool for exploratory data analysis. Hierarchical clustering is used to group similar variables.
Usage
corPlot(
mydata,
pollutants = NULL,
type = "default",
cluster = TRUE,
method = "pearson",
use = "pairwise.complete.obs",
dendrogram = FALSE,
lower = FALSE,
cols = "default",
r.thresh = 0.8,
text.col = c("black", "black"),
auto.text = TRUE,
plot = TRUE,
...
)
Arguments
mydata |
A data frame which should consist of some numeric columns. |
pollutants |
the names of data-series in |
type |
It is also possible to choose |
cluster |
Should the data be ordered according to cluster analysis. If
|
method |
The correlation method to use. Can be “pearson”, “spearman” or “kendall”. |
use |
How to handle missing values in the |
dendrogram |
Should a dendrogram be plotted? When |
lower |
Should only the lower triangle be plotted? |
cols |
Colours to be used for plotting. Options include
“default”, “increment”, “heat”, “spectral”,
“hue”, “greyscale” and user defined (see |
r.thresh |
Values of greater than |
text.col |
The colour of the text used to show the correlation values. The first value controls the colour of negative correlations and the second positive. |
auto.text |
Either |
plot |
Should a plot be produced? |
... |
Other graphical parameters passed onto |
Details
The corPlot
function plots correlation matrices. The implementation
relies heavily on that shown in Sarkar (2007), with a few extensions.
Correlation matrices are a very effective way of understanding relationships
between many variables. The corPlot
shows the correlation coded in
three ways: by shape (ellipses), colour and the numeric value. The ellipses
can be thought of as visual representations of a scatter plot. With a perfect
positive correlation, a line at 45 degrees with positive slope is drawn. For zero
correlation the shape becomes a circle. See examples below.
With many different variables it can be difficult to see relationships
between variables, i.e., which variables tend to behave most like one
another. For this reason hierarchical clustering is applied to the
correlation matrices to group variables that are most similar to one another
(if cluster = TRUE
).
If clustering is chosen it is also possible to add a dendrogram using the
option dendrogram = TRUE
. Note that dendrograms can only be plotted for
type = "default"
i.e. when there is only a single panel. The
dendrogram can also be recovered from the plot object itself and plotted more
clearly; see examples below.
It is also possible to use the openair
type option to condition the
data in many flexible ways, although this may become difficult to visualise
with too many panels.
Value
an openair object
Author(s)
David Carslaw — but mostly based on code contained in Sarkar (2007)
References
Sarkar, D. (2007). Lattice Multivariate Data Visualization with R. New York: Springer.
Friendly, M. (2002). Corrgrams : Exploratory displays for correlation matrices. American Statistician, 2002(4), 1-16. doi:10.1198/000313002533
See Also
taylor.diagram
from the plotrix
package from which
some of the annotation code was used.
Examples
## basic corrgram plot
corPlot(mydata)
## plot by season ... and so on
corPlot(mydata, type = "season")
## recover dendrogram when cluster = TRUE and plot it
res <- corPlot(mydata)
plot(res$clust)
## Not run:
## a more interesting example uses hydrocarbon measurements
hc <- importAURN(site = "my1", year = 2005, hc = TRUE)
## now it is possible to see the hydrocarbons that behave most
## similarly to one another
corPlot(hc)
## End(Not run)
Function to split data in different ways for conditioning
Description
Utility function to split data frames up in various ways for conditioning
plots. Widely used by many openair
functions, usually through the option
type
.
Usage
cutData(
x,
type = "default",
names = NULL,
suffix = NULL,
hemisphere = "northern",
n.levels = 4,
start.day = 1,
is.axis = FALSE,
local.tz = NULL,
latitude = 51,
longitude = -0.5,
...
)
Arguments
x |
A data frame containing a field |
type |
A string giving the way in which the data frame should be split.
Pre-defined values are:
|
names |
By default, the columns created by |
suffix |
If |
hemisphere |
Can be |
n.levels |
Number of quantiles to split numeric data into. |
start.day |
What day of the week should the |
is.axis |
A logical ( |
local.tz |
Used for identifying whether a date has daylight savings time
(DST) applied or not. Examples include |
latitude , longitude |
The decimal latitude and longitudes used when |
... |
All additional parameters are passed on to next function(s). |
Details
This section gives a brief description of each of the pre-defined levels of type
.
Note that all time dependent types require a column date
.
-
"default"
does not split the data but will describe the levels as a date range in the format "day month year". -
"year"
splits the data by each year. -
"month"
splits the data by month of the year. -
"hour"
splits the data by hour of the day. -
"monthyear"
splits the data by year and month. It differs from month in that a level is defined for each month of the data set. This is sometimes useful to show an ordered sequence of months if the data set starts halfway through a year, rather than starting in January. -
"weekend"
splits the data by weekday and weekend. -
"weekday"
splits the data by day of the week - ordered to start Monday. -
"season"
splits data up by season. In the northern hemisphere winter = December, January, February; spring = March, April, May etc. These definitions will change if hemisphere = "southern"
. -
"seasonyear"
(or "yearseason"
) will split the data into year-season intervals, keeping the months of a season together. For example, December 2010 is considered as part of winter 2011 (with January and February 2011). This makes it easier to consider contiguous seasons. In contrast, type = "season"
will just split the data into four seasons regardless of the year. -
"daylight"
splits the data relative to estimated sunrise and sunset to give either daylight or nighttime. The cut is made by cutDaylight
but is more conveniently accessed via cutData
, e.g. cutData(mydata, type = "daylight", latitude = my.latitude, longitude = my.longitude)
. The daylight estimation, which is valid for dates between 1901 and 2099, is made using the measurement location, date, time and astronomical algorithms to estimate the relative positions of the Sun and the measurement location on the Earth's surface, and is based on NOAA methods. Measurement location should be set using latitude
(+ to North; - to South) and longitude
(+ to East; - to West). -
"dst"
will split the data by hours that are in daylight saving time (DST) and hours that are not for appropriate time zones. The option also requires that the local time zone is given e.g. local.tz = "Europe/London"
, local.tz = "America/New_York"
. Each of the two periods will be in local time. The main purpose of this option is to test whether there is a shift in the diurnal profile when DST and non-DST hours are compared. This option is particularly useful with the timeVariation()
function. For example, close to the source of road vehicle emissions, "rush-hour" will tend to occur at the same local time throughout the year, e.g., 8 am and 5 pm. Therefore, comparing non-DST hours with DST hours will tend to show similar diurnal patterns (at least in the timing of the peaks, if not magnitude) when expressed in local time. By contrast a variable such as wind speed or temperature should show a clear shift when expressed in local time. In essence, this option when used with timeVariation()
may help determine whether the variation in a pollutant is driven by man-made emissions or natural processes. -
"wd"
splits the data by 8 wind sectors and requires a column wd
: "NE", "E", "SE", "S", "SW", "W", "NW", "N".
Note that all the date-based types, e.g., "month"
/"year"
are derived from
a column date
. If a user already has a column with a name of one of the
date-based types it will not be used.
Value
Returns the data frame, x
, with columns appended as defined by
type
and names
.
Author(s)
David Carslaw
Karl Ropkins ("daylight"
option)
Examples
## split data by day of the week
mydata <- cutData(mydata, type = "weekday")
names(mydata)
head(mydata)
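Further sketches following the call formats described in Details; the latitude and longitude values are illustrative:

```r
## split data into daylight and nighttime for a site near London
mydata <- cutData(mydata,
  type = "daylight",
  latitude = 51.5, longitude = -0.1
)

## split into contiguous year-season intervals
mydata <- cutData(mydata, type = "seasonyear")
```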
Scale key handling for openair
Description
General function for producing scale keys for other openair functions. The
function is a crude modification of the lattice::draw.colorkey()
function
developed by Deepayan Sarkar as part of the lattice package. It allows
additional key labelling to be added and provides further control of
the appearance and scaling.
Usage
drawOpenKey(key, draw = FALSE, vp = NULL)
Arguments
key |
List defining the scale key structure to be produced. Most options
are identical to original Original
Note: Additional options include:
Notes:
|
draw |
Option to return the key object or plot it directly. The default, FALSE, should always be used within openair calls. |
vp |
View port to be used when plotting key. The default, (Note: |
Details
The drawOpenKey()
function produces scale keys for other openair functions.
Most drawOpenKey()
options are identical to those of
lattice::draw.colorkey()
. For example, scale key size and position are
controlled via height
, width
and space
. Likewise, the axis labelling
can be set in and formatted by labels
. See lattice::draw.colorkey()
for
further details.
Additional scale labelling may be added above and below the scale using
header
and footer
options within key
. As in other openair
functions,
automatic text formatting can be enabled via auto.key
.
(Note: Currently, the formatting of header
and footer
text are fixed to
the same style as labels
(the scale axis) and cannot be defined locally.)
The relationship between header
, footer
and the scale key itself can be
controlled using fit
options. These can be set in key$fit
to apply
uniform control or individually in key$header$fit
and/or key$footer$fit
to control locally.
The appearance of the scale can be controlled using plot.style
.
Value
The function is a modification of lattice::draw.colorkey()
and
returns a scale key using a similar mechanism to that used in the
original function as developed by Deepayan Sarkar.
Note
The help, advice and extreme patience of Deepayan Sarkar are gratefully acknowledged.
Author(s)
lattice::draw.colorkey()
is part of the lattice
package,
developed by Deepayan Sarkar. Additional modifications by Karl Ropkins.
References
Deepayan Sarkar (2010). lattice: Lattice Graphics. R package version 0.18-5. http://r-forge.r-project.org/projects/lattice/
Examples
##########
# example 1
##########
# paddle style scale key used by windRose
windRose(mydata)
# adding text and changing style and position via key
# note:
# some simple key control also possible directly
# For example, the call below does the same as
# windRose(mydata, key.position="right")
windRose(mydata,
key = list(space = "right")
)
# however:
# more detailed control possible working with
# key and drawOpenKey. For example,
windRose(mydata,
key = list(
header = "Title", footer = "wind speed",
plot.style = c("ticks", "border"),
fit = "all", height = 1,
space = "top"
)
)
CERC Atmospheric Dispersion Modelling System (ADMS) data import function(s) for openair
Description
Function(s) to import various ADMS file types into openair. Currently handles
".met", ".bgd", ".mop" and ".pst" file structures. Uses utils::read.csv()
to read in data, format for R and openair and apply some file structure
testing.
Usage
importADMS(
file = file.choose(),
file.type = "unknown",
drop.case = TRUE,
drop.input.dates = TRUE,
keep.units = TRUE,
simplify.names = TRUE,
test.file.structure = TRUE,
drop.delim = TRUE,
add.prefixes = TRUE,
names = NULL,
all = FALSE,
...
)
Arguments
file |
The ADMS file to be imported. Default, |
file.type |
Type of ADMS file to be imported. With default, "unknown",
the import uses the file extension to identify the file type and, where
recognised, uses this to identify the file structure and import method to
be applied. Where file extension is not recognised the choice may be forced
by setting |
drop.case |
Option to convert all data names to lower case. Default,
|
drop.input.dates |
Option to remove ADMS "hour", "day", and "year" data
columns after generating openair "date" timeseries. Default, |
keep.units |
Option to retain ADMS data units. Default, |
simplify.names |
Option to simplify data names in accordance with common
|
test.file.structure |
Option to test file structure before trying to
import. Default, |
drop.delim |
Option to remove delim columns from the data frame. ADMS
.mop files include two columns, "INPUT_DATA:" and "PROCESSED_DATA:", to
separate model input and output types. Default, |
add.prefixes |
Option to add prefixes to data names. ADMS .mop files
include a number of input and process data types with shared names.
Prefixes can be automatically added to these so individual data can be
readily identified in the R/openair environment. Default, |
names |
Option applied by |
all |
For .MOP files, return all variables or not. If |
... |
Arguments passed on to
|
Details
The importADMS
function was developed to help import various ADMS
file types into openair. In most cases the parent import function should work
in default configuration, e.g. mydata <- importADMS()
. The function
currently recognises four file formats: .bgd
, .met
, .mop
and .pst
. Where other file extensions have been set but the file
structure is known, the import call can be forced by, e.g., mydata <- importADMS(file.type="bgd")
. Other options can be adjusted to provide fine
control of the data structuring and renaming.
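For example, forcing the import of a file with a non-standard extension but a known .bgd structure might look like this sketch (the file path is hypothetical):

```r
library(openair)

# Force the BGD import method for a file whose extension is not ".bgd"
bgd <- importADMS(file = "model_background.dat", file.type = "bgd")
```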
Value
In standard use importADMS()
returns a data frame for use in
openair. By comparison to the original file, the resulting data frame is
modified as follows:
Time and date information will be combined in a single column "date",
formatted as a conventional timeseries (as.POSIX*
). If
drop.input.dates
is enabled, the data series combined to generate the
new "date" data series will also be removed.
If simplify.names
is enabled common chemical names may be
simplified, and some other parameters may be reset to openair standards
(e.g. "ws", "wd" and "temp") according to operations defined in
simplifyNamesADMS
. A summary of simplification operations can be
obtained using, e.g., the call importADMS(simplify.names)
.
If drop.case
is enabled all upper case characters in names will be
converted to lower case.
If keep.units
is enabled data units information may also be retained
as part of the data frame comment if available.
With .mop files, input and processed data series names may also be
modified on the basis of the drop.delim and add.prefixes settings.
Note
Times are assumed to be in GMT. Zero wind directions are reset to 360 as
part of .mop
file import.
Author(s)
Karl Ropkins, David Carslaw and Matthew Williams (CERC).
See Also
Other import functions:
importAURN()
,
importEurope()
,
importImperial()
,
importMeta()
,
importTraj()
,
importUKAQ()
Examples
##########
# example 1
##########
# To be confirmed
# all current simplify.names operations
importADMS(simplify.names)
# to see what simplify.names does to adms data series name PHI
new.name <- importADMS(simplify.names, names = "PHI")
new.name
Import data from individual UK Air Pollution Networks
Description
These functions act as wrappers for importUKAQ()
to import air pollution
data from a range of UK networks including the Automatic Urban and Rural
Network (AURN), the individual England (AQE), Scotland (SAQN), Wales (WAQN)
and Northern Ireland (NI) Networks, and many "locally managed" monitoring
networks across England. While importUKAQ()
allows for data to be imported
more flexibly, including across multiple monitoring networks, these functions
are provided for convenience and back-compatibility.
Usage
importAURN(
site = "my1",
year = 2009,
data_type = "hourly",
pollutant = "all",
hc = FALSE,
meta = FALSE,
meta_columns = c("site_type", "latitude", "longitude"),
meteo = TRUE,
ratified = FALSE,
to_narrow = FALSE,
verbose = FALSE,
progress = TRUE
)
importAQE(
site = "yk13",
year = 2018,
data_type = "hourly",
pollutant = "all",
meta = FALSE,
meta_columns = c("site_type", "latitude", "longitude"),
meteo = TRUE,
ratified = FALSE,
to_narrow = FALSE,
verbose = FALSE,
progress = TRUE
)
importSAQN(
site = "gla4",
year = 2009,
data_type = "hourly",
pollutant = "all",
meta = FALSE,
meta_columns = c("site_type", "latitude", "longitude"),
meteo = TRUE,
ratified = FALSE,
to_narrow = FALSE,
verbose = FALSE,
progress = TRUE
)
importWAQN(
site = "card",
year = 2018,
data_type = "hourly",
pollutant = "all",
meta = FALSE,
meta_columns = c("site_type", "latitude", "longitude"),
meteo = TRUE,
ratified = FALSE,
to_narrow = FALSE,
verbose = FALSE,
progress = TRUE
)
importNI(
site = "bel0",
year = 2018,
data_type = "hourly",
pollutant = "all",
meta = FALSE,
meta_columns = c("site_type", "latitude", "longitude"),
meteo = TRUE,
ratified = FALSE,
to_narrow = FALSE,
verbose = FALSE,
progress = TRUE
)
importLocal(
site = "ad1",
year = 2018,
data_type = "hourly",
pollutant = "all",
meta = FALSE,
meta_columns = c("site_type", "latitude", "longitude"),
to_narrow = FALSE,
verbose = FALSE,
progress = TRUE
)
Arguments
site |
Site code of the site to import, e.g., |
year |
Year(s) to import. To import a series of years use, e.g.,
|
data_type |
The type of data to be returned, defaulting to
|
pollutant |
Pollutants to import. If omitted will import all pollutants
from a site. To import only NOx and NO2 for example use |
hc |
Include hydrocarbon measurements in the imported data? Defaults to
|
meta |
Append metadata columns to data for each selected |
meta_columns |
The specific columns to append when |
meteo |
Append modelled meteorological data, if available? Defaults to
|
ratified |
Append |
to_narrow |
Return the data in a "narrow"/"long"/"tidy" format? By
default the returned data is "wide" and has a column for each
pollutant/variable. When |
verbose |
Print messages to the console if hourly data cannot be
imported? Default is |
progress |
Show a progress bar when many sites/years are being imported?
Defaults to |
Importing UK Air Pollution Data
This family of functions has been written to make it easy to import data from across several UK air quality networks. Ricardo have provided .RData files (R workspaces) of all individual sites and years, as well as up to date meta data. These files are updated on a daily basis. This approach requires a link to the Internet to work.
There are several advantages over the web portal approach where .csv files are downloaded.
First, it is quick to select a range of sites, pollutants and periods (see examples below).
Second, storing the data as .RData objects is very efficient as they are about four times smaller than .csv files — which means the data downloads quickly and saves bandwidth.
Third, the function completely avoids any need for data manipulation or setting time formats, time zones etc. The function also has the advantage that the proper site name is imported and used in openair functions.
Users should take care if using data from both openair and web portals (for example, UK AIR). One key difference is that the data provided by openair is date beginning, whereas the web portal provides date ending. Hourly concentrations may therefore appear offset by an hour, for example.
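If openair data must be compared with data downloaded from a date-ending web portal, one simple approach (a sketch, not part of the package; `portal` is an assumed data frame with a POSIXct "date" column) is to shift the portal dates back by one hour so both series are date beginning:

```r
# 'portal' is assumed to come from a date-ending source;
# subtracting 3600 seconds (one hour) makes it date beginning,
# consistent with data imported by openair
portal$date <- portal$date - 3600
```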
The data are imported by stacking sites on top of one another and will have
field names site
, code
(the site code) and pollutant
.
By default, the function returns hourly average data. However, annual,
monthly, daily and 15 minute data (for SO2) can be returned using the
option data_type
. Annual and monthly data provide whole network
information including data capture statistics.
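For example, whole-network annual statistics can be requested by setting data_type (a sketch; this needs an internet connection and the year shown is illustrative):

```r
# annual means and data capture statistics for the AURN
annual <- importAURN(data_type = "annual", year = 2022)
```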
All units are expressed in mass terms for gaseous species (ug/m3 for NO, NO2, NOx (as NO2), SO2 and hydrocarbons; and mg/m3 for CO). PM10 concentrations are provided in gravimetric units of ug/m3 or scaled to be comparable with these units. Over the years a variety of instruments have been used to measure particulate matter and the technical issues of measuring PM10 are complex. In recent years the measurements rely on FDMS (Filter Dynamics Measurement System), which is able to measure the volatile component of PM. In cases where the FDMS system is in use there will be a separate volatile component recorded as 'v10' and non-volatile component 'nv10', which is already included in the absolute PM10 measurement. Prior to the use of FDMS the measurements used TEOM (Tapered Element Oscillating Microbalance) and these concentrations have been multiplied by 1.3 to provide an estimate of the total mass including the volatile fraction.
Some sites report hourly and daily PM10 and / or PM2.5. When data_type = "daily"
and there are both hourly and 'proper' daily measurements
available, these will be returned as e.g. "pm2.5" and "gr_pm2.5"; the
former corresponding to data based on original hourly measurements and the
latter corresponding to daily gravimetric measurements.
The function returns modelled hourly values of wind speed (ws
), wind
direction (wd
) and ambient temperature (air_temp
) if available
(generally from around 2010). These values are modelled using the WRF model
operated by Ricardo.
The BAM (Beta-Attenuation Monitor) instruments that have been incorporated into the network throughout its history have been scaled by 1.3 if they have a heated inlet (to account for loss of volatile particles) and 0.83 if they do not have a heated inlet. The few TEOM instruments in the network after 2008 have been scaled using VCM (Volatile Correction Model) values to account for the loss of volatile particles. The object of all these scaling processes is to provide a reasonable degree of comparison between data sets and with the reference method and to produce a consistent data record over the operational period of the network; however, there may be some discontinuity in the time series associated with instrument changes.
No corrections have been made to the PM2.5 data. The volatile component of FDMS PM2.5 (where available) is shown in the 'v2.5' column.
See Also
Other import functions:
importADMS()
,
importEurope()
,
importImperial()
,
importMeta()
,
importTraj()
,
importUKAQ()
Import air quality data from European database until February 2024
Description
This function is a simplified version of the saqgetr
package (see
https://github.com/skgrange/saqgetr) for accessing European air quality
data. As saqgetr
was retired in February 2024, this function has also been
retired, but can still access European air quality data up until that
retirement date. Consider using the EEA Air Quality Download Service instead
(https://eeadmz1-downloads-webapp.azurewebsites.net/).
Usage
importEurope(
site = "debw118",
year = 2018,
tz = "UTC",
meta = FALSE,
to_narrow = FALSE,
progress = TRUE
)
Arguments
site |
The code of the site(s). |
year |
Year or years to import. To import a sequence of years from 1990
to 2000 use |
tz |
Not used |
meta |
Should meta data be returned? If |
to_narrow |
By default the returned data has a column for each
pollutant/variable. When |
progress |
Show a progress bar when many sites/years are being imported?
Defaults to |
Value
a tibble
See Also
Other import functions:
importADMS()
,
importAURN()
,
importImperial()
,
importMeta()
,
importTraj()
,
importUKAQ()
Examples
# import data for Stuttgart Am Neckartor (S)
## Not run:
stuttgart <- importEurope("debw118", year = 2010:2019, meta = TRUE)
## End(Not run)
Import data from Imperial College London networks
Description
Function for importing hourly mean data from Imperial College London networks, formerly the King's College London networks. Files are imported from a remote server operated by Imperial College London that provides air quality data files as R data objects.
Usage
importImperial(
site = "my1",
year = 2009,
pollutant = "all",
meta = FALSE,
meteo = FALSE,
extra = FALSE,
units = "mass",
to_narrow = FALSE,
progress = TRUE
)
importKCL(
site = "my1",
year = 2009,
pollutant = "all",
met = FALSE,
units = "mass",
extra = FALSE,
meta = FALSE,
to_narrow = FALSE,
progress = TRUE
)
Arguments
site |
Site code of the network site to import e.g. "my1" is Marylebone
Road. Several sites can be imported with |
year |
Year(s) to import. To import a series of years use, e.g.,
|
pollutant |
Pollutants to import. If omitted will import all pollutants
from a site. To import only NOx and NO2 for example use |
meta |
Append metadata columns to data for each selected |
meteo , met |
Should meteorological data be added to the import data? The
default is |
extra |
Defaults to |
units |
By default the returned data frame expresses the units in mass
terms (ug/m3 for NOx, NO2, O3, SO2; mg/m3 for CO). Use |
to_narrow |
Return the data in a "narrow"/"long"/"tidy" format? By
default the returned data is "wide" and has a column for each
pollutant/variable. When |
progress |
Show a progress bar when many sites/years are being imported?
Defaults to |
Details
The importImperial()
function has been written to make it easy to import
data from the Imperial College London air pollution networks. Imperial have
provided .RData files (R workspaces) of all individual sites and years for
the Imperial networks. These files are updated on a weekly basis. This
approach requires a link to the Internet to work.
There are several advantages over the web portal approach where .csv files are downloaded. First, it is quick to select a range of sites, pollutants and periods (see examples below). Second, storing the data as .RData objects is very efficient as they are about four times smaller than .csv files — which means the data downloads quickly and saves bandwidth. Third, the function completely avoids any need for data manipulation or setting time formats, time zones etc. Finally, it is easy to import many years of data beyond the current limit of about 64,000 lines. The final point makes it possible to download several long time series in one go. The function also has the advantage that the proper site name is imported and used in openair functions.
The site codes and pollutant names can be upper or lower case. The function will issue a warning when data less than six months old is downloaded, which may not be ratified.
The data are imported by stacking sites on top of one another and will have
field names date
, site
, code
(the site code) and
pollutant(s). Sometimes it is useful to have columns of site data. This can
be done using the reshape()
function — see examples below.
The situation for particle measurements is not straightforward given the
variety of methods used to measure particle mass and changes in their use
over time. The importImperial()
function imports two measures of PM10
where available. PM10_raw
are TEOM measurements with a 1.3 factor
applied to take account of volatile losses. The PM10
data is a current
best estimate of a gravimetric equivalent measure as described below. Note that
many sites have several instruments that measure PM10 or PM2.5. In the case
of FDMS measurements, these are given as separate site codes (see below). For
example "MY1" will be TEOM with VCM applied and "MY7" is the FDMS data.
Where FDMS data are used the volatile and non-volatile components are separately reported i.e. v10 = volatile PM10, v2.5 = volatile PM2.5, nv10 = non-volatile PM10 and nv2.5 = non-volatile PM2.5. Therefore, PM10 = v10 + nv10 and PM2.5 = v2.5 + nv2.5.
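That additive relationship can be checked directly on imported data (a sketch; the year is illustrative, lower-case column names are assumed, and the v10/nv10 columns are only present where an FDMS instrument is used):

```r
# "MY7" is the FDMS site code for Marylebone Road (see above)
fdms <- importImperial(site = "my7", year = 2012)

# where FDMS components are reported, PM10 should equal v10 + nv10
all.equal(fdms$pm10, fdms$v10 + fdms$nv10)
```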
For the assessment of the EU Limit Values, PM10 needs to be measured using the reference method or one shown to be equivalent to the reference method. Defra carried out extensive trials between 2004 and 2006 to establish which types of particulate analysers in use in the UK were equivalent. These trials found that measurements made using Partisol, FDMS, BAM and SM200 instruments were shown to be equivalent to the PM10 reference method. However, correction factors need to be applied to measurements from the SM200 and BAM instruments. Importantly, the TEOM was demonstrated as not being equivalent to the reference method due to the loss of volatile PM, even when the 1.3 correction factor was applied. The Volatile Correction Model (VCM) was developed for Defra at King's College to allow measurements of PM10 from TEOM instruments to be converted to reference equivalent; it uses the measurements of volatile PM made using nearby FDMS instruments to correct the measurements made by the TEOM. It passed the equivalence testing using the same methodology used in the Defra trials and is now the recommended method for correcting TEOM measurements (Defra, 2009). VCM correction of TEOM measurements can only be applied after 1st January 2004, when sufficiently widespread measurements of volatile PM became available. The 1.3 correction factor is now considered redundant for measurements of PM10 made after 1st January 2004. Further information on the VCM can be found at http://www.volatile-correction-model.info/.
All PM10 statistics on the LondonAir web site, including the bulletins and
statistical tools (and in the RData objects downloaded using
importImperial()
), now report PM10 results as reference equivalent. For
PM10 measurements made by BAM and SM200 analysers the applicable correction
factors have been applied. For measurements from TEOM analysers the 1.3
factor has been applied up to 1st January 2004, then the VCM method has been
used to convert to reference equivalent.
The meteorological data are meant to represent 'typical' conditions in London, but users may prefer to use their own data. The data provide an estimate of general meteorological conditions across Greater London. For meteorological species (wd, ws, rain, solar) each data point is formed by averaging measurements from a subset of LAQN monitoring sites that have been identified as having minimal disruption from local obstacles and a long term reliable dataset. The exact sites used vary between species, but include between two and five sites per species. Therefore, the data should represent 'London scale' meteorology, rather than local conditions.
importKCL()
is equivalent to importImperial()
and is provided for
back-compatibility reasons only. New users should use importImperial()
.
Value
Returns a data frame of hourly mean values with date in POSIXct class and time zone GMT.
Author(s)
David Carslaw and Ben Barratt
See Also
Other import functions:
importADMS()
,
importAURN()
,
importEurope()
,
importMeta()
,
importTraj()
,
importUKAQ()
Examples
## import all pollutants from Marylebone Rd from 1990:2009
## Not run:
mary <- importImperial(site = "my1", year = 2000:2009)
## End(Not run)
## import nox, no2, o3 from Marylebone Road and North Kensington for 2000
## Not run:
thedata <-
importImperial(
site = c("my1", "kc1"),
year = 2000,
pollutant = c("nox", "no2", "o3")
)
## End(Not run)
## import met data too...
## Not run:
my1 <- importImperial(site = "my1", year = 2008, meteo = TRUE)
## End(Not run)
Import monitoring site meta data for UK and European networks
Description
Function to import meta data for air quality monitoring sites. By default,
the function will return the site latitude, longitude and site type, as well
as the code used in functions like importUKAQ()
, importImperial()
and
importEurope()
. Additional information may optionally be returned.
Usage
importMeta(source = "aurn", all = FALSE, year = NA, duplicate = FALSE)
Arguments
source |
One or more air quality networks for which data is available through openair. Available networks include:
There are two additional options provided for convenience:
|
all |
When |
year |
If a single year is selected, only sites that were open at some
point in that year are returned. If |
duplicate |
Some UK air quality sites are part of multiple networks, so
could appear more than once when |
Details
This function imports site meta data from several networks in the UK and Europe:
- "aurn", The UK Automatic Urban and Rural Network.
- "aqe", The Air Quality England Network.
- "saqn", The Scottish Air Quality Network.
- "waqn", The Welsh Air Quality Network.
- "ni", The Northern Ireland Air Quality Network.
- "local", Locally managed air quality networks in England.
- "imperial", Imperial College London (formerly King's College London) networks.
- "europe", Hourly European data (Air Quality e-Reporting) based on a simplified version of the {saqgetr} package.
By default, the function will return the site latitude, longitude and site
type. If the option all = TRUE
is used, much more detailed information is
returned. The following metadata columns are available in the complete dataset:
- source: The network with which the site is associated. Note that some monitoring sites are part of multiple networks (e.g., the AURN & SAQN) so the same site may feature twice under different sources.
- code: The site code, used to import data from specific sites of interest.
- site: The site name, which is more human-readable than the site code.
- site_type: A description of the site environment. Read more at https://uk-air.defra.gov.uk/networks/site-types.
- latitude and longitude: The coordinates of the monitoring station, using the World Geodetic System (https://epsg.io/4326).
- start_date and end_date: The opening and closing dates of the monitoring station. If by_pollutant = TRUE, these dates are instead the first and last dates at which specific pollutants were measured. A missing value, NA, indicates that monitoring is ongoing.
- ratified_to: The date to which data has been ratified (i.e., 'quality checked'). Data after this date is subject to change.
- zone and agglomeration: The UK is divided into agglomeration zones (large urban areas) and non-agglomeration zones for air quality assessment, which are given in these columns.
- local_authority: The local authority in which the monitoring station is found.
- provider and code: The specific provider of the locally managed dataset (e.g., "londonair").
Thanks go to Trevor Davies (Ricardo), Dr Stuart Grange (EMPA) and Dr Ben Barratt (KCL) for making these data available.
Value
A data frame with meta data.
Author(s)
David Carslaw
See Also
the networkMap()
function from the openairmaps
package which can
visualise site metadata on an interactive map.
Other import functions:
importADMS()
,
importAURN()
,
importEurope()
,
importImperial()
,
importTraj()
,
importUKAQ()
Examples
## Not run:
# basic info:
meta <- importMeta(source = "aurn")
# more detailed information:
meta <- importMeta(source = "aurn", all = TRUE)
# from the Scottish Air Quality Network:
meta <- importMeta(source = "saqn", all = TRUE)
# from multiple networks:
meta <- importMeta(source = c("aurn", "aqe", "local"))
## End(Not run)
Import pre-calculated HYSPLIT 96-hour back trajectories
Description
Function to import pre-calculated back trajectories using the NOAA HYSPLIT
model. The trajectories have been calculated for a select range of locations
which will expand in time. They cover the last 20 years or so and can be used
together with other openair
functions.
Usage
importTraj(site = "london", year = 2009, local = NA, progress = TRUE)
Arguments
site |
Site code of the network site to import e.g. "london". Only one site can be imported at a time. The following sites are typically available from 2000-2012, although some UK ozone sites go back to 1988 (code, location, lat, lon, year):
|
year |
Year or years to import. To import a sequence of years from
1990 to 2000 use |
local |
File path to .RData trajectory files run by user and
not stored on the Ricardo web server. These files would have been
generated from the Hysplit trajectory code shown in the appendix
of the openair manual. An example would be |
progress |
Show a progress bar when many receptors/years are being
imported? Defaults to |
Details
This function imports pre-calculated back trajectories using the HYSPLIT trajectory model (Hybrid Single Particle Lagrangian Integrated Trajectory Model). Back trajectories provide some very useful information for air quality data analysis. However, while they are commonly calculated by researchers, it is generally difficult for them to be calculated on a routine basis and used easily. In addition, the availability of back trajectories over several years can be very useful, but again difficult to calculate.
Trajectories are run at 3-hour intervals and stored in yearly files (see below). The trajectories are started at ground-level (10m) and propagated backwards in time.
These trajectories have been calculated using the Global NOAA-NCEP/NCAR reanalysis data archives. The global data are on a latitude-longitude grid (2.5 degree). Note that there are many different meteorological data sets that can be used to run HYSPLIT e.g. including ECMWF data. However, in order to make it practicable to run and store trajectories for many years and sites, the NOAA-NCEP/NCAR reanalysis data is most useful. In addition, these archives are available for use widely, which is not the case for many other data sets e.g. ECMWF. HYSPLIT calculated trajectories based on archive data may be distributed without permission. For those wanting, for example, to consider higher resolution meteorological data sets it may be better to run the trajectories separately.
We are extremely grateful to NOAA for making HYSPLIT available to produce back trajectories in an open way. We ask that you cite HYSPLIT if used in published work.
Users can supply their own trajectory files to plot in openair. These files must have the following fields: date, lat, lon and hour.inc (see details below).
The files consist of the following information:
- date: This is the arrival point time and is repeated the number of times equal to the length of the back trajectory — typically 96 hours (except early on in the file). The format is POSIXct. It is this field that should be used to link with air quality data. See example below.
- receptor: Receptor number, currently only 1.
- year: The year.
- month: Month 1-12.
- day: Day of the month 1-31.
- hour: Hour of the day 0-23 GMT.
- hour.inc: Number of hours back in time, e.g. 0 to -96.
- lat: Latitude in decimal format.
- lon: Longitude in decimal format.
- height: Height of trajectory (m).
- pressure: Pressure of trajectory (kPa).
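A user-supplied trajectory file therefore needs the four fields date, lat, lon and hour.inc as a minimum. A minimal sketch of constructing and saving such an object for use with the local argument (all values here are illustrative):

```r
# minimal trajectory data frame with the required fields; the single
# arrival time is recycled across the back-trajectory points
traj <- data.frame(
  date = as.POSIXct("2009-01-01 00:00", tz = "GMT"),
  lat = c(51.5, 52.0, 52.6),
  lon = c(-0.1, -1.0, -2.1),
  hour.inc = c(0, -3, -6)
)

# save as .RData so it can be read via importTraj(local = ...)
save(traj, file = "traj_example.RData")
```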
Value
Returns a data frame with pre-calculated back trajectories.
Note
The trajectories were run using the February 2011 HYSPLIT model. The function is primarily written to investigate a single site at a time for a single year. The trajectory files are quite large and care should be exercised when importing several years and/or sites.
Author(s)
David Carslaw
See Also
Other import functions:
importADMS()
,
importAURN()
,
importEurope()
,
importImperial()
,
importMeta()
,
importUKAQ()
Other trajectory analysis functions:
trajCluster()
,
trajLevel()
,
trajPlot()
Examples
## import trajectory data for London in 2009
## Not run:
mytraj <- importTraj(site = "london", year = 2009)
## End(Not run)
## combine with measurements
## Not run:
theData <- importAURN(site = "kc1", year = 2009)
mytraj <- merge(mytraj, theData, by = "date")
## End(Not run)
Import data from the UK Air Pollution Networks
Description
Functions for importing air pollution data from a range of UK networks
including the Automatic Urban and Rural Network (AURN), the individual
England (AQE), Scotland (SAQN), Wales (WAQN) and Northern Ireland (NI)
Networks, and many "locally managed" monitoring networks across England.
Files are imported from a remote server operated by Ricardo that provides air
quality data files as R data objects. For an up to date list of available
sites that can be imported, see importMeta()
.
Usage
importUKAQ(
site = "my1",
year = 2022,
source = NULL,
data_type = "hourly",
pollutant = "all",
hc = FALSE,
meta = FALSE,
meta_columns = c("site_type", "latitude", "longitude"),
meteo = TRUE,
ratified = FALSE,
to_narrow = FALSE,
verbose = FALSE,
progress = TRUE
)
Arguments
site |
Site code of the site to import, e.g., |
year |
Year(s) to import. To import a series of years use, e.g.,
|
source |
The network to which the
|
data_type |
The type of data to be returned, defaulting to
|
pollutant |
Pollutants to import. If omitted will import all pollutants
from a site. To import only NOx and NO2 for example use |
hc |
Include hydrocarbon measurements in the imported data? Defaults to
|
meta |
Append metadata columns to data for each selected |
meta_columns |
The specific columns to append when |
meteo |
Append modelled meteorological data, if available? Defaults to
|
ratified |
Append |
to_narrow |
Return the data in a "narrow"/"long"/"tidy" format? By
default the returned data is "wide" and has a column for each
pollutant/variable. When |
verbose |
Print messages to the console if hourly data cannot be
imported? Default is |
progress |
Show a progress bar when many sites/years are being imported?
Defaults to |
Value
a tibble
Importing UK Air Pollution Data
This family of functions has been written to make it easy to import data from across several UK air quality networks. Ricardo have provided .RData files (R workspaces) of all individual sites and years, as well as up to date meta data. These files are updated on a daily basis. This approach requires a link to the Internet to work.
There are several advantages over the web portal approach where .csv files are downloaded.
First, it is quick to select a range of sites, pollutants and periods (see examples below).
Second, storing the data as .RData objects is very efficient as they are about four times smaller than .csv files — which means the data downloads quickly and saves bandwidth.
Third, the function completely avoids any need for data manipulation or setting time formats, time zones etc. The function also has the advantage that the proper site name is imported and used in openair functions.
Users should take care if using data from both openair and web portals (for example, UK AIR). One key difference is that the data provided by openair is date beginning, whereas the web portal provides date ending. Hourly concentrations may therefore appear offset by an hour, for example.
The data are imported by stacking sites on top of one another and will have
field names site
, code
(the site code) and pollutant
.
By default, the function returns hourly average data. However, annual,
monthly, daily and 15 minute data (for SO2) can be returned using the
option data_type
. Annual and monthly data provide whole network
information including data capture statistics.
All units are expressed in mass terms for gaseous species (ug/m3 for NO, NO2, NOx (as NO2), SO2 and hydrocarbons; and mg/m3 for CO). PM10 concentrations are provided in gravimetric units of ug/m3 or scaled to be comparable with these units. Over the years a variety of instruments have been used to measure particulate matter and the technical issues of measuring PM10 are complex. In recent years the measurements rely on FDMS (Filter Dynamics Measurement System), which is able to measure the volatile component of PM. In cases where the FDMS system is in use there will be a separate volatile component recorded as 'v10' and non-volatile component 'nv10', which is already included in the absolute PM10 measurement. Prior to the use of FDMS the measurements used TEOM (Tapered Element Oscillating Microbalance) and these concentrations have been multiplied by 1.3 to provide an estimate of the total mass including the volatile fraction.
Some sites report hourly and daily PM10 and / or PM2.5. When data_type = "daily"
and there are both hourly and 'proper' daily measurements
available, these will be returned as e.g. "pm2.5" and "gr_pm2.5"; the
former corresponding to data based on original hourly measurements and the
latter corresponding to daily gravimetric measurements.
The function returns modelled hourly values of wind speed (ws
), wind
direction (wd
) and ambient temperature (air_temp
) if available
(generally from around 2010). These values are modelled using the WRF model
operated by Ricardo.
The BAM (Beta-Attenuation Monitor) instruments that have been incorporated into the network throughout its history have been scaled by 1.3 if they have a heated inlet (to account for loss of volatile particles) and 0.83 if they do not have a heated inlet. The few TEOM instruments in the network after 2008 have been scaled using VCM (Volatile Correction Model) values to account for the loss of volatile particles. The object of all these scaling processes is to provide a reasonable degree of comparison between data sets and with the reference method and to produce a consistent data record over the operational period of the network; however, there may be some discontinuity in the time series associated with instrument changes.
No corrections have been made to the PM2.5 data. The volatile component of FDMS PM2.5 (where available) is shown in the 'v2.5' column.
Author(s)
David Carslaw, Trevor Davies, and Jack Davison
See Also
Other import functions:
importADMS()
,
importAURN()
,
importEurope()
,
importImperial()
,
importMeta()
,
importTraj()
Examples
## Not run:
# import a single site from the AURN
importUKAQ("my1", year = 2022)
# import sites from another network
importUKAQ(c("bn1", "bn2"), year = 2022, source = "aqe")
# import sites across multiple networks
importUKAQ(c("my1", "bn1", "bn2"),
year = 2022,
source = c("aurn", "aqe", "aqe")
)
# get "long" format hourly data with a ratification flag
importUKAQ(
"card",
source = "waqn",
year = 2022,
to_narrow = TRUE,
ratified = TRUE
)
# import other data types, filtering by pollutant
importUKAQ(
data_type = "annual",
pollutant = c("no2", "pm2.5", "pm10"),
source = c("aurn", "aqe")
)
## End(Not run)
Linear relations between pollutants
Description
This function considers linear relationships between two pollutants. The relationships are calculated on different time bases using a linear model. The slope and 95% confidence interval in slope relationships by time unit are plotted in many ways. The function is particularly useful when considering whether relationships are consistent with emissions inventories.
Usage
linearRelation(
mydata,
x = "nox",
y = "no2",
period = "month",
condition = FALSE,
n = 20,
rsq.thresh = 0,
ylab = paste0("slope from ", y, " = m.", x, " + c"),
auto.text = TRUE,
cols = "grey30",
date.breaks = 5,
plot = TRUE,
...
)
Arguments
mydata |
A data frame minimally containing |
x |
First pollutant that when plotted would appear on the x-axis of a
relationship e.g. |
y |
Second pollutant that when plotted would appear on the y-axis of a
relationship e.g. |
period |
A range of different time periods can be analysed. The default
is |
condition |
For |
n |
The minimum number of points to be sent to the linear model.
Because there may only be a few points e.g. hours where two pollutants are
available over one week, |
rsq.thresh |
The minimum correlation coefficient (R2) allowed. If the
relationship between |
ylab |
y-axis title, specified by the user. |
auto.text |
Either |
cols |
Colour for the points and uncertainty intervals. |
date.breaks |
Number of major x-axis intervals to use. The function will
try and choose a sensible number of dates/times as well as formatting the
date/time appropriately to the range being considered. This does not
always work as desired automatically. The user can therefore increase or
decrease the number of intervals by adjusting the value of
|
plot |
Should a plot be produced? |
... |
Other graphical parameters. A useful one to remove the strip with
the date range on at the top of the plot is to set |
Details
The relationships between pollutants can yield some very useful information
about source emissions and how they change. A scatterPlot between two
pollutants is the usual way to investigate the relationship. A linear
regression is useful to test the strength of the relationship. However,
considerably more information can be gleaned by considering different time
periods, such as how the relationship between two pollutants varies over time,
by day of the week, diurnally and so on. The linearRelation function does just
that - it fits a linear relationship between two pollutants over a wide range
of time periods determined by period.
The linearRelation function is particularly useful if background
concentrations are first removed from roadside concentrations, as the
increment will relate more directly to changes in emissions. In this respect,
using linearRelation can provide valuable information on how emissions may
have changed over time, by hour of the day and so on. Using the function in
this way will require users to do some basic manipulation of their data first.
If a data frame is supplied that contains nox, no2 and o3, then y can be
chosen as y = "ox". The function will then consider the total oxidant slope
(the sum of NO2 + O3), which can provide valuable information on likely
vehicle primary NO emissions. Note, however, that most roadside sites do not
have ozone measurements, in which case calcFno2() is the alternative.
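The background-removal approach described above can be sketched as follows. This is an illustration only: the "background" series is fabricated from the bundled mydata for demonstration, where in practice it would come from a nearby background monitoring site.

```r
library(openair)
library(dplyr)

# Fabricate a 'background' series from mydata purely for illustration;
# in practice this would be a co-located or nearby background site.
background <- mydata |>
  transmute(date, nox_bkg = nox * 0.3, no2_bkg = no2 * 0.4)

# subtract the background so the slope reflects the roadside increment
increment <- mydata |>
  inner_join(background, by = "date") |>
  mutate(nox = nox - nox_bkg, no2 = no2 - no2_bkg)

linearRelation(increment, x = "nox", y = "no2", period = "month")
```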
Value
an openair object
Author(s)
David Carslaw
See Also
Examples
# monthly relationship between NOx and SO2 - note rapid fall in
# ratio at the beginning of the series
linearRelation(mydata, x = "nox", y = "so2")
# monthly relationship between NOx and total oxidant (OX = NO2 + O3)
## Not run:
linearRelation(mydata, x = "nox", y = "ox")
## End(Not run)
# diurnal oxidant slope by year - clear change in magnitude
# starting 2003, but the diurnal profile has also changed: the
# morning and evening peak hours are more important, presumably
# due to change in certain vehicle types
## Not run:
linearRelation(mydata, x = "nox", y = "ox", period = "hour", condition = TRUE)
## End(Not run)
# PM2.5/PM10 ratio, but only plot where monthly R2 >= 0.8
## Not run:
linearRelation(mydata, x = "pm10", y = "pm25", rsq.thresh = 0.8)
## End(Not run)
Calculate common model evaluation statistics
Description
Function to calculate common numerical model evaluation statistics with flexible conditioning.
Usage
modStats(
mydata,
mod = "mod",
obs = "obs",
statistic = c("n", "FAC2", "MB", "MGE", "NMB", "NMGE", "RMSE", "r", "COE", "IOA"),
type = "default",
rank.name = NULL,
...
)
Arguments
mydata |
A data frame. |
mod |
Name of a variable in |
obs |
Name of a variable in |
statistic |
The statistic to be calculated. See details below for a description of each. |
type |
It is also possible to choose More than one type can be considered e.g. |
rank.name |
Simple model ranking can be carried out if |
... |
Arguments passed on to
|
Details
This function is under development and currently provides some common model evaluation statistics. These include (to be mathematically defined later):
- n, the number of complete pairs of data.
- FAC2, fraction of predictions within a factor of two.
- MB, the mean bias.
- MGE, the mean gross error.
- NMB, the normalised mean bias.
- NMGE, the normalised mean gross error.
- RMSE, the root mean squared error.
- r, the Pearson correlation coefficient. Note that an argument method can also be supplied e.g. method = "spearman". Also returned is the P value of the correlation coefficient, P, which may present as 0 for very low values.
- COE, the Coefficient of Efficiency based on Legates and McCabe (1999, 2012). There have been many suggestions for measuring model performance over the years, but the COE is a simple formulation which is easy to interpret. A perfect model has a COE = 1. As noted by Legates and McCabe, although the COE has no lower bound, a value of COE = 0.0 has a fundamental meaning: it implies that the model is no more able to predict the observed values than does the observed mean. Therefore, since the model can explain no more of the variation in the observed values than can the observed mean, such a model can have no predictive advantage. For negative values of COE, the model is less effective than the observed mean in predicting the variation in the observations.
- IOA, the Index of Agreement based on Willmott et al. (2011), which spans between -1 and +1 with values approaching +1 representing better model performance. An IOA of 0.5, for example, indicates that the sum of the error-magnitudes is one half of the sum of the observed-deviation magnitudes. When IOA = 0.0, it signifies that the sum of the magnitudes of the errors and the sum of the observed-deviation magnitudes are equivalent. When IOA = -0.5, it indicates that the sum of the error-magnitudes is twice the sum of the perfect model-deviation and observed-deviation magnitudes. Values of IOA near -1.0 can mean that the model-estimated deviations about O are poor estimates of the observed deviations; but they also can mean that there simply is little observed variability, so some caution is needed when the IOA approaches -1.
All statistics are based on complete pairs of mod
and obs
.
Conditioning is possible through setting type
, which can be a vector
e.g. type = c("weekday", "season")
.
Value
Returns a data frame with model evaluation statistics.
Author(s)
David Carslaw
References
Legates DR, McCabe GJ. (1999). Evaluating the use of goodness-of-fit measures in hydrologic and hydroclimatic model validation. Water Resources Research 35(1): 233-241.
Legates DR, McCabe GJ. (2012). A refined index of model performance: a rejoinder, International Journal of Climatology.
Willmott, C.J., Robeson, S.M., Matsuura, K., 2011. A refined index of model performance. International Journal of Climatology.
See Also
Other model evaluation functions:
TaylorDiagram(), conditionalEval(), conditionalQuantile()
Examples
## the example below is somewhat artificial --- assuming the observed
## values are given by NOx and the predicted values by NO2.
modStats(mydata, mod = "no2", obs = "nox")
## evaluation stats by season
modStats(mydata, mod = "no2", obs = "nox", type = "season")
Example air quality monitoring data for openair
Description
The mydata
dataset is provided as an example dataset as part of the openair
package. The dataset contains hourly measurements of air pollutant
concentrations, wind speed and wind direction collected at the Marylebone
(London) air quality monitoring supersite between 1st January 1998 and 23rd
June 2005.
Usage
mydata
Format
An object of class tbl_df
(inherits from tbl
, data.frame
) with 65533 rows and 10 columns.
Details
- date
Observation date/time stamp in year-month-day hour:minute:second format (POSIXct).
- ws
Wind speed, in m/s, as numeric vector.
- wd
Wind direction, in degrees from North, as a numeric vector.
- nox
Oxides of nitrogen concentration, in ppb, as a numeric vector.
- no2
Nitrogen dioxide concentration, in ppb, as a numeric vector.
- o3
Ozone concentration, in ppb, as a numeric vector.
- pm10
Particulate PM10 fraction measurement, in ug/m3 (raw TEOM), as a numeric vector.
- so2
Sulfur dioxide concentration, in ppb, as a numeric vector.
- co
Carbon monoxide concentration, in ppm, as a numeric vector.
- pm25
Particulate PM2.5 fraction measurement, in ug/m3, as a numeric vector.
Note
openair functions generally require data frames with a field "date" that can
be in either POSIXct or Date format.
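As an illustration of the note above, data read from a text file typically need the date column converting before use (a sketch with made-up values):

```r
# A sketch: ensure the 'date' field is POSIXct before using openair functions
df <- data.frame(
  date = c("1998-01-01 00:00", "1998-01-01 01:00"),
  nox = c(285, 230)
)
df$date <- as.POSIXct(df$date, tz = "GMT", format = "%Y-%m-%d %H:%M")
```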
Source
mydata
was compiled from data archived in the London Air Quality
Archive. See https://londonair.org.uk for site details.
Examples
# basic structure
head(mydata)
Pre-defined openair colours and definition of user-defined colours
Description
This is primarily an internal openair function that makes it easy for users to select particular colour schemes, or to define their own range of colours of a user-defined length.
Usage
openColours(scheme = "default", n = 100)
Arguments
scheme |
Any one of the pre-defined |
n |
number of colours required. |
Value
A character vector of hex codes
Schemes
The following schemes are made available by openColours()
:
Sequential Colours:
"default", "increment", "brewer1", "heat", "jet", "turbo", "hue", "greyscale".
Simplified versions of the
viridis
colours: "viridis", "plasma", "magma", "inferno", "cividis", and "turbo".Simplified versions of the
RColorBrewer
sequential palettes: "Blues", "BuGn", "BuPu", "GnBu", "Greens", "Greys", "Oranges", "OrRd", "PuBu", "PuBuGn", "PuRd", "Purples", "RdPu", "Reds", "YlGn", "YlGnBu", "YlOrBr", "YlOrRd".
Diverging Palettes:
Simplified versions of the
RColorBrewer
diverging palettes: "BrBG", "PiYG", "PRGn", "PuOr", "RdBu", "RdGy", "RdYlBu", "RdYlGn", "Spectral".
Qualitative Palettes:
Simplified versions of the
RColorBrewer
qualitative palettes: "Accent", "Dark2", "Paired", "Pastel1", "Pastel2", "Set1", "Set2", "Set3"."okabeito" (or "cbPalette"), a colour-blind safe palette based on the work of Masataka Okabe and Kei Ito (https://jfly.uni-koeln.de/color/)
"tol.bright" (or "tol"), "tol.muted" and "tol.light", colour-blind safe palettes based on the work of Paul Tol.
"tableau" and "observable", aliases for the "Tableau10" (https://www.tableau.com/blog/colors-upgrade-tableau-10-56782) and "Observable10" (https://observablehq.com/blog/crafting-data-colors) colour palettes. These could be useful for consistency between openair plots and with figures made in Tableau or Observable Plot.
UK Government Palettes:
"daqi" and "daqi.bands", the colours associated with the UK daily air quality index; "daqi" (a palette of 10 colours, corresponding to each index value) or "daqi.bands" (4 colours, corresponding to each band - Low, Moderate, High, and Very High). These colours were taken directly from https://uk-air.defra.gov.uk/air-pollution/daqi and may be useful in figures like
calendarPlot()
."gaf.cat", "gaf.focus" and "gaf.seq", colours recommended by the UK Government Analysis function (https://analysisfunction.civilservice.gov.uk/policy-store/data-visualisation-colours-in-charts/). "gaf.cat" will return the 'categorical' palette (max 6 colours), "gaf.focus" the 'focus' palette (max 2 colours), and "gaf.seq" the 'sequential' palette.
Details
Because of the way many of the schemes have been developed they only exist
over a certain number of colour gradations (typically 3-10); see ?brewer.pal
for details. If fewer or more than the available number of colours is
required then openair will interpolate the colours.
Each of the pre-defined schemes have merits and their use will depend on a
particular situation. For showing incrementing concentrations, e.g., high
concentrations emphasised, then "default", "heat", "jet", "turbo", and
"increment" are very useful. See also the description of RColorBrewer
schemes for the option scheme
.
To colour-code categorical-type problems, e.g., colours for different pollutants, "hue" and "brewer1" are useful.
When publishing in black and white, "greyscale" is often convenient. With most openair functions, as well as generating a greyscale colour gradient, it also resets strip background and other coloured text and lines to greyscale values.
Failing that, the user can define their own schemes based on R colour
names. To see the full list of names, type colors()
into R.
Author(s)
David Carslaw
Jack Davison
References
https://uk-air.defra.gov.uk/air-pollution/daqi
https://analysisfunction.civilservice.gov.uk/policy-store/data-visualisation-colours-in-charts/
Examples
# to return 5 colours from the "jet" scheme:
cols <- openColours("jet", 5)
cols
# to interpolate between named colours e.g. 10 colours from yellow to
# green to red:
cols <- openColours(c("yellow", "green", "red"), 10)
cols
Function to plot percentiles by wind direction
Description
percentileRose
plots percentiles by wind direction with flexible
conditioning. The plot can display multiple percentile lines or filled areas.
Usage
percentileRose(
mydata,
pollutant = "nox",
wd = "wd",
type = "default",
percentile = c(25, 50, 75, 90, 95),
smooth = FALSE,
method = "default",
cols = "default",
angle = 10,
mean = TRUE,
mean.lty = 1,
mean.lwd = 3,
mean.col = "grey",
fill = TRUE,
intervals = NULL,
angle.scale = 45,
auto.text = TRUE,
key.header = NULL,
key.footer = "percentile",
key.position = "bottom",
key = TRUE,
alpha = 1,
plot = TRUE,
...
)
Arguments
mydata |
A data frame minimally containing |
pollutant |
Mandatory. A pollutant name corresponding to a variable in a
data frame should be supplied e.g. |
wd |
Name of wind direction field. |
type |
It is also possible to choose Type can be up length two e.g. |
percentile |
The percentile value(s) to plot. Must be between 0–100. If
|
smooth |
Should the wind direction data be smoothed using a cyclic spline? |
method |
When |
cols |
Colours to be used for plotting. Options include
“default”, “increment”, “heat”, “jet” and
|
angle |
Default angle of “spokes” is when |
mean |
Show the mean by wind direction as a line? |
mean.lty |
Line type for mean line. |
mean.lwd |
Line width for mean line. |
mean.col |
Line colour for mean line. |
fill |
Should the percentile intervals be filled (default) or should
lines be drawn ( |
intervals |
User-supplied intervals for the scale e.g. |
angle.scale |
Sometimes the placement of the scale may interfere with an
interesting feature. The user can therefore set |
auto.text |
Either |
key.header |
Adds additional text/labels to the scale key. For example,
passing the options |
key.footer |
see |
key.position |
Location where the scale key is to plotted. Allowed
arguments currently include |
key |
Fine control of the scale key via |
alpha |
The alpha transparency to use for the plotting surface (a value
between 0 and 1 with zero being fully transparent and 1 fully opaque).
Setting a value below 1 can be useful when plotting surfaces on a map using
the package |
plot |
Should a plot be produced? |
... |
Other graphical parameters are passed onto |
Details
percentileRose
calculates percentile levels of a pollutant and plots
them by wind direction. One or more percentile levels can be calculated and
these are displayed as either filled areas or as lines.
The wind directions are rounded to the nearest 10 degrees, consistent with
surface data from the UK Met Office before a smooth is fitted. The levels by
wind direction are optionally calculated using a cyclic smooth cubic spline
using the option smooth
. If smooth = FALSE
then the data are
shown in 10 degree sectors.
The percentileRose function complements other similar functions
including windRose()
, pollutionRose()
,
polarFreq()
or polarPlot()
. It is most useful for
showing the distribution of concentrations by wind direction and often can
reveal different sources e.g. those that only affect high percentile
concentrations such as a chimney stack.
Similar to other functions, flexible conditioning is available through the
type
option. It is easy for example to consider multiple percentile
values for a pollutant by season, year and so on. See examples below.
percentileRose also offers great flexibility with the scale used, and the
user has fine control over the range, interval and colour.
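For example, the scale and colours might be controlled as follows (a sketch; the interval choice is arbitrary):

```r
library(openair)

# user-supplied scale intervals and an alternative colour scheme
percentileRose(mydata,
  pollutant = "o3",
  intervals = seq(0, 50, 10), # fix the scale from 0 to 50 in steps of 10
  cols = "viridis"
)
```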
Value
an openair object
Author(s)
David Carslaw
References
Ashbaugh, L.L., Malm, W.C., Sadeh, W.Z., 1985. A residence time probability analysis of sulfur concentrations at Grand Canyon National Park. Atmospheric Environment 19 (8), 1263-1270.
See Also
Other polar directional analysis functions:
polarAnnulus(), polarCluster(), polarDiff(), polarFreq(), polarPlot(), pollutionRose(), windRose()
Examples
# basic percentile plot
percentileRose(mydata, pollutant = "o3")
# 50/95th percentiles of ozone, with different colours
percentileRose(mydata, pollutant = "o3", percentile = c(50, 95), cols = "brewer1")
## Not run:
# percentiles of ozone by year, with different colours
percentileRose(mydata, type = "year", pollutant = "o3", cols = "brewer1")
# percentile concentrations by season and day/nighttime
percentileRose(mydata, type = c("season", "daylight"), pollutant = "o3", cols = "brewer1")
## End(Not run)
Bivariate polarAnnulus plot
Description
Typically plots the concentration of a pollutant by wind direction and as a function of time as an annulus. The function is good for visualising how concentrations of pollutants vary by wind direction and a time period e.g. by month, day of week.
Usage
polarAnnulus(
mydata,
pollutant = "nox",
resolution = "fine",
local.tz = NULL,
period = "hour",
type = "default",
statistic = "mean",
percentile = NA,
limits = NULL,
cols = "default",
width = "normal",
min.bin = 1,
exclude.missing = TRUE,
date.pad = FALSE,
force.positive = TRUE,
k = c(20, 10),
normalise = FALSE,
key.header = statistic,
key.footer = pollutant,
key.position = "right",
key = TRUE,
auto.text = TRUE,
alpha = 1,
plot = TRUE,
...
)
Arguments
mydata |
A data frame minimally containing |
pollutant |
Mandatory. A pollutant name corresponding to a variable in a
data frame should be supplied e.g. |
resolution |
Two plot resolutions can be set: “normal” and “fine” (the default). |
local.tz |
Should the results be calculated in local time that includes
a treatment of daylight savings time (DST)? The default is not to consider
DST issues, provided the data were imported without a DST offset. Emissions
activity tends to occur at local time e.g. rush hour is at 8 am every day.
When the clocks go forward in spring, the emissions are effectively
released into the atmosphere typically 1 hour earlier during the summertime
i.e. when DST applies. When plotting diurnal profiles, this has the effect
of “smearing-out” the concentrations. Sometimes, a useful approach
is to express time as local time. This correction tends to produce
better-defined diurnal profiles of concentration (or other variables) and
allows a better comparison to be made with emissions/activity data. If set
to |
period |
This determines the temporal period to consider. Options are “hour” (the default, to plot diurnal variations), “season” to plot variation throughout the year, “weekday” to plot day of the week variation and “trend” to plot the trend by wind direction. |
type |
It is also possible to choose Type can be up length two e.g. Also note that for the |
statistic |
The statistic that should be applied to each wind
speed/direction bin. Can be “mean” (default), “median”,
“max” (maximum), “frequency”, “stdev” (standard
deviation), “weighted.mean” or “cpf” (Conditional Probability
Function). Because of the smoothing involved, the colour scale for some of
these statistics is only to provide an indication of overall pattern and
should not be interpreted in concentration units e.g. for |
percentile |
If |
limits |
The function does its best to choose sensible limits
automatically. However, there are circumstances when the user will wish to
set different ones. An example would be a series of plots showing each year
of data separately. The limits are set in the form |
cols |
Colours to be used for plotting. Options include
“default”, “increment”, “heat”, “jet” and
|
width |
The width of the annulus; can be “normal” (the default), “thin” or “fat”. |
min.bin |
The minimum number of points allowed in a wind speed/wind
direction bin. The default is 1. A value of two requires at least 2 valid
records in each bin and so on; bins with fewer than 2 valid records are set
to NA. Care should be taken when using a value > 1 because of the risk of
removing real data points. It is recommended to consider your data with
care. Also, the |
exclude.missing |
Setting this option to |
date.pad |
For |
force.positive |
The default is |
k |
The smoothing value supplied to |
normalise |
If |
key.header |
Adds additional text/labels to the scale key. For example,
passing the options |
key.footer |
see |
key.position |
Location where the scale key is to plotted. Allowed
arguments currently include |
key |
Fine control of the scale key via |
auto.text |
Either |
alpha |
The alpha transparency to use for the plotting surface (a value
between 0 and 1 with zero being fully transparent and 1 fully opaque).
Setting a value below 1 can be useful when plotting surfaces on a map using
the package |
plot |
Should a plot be produced? |
... |
Other graphical parameters passed onto |
Details
The polarAnnulus
function shares many of the properties of the
polarPlot
. However, polarAnnulus
is focussed on displaying
information on how concentrations of a pollutant (values of another variable)
vary with wind direction and time. Plotting as an annulus helps to reduce
compression of information towards the centre of the plot. The circular plot
is easy to interpret because wind direction is most easily understood in
polar rather than Cartesian coordinates.
The inner part of the annulus represents the earliest time and the outer part of the annulus the latest time. The time dimension can be shown in many ways including "trend", "hour" (hour or day), "season" (month of the year) and "weekday" (day of the week). Taking hour as an example, the plot will show how concentrations vary by hour of the day and wind direction. Such plots can be very useful for understanding how different source influences affect a location.
For type = "trend"
the amount of smoothing does not vary linearly with
the length of the time series i.e. a certain amount of smoothing per unit
interval in time. This is a deliberate choice because should one be
interested in a subset (in time) of data, more detail will be provided for
the subset compared with the full data set. This allows users to investigate
specific periods in more detail. Full flexibility is given through the
smoothing parameter k
.
Value
an openair object
Author(s)
David Carslaw
See Also
Other polar directional analysis functions:
percentileRose(), polarCluster(), polarDiff(), polarFreq(), polarPlot(), pollutionRose(), windRose()
Examples
# diurnal plot for PM10 at Marylebone Rd
## Not run:
polarAnnulus(mydata,
pollutant = "pm10",
main = "diurnal variation in pm10 at Marylebone Road"
)
## End(Not run)
# seasonal plot for PM10 at Marylebone Rd
## Not run:
polarAnnulus(mydata, poll = "pm10", period = "season")
## End(Not run)
# trend in coarse particles (PMc = PM10 - PM2.5), calculate PMc first
mydata$pmc <- mydata$pm10 - mydata$pm25
## Not run:
polarAnnulus(mydata,
poll = "pmc", period = "trend",
main = "trend in pmc at Marylebone Road"
)
## End(Not run)
K-means clustering of bivariate polar plots
Description
Function for identifying clusters in bivariate polar plots (polarPlot()
);
identifying clusters in the original data for subsequent processing.
Usage
polarCluster(
mydata,
pollutant = "nox",
x = "ws",
wd = "wd",
n.clusters = 6,
after = NA,
cols = "Paired",
angle.scale = 315,
units = x,
auto.text = TRUE,
plot = TRUE,
plot.data = FALSE,
...
)
Arguments
mydata |
A data frame minimally containing |
pollutant |
Mandatory. A pollutant name corresponding to a variable in a
data frame should be supplied e.g. |
x |
Name of variable to plot against wind direction in polar coordinates, the default is wind speed, “ws”. |
wd |
Name of wind direction field. |
n.clusters |
Number of clusters to use. If |
after |
The function can be applied to differences between polar plot
surfaces (see polarDiff for details). If an |
cols |
Colours to be used for plotting. Options include
“default”, “increment”, “heat”, “jet” and
|
angle.scale |
Sometimes the placement of the scale may interfere with an
interesting feature. The user can therefore set |
units |
The units shown on the polar axis scale. |
auto.text |
Either |
plot |
Should a plot be produced? |
plot.data |
By default, the |
... |
Arguments passed on to
|
Details
Bivariate polar plots generated using the polarPlot
function provide a
very useful graphical technique for identifying and characterising different
air pollution sources. While bivariate polar plots provide a useful graphical
indication of potential sources, their location and wind-speed or other
variable dependence, they do have several limitations. Often, a ‘feature’
will be detected in a plot but the subsequent analysis of data meeting
particular wind speed/direction criteria will be based only on the judgement
of the investigator concerning the wind speed-direction intervals of
interest. Furthermore, the identification of a feature can depend on the
choice of the colour scale used, making the process somewhat arbitrary.
polarCluster
applies Partitioning Around Medoids (PAM) clustering
techniques to polarPlot()
surfaces to help identify potentially interesting
features for further analysis. Details of PAM can be found in the
cluster
package (a core R package that will be pre-installed on all R
systems). PAM clustering is similar to k-means but has several advantages
e.g. is more robust to outliers. The clustering is based on the equal
contribution assumed from the u and v wind components and the associated
concentration. The data are standardized before clustering takes place.
The function works best by first trying different numbers of clusters and
plotting them. This is achieved by setting n.clusters
to be of length
more than 1. For example, if n.clusters = 2:10
then a plot will be
output showing the 9 cluster levels 2 to 10.
The clustering can also be applied to differences in polar plot surfaces (see
polarDiff()). In this case a second data frame (after) should be supplied.
Note that clustering is computationally intensive and the function can take a
long time to run — particularly when the number of clusters is increased.
For this reason it can be a good idea to run a few clusters first to get a
feel for it e.g. n.clusters = 2:5
.
Once the number of clusters has been decided, the user can then run
polarCluster
to return the original data frame together with a new
column cluster
, which gives the cluster number as a character (see
example). Note that any rows where the value of pollutant
is NA
are ignored so that the returned data frame may have fewer rows than the
original.
Note that there are no automatic ways of ensuring the most appropriate number of clusters, as this is application dependent. However, there is often a priori information available on what different features in polar plots correspond to. Nevertheless, the appropriateness of different clusters is best determined by post-processing the data. The Carslaw and Beevers (2013) paper discusses these issues in more detail.
Note that unlike most other openair
functions only a single
type
“default” is allowed.
Value
an openair object. The object includes four main
components: call
, the command used to generate the plot;
data
, by default the original data frame with a new field
cluster
identifying the cluster, clust_stats
giving the
contributions made by each cluster to number of measurements, their
percentage and the percentage by pollutant; and plot
, the plot
itself. Note that any rows where the value of pollutant
is NA
are ignored so that the returned data frame may have fewer rows than the
original.
If the clustering is carried out considering differences, i.e., an
after
data frame is supplied, the output also includes the
after
data frame with cluster identified.
Author(s)
David Carslaw
References
Carslaw, D.C., Beevers, S.D, Ropkins, K and M.C. Bell (2006). Detecting and quantifying aircraft and other on-airport contributions to ambient nitrogen oxides in the vicinity of a large international airport. Atmospheric Environment. 40/28 pp 5424-5434.
Carslaw, D.C., & Beevers, S.D. (2013). Characterising and understanding emission sources using bivariate polar plots and k-means clustering. Environmental Modelling & Software, 40, 325-329. doi:10.1016/j.envsoft.2012.09.005
See Also
Other polar directional analysis functions:
percentileRose(), polarAnnulus(), polarDiff(), polarFreq(), polarPlot(), pollutionRose(), windRose()
Other cluster analysis functions:
timeProp(), trajCluster()
Examples
## Not run:
## plot 2-8 clusters. Warning! This can take several minutes...
polarCluster(mydata, pollutant = "nox", n.clusters = 2:8)
# basic plot with 6 clusters
results <- polarCluster(mydata, pollutant = "nox", n.clusters = 6)
## get results, could read into a new data frame to make it easier to refer to
## e.g. results <- results$data...
head(results$data)
## how many points are there in each cluster?
table(results$data$cluster)
## plot clusters 3 and 4 as a timeVariation plot using SAME colours as in
## cluster plot
timeVariation(subset(results$data, cluster %in% c("3", "4")),
pollutant = "nox",
group = "cluster", col = openColours("Paired", 6)[c(3, 4)]
)
## End(Not run)
Polar plots considering changes in concentrations between two time periods
Description
This function provides a way of showing the differences in concentrations between two time periods as a polar plot. There are several uses of this function, but the most common will be to see how source(s) may have changed between two periods.
Usage
polarDiff(
before,
after,
pollutant = "nox",
type = "default",
x = "ws",
limits = NULL,
auto.text = TRUE,
plot = TRUE,
...
)
Arguments
before , after |
Data frames representing the "before" and "after" cases.
See |
pollutant |
Mandatory. A pollutant name corresponding to a variable in a
data frame should be supplied e.g. |
type |
It is also possible to choose Type can be up length two e.g. |
x |
Name of variable to plot against wind direction in polar coordinates, the default is wind speed, “ws”. |
limits |
The function does its best to choose sensible limits
automatically. However, there are circumstances when the user will wish to
set different ones. An example would be a series of plots showing each year
of data separately. The limits are set in the form |
auto.text |
Either |
plot |
Should a plot be produced? |
... |
Arguments passed on to
|
Details
While the function is primarily intended to compare two time periods at the same location, it can be used for any two data sets that contain the same pollutant. For example, data from two sites that are close to one another, or two co-located instruments.
The analysis works by calculating the polar plot surface for the before
and
after
periods and then subtracting the before
surface from the after
surface.
Value
an openair plot.
See Also
Other polar directional analysis functions:
percentileRose(), polarAnnulus(), polarCluster(), polarFreq(), polarPlot(), pollutionRose(), windRose()
Examples
## Not run:
before_data <- selectByDate(mydata, year = 2002)
after_data <- selectByDate(mydata, year = 2003)
polarDiff(before_data, after_data, pollutant = "no2")
# with some options
polarDiff(before_data, after_data, pollutant = "no2", cols = "RdYlBu", limits = c(-20, 20))
## End(Not run)
Function to plot wind speed/direction frequencies and other statistics
Description
polarFreq primarily plots wind speed-direction frequencies in ‘bins’. Each bin is colour-coded depending on the frequency of measurements. Bins can also be used to show the concentration of pollutants using a range of commonly used statistics.
Usage
polarFreq(
mydata,
pollutant = NULL,
statistic = "frequency",
ws.int = 1,
wd.nint = 36,
grid.line = 5,
breaks = NULL,
cols = "default",
trans = TRUE,
type = "default",
min.bin = 1,
ws.upper = NA,
offset = 10,
border.col = "transparent",
key.header = statistic,
key.footer = pollutant,
key.position = "right",
key = TRUE,
auto.text = TRUE,
alpha = 1,
plot = TRUE,
...
)
Arguments
mydata |
A data frame minimally containing |
pollutant |
Mandatory. A pollutant name corresponding to a variable in
a data frame should be supplied e.g. |
statistic |
The statistic that should be applied to each wind
speed/direction bin. Can be “frequency”, “mean”,
“median”, “max” (maximum), “stdev” (standard
deviation) or “weighted.mean”. The option “frequency” (the
default) is the simplest and plots the frequency of wind speed/direction
in different bins. The scale therefore shows the counts in each bin. The
option “mean” will plot the mean concentration of a pollutant (see
next point) in wind speed/direction bins, and so on. Finally,
“weighted.mean” will plot the concentration of a pollutant weighted
by wind speed/direction. Each segment therefore provides the percentage
overall contribution to the total concentration. More information is given
in the examples. Note that for options other than “frequency”, it
is necessary to also provide the name of a pollutant. See function
|
ws.int |
Wind speed interval assumed. In some cases e.g. a low met mast, an interval of 0.5 may be more appropriate. |
wd.nint |
Number of intervals of wind direction. |
grid.line |
Radial spacing of grid lines. |
breaks |
The user can provide their own scale. |
cols |
Colours to be used for plotting. Options include
“default”, “increment”, “heat”, “jet” and
|
trans |
Should a transformation be applied? Sometimes when producing
plots of this kind they can be dominated by a few high points. The default
therefore is |
type |
Determines how the data are split and then plotted. It is also possible to choose type as another variable in the data frame. Type can be up to length two e.g. |
min.bin |
The minimum number of points allowed in a wind speed/wind
direction bin. The default is 1. A value of two requires at least 2 valid
records in each bin and so on; bins with fewer than 2 valid records are set
to NA. Care should be taken when using a value > 1 because of the risk of
removing real data points. It is recommended to consider your data with
care. Also, the |
ws.upper |
A user-defined upper wind speed to use. This is useful for
ensuring a consistent scale between different plots. For example, to
always ensure that wind speeds are displayed between 1-10, set
|
offset |
|
border.col |
The colour of the boundary of each wind speed/direction bin. The default is transparent. Another useful choice sometimes is "white". |
key.header |
Adds additional text/labels to the scale key. For example,
passing the options |
key.footer |
see |
key.position |
Location where the scale key is to be plotted. Allowed
arguments currently include |
key |
Fine control of the scale key via |
auto.text |
Either |
alpha |
The alpha transparency to use for the plotting surface (a value
between 0 and 1 with zero being fully transparent and 1 fully opaque).
Setting a value below 1 can be useful when plotting surfaces on a map using
the package |
plot |
Should a plot be produced? |
... |
Other graphical parameters passed onto |
Details
polarFreq in its default use provides details of wind speed and direction frequencies. In this respect it is similar to windRose(), but considers wind direction intervals of 10 degrees and a user-specified wind speed interval. The frequency of wind speeds/directions formed by these ‘bins’ is represented on a colour scale.
The polarFreq function is more flexible than either windRose() or polarPlot(). It can, for example, also consider pollutant concentrations (see examples below). Instead of the number of data points in each bin, the concentration can be shown. Further, a range of statistics can be used to describe each bin - see statistic above. Plotting mean concentrations is useful for source identification and is the same as polarPlot() but without smoothing, which may be preferable for some data. Plotting with statistic = "weighted.mean" is particularly useful for understanding the relative importance of different source contributions. For example, high mean concentrations may be observed for high wind speed conditions, but the weighted mean concentration may well show that the contribution to overall concentrations is very low.
polarFreq also offers great flexibility with the scale used, and the user has fine control over the range, interval and colour.
Value
an openair object
Author(s)
David Carslaw
See Also
Other polar directional analysis functions:
percentileRose(), polarAnnulus(), polarCluster(), polarDiff(), polarPlot(), pollutionRose(), windRose()
Examples
# basic wind frequency plot
polarFreq(mydata)
# wind frequencies by year
## Not run:
polarFreq(mydata, type = "year")
## End(Not run)
# mean SO2 by year, showing only bins with at least 2 points
## Not run:
polarFreq(mydata, pollutant = "so2", type = "year", statistic = "mean", min.bin = 2)
## End(Not run)
# weighted mean SO2 by year, showing only bins with at least 2 points
## Not run:
polarFreq(mydata,
pollutant = "so2", type = "year", statistic = "weighted.mean",
min.bin = 2
)
## End(Not run)
# polarFreq for just 2000 and 2003 with different colours
## Not run:
polarFreq(subset(mydata, format(date, "%Y") %in% c(2000, 2003)),
type = "year", cols = "turbo"
)
## End(Not run)
# user defined breaks from 0-700 in intervals of 100 (note linear scale)
## Not run:
polarFreq(mydata, breaks = seq(0, 700, 100))
## End(Not run)
# more complicated user-defined breaks - useful for highlighting bins
# with a certain number of data points
## Not run:
polarFreq(mydata, breaks = c(0, 10, 50, 100, 250, 500, 700))
## End(Not run)
# source contribution plot and use of offset option
## Not run:
polarFreq(mydata,
pollutant = "pm25",
statistic = "weighted.mean", offset = 50, ws.int = 25, trans = FALSE
)
## End(Not run)
Function for plotting bivariate polar plots with smoothing.
Description
Function for plotting pollutant concentration in polar coordinates showing concentration by wind speed (or another numeric variable) and direction. Mean concentrations are calculated for wind speed-direction ‘bins’ (e.g. 0-1, 1-2 m/s,... and 0-10, 10-20 degrees etc.). To aid interpretation, gam smoothing is carried out using mgcv.
Usage
polarPlot(
mydata,
pollutant = "nox",
x = "ws",
wd = "wd",
type = "default",
statistic = "mean",
limits = NULL,
exclude.missing = TRUE,
uncertainty = FALSE,
percentile = NA,
cols = "default",
weights = c(0.25, 0.5, 0.75),
min.bin = 1,
mis.col = "grey",
upper = NA,
angle.scale = 315,
units = x,
force.positive = TRUE,
k = 100,
normalise = FALSE,
key.header = statistic,
key.footer = pollutant,
key.position = "right",
key = TRUE,
auto.text = TRUE,
ws_spread = 1.5,
wd_spread = 5,
x_error = NA,
y_error = NA,
kernel = "gaussian",
formula.label = TRUE,
tau = 0.5,
alpha = 1,
plot = TRUE,
...
)
Arguments
mydata |
A data frame minimally containing |
pollutant |
Mandatory. A pollutant name corresponding to a variable in a
data frame should be supplied e.g. |
x |
Name of variable to plot against wind direction in polar coordinates, the default is wind speed, “ws”. |
wd |
Name of wind direction field. |
type |
Determines how the data are split and then plotted. It is also possible to choose type as another variable in the data frame. Type can be up to length two e.g. |
statistic |
The statistic that should be applied to each wind
speed/direction bin. Because of the smoothing involved, the colour scale
for some of these statistics is only to provide an indication of overall
pattern and should not be interpreted in concentration units e.g. for
|
limits |
The function does its best to choose sensible limits
automatically. However, there are circumstances when the user will wish to
set different ones. An example would be a series of plots showing each year
of data separately. The limits are set in the form |
exclude.missing |
Setting this option to |
uncertainty |
Should the uncertainty in the calculated surface be shown?
If |
percentile |
If
|
cols |
Colours to be used for plotting. Options include
“default”, “increment”, “heat”, “jet” and
|
weights |
At the edges of the plot there may only be a few data points
in each wind speed-direction interval, which could in some situations
distort the plot if the concentrations are high. An alternative to down-weighting these points they can be removed
altogether using |
min.bin |
The minimum number of points allowed in a wind speed/wind
direction bin. The default is 1. A value of two requires at least 2 valid
records in each bin and so on; bins with fewer than 2 valid records are set
to NA. Care should be taken when using a value > 1 because of the risk of
removing real data points. It is recommended to consider your data with
care. Also, the |
mis.col |
When |
upper |
This sets the upper limit wind speed to be used. Often there are only a relatively few data points at very high wind speeds and plotting all of them can reduce the useful information in the plot. |
angle.scale |
Sometimes the placement of the scale may interfere with an
interesting feature. The user can therefore set |
units |
The units shown on the polar axis scale. |
force.positive |
The default is |
k |
This is the smoothing parameter used by the |
normalise |
If |
key.header |
Adds additional text/labels to the scale key. For example,
passing the options |
key.footer |
see |
key.position |
Location where the scale key is to be plotted. Allowed
arguments currently include |
key |
Fine control of the scale key via |
auto.text |
Either |
ws_spread |
The value of sigma used for Gaussian kernel weighting of
wind speed when |
wd_spread |
The value of sigma used for Gaussian kernel weighting of
wind direction when |
x_error |
The |
y_error |
The |
kernel |
Type of kernel used for the weighting procedure for when
correlation or regression techniques are used. Only |
formula.label |
When pair-wise statistics such as regression slopes are
calculated and plotted, should a formula label be displayed?
|
tau |
The quantile to be estimated when |
alpha |
The alpha transparency to use for the plotting surface (a value
between 0 and 1 with zero being fully transparent and 1 fully opaque).
Setting a value below 1 can be useful when plotting surfaces on a map using
the package |
plot |
Should a plot be produced? |
... |
Other graphical parameters passed onto |
Details
The bivariate polar plot is a useful diagnostic tool for quickly gaining an idea of potential sources. Wind speed is one of the most useful variables to use to separate source types (see references). For example, ground-level concentrations resulting from buoyant plumes from chimney stacks tend to peak under higher wind speed conditions. Conversely, ground-level, non-buoyant plumes such as from road traffic, tend to have highest concentrations under low wind speed conditions. Other sources such as from aircraft engines also show differing characteristics by wind speed.
The function has been developed to allow variables other than wind speed to be plotted with wind direction in polar coordinates. The key issue is that the other variable plotted against wind direction should be discriminating in some way. For example, temperature can help reveal high-level sources brought down to ground level in unstable atmospheric conditions, or show the effect of a source emission dependent on temperature e.g. biogenic isoprene.
The plots can vary considerably depending on how much smoothing is done. The approach adopted here is based on the very flexible and capable mgcv package that uses Generalized Additive Models. While methods do exist to find an optimum level of smoothness, they are not necessarily useful. The principal aim of polarPlot is as a graphical analysis rather than for quantitative purposes. In this respect the smoothing aims to strike a balance between revealing interesting (real) features and overly noisy data. The defaults used in polarPlot() are based on the analysis of data from many different sources. More advanced users may wish to modify the code and adopt other smoothing approaches.
Various statistics are possible to consider e.g. mean, maximum, median. statistic = "max" is often useful for revealing sources. Pair-wise statistics between two pollutants can also be calculated.
The function can also be used to compare two pollutant species through a range of pair-wise statistics (see help on statistic) and Grange et al. (2016) (open-access publication link below).
Wind direction is split up into 10 degree intervals and the other variable (e.g. wind speed) into 30 intervals. These 2D bins are then used to calculate the statistics.
These plots often show interesting features at higher wind speeds (see references below). For these conditions there can be very few measurements and therefore greater uncertainty in the calculation of the surface. There are several ways in which this issue can be tackled. First, it is possible to avoid smoothing altogether and use polarFreq(). Second, the effect of setting a minimum number of measurements in each wind speed-direction bin can be examined through min.bin. It is possible that a single point at high wind speed conditions can strongly affect the surface prediction. Therefore, setting min.bin = 3, for example, will remove all wind speed-direction bins with fewer than 3 measurements before fitting the surface. Third, consider setting uncertainty = TRUE. This option will show the predicted surface together with upper and lower 95% confidence intervals, which take account of the frequency of measurements.
Variants on polarPlot include polarAnnulus() and polarFreq().
Value
an openair object. data contains four set columns: cond, conditioning based on type; u and v, the translational vectors based on ws and wd; and the local pollutant estimate.
Author(s)
David Carslaw
References
Ashbaugh, L.L., Malm, W.C., Sadeh, W.Z., 1985. A residence time probability analysis of sulfur concentrations at Grand Canyon National Park. Atmospheric Environment 19 (8), 1263-1270.
Carslaw, D.C., Beevers, S.D, Ropkins, K and M.C. Bell (2006). Detecting and quantifying aircraft and other on-airport contributions to ambient nitrogen oxides in the vicinity of a large international airport. Atmospheric Environment. 40/28 pp 5424-5434.
Carslaw, D.C., & Beevers, S.D. (2013). Characterising and understanding emission sources using bivariate polar plots and k-means clustering. Environmental Modelling & Software, 40, 325-329. DOI: 10.1016/j.envsoft.2012.09.005.
Henry, R.C., Chang, Y.S., Spiegelman, C.H., 2002. Locating nearby sources of air pollution by nonparametric regression of atmospheric concentrations on wind direction. Atmospheric Environment 36 (13), 2237-2244.
Henry, R., Norris, G.A., Vedantham, R., Turner, J.R., 2009. Source region identification using Kernel smoothing. Environ. Sci. Technol. 43 (11), 4090-4097. DOI: 10.1021/es8011723.
Uria-Tellaetxe, I. and D.C. Carslaw (2014). Source identification using a conditional bivariate probability function. Environmental Modelling & Software, Vol. 59, 1-9.
Westmoreland, E.J., N. Carslaw, D.C. Carslaw, A. Gillah and E. Bates (2007). Analysis of air quality within a street canyon using statistical and dispersion modelling techniques. Atmospheric Environment. Vol. 41(39), pp. 9195-9205.
Yu, K.N., Cheung, Y.P., Cheung, T., Henry, R.C., 2004. Identifying the impact of large urban airports on local air quality by nonparametric regression. Atmospheric Environment 38 (27), 4501-4507.
Grange, S. K., Carslaw, D. C., & Lewis, A. C. 2016. Source apportionment advances with bivariate polar plots, correlation, and regression techniques. Atmospheric Environment. 145, 128-134. DOI: 10.1016/j.atmosenv.2016.09.016.
See Also
Other polar directional analysis functions:
percentileRose(), polarAnnulus(), polarCluster(), polarDiff(), polarFreq(), pollutionRose(), windRose()
Examples
# basic plot
polarPlot(mydata, pollutant = "nox")
## Not run:
# polarPlots by year on same scale
polarPlot(mydata, pollutant = "so2", type = "year", main = "polarPlot of so2")
# set minimum number of bins to be used to see if pattern remains similar
polarPlot(mydata, pollutant = "nox", min.bin = 3)
# plot by day of the week
polarPlot(mydata, pollutant = "pm10", type = "weekday")
# show the 95% confidence intervals in the surface fitting
polarPlot(mydata, pollutant = "so2", uncertainty = TRUE)
# Pair-wise statistics
# Pearson correlation
polarPlot(mydata, pollutant = c("pm25", "pm10"), statistic = "r")
# Robust regression slope, takes a bit of time
polarPlot(mydata, pollutant = c("pm25", "pm10"), statistic = "robust.slope")
# Least squares regression works too but it is not recommended, use robust
# regression
# polarPlot(mydata, pollutant = c("pm25", "pm10"), statistic = "slope")
## End(Not run)
Pollution rose variation of the traditional wind rose plot
Description
A variation of the traditional wind rose plot, which plots wind speed and wind direction by different intervals. The pollution rose applies the same plot structure but substitutes other measurements, most commonly a pollutant time series, for wind speed.
Usage
pollutionRose(
mydata,
pollutant = "nox",
key.footer = pollutant,
key.position = "right",
key = TRUE,
breaks = 6,
paddle = FALSE,
seg = 0.9,
normalise = FALSE,
alpha = 1,
plot = TRUE,
...
)
Arguments
mydata |
A data frame containing fields |
pollutant |
Mandatory. A pollutant name corresponding to a variable in a
data frame should be supplied e.g. |
key.footer |
Adds additional text/labels below the scale key. See
|
key.position |
Location where the scale key is to be plotted. Allowed arguments currently include “top”, “right”, “bottom” and “left”. |
key |
Fine control of the scale key via |
breaks |
Most commonly, the number of break points for pollutant
concentrations. The default, 6, attempts to breaks the supplied data at
approximately 6 sensible break points. However, |
paddle |
Either |
seg |
When |
normalise |
If |
alpha |
The alpha transparency to use for the plotting surface (a value
between 0 and 1 with zero being fully transparent and 1 fully opaque).
Setting a value below 1 can be useful when plotting surfaces on a map using
the package |
plot |
Should a plot be produced? |
... |
Arguments passed on to
|
Details
pollutionRose() is a windRose() wrapper which brings pollutant forward in the argument list, and attempts to sensibly rescale break points based on the pollutant data range by by-passing ws.int.
By default, pollutionRose() will plot a pollution rose of nox using "wedge" style segments and placing the scale key to the right of the plot.
It is possible to compare two wind speed-direction data sets using pollutionRose(). There are many reasons for doing so e.g. to see how one site compares with another or for meteorological model evaluation. In this case, ws and wd are considered to be the reference data sets with which a second set of wind speed and wind directions are to be compared (ws2 and wd2). The first set of values is subtracted from the second and the differences compared. If, for example, wd2 was biased positive compared with wd then pollutionRose will show the bias in polar coordinates. In its default use, wind direction bias is colour-coded to show negative bias in one colour and positive bias in another.
Value
an openair object. Summarised proportions can be extracted directly using the $data operator, e.g. output$data for output <- windRose(mydata). This returns a data frame with three set columns: cond, conditioning based on type; wd, the wind direction; and calm, the statistic for the proportion of data unattributed to any specific wind direction because it was collected under calm conditions. These are followed by several columns (one for each range binned for the plot) giving the proportions of measurements associated with each ws or pollutant range plotted as a discrete panel.
See Also
Other polar directional analysis functions:
percentileRose(), polarAnnulus(), polarCluster(), polarDiff(), polarFreq(), polarPlot(), windRose()
Examples
# pollutionRose of nox
pollutionRose(mydata, pollutant = "nox")
## source apportionment plot - contribution to mean
## Not run:
pollutionRose(mydata, pollutant = "pm10", type = "year", statistic = "prop.mean")
## End(Not run)
## example of comparing 2 met sites
## first we will make some new ws/wd data with a positive bias
mydata$ws2 <- mydata$ws + 2 * rnorm(nrow(mydata)) + 1
mydata$wd2 <- mydata$wd + 30 * rnorm(nrow(mydata)) + 30
## need to correct negative wd
id <- which(mydata$wd2 < 0)
mydata$wd2[id] <- mydata$wd2[id] + 360
## results show positive bias in wd and ws
pollutionRose(mydata, ws = "ws", wd = "wd", ws2 = "ws2", wd2 = "wd2")
Automatic text formatting for openair
Description
Workhorse function that automatically applies routine text formatting to common expressions and data names used in openair.
Usage
quickText(text, auto.text = TRUE)
Arguments
text |
A character vector. |
auto.text |
A logical option. The default, |
Details
quickText is a routine formatting lookup table. It screens the supplied character vector text and automatically applies formatting to any recognised character sub-series. The function is used in a number of openair functions and can also be used directly by users to format text components of their own graphs (see below).
Value
The function returns an expression for graphical evaluation.
Author(s)
Karl Ropkins.
Examples
# example 1
## see axis formatting in an openair plot, e.g.:
scatterPlot(mydata, x = "no2", y = "pm10")
# example 2
## using quickText in other plots
plot(mydata$no2, mydata$pm10,
xlab = quickText("my no2 label"),
ylab = quickText("pm10 [ ug.m-3 ]")
)
Calculate rolling mean pollutant values
Description
This is a utility function mostly designed to calculate rolling mean statistics relevant to some pollutant limits, e.g., 8 hour rolling means for ozone and 24 hour rolling means for PM10. However, the function has a more general use in helping to display rolling mean values in flexible ways with the rolling window width left, right or centre aligned. The function will try and fill in missing time gaps to get a full time sequence but returns a data frame with the same number of rows as supplied.
Usage
rollingMean(
mydata,
pollutant = "o3",
width = 8L,
type = "default",
data.thresh = 75,
align = c("centre", "center", "left", "right"),
new.name = NULL,
...
)
Arguments
mydata |
A data frame containing a |
pollutant |
The name of a pollutant, e.g., |
width |
The averaging period (rolling window width) to use, e.g., |
type |
Used for splitting the data further. Passed to |
data.thresh |
The % data capture threshold. No values are calculated if
data capture over the period of interest is less than this value. For
example, with |
align |
Specifies how the moving window should be aligned. |
new.name |
The name given to the new column. If not supplied it will create a name based on the name of the pollutant and the averaging period used. |
... |
Additional parameters passed to |
Author(s)
David Carslaw
Examples
# rolling 8-hour mean for ozone
mydata <- rollingMean(mydata,
pollutant = "o3", width = 8, new.name =
"rollingo3", data.thresh = 75, align = "right"
)
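# the same pattern applies to other limit-relevant averages, e.g. a
# centre-aligned 24-hour rolling mean for PM10 (the new column name
# "rolling24pm10" is purely illustrative)
mydata <- rollingMean(mydata,
  pollutant = "pm10", width = 24,
  new.name = "rolling24pm10", data.thresh = 75, align = "centre"
)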
Rolling regression for pollutant source characterisation.
Description
This function calculates rolling regressions for input data with a set window width. The principal use of the function is to identify "dilution lines" where the ratio between two pollutant concentrations is invariant. The original idea is based on the work of Bentley (2004).
Usage
runRegression(mydata, x = "nox", y = "pm10", run.len = 3, date.pad = TRUE)
Arguments
mydata |
A data frame with columns for |
x |
The column name of the |
y |
The column name of the |
run.len |
The window width to be used for a rolling regression. A value of 3 for example for hourly data will consider 3 one-hour time sequences. |
date.pad |
Should gaps in time series be filled before calculations are made? |
Details
The intended use is to apply the approach to air pollution data to extract consecutive points in time where the ratio between two pollutant concentrations changes by very little. By filtering the output for high R2 values (typically more than 0.90 to 0.95), conditions where local source dilution is dominant can be isolated for post processing. The function is more fully described and used in the openair online manual, together with examples.
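As a sketch, filtering the output for near-invariant ratios might look as follows; the R-squared column name used (r_squared) is an assumption, so check names(output) against your version of openair:

```r
# rolling regression over three months of the example data
output <- runRegression(selectByDate(mydata, year = 2004, month = 1:3),
  x = "nox", y = "pm10", run.len = 3
)

# keep windows where the nox/pm10 ratio is near-invariant (high R2);
# NOTE: the column name "r_squared" is assumed - check names(output)
dilution <- subset(output, r_squared > 0.95)
```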
Value
A tibble with date and calculated regression coefficients and other information to plot dilution lines.
References
For original inspiration:
Bentley, S. T. (2004). Graphical techniques for constraining estimates of aerosol emissions from motor vehicles using air monitoring network data. Atmospheric Environment,(10), 1491–1500. https://doi.org/10.1016/j.atmosenv.2003.11.033
Example for vehicle emissions high time resolution data:
Farren, N. J., Schmidt, C., Juchem, H., Pöhler, D., Wilde, S. E., Wagner, R. L., Wilson, S., Shaw, M. D., & Carslaw, D. C. (2023). Emission ratio determination from road vehicles using a range of remote emission sensing techniques. Science of The Total Environment, 875. https://doi.org/10.1016/j.scitotenv.2023.162621.
See Also
Other time series and trend functions:
TheilSen(), calendarPlot(), smoothTrend(), timePlot(), timeProp(), timeVariation(), trendLevel()
Examples
# Just use part of a year of data
output <- runRegression(selectByDate(mydata, year = 2004, month = 1:3),
x = "nox", y = "pm10", run.len = 3
)
output
Flexible scatter plots
Description
Scatter plots with conditioning and three main approaches: a conventional scatter plot, hexagonal binning and kernel density estimates. The first of these also has options for fitting smooth fits and linear models with uncertainties shown.
Usage
scatterPlot(
mydata,
x = "nox",
y = "no2",
z = NA,
method = "scatter",
group = NA,
avg.time = "default",
data.thresh = 0,
statistic = "mean",
percentile = NA,
type = "default",
smooth = FALSE,
spline = FALSE,
linear = FALSE,
ci = TRUE,
mod.line = FALSE,
cols = "hue",
plot.type = "p",
key = TRUE,
key.title = group,
key.columns = 1,
key.position = "right",
strip = TRUE,
log.x = FALSE,
log.y = FALSE,
x.inc = NULL,
y.inc = NULL,
limits = NULL,
windflow = NULL,
y.relation = "same",
x.relation = "same",
ref.x = NULL,
ref.y = NULL,
k = NA,
dist = 0.02,
map = FALSE,
auto.text = TRUE,
plot = TRUE,
...
)
Arguments
mydata |
A data frame containing at least two numeric variables to plot. |
x |
Name of the x-variable to plot. Note that x can be a date field or a
factor. For example, |
y |
Name of the numeric y-variable to plot. |
z |
Name of the numeric z-variable to plot for |
method |
Methods include “scatter” (conventional scatter plot),
“hexbin” (hexagonal binning using the |
group |
The grouping variable to use, if any. Setting this to a variable in the data frame has the effect of plotting several series in the same panel using different symbols/colours etc. If set to a variable that is a character or factor, those categories or factor levels will be used directly. If set to a numeric variable, it will split that variable in to quantiles. |
avg.time |
This defines the time period to average to. Can be
“sec”, “min”, “hour”, “day”, “DSTday”,
“week”, “month”, “quarter” or “year”. For much
increased flexibility a number can precede these options followed by a
space. For example, a timeAverage of 2 months would be |
data.thresh |
The data capture threshold to use (%) when aggregating
the data using |
statistic |
The statistic to apply when aggregating the data; default is
the mean. Can be one of "mean", "max", "min", "median", "frequency", "sd",
"percentile". Note that "sd" is the standard deviation and "frequency" is
the number (frequency) of valid records in the period. "percentile" is the
percentile level (%) between 0-100, which can be set using the
"percentile" option - see below. Not used if |
percentile |
The percentile level in percent used when |
type |
Determines how the data are split and then plotted. It is also possible to choose type as another variable in the data frame. Type can be up to length two e.g. |
smooth |
A smooth line is fitted to the data if |
spline |
A smooth spline is fitted to the data if |
linear |
A linear model is fitted to the data if |
ci |
Should the confidence intervals for the smooth/linear fit be shown? |
mod.line |
If |
cols |
Colours to be used for plotting. Options include
“default”, “increment”, “heat”, “jet” and
|
plot.type |
|
key |
Should a key be drawn? The default is |
key.title |
The title of the key (if used). |
key.columns |
Number of columns to be used in the key. With many
pollutants a single column can make the key too wide. The user can thus
choose to use several columns by setting |
key.position |
Location where the scale key is to be plotted. Allowed arguments currently include “top”, “right”, “bottom” and “left”. |
strip |
Should a strip be drawn? The default is |
log.x |
Should the x-axis appear on a log scale? The default is
|
log.y |
Should the y-axis appear on a log scale? The default is
|
x.inc |
The x-interval to be used for binning data when |
y.inc |
The y-interval to be used for binning data when |
limits |
For |
windflow |
This option allows a scatter plot to show the wind
speed/direction as an arrow. The option is a list e.g. The maximum length of the arrow plotted is a fraction of the plot dimension
with the longest arrow being This option works best where there are not too many data to ensure over-plotting does not become a problem. |
y.relation |
This determines how the y-axis scale is plotted. “same” ensures all panels use the same scale and “free” will use panel-specific scales. The latter is a useful setting when plotting data with very different values. |
x.relation |
This determines how the x-axis scale is plotted. “same” ensures all panels use the same scale and “free” will use panel-specific scales. The latter is a useful setting when plotting data with very different values. |
ref.x |
See |
ref.y |
A list with details of the horizontal lines to be added
representing reference line(s). For example, |
k |
Smoothing parameter supplied to |
dist |
When plotting smooth surfaces ( |
map |
Should a base map be drawn? This option is under development. |
auto.text |
Either |
plot |
Should a plot be produced? |
... |
Other graphical parameters are passed onto For |
Details
scatterPlot() is the basic function for plotting scatter plots in flexible ways in openair. It is flexible enough to consider lots of conditioning variables and takes care of fitting smooth or linear relationships to the data.
There are four main ways of plotting the relationship between two variables, which are set using the method option. The default "scatter" will plot a conventional scatter plot. In cases where there are lots of data and over-plotting becomes a problem, then method = "hexbin" or method = "density" can be useful. The former requires the hexbin package to be installed.
There is also a method = "level" which will bin the x and y data according to the intervals set for x.inc and y.inc and colour the bins according to levels of a third variable, z. Sometimes however, a far better understanding of the relationship between three variables (x, y and z) is gained by fitting a smooth surface through the data. See examples below.
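A minimal sketch of the binned method = "level" use described above; the x.inc and y.inc values are illustrative and should be chosen to suit the data range:

```r
# bin nox (x) and no2 (y) and colour each bin by the level of co (z)
# x.inc/y.inc intervals here are illustrative choices
scatterPlot(mydata,
  x = "nox", y = "no2", z = "co",
  method = "level", x.inc = 10, y.inc = 2
)
```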
A smooth fit is shown if smooth = TRUE
which can help show the overall
form of the data e.g. whether the relationship appears to be linear or not.
Also, a linear fit can be shown using linear = TRUE
as an option.
The user has fine control over the choice of colours and symbol type used.
Another way of reducing the number of points used in the plots, which can
sometimes be useful, is to aggregate the data. For example, hourly data can be
aggregated to daily data. See timePlot()
for examples here.
By default plots are shown with a colour key at the bottom and in the case of
conditioning, strips on the top of each plot. Sometimes this may be overkill
and the user can opt to remove the key and/or the strip by setting key
and/or strip
to FALSE
. One reason to do this is to maximise the
plotting area and therefore the information shown.
Value
an openair object
Author(s)
David Carslaw
See Also
linearRelation()
, timePlot()
and
timeAverage()
for details on selecting averaging times and other
statistics in a flexible way
Examples
# load openair data if not loaded already
dat2004 <- selectByDate(mydata, year = 2004)
# basic use, single pollutant
scatterPlot(dat2004, x = "nox", y = "no2")
## Not run:
# scatterPlot by year
scatterPlot(mydata, x = "nox", y = "no2", type = "year")
## End(Not run)
# scatterPlot by day of the week, removing key at bottom
scatterPlot(dat2004,
x = "nox", y = "no2", type = "weekday", key =
FALSE
)
# example of the use of continuous where colour is used to show
# different levels of a third (numeric) variable
# plot daily averages and choose a filled plot symbol (pch = 16)
# select only 2004
## Not run:
scatterPlot(dat2004, x = "nox", y = "no2", z = "co", avg.time = "day", pch = 16)
# show linear fit, by year
scatterPlot(mydata,
x = "nox", y = "no2", type = "year", smooth =
FALSE, linear = TRUE
)
# do the same, but for daily means...
scatterPlot(mydata,
x = "nox", y = "no2", type = "year", smooth =
FALSE, linear = TRUE, avg.time = "day"
)
# log scales
scatterPlot(mydata,
x = "nox", y = "no2", type = "year", smooth =
FALSE, linear = TRUE, avg.time = "day", log.x = TRUE, log.y = TRUE
)
# also works with the x-axis in date format (alternative to timePlot)
scatterPlot(mydata,
x = "date", y = "no2", avg.time = "month",
key = FALSE
)
## multiple types and grouping variable and continuous colour scale
scatterPlot(mydata, x = "nox", y = "no2", z = "o3", type = c("season", "weekend"))
# use hexagonal binning
library(hexbin)
# basic use, single pollutant
scatterPlot(mydata, x = "nox", y = "no2", method = "hexbin")
# scatterPlot by year
scatterPlot(mydata,
x = "nox", y = "no2", type = "year", method =
"hexbin"
)
## bin data and plot it - can see how for high NO2, O3 is also high
scatterPlot(mydata, x = "nox", y = "no2", z = "o3", method = "level", dist = 0.02)
## fit surface for clearer view of relationship - clear effect of
## increased O3
scatterPlot(mydata,
x = "nox", y = "no2", z = "o3", method = "level",
x.inc = 10, y.inc = 2, smooth = TRUE
)
## End(Not run)
Subset a data frame based on date
Description
Utility function to filter a data frame by a date range or specific date periods (month, year, etc.). All options are applied in turn, meaning this function can be used to select quite complex dates simply.
Usage
selectByDate(
mydata,
start = "1/1/2008",
end = "31/12/2008",
year = 2008,
month = 1,
day = "weekday",
hour = 1
)
Arguments
mydata |
A data frame containing a |
start |
A start date or date-time string in the form d/m/yyyy, m/d/yyyy, d/m/yyyy HH:MM, m/d/yyyy HH:MM, d/m/yyyy HH:MM:SS or m/d/yyyy HH:MM:SS. |
end |
See |
year |
A year or years to select e.g. |
month |
A month or months to select. Can either be numeric e.g. |
day |
A day name or days to select. |
hour |
An hour or hours to select from 0-23 e.g. |
Author(s)
David Carslaw
Examples
## select all of 1999
data.1999 <- selectByDate(mydata, start = "1/1/1999", end = "31/12/1999 23:00")
head(data.1999)
tail(data.1999)
# or...
data.1999 <- selectByDate(mydata, start = "1999-01-01", end = "1999-12-31 23:00")
# easier way
data.1999 <- selectByDate(mydata, year = 1999)
# more complex use: select weekdays between the hours of 7 am to 7 pm
sub.data <- selectByDate(mydata, day = "weekday", hour = 7:19)
# select weekends between the hours of 7 am to 7 pm in winter (Dec, Jan, Feb)
sub.data <- selectByDate(mydata,
day = "weekend", hour = 7:19, month =
c("dec", "jan", "feb")
)
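The "applied in turn" behaviour can be sketched in base R as successive logical filters on the date column — hypothetical dates, not the selectByDate() implementation:

```r
# one week of hourly date-times starting on a Wednesday
dates <- seq(as.POSIXct("2003-01-01", tz = "GMT"),
             by = "hour", length.out = 24 * 7)

keep <- !(format(dates, "%u") %in% c("6", "7"))           # weekdays only
keep <- keep & as.integer(format(dates, "%H")) %in% 7:19  # hours 7 to 19

sum(keep) # 5 weekdays x 13 hours = 65 rows selected
```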
Function to extract run lengths greater than a threshold
Description
This is a utility function to extract runs of values above a certain threshold. For example, for a data frame of hourly NOx values we would like to extract all those hours where the concentration is at least 500 for contiguous periods of 5 or more hours.
Usage
selectRunning(
mydata,
pollutant = "nox",
criterion = ">",
run.len = 5L,
threshold = 500,
type = "default",
name = "criterion",
result = c("yes", "no"),
mode = c("flag", "filter"),
...
)
Arguments
mydata |
A data frame with a |
pollutant |
Name of variable to process. |
criterion |
Condition to select run lengths e.g. |
run.len |
Run length for extracting contiguous values of |
threshold |
The threshold value for |
type |
Used for splitting the data further. Passed to |
name |
The name of the column to be appended to the data frame when
|
result |
A vector of length 2, defining how to label the run lengths
when |
mode |
Changes how the function behaves. When |
... |
Additional parameters passed to |
Details
This function is useful, for example, for selecting pollution episodes from a
data frame where concentrations remain elevated for a certain period of time.
It may also be of more general use when analysing air pollution and
atmospheric composition data. For example, selectRunning()
could be used to
extract continuous periods of rainfall — which could be important for
particle concentrations.
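The underlying run-length idea can be sketched in base R with rle() — a simplified illustration with made-up values, not the selectRunning() code:

```r
nox <- c(100, 600, 550, 700, 520, 510, 100, 650)
above <- nox > 500 # apply the criterion/threshold

# keep only contiguous runs of TRUE at least run.len (here 5) long
runs <- rle(above)
keep <- rep(runs$values & runs$lengths >= 5, runs$lengths)
which(keep) # hours 2 to 6 form a qualifying episode; hour 8 does not
```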
Value
A data frame
Author(s)
David Carslaw
Examples
# extract those hours where there are at least 5 consecutive NOx
# concentrations above 500 units
mydata <- selectRunning(mydata, run.len = 5, threshold = 500)
# make a polar plot of those conditions, which shows that those
# conditions are dominated by low wind speeds, not
# in-canyon recirculation
## Not run:
polarPlot(mydata, pollutant = "nox", type = "criterion")
## End(Not run)
Calculate nonparametric smooth trends
Description
Use non-parametric methods to calculate time series trends
Usage
smoothTrend(
mydata,
pollutant = "nox",
deseason = FALSE,
type = "default",
statistic = "mean",
avg.time = "month",
percentile = NA,
data.thresh = 0,
simulate = FALSE,
n = 200,
autocor = FALSE,
cols = "brewer1",
shade = "grey95",
xlab = "year",
y.relation = "same",
ref.x = NULL,
ref.y = NULL,
key.columns = length(percentile),
name.pol = pollutant,
ci = TRUE,
alpha = 0.2,
date.breaks = 7,
auto.text = TRUE,
k = NULL,
plot = TRUE,
...
)
Arguments
mydata |
A data frame containing the field |
pollutant |
The parameter for which a trend test is required. Mandatory. |
deseason |
Should the data be deseasonalised first? If |
type |
It is also possible to choose Type can be up to length two e.g. |
statistic |
Statistic used for calculating monthly values. Default is
“mean”, but can also be “percentile”. See |
avg.time |
Can be “month” (the default), “season” or “year”. Determines the time over which data should be averaged. Note that for “year”, six or more years are required. For “season” the data are split up into spring: March, April, May etc. Note that December is considered as belonging to winter of the following year. |
percentile |
Percentile value(s) to use if |
data.thresh |
The data capture threshold to use (%) when aggregating the
data using |
simulate |
Should simulations be carried out to determine the
Mann-Kendall tau and p-value? The default is |
n |
Number of bootstrap simulations if |
autocor |
Should autocorrelation be considered in the trend uncertainty
estimates? The default is |
cols |
Colours to use. Can be a vector of colours e.g. |
shade |
The colour used for marking alternate years. Use “white” or “transparent” to remove shading. |
xlab |
x-axis label, by default “year”. |
y.relation |
This determines how the y-axis scale is plotted. "same"
ensures all panels use the same scale and "free" will use panel-specific
scales. The latter is a useful setting when plotting data with very
different values. |
ref.x |
See |
ref.y |
A list with details of the horizontal lines to be added
representing reference line(s). For example, |
key.columns |
Number of columns used if a key is drawn when using the
option |
name.pol |
Names to be given to the pollutant(s). This is useful if you want to give a fuller description of the variables, maybe also including subscripts etc. |
ci |
Should confidence intervals be plotted? The default is |
alpha |
The alpha transparency of shaded confidence intervals - if plotted. A value of 0 is fully transparent and 1 is fully opaque. |
date.breaks |
Number of major x-axis intervals to use. The function will
try and choose a sensible number of dates/times as well as formatting the
date/time appropriately to the range being considered. This does not always
work as desired automatically. The user can therefore increase or decrease
the number of intervals by adjusting the value of |
auto.text |
Either |
k |
This is the smoothing parameter used by the |
plot |
Should a plot be produced? |
... |
Other graphical parameters are passed onto |
Details
The smoothTrend()
function provides a flexible way of estimating the trend
in the concentration of a pollutant or other variable. Monthly mean values
are calculated from an hourly (or higher resolution) or daily time series.
There is the option to deseasonalise the data if there is evidence of a
seasonal cycle.
smoothTrend()
uses a Generalized Additive Model (GAM), fitted with
mgcv::gam()
from the mgcv package, to find the most appropriate level of smoothing. The
function is particularly suited to situations where trends are not monotonic
(see discussion with TheilSen()
for more details on this). The
smoothTrend()
function is particularly useful as an exploratory technique
e.g. to check how linear or non-linear trends are.
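The core fitting step can be sketched directly with mgcv (an openair dependency) on simulated monthly means — a minimal illustration, not the smoothTrend() code:

```r
library(mgcv)

set.seed(42)
month <- 1:120 # ten years of monthly means
conc <- 50 - 0.1 * month + 5 * sin(2 * pi * month / 12) + rnorm(120, sd = 2)

# penalised regression spline; the smoothness is chosen automatically
fit <- gam(conc ~ s(month))
summary(fit)$s.table # inspect the estimated degrees of freedom
```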
95% confidence intervals are shown by shading. Bootstrap estimates of the
confidence intervals are also available through the simulate
option.
Residual resampling is used.
Trends can be considered in a very wide range of ways, controlled by setting
type
- see examples below.
Value
an openair object
Author(s)
David Carslaw
See Also
Other time series and trend functions:
TheilSen()
,
calendarPlot()
,
runRegression()
,
timePlot()
,
timeProp()
,
timeVariation()
,
trendLevel()
Examples
# trend plot for nox
smoothTrend(mydata, pollutant = "nox")
# trend plot by each of 8 wind sectors
## Not run:
smoothTrend(mydata, pollutant = "o3", type = "wd", ylab = "o3 (ppb)")
## End(Not run)
# several pollutants, no plotting symbol
## Not run:
smoothTrend(mydata, pollutant = c("no2", "o3", "pm10", "pm25"), pch = NA)
## End(Not run)
# percentiles
## Not run:
smoothTrend(mydata,
pollutant = "o3", statistic = "percentile",
percentile = 95
)
## End(Not run)
# several percentiles with control over lines used
## Not run:
smoothTrend(mydata,
pollutant = "o3", statistic = "percentile",
percentile = c(5, 50, 95), lwd = c(1, 2, 1), lty = c(5, 1, 5)
)
## End(Not run)
Divide up a data frame by time
Description
This function partitions a data frame up into different time segments. It
produces a new column, whose name is set by the name argument,
that can be used in many
openair
functions. Note that there must be one more label
than there are
dates
.
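The partitioning amounts to cutting the date axis at the supplied break points, which can be sketched in base R with findInterval() — hypothetical dates, not the splitByDate() internals:

```r
breaks <- as.Date(c("2000-01-01", "2003-03-01"))
labels <- c("before", "during", "after") # one more label than break
x <- as.Date(c("1999-06-01", "2001-06-01", "2004-06-01"))

# findInterval() returns 0, 1, 2, ... counting breaks passed
labels[findInterval(x, breaks) + 1]
# "before" "during" "after"
```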
Usage
splitByDate(
mydata,
dates = "1/1/2003",
labels = c("before", "after"),
name = "split.by",
format = c("%d/%m/%Y", "%Y/%m/%d", "%d/%m/%Y %H:%M:%S",
"%Y/%m/%d %H:%M:%S")
)
Arguments
mydata |
A data frame containing a |
dates |
A date or dates to split data by. Can be passed as R date(time)
objects or as characters. If passed as a character, |
labels |
Labels for each time partition. Should always be one more
|
name |
The name to give the new column to identify the periods split.
Defaults to |
format |
When |
Author(s)
David Carslaw
Examples
# split data up into "before" and "after"
mydata <- splitByDate(mydata,
dates = "1/04/2000",
labels = c("before", "after")
)
# split data into 3 partitions
mydata <- splitByDate(mydata,
dates = c("1/1/2000", "1/3/2003"),
labels = c("before", "during", "after")
)
# if you have modelled data - could split into modelled and measured by the
# break date
dummy <- data.frame(
date = Sys.Date() + (-5:5),
nox = 100 + seq(-50, 50, 10)
)
splitByDate(dummy,
dates = Sys.Date(),
labels = c("measured", "modelled"),
name = "data_type"
)
Function to rapidly provide an overview of air quality data
Description
This function provides a quick graphical and numerical summary of data. The
presence/absence of data are shown, with summary statistics and
plots of variable distributions. summaryPlot()
can also provide summaries
of a single pollutant across many sites.
Usage
summaryPlot(
mydata,
na.len = 24,
clip = TRUE,
percentile = 0.99,
type = "histogram",
pollutant = "nox",
period = "years",
avg.time = "day",
print.datacap = TRUE,
breaks = NULL,
plot.type = "l",
col.trend = "darkgoldenrod2",
col.data = "lightblue",
col.mis = "#A60A12",
col.hist = "forestgreen",
cols = NULL,
date.breaks = 7,
auto.text = TRUE,
plot = TRUE,
debug = FALSE,
...
)
Arguments
mydata |
A data frame to be summarised. Must contain a |
na.len |
Missing data are only shown with at least |
clip |
When data contain outliers, the histogram or density plot can
fail to show the distribution of the main body of data. Setting |
percentile |
This is used to clip the data. For example, |
type |
|
pollutant |
|
period |
|
avg.time |
This defines the time period to average the time series
plots. Can be "sec", "min", "hour", "day" (the default), "week", "month",
"quarter" or "year". For much increased flexibility a number can precede
these options followed by a space. For example, a |
print.datacap |
Should the data capture % be shown for each period? |
breaks |
Number of histogram bins. Sometimes useful, but it is not easy to set a single value suitable for a range of very different variables. |
plot.type |
The |
col.trend |
Colour to be used to show the monthly trend of the data,
shown as a shaded region. Type |
col.data |
Colour to be used to show the presence of data. Type
|
col.mis |
Colour to be used to show missing data. |
col.hist |
Colour for the histogram or density plot. |
cols |
Predefined colour scheme, currently only enabled for
|
date.breaks |
Number of major x-axis intervals to use. The function will
try and choose a sensible number of dates/times as well as formatting the
date/time appropriately to the range being considered. This does not
always work as desired automatically. The user can therefore increase or
decrease the number of intervals by adjusting the value of |
auto.text |
Either |
plot |
Should a plot be produced? |
debug |
Should data types be printed to the console? |
... |
Other graphical parameters. Commonly used examples include the
axis and title labelling options (such as |
Details
summaryPlot()
produces two panels of plots: one showing the
presence/absence of data and the other the distributions. The left panel
shows time series and codes the presence or absence of data in different
colours. By stacking the plots one on top of another it is easy to compare
different pollutants/variables. Overall statistics are given for each
variable: mean, maximum, minimum, missing hours (also expressed as a
percentage), median and the 95th percentile. For each year the data capture
rate (expressed as a percentage of hours in that year) is also given.
The right panel shows either a histogram or a density plot depending on the
choice of type
. Density plots avoid the issue of arbitrary bin sizes that
can sometimes provide a misleading view of the data distribution. Density
plots are often more appropriate, but their effectiveness will depend on the
data in question.
summaryPlot()
will only show data that are numeric or integer type. This is
useful for checking that data have been imported properly. For example, if
for some reason a column representing wind speed erroneously had one or more
fields with characters in, the whole column would be either character or
factor type. The absence of a wind speed variable in the summaryPlot()
plot
would therefore indicate a problem with the input data. In this particular
case, the user should remove the offending characters, either in the source
data itself or using R functions.
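A quick base-R check along the same lines, before plotting (illustrative data only):

```r
df <- data.frame(
  date = Sys.time() + (1:3) * 3600,
  ws = c("3.4", "calm", "2.1"), # stray text makes the whole column character
  nox = c(40, 55, 60)
)
sapply(df, is.numeric) # ws is FALSE, so summaryPlot() would not show it
```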
If there is a field site
, which would generally mean there is more than one
site, summaryPlot()
will provide information on a
single pollutant across all sites, rather than provide details on all
pollutants at a single site. In this case the user should also provide a
name of a pollutant e.g. pollutant = "nox"
. If a pollutant is not provided
the first numeric field will automatically be chosen.
It is strongly recommended that the summaryPlot()
function is
applied to all new imported data sets to ensure the data are imported as
expected.
Author(s)
David Carslaw
Examples
# do not clip density plot data
## Not run:
summaryPlot(mydata, clip = FALSE)
## End(Not run)
# exclude highest 5 % of data etc.
## Not run:
summaryPlot(mydata, percentile = 0.95)
## End(Not run)
# show missing data where there are at least 96 contiguous missing
# values (4 days)
## Not run:
summaryPlot(mydata, na.len = 96)
## End(Not run)
# show data in green
## Not run:
summaryPlot(mydata, col.data = "green")
## End(Not run)
# show missing data in yellow
## Not run:
summaryPlot(mydata, col.mis = "yellow")
## End(Not run)
# show density plot line in black
## Not run:
summaryPlot(mydata, col.dens = "black")
## End(Not run)
Function to calculate time averages for data frames
Description
Function to flexibly aggregate or expand data frames by different time periods, calculating vector-averaged wind direction where appropriate. The averaged periods can also take account of data capture rates.
Usage
timeAverage(
mydata,
avg.time = "day",
data.thresh = 0,
statistic = "mean",
type = "default",
percentile = NA,
start.date = NA,
end.date = NA,
interval = NA,
vector.ws = FALSE,
fill = FALSE,
progress = TRUE,
...
)
Arguments
mydata |
A data frame containing a |
avg.time |
This defines the time period to average to. Can be Note that |
data.thresh |
The data capture threshold to use (%). A value of zero
means that all available data will be used in a particular period
regardless of the number of values available. Conversely, a value of 100
will mean that all data will need to be present for the average to be
calculated, else it is recorded as |
statistic |
The statistic to apply when aggregating the data; default is
the mean. Can be one of |
type |
|
percentile |
The percentile level used when |
start.date |
A string giving a start date to use. This is sometimes
useful if a time series starts between obvious intervals. For example, for
a 1-minute time series that starts |
end.date |
A string giving an end date to use. This is sometimes useful
to make sure a time series extends to a known end point and is useful when
|
interval |
The This option can sometimes be useful with |
vector.ws |
Should vector averaging be carried out on wind speed if
available? The default is |
fill |
When time series are expanded, i.e., when a time interval is less
than the original time series, data are 'padded out' with |
progress |
Show a progress bar when many groups make up |
... |
Additional arguments for other functions calling |
Details
This function calculates time averages for a data frame. It also treats wind direction correctly through vector-averaging. For example, the average of 350 degrees and 10 degrees is either 0 or 360 - not 180. The calculations therefore average the wind components.
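The wind-direction treatment can be sketched in base R: decompose each direction into components, average those, and recombine — a simplified illustration of the idea, not the timeAverage() code:

```r
wd <- c(350, 10) # the arithmetic mean would misleadingly give 180
u <- mean(sin(wd * pi / 180))
v <- mean(cos(wd * pi / 180))
(atan2(u, v) * 180 / pi) %% 360 # effectively 0/360, i.e. north
```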
When a data capture threshold is set through data.thresh
it is necessary
for timeAverage()
to know what the original time interval of the input time
series is. The function will try and calculate this interval based on the
most common time gap (and will print the assumed time gap to the screen).
This works fine most of the time but there are occasions where it may not
e.g. when very few data exist in a data frame or the data are monthly (i.e.
non-regular time interval between months). In this case the user can
explicitly specify the interval through interval
in the same format as
avg.time
e.g. interval = "month"
. It may also be useful to set
start.date
and end.date
if the time series do not span the entire period
of interest. For example, if a time series ended in October and annual means
are required, setting end.date
to the end of the year will ensure that the
whole period is covered and that data.thresh
is correctly calculated. The
same also goes for a time series that starts later in the year where
start.date
should be set to the beginning of the year.
timeAverage()
should be useful in many circumstances where it is necessary
to work with different time average data. For example, hourly air pollution
data and 15-minute meteorological data. To merge the two data sets
timeAverage()
can be used to make the meteorological data 1-hour means
first. Alternatively, timeAverage()
can be used to expand the hourly data
to 15 minute data - see example below.
For the research community timeAverage()
should be useful for dealing with
outputs from instruments where there are a range of time periods used.
It is also very useful for plotting data using timePlot()
. Often the data
are too dense to see patterns and setting different averaging periods easily
helps with interpretation.
Value
Returns a data frame with date in class POSIXct
.
Author(s)
David Carslaw
See Also
timePlot()
that plots time series data and uses timeAverage()
to
aggregate data where necessary.
calcPercentile()
that wraps timeAverage()
to allow multiple
percentiles to be calculated at once.
Examples
# daily average values
daily <- timeAverage(mydata, avg.time = "day")
# daily average values ensuring at least 75 % data capture
# i.e., at least 18 valid hours
## Not run:
daily <- timeAverage(mydata, avg.time = "day", data.thresh = 75)
## End(Not run)
# 2-weekly averages
## Not run:
fortnight <- timeAverage(mydata, avg.time = "2 week")
## End(Not run)
# make a 15-minute time series from an hourly one
## Not run:
min15 <- timeAverage(mydata, avg.time = "15 min", fill = TRUE)
## End(Not run)
# average by grouping variable
## Not run:
dat <- importAURN(c("kc1", "my1"), year = 2011:2013)
timeAverage(dat, avg.time = "year", type = "site")
# can also retain site code
timeAverage(dat, avg.time = "year", type = c("site", "code"))
# or just average all the data, dropping site/code
timeAverage(dat, avg.time = "year")
## End(Not run)
Plot time series
Description
Plot time series quickly, perhaps for multiple pollutants, grouped or in separate panels.
Usage
timePlot(
mydata,
pollutant = "nox",
group = FALSE,
stack = FALSE,
normalise = NULL,
avg.time = "default",
data.thresh = 0,
statistic = "mean",
percentile = NA,
date.pad = FALSE,
type = "default",
cols = "brewer1",
plot.type = "l",
key = TRUE,
log = FALSE,
windflow = NULL,
smooth = FALSE,
ci = TRUE,
y.relation = "same",
ref.x = NULL,
ref.y = NULL,
key.columns = 1,
key.position = "bottom",
name.pol = pollutant,
date.breaks = 7,
date.format = NULL,
auto.text = TRUE,
plot = TRUE,
...
)
Arguments
mydata |
A data frame of time series. Must include a |
pollutant |
Name of variable to plot. Two or more pollutants can be
plotted, in which case a form like |
group |
If more than one pollutant is chosen, should they all be plotted
on the same graph together? The default is |
stack |
If |
normalise |
Should variables be normalised? The default is not to
normalise the data. |
avg.time |
This defines the time period to average to. Can be
“sec”, “min”, “hour”, “day”, “DSTday”,
“week”, “month”, “quarter” or “year”. For much
increased flexibility a number can precede these options followed by a
space. For example, a timeAverage of 2 months would be |
data.thresh |
The data capture threshold to use when aggregating the
data using |
statistic |
The statistic to apply when aggregating the data; default is
the mean. Can be one of “mean”, “max”, “min”,
“median”, “frequency”, “sd”, “percentile”. Note
that “sd” is the standard deviation and “frequency” is the
number (frequency) of valid records in the period. “percentile” is
the percentile level between 0-100, which can be set using the
“percentile” option - see below. Not used if |
percentile |
The percentile level in percent used when |
date.pad |
Should missing data be padded-out? This is useful where a
data frame consists of two or more "chunks" of data with time gaps between
them. By setting |
type |
It is also possible to choose Only one |
cols |
Colours to be used for plotting. Options include
“default”, “increment”, “heat”, “jet” and
|
plot.type |
The |
key |
Should a key be drawn? The default is |
log |
Should the y-axis appear on a log scale? The default is
|
windflow |
This option allows a scatter plot to show the wind
speed/direction as an arrow. The option is a list e.g. The maximum length of the arrow plotted is a fraction of the plot dimension
with the longest arrow being This option works best where there are not too many data to ensure over-plotting does not become a problem. |
smooth |
Should a smooth line be applied to the data? The default is
|
ci |
If a smooth fit line is applied, then |
y.relation |
This determines how the y-axis scale is plotted. "same" ensures all panels use the same scale and "free" will use panel-specific scales. The latter is a useful setting when plotting data with very different values. |
ref.x |
See |
ref.y |
A list with details of the horizontal lines to be added
representing reference line(s). For example, |
key.columns |
Number of columns to be used in the key. With many
pollutants a single column can make the key too wide. The user can thus
choose to use several columns by setting |
key.position |
Location where the scale key is to be plotted. Can include “top”, “bottom”, “right” and “left”. |
name.pol |
This option can be used to give alternative names for the
variables plotted. Instead of taking the column headings as names, the user
can supply replacements. For example, if a column had the name “nox”
and the user wanted a different description, then setting |
date.breaks |
Number of major x-axis intervals to use. The function will
try and choose a sensible number of dates/times as well as formatting the
date/time appropriately to the range being considered. This does not always
work as desired automatically. The user can therefore increase or decrease
the number of intervals by adjusting the value of |
date.format |
This option controls the date format on the x-axis. While
|
auto.text |
Either |
plot |
Should a plot be produced? |
... |
Other graphical parameters are passed onto |
Details
timePlot()
is the basic time series plotting function in
openair
. Its purpose is to make it quick and easy to plot time series
for pollutants and other variables. The other purpose is to plot potentially
many variables together in as compact a way as possible.
The function is flexible enough to plot more than one variable at once. If
more than one variable is chosen, the plot can either show all variables on
the same plot (with different line types) on the same scale, or (if
group = FALSE
) each variable in its own panel with its own scale.
The general preference is not to plot two variables on the same graph with
two different y-scales. It can be misleading to do so and difficult with more
than two variables. If there is an interest in plotting several variables
together that have very different scales, then it can be useful to normalise
the data first, which can be done by setting the normalise
option.
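Normalising in this sense is just indexing each series to its value at a reference point, so series on very different scales become comparable — a sketch of the idea with made-up numbers, not the timePlot() code:

```r
nox <- c(80, 96, 72, 64)   # two series on different scales
pm10 <- c(20, 26, 18, 15)

100 * nox / nox[1]  # index relative to the first point: 100 120 90 80
100 * pm10 / pm10[1] # 100 130 90 75 - now directly comparable
```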
The user has fine control over the choice of colours, line width and line types used. This is useful for example, to emphasise a particular variable with a specific line type/colour/width.
timePlot()
works very well with selectByDate()
, which is used for
selecting particular date ranges quickly and easily. See examples below.
By default plots are shown with a colour key at the bottom and in the case of
multiple pollutants or sites, strips on the left of each plot. Sometimes this
may be overkill and the user can opt to remove the key and/or the strip by
setting key
and/or strip
to FALSE
. One reason to do this
is to maximise the plotting area and therefore the information shown.
Value
an openair object
Author(s)
David Carslaw
See Also
Other time series and trend functions:
TheilSen()
,
calendarPlot()
,
runRegression()
,
smoothTrend()
,
timeProp()
,
timeVariation()
,
trendLevel()
Examples
# basic use, single pollutant
timePlot(mydata, pollutant = "nox")
# two pollutants in separate panels
## Not run:
timePlot(mydata, pollutant = c("nox", "no2"))
## End(Not run)
# two pollutants in the same panel with the same scale
## Not run:
timePlot(mydata, pollutant = c("nox", "no2"), group = TRUE)
## End(Not run)
# alternative by normalising concentrations and plotting on the same scale
## Not run:
timePlot(mydata,
pollutant = c("nox", "co", "pm10", "so2"), group = TRUE, avg.time =
"year", normalise = "1/1/1998", lwd = 3, lty = 1
)
## End(Not run)
# examples of selecting by date
# plot for nox in 1999
## Not run:
timePlot(selectByDate(mydata, year = 1999), pollutant = "nox")
## End(Not run)
# select specific date range for two pollutants
## Not run:
timePlot(selectByDate(mydata, start = "6/8/2003", end = "13/8/2003"),
pollutant = c("no2", "o3")
)
## End(Not run)
# choose different line styles etc
## Not run:
timePlot(mydata, pollutant = c("nox", "no2"), lty = 1)
## End(Not run)
# choose different line styles etc
## Not run:
timePlot(selectByDate(mydata, year = 2004, month = 6),
pollutant =
c("nox", "no2"), lwd = c(1, 2), col = "black"
)
## End(Not run)
# different averaging times
# daily mean O3
## Not run:
timePlot(mydata, pollutant = "o3", avg.time = "day")
## End(Not run)
# daily mean O3 ensuring each day has data capture of at least 75%
## Not run:
timePlot(mydata, pollutant = "o3", avg.time = "day", data.thresh = 75)
## End(Not run)
# 2-week average of O3 concentrations
## Not run:
timePlot(mydata, pollutant = "o3", avg.time = "2 week")
## End(Not run)
Time series plot with categories shown as a stacked bar chart
Description
This function shows time series plots as stacked bar charts. The different
categories in the bar chart are made up from a character or factor variable
in a data frame. The function is primarily developed to support the plotting
of cluster analysis output from polarCluster()
and
trajCluster()
that consider local and regional (back trajectory)
cluster analysis respectively. However, the function has more general use for
understanding time series data.
Usage
timeProp(
mydata,
pollutant = "nox",
proportion = "cluster",
avg.time = "day",
type = "default",
normalise = FALSE,
cols = "Set1",
date.breaks = 7,
date.format = NULL,
key.columns = 1,
key.position = "right",
key.title = proportion,
auto.text = TRUE,
plot = TRUE,
...
)
Arguments
mydata |
A data frame containing the fields |
pollutant |
Name of the pollutant to plot contained in |
proportion |
The splitting variable that makes up the bars in the bar
chart e.g. |
avg.time |
This defines the time period to average to. Can be
“sec”, “min”, “hour”, “day”, “DSTday”,
“week”, “month”, “quarter” or “year”. For much
increased flexibility a number can precede these options followed by a
space. For example, a timeAverage of 2 months would be Note that |
type |
It is also possible to choose
|
normalise |
If |
cols |
Colours to be used for plotting. Options include
“default”, “increment”, “heat”, “jet” and
|
date.breaks |
Number of major x-axis intervals to use. The function will
try and choose a sensible number of dates/times as well as formatting the
date/time appropriately to the range being considered. This does not
always work as desired automatically. The user can therefore increase or
decrease the number of intervals by adjusting the value of
|
date.format |
This option controls the date format on the x-axis. While
|
key.columns |
Number of columns to be used in the key. With many
pollutants a single column can make the key too wide. The user can thus
choose to use several columns by setting |
key.position |
Location where the scale key is to be plotted. Allowed arguments currently include “top”, “right”, “bottom” and “left”. |
key.title |
The title of the key. |
auto.text |
Either |
plot |
Should a plot be produced? |
... |
Other graphical parameters passed onto |
Details
In order to plot time series in this way, some sort of time aggregation is
needed, which is controlled by the option avg.time
.
The plot shows the value of pollutant
on the y-axis (averaged
according to avg.time
). The time intervals are made up of bars split
according to proportion
. The bars therefore show how the total value
of pollutant
is made up for any time interval.
Value
an openair object
Author(s)
David Carslaw
See Also
Other time series and trend functions:
TheilSen()
,
calendarPlot()
,
runRegression()
,
smoothTrend()
,
timePlot()
,
timeVariation()
,
trendLevel()
Other cluster analysis functions:
polarCluster()
,
trajCluster()
Examples
## monthly plot of SO2 showing the contribution by wind sector
timeProp(mydata, pollutant = "so2", avg.time = "month", proportion = "wd")
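A normalised variant can make the relative contribution of each wind sector easier to compare between months, because every bar is scaled to the same height. This is a sketch using the same bundled mydata example dataset:

```r
## as above, but each bar is scaled to 100% so the wind-sector
## proportions can be compared directly between months
timeProp(mydata,
  pollutant = "so2", avg.time = "month",
  proportion = "wd", normalise = TRUE
)
```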
Diurnal, day of the week and monthly variation
Description
Plots the diurnal, day of the week and monthly variation for different variables, typically pollutant concentrations. Four separate plots are produced.
Usage
timeVariation(
mydata,
pollutant = "nox",
local.tz = NULL,
normalise = FALSE,
xlab = c("hour", "hour", "month", "weekday"),
name.pol = pollutant,
type = "default",
group = NULL,
difference = FALSE,
statistic = "mean",
conf.int = 0.95,
B = 100,
ci = TRUE,
cols = "hue",
ref.y = NULL,
key = NULL,
key.columns = 1,
start.day = 1,
panel.gap = 0.2,
auto.text = TRUE,
alpha = 0.4,
month.last = FALSE,
plot = TRUE,
...
)
Arguments
mydata |
A data frame of hourly (or higher temporal resolution data).
Must include a |
pollutant |
Name of variable to plot. Two or more pollutants can be
plotted, in which case a form like |
local.tz |
Should the results be calculated in local time that includes
a treatment of daylight savings time (DST)? The default is not to consider
DST issues, provided the data were imported without a DST offset. Emissions
activity tends to occur at local time e.g. rush hour is at 8 am every day.
When the clocks go forward in spring, the emissions are effectively
released into the atmosphere typically 1 hour earlier during the summertime
i.e. when DST applies. When plotting diurnal profiles, this has the effect
of “smearing-out” the concentrations. Sometimes, a useful approach
is to express time as local time. This correction tends to produce
better-defined diurnal profiles of concentration (or other variables) and
allows a better comparison to be made with emissions/activity data. If set
to |
normalise |
Should variables be normalised? The default is |
xlab |
x-axis label; one for each sub-plot. |
name.pol |
Names to be given to the pollutant(s). This is useful if you want to give a fuller description of the variables, maybe also including subscripts etc. |
type |
It is also possible to choose Only one |
group |
This sets the grouping variable to be used. For example, if a
data frame had a column |
difference |
If two pollutants are chosen then setting |
statistic |
Can be “mean” (default) or “median”. If the
statistic is ‘mean’ then the mean line and the 95% confidence
interval in the mean are plotted by default. If the statistic is
‘median’ then the median line is plotted together with the 5/95th and
25/75th quantiles. Users can control the confidence intervals
with |
conf.int |
The confidence intervals to be plotted. If |
B |
Number of bootstrap replicates to use. Can be useful to reduce this value when there are a large number of observations available to increase the speed of the calculations without affecting the 95% confidence interval calculations by much. |
ci |
Should confidence intervals be shown? The default is |
cols |
Colours to be used for plotting. Options include
“default”, “increment”, “heat”, “jet” and
|
ref.y |
A list with details of the horizontal lines to be added
representing reference line(s). For example, |
key |
By default |
key.columns |
Number of columns to be used in the key. With many
pollutants a single column can make the key too wide. The user can thus
choose to use several columns by setting |
start.day |
What day of the week should the plots start on? The user can
change the start day by supplying an integer between 0 and 6. Sunday = 0,
Monday = 1, ... For example to start the weekday plots on a Saturday,
choose |
panel.gap |
The gap between panels in the hour-day plot. |
auto.text |
Either |
alpha |
The alpha transparency used for plotting confidence intervals. 0 is fully transparent and 1 is opaque. The default is 0.4 |
month.last |
Should the order of the plots be changed so that the plot showing monthly means is the last plot, giving a logical hierarchy of averaging periods? |
plot |
Should a plot be produced? |
... |
Other graphical parameters passed onto |
Details
The variation of pollutant concentrations by hour of the day, day of the week etc. can reveal many interesting features that relate to source types and meteorology. For traffic sources, there are often important differences in the way traffic varies by vehicle type, e.g. fewer heavy vehicles at weekends.
The timeVariation
function makes it easy to see how concentrations
(and many other variable types) vary by hour of the day and day of the week.
The plots also show the 95% confidence intervals in the mean, which are calculated through bootstrap simulations; these provide more robust estimates of the confidence intervals (particularly when there are relatively few data).
The function can handle multiple pollutants and uses the flexible type
option to provide separate panels for each 'type' — see cutData
for
more details. timeVariation
can also accept a group
option
which is useful if data are stacked. This will work in a similar way to
having multiple pollutants in separate columns.
The user can supply their own ylim
e.g. ylim = c(0, 200)
that
will be used for all plots. ylim
can also be a list of length four to
control the y-limits on each individual plot e.g. ylim = list(c(-100,500), c(200, 300), c(-400,400), c(50,70))
. These pairs
correspond to the hour, weekday, month and day-hour plots respectively.
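As a sketch of the two forms of ylim described above (the numeric values here are purely illustrative):

```r
## one ylim applied to all four plots
timeVariation(mydata, pollutant = "nox", ylim = c(0, 200))

## or a list of four pairs, corresponding to the hour, weekday,
## month and day-hour plots respectively
timeVariation(mydata,
  pollutant = "nox",
  ylim = list(c(0, 250), c(0, 250), c(0, 200), c(0, 300))
)
```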
The option difference
will calculate the difference in means of two
pollutants together with bootstrap estimates of the 95% confidence interval
in the difference in the mean. This works in two ways: either two pollutants
are supplied in separate columns e.g. pollutant = c("no2", "o3")
, or
there are two unique values of group
. The difference is calculated as
the second pollutant minus the first and is labelled as such. Considering
differences in this way can provide many useful insights and is particularly
useful for model evaluation when information is needed about where a model
differs from observations at many different time scales. The manual contains
various examples of using difference = TRUE
.
Note also that the timeVariation
function works well on a subset of
data and in conjunction with other plots. For example, a
polarPlot()
may highlight an interesting feature for a particular
wind speed/direction range. By filtering for those conditions
timeVariation
can help determine whether the temporal variation of
that feature differs from other features — and help with source
identification.
In addition, timeVariation
will work well with other variables if
available. Examples include meteorological and traffic flow data.
Depending on the choice of statistic, a subheading is added. Users can
control the text in the subheading through the use of sub
e.g.
sub = ""
will remove any subheading.
Value
an openair object. The four components of
timeVariation are: day.hour
, hour
, day
and
month
. Associated data.frames can be extracted directly using the
subset
option, e.g. as in plot(object, subset = "day.hour")
,
summary(output, subset = "hour")
, etc., for output <- timeVariation(mydata, "nox")
Author(s)
David Carslaw
See Also
Other time series and trend functions:
TheilSen()
,
calendarPlot()
,
runRegression()
,
smoothTrend()
,
timePlot()
,
timeProp()
,
trendLevel()
Examples
# basic use
timeVariation(mydata, pollutant = "nox")
# for a subset of conditions
## Not run:
timeVariation(subset(mydata, ws > 3 & wd > 100 & wd < 270),
pollutant = "pm10", ylab = "pm10 (ug/m3)"
)
## End(Not run)
# multiple pollutants with concentrations normalised
## Not run:
timeVariation(mydata, pollutant = c("nox", "co"), normalise = TRUE)
## End(Not run)
# show BST/GMT variation (see ?cutData for more details)
# the NOx plot shows the profiles are very similar when expressed in
# local time, showing that the profile is dominated by a local source
# that varies by local time and not by GMT i.e. road vehicle emissions
## Not run:
timeVariation(mydata, pollutant = "nox", type = "dst", local.tz = "Europe/London")
## End(Not run)
## In this case it is better to group the results for clarity:
## Not run:
timeVariation(mydata, pollutant = "nox", group = "dst", local.tz = "Europe/London")
## End(Not run)
# By contrast, a variable such as wind speed shows a clear shift when
# expressed in local time. These two plots can help show whether the
# variation is dominated by man-made influences or natural processes
## Not run:
timeVariation(mydata, pollutant = "ws", group = "dst", local.tz = "Europe/London")
## End(Not run)
## It is also possible to plot several variables and set type. For
## example, consider the NOx and NO2 split by levels of O3:
## Not run:
timeVariation(mydata, pollutant = c("nox", "no2"), type = "o3", normalise = TRUE)
## End(Not run)
## difference in concentrations
## Not run:
timeVariation(mydata, poll = c("pm25", "pm10"), difference = TRUE)
## End(Not run)
# It is also useful to consider how concentrations vary by
# considering two different periods e.g. in intervention
# analysis. In the following plot NO2 has clearly increased but much
# less so at weekends - perhaps suggesting vehicles other than cars
# are important because flows of cars are approximately invariant by
# day of the week
## Not run:
mydata <- splitByDate(mydata, dates = "1/1/2003", labels = c("before Jan. 2003", "After Jan. 2003"))
timeVariation(mydata, pollutant = "no2", group = "split.by", difference = TRUE)
## End(Not run)
## sub plots can be extracted from the openair object
## Not run:
myplot <- timeVariation(mydata, pollutant = "no2")
plot(myplot, subset = "day.hour") # top weekday and hour plot
## End(Not run)
## individual plots
## plot(myplot, subset="day.hour") for the weekday and hours subplot (top)
## plot(myplot, subset="hour") for the diurnal plot
## plot(myplot, subset="day") for the weekday plot
## plot(myplot, subset="month") for the monthly plot
## numerical results (mean, lower/upper uncertainties)
## myplot$data$day.hour # the weekday and hour data set
## summary(myplot, subset = "hour") #summary of hour data set
## head(myplot, subset = "day") #head/top of day data set
## tail(myplot, subset = "month") #tail/bottom of month data set
## plot quantiles and median
## Not run:
timeVariation(mydata, stati = "median", poll = "pm10", col = "firebrick")
## with different intervals
timeVariation(mydata,
stati = "median", poll = "pm10", conf.int = c(0.75, 0.99),
col = "firebrick"
)
## End(Not run)
Calculate clusters for back trajectories
Description
This function carries out cluster analysis of HYSPLIT back trajectories. The
function is specifically designed to work with the trajectories imported
using the openair
importTraj
function, which provides
pre-calculated back trajectories at specific receptor locations.
Usage
trajCluster(
traj,
method = "Euclid",
n.cluster = 5,
type = "default",
cols = "Set1",
split.after = FALSE,
map.fill = TRUE,
map.cols = "grey40",
map.border = "black",
map.alpha = 0.4,
map.lwd = 1,
map.lty = 1,
projection = "lambert",
parameters = c(51, 51),
orientation = c(90, 0, 0),
by.type = FALSE,
origin = TRUE,
plot = TRUE,
...
)
Arguments
traj |
An openair trajectory data frame resulting from the use of
|
method |
Method used to calculate the distance matrix for the back trajectories. There are two methods available: “Euclid” and “Angle”. |
n.cluster |
Number of clusters to calculate. |
type |
|
cols |
Colours to be used for plotting. Options include
“default”, “increment”, “heat”, “jet” and
|
split.after |
For |
map.fill |
Should the base map be a filled polygon? Default is to fill countries. |
map.cols |
If |
map.border |
The colour to use for the map outlines/borders. Defaults to
|
map.alpha |
The transparency level of the filled map which takes values from 0 (full transparency) to 1 (full opacity). Setting it below 1 can help view trajectories, trajectory surfaces etc. and a filled base map. |
map.lwd |
The map line width, a positive number, defaulting to |
map.lty |
The map line type. Line types can either be specified as an
integer ( |
projection |
The map projection to be used. Different map projections
are possible through the |
parameters |
From the |
orientation |
From the |
by.type |
The percentage of the total number of trajectories is given
for all data by default. Setting |
origin |
If TRUE, a filled circle is shown to mark the receptor point. |
plot |
Should a plot be produced? |
... |
Other graphical parameters passed onto |
Details
Two main methods are available to cluster the back trajectories using two different calculations of the distance matrix. The default is to use the standard Euclidean distance between each pair of trajectories. Also available is an angle-based distance matrix based on Sirois and Bottenheim (1995). The latter method is useful when the interest is in the direction of the trajectories when clustering.
The distance matrix calculations are made in C++ for speed. For data sets of
up to 1 year both methods should be relatively fast, although the
method = "Angle"
does tend to take much longer to calculate. Further
details of these methods are given in the openair manual.
Value
an openair object. The data
component contains
both traj
(the original data appended with its cluster) and results
(the average trajectory path per cluster, shown in the trajCluster()
plot).
Author(s)
David Carslaw
References
Sirois, A. and Bottenheim, J.W., 1995. Use of backward trajectories to interpret the 5-year record of PAN and O3 ambient air concentrations at Kejimkujik National Park, Nova Scotia. Journal of Geophysical Research, 100: 2867-2881.
See Also
Other trajectory analysis functions:
importTraj()
,
trajLevel()
,
trajPlot()
Other cluster analysis functions:
polarCluster()
,
timeProp()
Examples
## Not run:
## import trajectories
traj <- importTraj(site = "london", year = 2009)
## calculate clusters
clust <- trajCluster(traj, n.cluster = 5)
head(clust$data) ## note new variable 'cluster'
## use different distance matrix calculation, and calculate by season
traj <- trajCluster(traj, method = "Angle", type = "season", n.cluster = 4)
## End(Not run)
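Because the trajectories are returned with a cluster variable appended, the output can feed other openair functions such as timeProp() to show how cluster contributions vary over time. This is a sketch only: the measurement data frame (here kc1) is hypothetical, and the accessor for the appended trajectory data may differ between openair versions:

```r
## Not run:
## trajectories with the new 'cluster' variable (accessor may vary)
traj_clust <- clust$data$traj
## merge with hypothetical concentration measurements by 'date'
traj_clust <- merge(traj_clust, kc1, by = "date")
## contribution of each trajectory cluster to monthly mean PM10
timeProp(traj_clust,
  pollutant = "pm10", avg.time = "month",
  proportion = "cluster"
)
## End(Not run)
```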
Trajectory level plots with conditioning
Description
This function plots gridded back trajectories. This function requires that
data are imported using the importTraj()
function.
Usage
trajLevel(
mydata,
lon = "lon",
lat = "lat",
pollutant = "height",
type = "default",
smooth = FALSE,
statistic = "frequency",
percentile = 90,
map = TRUE,
lon.inc = 1,
lat.inc = 1,
min.bin = 1,
.combine = NA,
sigma = 1.5,
map.fill = TRUE,
map.res = "default",
map.cols = "grey40",
map.border = "black",
map.alpha = 0.3,
map.lwd = 1,
map.lty = 1,
projection = "lambert",
parameters = c(51, 51),
orientation = c(90, 0, 0),
grid.col = "deepskyblue",
origin = TRUE,
plot = TRUE,
...
)
Arguments
mydata |
Data frame, the result of importing a trajectory file using
|
lon |
Column containing the longitude, as a decimal. |
lat |
Column containing the latitude, as a decimal. |
pollutant |
Pollutant to be plotted. By default the trajectory height is used. |
type |
It is also possible to choose
|
smooth |
Should the trajectory surface be smoothed? |
statistic |
Statistic to use for There are also various ways of plotting concentrations. It is possible to set If If If |
percentile |
The percentile concentration of |
map |
Should a base map be drawn? If |
lon.inc , lat.inc |
The longitude and latitude intervals to be used for binning data. |
min.bin |
The minimum number of unique points in a grid cell. Counts
below |
.combine |
When statistic is "SQTBA" it is possible to combine lots of
receptor locations to derive a single map. |
sigma |
For the SQTBA approach |
map.fill |
Should the base map be a filled polygon? Default is to fill countries. |
map.res |
The resolution of the base map. By default the function uses
the ‘world’ map from the |
map.cols |
If |
map.border |
The colour to use for the map outlines/borders. Defaults to
|
map.alpha |
The transparency level of the filled map which takes values from 0 (full transparency) to 1 (full opacity). Setting it below 1 can help view trajectories, trajectory surfaces etc. and a filled base map. |
map.lwd |
The map line width, a positive number, defaulting to |
map.lty |
The map line type. Line types can either be specified as an
integer ( |
projection |
The map projection to be used. Different map projections
are possible through the |
parameters |
From the |
orientation |
From the |
grid.col |
The colour of the map grid to be used. To remove the grid set
|
origin |
If TRUE, a filled circle is shown to mark the receptor point. |
plot |
Should a plot be produced? |
... |
other arguments are passed to |
Details
An alternative way of showing the trajectories compared with plotting
trajectory lines is to bin the points into latitude/longitude intervals. For
these purposes trajLevel()
should be used. There are several trajectory
statistics that can be plotted as gridded surfaces. First, statistic
can be
set to "frequency" to show the number of back trajectory points in a grid
square. Grid squares are by default at 1 degree intervals, controlled by
lat.inc
and lon.inc
. Such plots are useful for showing the frequency of
air mass locations. Note that it is also possible to set method = "hexbin"
for plotting frequencies (not concentrations), which will produce a plot by
hexagonal binning.
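As a sketch of the two frequency styles just described, assuming lond holds trajectories imported with importTraj():

```r
## Not run:
## gridded frequency of trajectory points at the default 1 degree
## resolution (controlled by lat.inc and lon.inc)
trajLevel(lond, statistic = "frequency")

## the same frequencies shown by hexagonal binning instead of a grid
trajLevel(lond, statistic = "frequency", method = "hexbin")
## End(Not run)
```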
If statistic = "difference"
the trajectories associated with a
concentration greater than percentile
are compared with the full set of
trajectories to understand the differences in frequencies of the origin of
air masses of the highest concentration trajectories compared with the
trajectories on average. The comparison is made by comparing the percentage
change in gridded frequencies. For example, such a plot could show that the
top 10% of concentrations were associated with air masses arriving from
the east.
If statistic = "pscf"
then the Potential Source Contribution Function is
plotted. The PSCF calculates the probability that a source is located at
latitude i
and longitude j
(Pekney et al., 2006). The basis of
PSCF is that if a source is located at (i,j), an air parcel back trajectory
passing through that location indicates that material from the source can be
collected and transported along the trajectory to the receptor site. PSCF
solves
PSCF = m_{ij}/n_{ij}
where n_{ij}
is the number of times
that the trajectories passed through the cell (i,j) and m_{ij}
is the
number of times that a source concentration was high when the trajectories
passed through the cell (i,j). The criterion for determining m_{ij}
is
controlled by percentile
, which by default is 90. Note also that cells with
few data have a weighting factor applied to reduce their effect.
A limitation of the PSCF method is that grid cells can have the same PSCF value when sample concentrations are either only slightly higher or much higher than the criterion. As a result, it can be difficult to distinguish moderate sources from strong ones. Seibert et al. (1994) computed concentration fields to identify source areas of pollutants. The Concentration Weighted Trajectory (CWT) approach considers the concentration of a species together with its residence time in a grid cell. The CWT approach has been shown to yield similar results to the PSCF approach. The openair manual has more details and examples of these approaches.
A further useful refinement is to smooth the resulting surface, which is
possible by setting smooth = TRUE
.
Value
an openair object
Note
This function is under active development and is likely to change
Author(s)
David Carslaw
References
Pekney, N. J., Davidson, C. I., Zhou, L., & Hopke, P. K. (2006). Application of PSCF and CPF to PMF-Modeled Sources of PM 2.5 in Pittsburgh. Aerosol Science and Technology, 40(10), 952-961.
Seibert, P., Kromp-Kolb, H., Baltensperger, U., Jost, D., 1994. Trajectory analysis of high-alpine air pollution data. NATO Challenges of Modern Society 18, 595-595.
Xie, Y., & Berkowitz, C. M. (2007). The use of conditional probability functions and potential source contribution functions to identify source regions and advection pathways of hydrocarbon emissions in Houston, Texas. Atmospheric Environment, 41(28), 5831-5847.
See Also
Other trajectory analysis functions:
importTraj()
,
trajCluster()
,
trajPlot()
Examples
# show a simple case with no pollutant i.e. just the trajectories
# let's check to see where the trajectories were coming from when
# Heathrow Airport was closed due to the Icelandic volcanic eruption
# 15--21 April 2010.
# import trajectories for London and plot
## Not run:
lond <- importTraj("london", 2010)
## End(Not run)
# more examples to follow linking with concentration measurements...
# import some measurements from KC1 - London
## Not run:
kc1 <- importAURN("kc1", year = 2010)
# now merge with trajectory data by 'date'
lond <- merge(lond, kc1, by = "date")
# trajectory plot, no smoothing - and limit lat/lon area of interest
# use PSCF
trajLevel(subset(lond, lat > 40 & lat < 70 & lon > -20 & lon < 20),
pollutant = "pm10", statistic = "pscf"
)
# can smooth the surface, using the CWT approach:
trajLevel(subset(lond, lat > 40 & lat < 70 & lon > -20 & lon < 20),
pollutant = "pm2.5", statistic = "cwt", smooth = TRUE
)
# plot by season:
trajLevel(subset(lond, lat > 40 & lat < 70 & lon > -20 & lon < 20),
pollutant = "pm2.5",
statistic = "pscf", type = "season"
)
## End(Not run)
Trajectory line plots with conditioning
Description
This function plots back trajectories. This function requires that data are
imported using the importTraj
function.
Usage
trajPlot(
mydata,
lon = "lon",
lat = "lat",
pollutant = "height",
type = "default",
map = TRUE,
group = NA,
map.fill = TRUE,
map.res = "default",
map.cols = "grey40",
map.border = "black",
map.alpha = 0.4,
map.lwd = 1,
map.lty = 1,
projection = "lambert",
parameters = c(51, 51),
orientation = c(90, 0, 0),
grid.col = "deepskyblue",
npoints = 12,
origin = TRUE,
plot = TRUE,
...
)
Arguments
mydata |
Data frame, the result of importing a trajectory file using
|
lon |
Column containing the longitude, as a decimal. |
lat |
Column containing the latitude, as a decimal. |
pollutant |
Pollutant to be plotted. By default the trajectory height is used. |
type |
It is also possible to choose
|
map |
Should a base map be drawn? If |
group |
It is sometimes useful to group and colour trajectories according to a grouping variable. See example below. |
map.fill |
Should the base map be a filled polygon? Default is to fill countries. |
map.res |
The resolution of the base map. By default the function uses
the ‘world’ map from the |
map.cols |
If |
map.border |
The colour to use for the map outlines/borders. Defaults to
|
map.alpha |
The transparency level of the filled map which takes values from 0 (full transparency) to 1 (full opacity). Setting it below 1 can help view trajectories, trajectory surfaces etc. and a filled base map. |
map.lwd |
The map line width, a positive number, defaulting to |
map.lty |
The map line type. Line types can either be specified as an
integer ( |
projection |
The map projection to be used. Different map projections
are possible through the |
parameters |
From the |
orientation |
From the |
grid.col |
The colour of the map grid to be used. To remove the grid set
|
npoints |
A dot is placed every |
origin |
If TRUE, a filled circle is shown to mark the receptor point. |
plot |
Should a plot be produced? |
... |
other arguments are passed to |
Details
Several types of trajectory plot are available. trajPlot
by default will
plot each lat/lon location showing the origin of each trajectory, if no
pollutant
is supplied.
If a pollutant is given, by merging the trajectory data with concentration
data (see example below), the trajectories are colour-coded by the
concentration of pollutant
. With a long time series there can be lots of
overplotting making it difficult to gauge the overall concentration pattern.
In these cases setting alpha
to a low value e.g. 0.1 can help.
The user can also show points instead of lines by plot.type = "p"
.
Note that trajPlot
will plot only the full length trajectories. This should
be remembered when selecting only part of a year to plot.
Author(s)
David Carslaw
See Also
Other trajectory analysis functions:
importTraj()
,
trajCluster()
,
trajLevel()
Examples
# show a simple case with no pollutant i.e. just the trajectories
# let's check to see where the trajectories were coming from when
# Heathrow Airport was closed due to the Icelandic volcanic eruption
# 15--21 April 2010.
# import trajectories for London and plot
## Not run:
lond <- importTraj("london", 2010)
# well, HYSPLIT seems to think there certainly were conditions where trajectories
# originated from Iceland...
trajPlot(selectByDate(lond, start = "15/4/2010", end = "21/4/2010"))
## End(Not run)
# plot by day, need a column that makes a date
## Not run:
lond$day <- as.Date(lond$date)
trajPlot(selectByDate(lond, start = "15/4/2010", end = "21/4/2010"),
type = "day"
)
## End(Not run)
# or show each day grouped by colour, with some other options set
## Not run:
trajPlot(selectByDate(lond, start = "15/4/2010", end = "21/4/2010"),
group = "day", col = "turbo", lwd = 2, key.pos = "right", key.col = 1
)
## End(Not run)
# more examples to follow linking with concentration measurements...
Plot heat map trends
Description
The trendLevel function provides a way of rapidly showing a large amount of data in a condensed form. In one plot, the variation in the concentration of one pollutant can be shown as a function of three other categorical properties. The default version of the plot uses y = hour of day, x = month of year and type = year to provide information on trends, seasonal effects and diurnal variations. However, x, y, type and the summarising statistic can all be modified to provide a range of other similar plots.
Usage
trendLevel(
mydata,
pollutant = "nox",
x = "month",
y = "hour",
type = "year",
rotate.axis = c(90, 0),
n.levels = c(10, 10, 4),
limits = c(0, 100),
cols = "default",
auto.text = TRUE,
key.header = "use.stat.name",
key.footer = pollutant,
key.position = "right",
key = TRUE,
labels = NA,
breaks = NA,
statistic = c("mean", "max", "min", "median", "frequency", "sum", "sd", "percentile"),
percentile = 95,
stat.args = NULL,
stat.safe.mode = TRUE,
drop.unused.types = TRUE,
col.na = "white",
plot = TRUE,
...
)
Arguments
mydata |
The openair data frame to use to generate the |
pollutant |
The name of the data series in |
x , y , type |
The name of the data series to use as the |
rotate.axis |
The rotation to be applied to |
n.levels |
The number of levels to split |
limits |
The colour scale range to use when generating the
|
cols |
The colour set to use to colour the |
auto.text |
Automatic routine text formatting. |
key.header , key.footer |
Adds additional text labels above and/or below
the scale key, respectively. For example, passing the options |
key.position |
Location where the scale key should be plotted. Allowed
arguments currently include |
key |
Fine control of the scale key via |
breaks , labels |
If a categorical colour scale is required then |
statistic |
The statistic to apply when aggregating the data; default is
the mean. Can be one of |
percentile |
The percentile level used when |
stat.args |
Additional options to be used with |
stat.safe.mode |
An addition protection applied when using functions
directly with |
drop.unused.types |
Hide unused/empty |
col.na |
Colour to be used to show missing data. |
plot |
Should a plot be produced? |
... |
Addition options are passed on to |
Details
trendLevel()
allows the use of third party summarising functions via the
statistic
option. Any additional function arguments not included within a
function called using statistic
should be supplied as a list of named
parameters and sent using stat.args
. For example, the encoded option
statistic = "mean"
is equivalent to statistic = mean, stat.args = list(na.rm = TRUE)
or the R command mean(x, na.rm = TRUE)
. Many R
functions and user's own code could be applied in a similar fashion, subject
to the following restrictions: the first argument sent to the function must
be the data series to be analysed; the name 'x' cannot be used for any of the
extra options supplied in stat.args
; and the function should return the
required answer as a numeric or NA
. Note: If the supplied function returns
more than one answer, currently only the first of these is retained and used
by trendLevel()
. All other returned information will be ignored without
warning. If the function terminates with an error when it is sent an empty
data series, the option stat.safe.mode
should not be set to FALSE
or
trendLevel()
may fail. Note: The stat.safe.mode = TRUE
option returns an
NA without warning for empty data series.
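The built-in percentile statistic follows the same pattern, and the stat.args mechanism described above can reproduce it with a user-supplied function. A sketch (the quantile() form assumes the data series is passed as the function's first argument, as described above):

```r
## 95th percentile of NOx by month/hour/year using the built-in option
trendLevel(mydata,
  pollutant = "nox",
  statistic = "percentile", percentile = 95
)

## a broadly equivalent 'own function' form via stat.args; quantile()
## returns a single value here, which trendLevel() retains
trendLevel(mydata,
  pollutant = "nox",
  statistic = quantile,
  stat.args = list(probs = 0.95, na.rm = TRUE)
)
```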
Value
an openair object.
Author(s)
Karl Ropkins
David Carslaw
Jack Davison
See Also
Other time series and trend functions:
TheilSen()
,
calendarPlot()
,
runRegression()
,
smoothTrend()
,
timePlot()
,
timeProp()
,
timeVariation()
Examples
# basic use
# default statistic = "mean"
trendLevel(mydata, pollutant = "nox")
# applying same as 'own' statistic
my.mean <- function(x) mean(x, na.rm = TRUE)
trendLevel(mydata, pollutant = "nox", statistic = my.mean)
# alternative for 'third party' statistic
# trendLevel(mydata, pollutant = "nox", statistic = mean,
# stat.args = list(na.rm = TRUE))
## Not run:
# example with categorical scale
trendLevel(mydata,
pollutant = "no2",
border = "white", statistic = "max",
breaks = c(0, 50, 100, 500),
labels = c("low", "medium", "high"),
cols = c("forestgreen", "yellow", "red")
)
## End(Not run)
Traditional wind rose plot
Description
The traditional wind rose plot that plots wind speed and wind direction by different intervals. The pollution rose applies the same plot structure but substitutes other measurements, most commonly a pollutant time series, for wind speed.
Usage
windRose(
mydata,
ws = "ws",
wd = "wd",
ws2 = NA,
wd2 = NA,
ws.int = 2,
angle = 30,
type = "default",
calm.thresh = 0,
bias.corr = TRUE,
cols = "default",
grid.line = NULL,
width = 1,
seg = NULL,
auto.text = TRUE,
breaks = 4,
offset = 10,
normalise = FALSE,
max.freq = NULL,
paddle = TRUE,
key.header = NULL,
key.footer = "(m/s)",
key.position = "bottom",
key = TRUE,
dig.lab = 5,
include.lowest = FALSE,
statistic = "prop.count",
pollutant = NULL,
annotate = TRUE,
angle.scale = 315,
border = NA,
alpha = 1,
plot = TRUE,
...
)
Arguments
mydata |
A data frame containing fields |
ws |
Name of the column representing wind speed. |
wd |
Name of the column representing wind direction. |
ws2 , wd2 |
The user can supply a second set of wind speed and wind
direction values with which the first can be compared. See
|
ws.int |
The wind speed interval. Default is 2 m/s, but for low met masts with low mean wind speeds a value of 1 or 0.5 m/s may be better. |
angle |
Default angle of “spokes” is 30. Other potentially useful
angles are 45 and 10. Note that the width of the wind speed interval may
need adjusting using width. |
type |
type determines how the data are split, i.e. conditioned, and then plotted. The default is to produce a single plot using the entire data. Type can be one of the built-in types as detailed in cutData, e.g. “season”, “year”, “weekday” and so on. It is also possible to choose type as another variable in the data frame. Type can be up to length two, e.g. type = c("season", "weekday"), which will produce a plot split by season and day of the week. |
calm.thresh |
By default, conditions are considered to be calm when the
wind speed is zero. The user can set a different threshold for calms by
setting calm.thresh to a higher value; e.g., calm.thresh = 0.5 means that any wind speeds below 0.5 are considered calm. |
bias.corr |
When angle does not divide exactly into 360 a bias is introduced in the frequencies when the wind direction is already supplied rounded to the nearest 10 degrees, as is often the case. For example, if angle = 22.5, N, E, S, W will include 3 wind sectors and all other angles will be two. A bias correction can be made to correct for this problem. |
cols |
Colours to be used for plotting. Options include
“default”, “increment”, “heat”, “jet”,
“hue” and user defined. For user defined the user can supply a list
of colour names recognised by R (type colours() to see the full list). An example would be cols = c("yellow", "green", "blue"). |
grid.line |
Grid line interval to use. If NULL, as in default, this is assigned automatically based on the data. It can also be forced to a specific value, e.g. grid.line = 10. |
width |
For paddle = TRUE, the adjustment of the width of the wind speed intervals. For example, width = 1.5 will make the paddle width 1.5 times wider. |
seg |
When paddle = TRUE, seg determines the width of the segment. For example, seg = 0.5 will produce segments 0.5 * angle wide. |
auto.text |
Either TRUE (default) or FALSE. If TRUE titles and axis labels will automatically try and format pollutant names and units properly, e.g. by subscripting the '2' in NO2. |
breaks |
Most commonly, the number of break points for wind speed. With
the ws.int default of 2 m/s, the default of 4 generates the break points 2, 4, 6, 8 m/s. breaks can also be used to set specific break points, e.g. breaks = c(0, 1, 10, 100). |
offset |
The size of the 'hole' in the middle of the plot, expressed as a percentage of the polar axis scale, default 10. |
normalise |
If TRUE each wind direction segment is normalised to equal one. This is useful for showing how the concentrations (or other parameters) contribute to each wind sector when the proportion of time the wind is from that direction is low. |
max.freq |
Controls the scaling used by setting the maximum value for the radial limits. This is useful to ensure several plots use the same radial limits. |
paddle |
Either TRUE (default) or FALSE. If TRUE the rose is plotted using 'paddle'-style spokes; if FALSE, 'wedge'-style spokes are used. |
key.header |
Adds additional text/labels above the scale key. For
example, passing key.header = "header" adds the text “header” above the key. |
key.footer |
Adds additional text/labels below the scale key. See
key.header for further information. |
key.position |
Location where the scale key is to plotted. Allowed arguments currently include “top”, “right”, “bottom” and “left”. |
key |
Fine control of the scale key via drawOpenKey(). See drawOpenKey() for further details. |
dig.lab |
The number of significant figures at which scientific number formatting is used in break point and key labelling. Default 5. |
include.lowest |
Logical. If TRUE, the lowest break point is included in the first interval, as in cut(). Default FALSE. |
statistic |
The statistic to be applied to each data bin in the plot. Options currently include “prop.count”, “prop.mean” and “abs.count”. The default “prop.count” sizes bins according to the proportion of the frequency of measurements. Similarly, “prop.mean” sizes bins according to their relative contribution to the mean. “abs.count” provides the absolute count of measurements in each bin. |
pollutant |
Alternative data series to be sampled instead of wind speed.
The windRose default NULL is equivalent to ws. |
annotate |
If TRUE then the percentage calm and mean values are printed in each panel, together with a description of the statistic below the plot. |
angle.scale |
The scale is by default shown at a 315 degree angle.
Sometimes the placement of the scale may interfere with an interesting
feature. The user can therefore set angle.scale to another value (between 0 and 360 degrees) to mitigate such problems; for example, angle.scale = 45 will draw the scale heading in a NE direction. |
border |
Border colour for shaded areas. Default is no border. |
alpha |
The alpha transparency to use for the plotting surface (a value
between 0 and 1 with zero being fully transparent and 1 fully opaque).
Setting a value below 1 can be useful when plotting surfaces on a map using
the package openairmaps. |
plot |
Should a plot be produced? |
... |
Other parameters that are passed on to drawOpenKey(), lattice::xyplot() and cutData(). |
Details
For windRose, data are summarised by direction, typically by 45 or 30 (or 10) degrees, and by different wind speed categories. Typically, wind speeds are represented by different width "paddles". The plots show the proportion (here represented as a percentage) of time that the wind is from a certain angle and wind speed range.
By default windRose will plot a wind rose using "paddle"-style segments and place the scale key below the plot.
The argument pollutant uses the same plotting structure but substitutes another data series, defined by pollutant, for wind speed. It is recommended to use pollutionRose() for plotting pollutant concentrations.
The option statistic = "prop.mean" provides a measure of the relative contribution of each bin to the panel mean, and is intended for use with pollutionRose.
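For illustration, the statistic = "prop.mean" option described above can be sketched as follows (a sketch, assuming openair and its bundled mydata example dataset are available):

```r
# a sketch, assuming openair and its bundled 'mydata' dataset are available
library(openair)

# size each wind-direction/concentration bin by its relative contribution
# to the panel mean NO2, as recommended for pollutionRose
pollutionRose(mydata, pollutant = "no2", statistic = "prop.mean")
```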
Value
an openair object. Summarised proportions can be extracted directly using the $data operator, e.g. object$data for output <- windRose(mydata). This returns a data frame with three set columns: cond, conditioning based on type; wd, the wind direction; and calm, the statistic for the proportion of data unattributed to any specific wind direction because it was collected under calm conditions. These are followed by several columns (one for each range binned for the plot) giving the proportions of measurements associated with each ws or pollutant range plotted as a discrete panel.
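A minimal sketch of extracting the summarised proportions from the returned object (assuming the bundled mydata dataset):

```r
# a sketch, assuming openair and its bundled 'mydata' dataset are available
library(openair)

output <- windRose(mydata)
# the summarised proportions: cond, wd, calm, plus one column per ws bin
head(output$data)
```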
Note
windRose and pollutionRose both use drawOpenKey() to produce scale keys.
Author(s)
David Carslaw (with some additional contributions by Karl Ropkins)
References
Applequist, S, 2012: Wind Rose Bias Correction. J. Appl. Meteor. Climatol., 51, 1305-1309.
Droppo, J.G. and B.A. Napier (2008) Wind Direction Bias in Generating Wind Roses and Conducting Sector-Based Air Dispersion Modeling, Journal of the Air & Waste Management Association, 58:7, 913-918.
See Also
Other polar directional analysis functions:
percentileRose()
,
polarAnnulus()
,
polarCluster()
,
polarDiff()
,
polarFreq()
,
polarPlot()
,
pollutionRose()
Examples
# basic plot
windRose(mydata)
# one windRose for each year
windRose(mydata, type = "year")
# windRose in 10 degree intervals with gridlines and width adjusted
## Not run:
windRose(mydata, angle = 10, width = 0.2, grid.line = 1)
## End(Not run)
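The ws2/wd2 arguments documented above can be used to compare two sets of met data on one rose. A sketch, fabricating a hypothetical second pair of columns purely for illustration:

```r
# a sketch, assuming openair and its bundled 'mydata' dataset are available
library(openair)

# hypothetical second wind speed/direction series, offset for illustration
mydata$ws2 <- mydata$ws + 2
mydata$wd2 <- (mydata$wd + 30) %% 360
windRose(mydata, ws = "ws", wd = "wd", ws2 = "ws2", wd2 = "wd2")
```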