arrange {SparkR}	R Documentation

Description:

     Sort a DataFrame by the specified column(s).
Usage:

     ## S4 method for signature 'DataFrame,Column'
     arrange(x, col, ...)

     ## S4 method for signature 'DataFrame,character'
     arrange(x, col, ..., decreasing = FALSE)

     ## S4 method for signature 'DataFrame,characterOrColumn'
     orderBy(x, col)

     arrange(x, col, ...)

     orderBy(x, col)
Arguments:

       x: A DataFrame to be sorted.

     col: A character vector of column names, or Column objects,
          indicating the fields to sort on.

     ...: Additional sorting fields.

decreasing: A logical vector indicating the sorting order of the
          corresponding columns when a character vector is specified
          for col.
Value:

     A DataFrame where all elements are sorted.
See Also:

     Other DataFrame functions: $, $<-, [, [[, agg, as.data.frame,
     attach, cache, collect, colnames, colnames<-, coltypes,
     coltypes<-, columns, count, describe, dim, distinct, dropna,
     dtypes, except, explain, fillna, filter, first, groupBy,
     group_by, head, insertInto, intersect, isLocal, join, limit,
     merge, mutate, na.omit, names, names<-, ncol, nrow, persist,
     printSchema, rbind, registerTempTable, rename, repartition,
     sample, sample_frac, saveAsParquetFile, saveAsTable, saveDF,
     schema, select, selectExpr, show, showDF, subset, summarize,
     summary, take, transform, unionAll, unique, unpersist, where,
     withColumn, withColumnRenamed, write.df, write.json,
     write.parquet
Examples:

     ## Not run:
     sc <- sparkR.init()
     sqlContext <- sparkRSQL.init(sc)
     path <- "path/to/file.json"
     df <- read.json(sqlContext, path)
     arrange(df, df$col1)
     arrange(df, asc(df$col1), desc(abs(df$col2)))
     arrange(df, "col1", decreasing = TRUE)
     arrange(df, "col1", "col2", decreasing = c(TRUE, FALSE))
     ## End(Not run)
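The Usage block lists orderBy as an alias with the 'DataFrame,characterOrColumn' signature, so Column-based sorts can be written either way. A minimal sketch, assuming the df and col1 from the examples above (not runnable without an active Spark session):

```r
## Not run:
# orderBy is an alias of arrange for Column arguments
orderBy(df, df$col1)        # equivalent to arrange(df, df$col1)
orderBy(df, desc(df$col1))  # descending sort via desc()
## End(Not run)
```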