| sparkR.init {SparkR} | R Documentation |

Initialize a new Spark Context

Description
This function initializes a new SparkContext. For details on how to initialize and use SparkR, refer to the SparkR programming guide at http://spark.apache.org/docs/latest/sparkr.html#starting-up-sparkcontext-sqlcontext.

Usage
sparkR.init(master = "", appName = "SparkR",
  sparkHome = Sys.getenv("SPARK_HOME"), sparkEnvir = list(),
  sparkExecutorEnv = list(), sparkJars = "", sparkPackages = "")
Arguments

| master           | The Spark master URL                                                     |
| appName          | Application name to register with the cluster manager                   |
| sparkHome        | Spark Home directory                                                     |
| sparkEnvir       | Named list of environment variables to set on worker nodes              |
| sparkExecutorEnv | Named list of environment variables to be used when launching executors |
| sparkJars        | Character vector of jar files to pass to the worker nodes               |
| sparkPackages    | Character vector of packages from spark-packages.org                    |
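All of these arguments have defaults, so each may be supplied independently. A minimal sketch of how they fit together; the application name, memory setting, JAVA_HOME path, jar name, and package coordinate below are illustrative values, not requirements:

sc <- sparkR.init(
  master           = "local[2]",                           # run Spark locally with two threads
  appName          = "ArgsDemo",                           # hypothetical application name
  sparkEnvir       = list(spark.executor.memory = "2g"),   # Spark properties, as in the examples below
  sparkExecutorEnv = list(JAVA_HOME = "/usr/java/jdk8"),   # OS environment variables for executors (illustrative path)
  sparkJars        = c("extra.jar"),                       # hypothetical jar shipped to the worker nodes
  sparkPackages    = c("com.databricks:spark-csv_2.10:1.3.0")  # package coordinate from spark-packages.org
)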
Examples

## Not run: 
sc <- sparkR.init("local[2]", "SparkR", "/home/spark")
sc <- sparkR.init("local[2]", "SparkR", "/home/spark",
                  list(spark.executor.memory="1g"))
sc <- sparkR.init("yarn-client", "SparkR", "/home/spark",
                  list(spark.executor.memory="4g"),
                  list(LD_LIBRARY_PATH="/directory of JVM libraries (libjvm.so) on workers/"),
                  c("one.jar", "two.jar", "three.jar"),
                  c("com.databricks:spark-avro_2.10:2.0.1",
                    "com.databricks:spark-csv_2.10:1.3.0"))
## End(Not run)
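Only one SparkContext can be active per R session; calling sparkR.init again while one is running will not create a second context. A brief sketch of restarting with different settings, using sparkR.stop() to shut down the current context first (the master URLs are illustrative):

sc <- sparkR.init("local[2]")
# ... work with sc ...
sparkR.stop()                  # terminate the running SparkContext
sc <- sparkR.init("local[4]")  # initialize a fresh context with new settings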