pyspark.sql.DataFrameWriter.save

DataFrameWriter.save(path=None, format=None, mode=None, partitionBy=None, **options)
Saves the contents of the DataFrame to a data source.

The data source is specified by the format and a set of options. If format is not specified, the default data source configured by spark.sql.sources.default will be used.

New in version 1.4.0.

Changed in version 3.4.0: Supports Spark Connect.

Parameters
path : str, optional
    the path in a Hadoop supported file system
format : str, optional
    the format used to save
mode : str, optional
    specifies the behavior of the save operation when data already exists.

    - append: Append contents of this DataFrame to existing data.
    - overwrite: Overwrite existing data.
    - ignore: Silently ignore this operation if data already exists.
    - error or errorifexists (default case): Throw an exception if data already exists.
 
partitionBy : list, optional
    names of partitioning columns

**options : dict
    all other string options
 
Examples

Write a DataFrame into a JSON file and read it back.

>>> import tempfile
>>> with tempfile.TemporaryDirectory(prefix="save") as d:
...     # Write a DataFrame into a JSON file
...     spark.createDataFrame(
...         [{"age": 100, "name": "Hyukjin Kwon"}]
...     ).write.mode("overwrite").format("json").save(d)
...
...     # Read the JSON file as a DataFrame.
...     spark.read.format('json').load(d).show()
+---+------------+
|age|        name|
+---+------------+
|100|Hyukjin Kwon|
+---+------------+