pyspark.sql.DataFrameWriter.save
DataFrameWriter.save(path: Optional[str] = None, format: Optional[str] = None, mode: Optional[str] = None, partitionBy: Union[str, List[str], None] = None, **options: OptionalPrimitiveType) → None

Saves the contents of the DataFrame to a data source.

The data source is specified by the format and a set of options. If format is not specified, the default data source configured by spark.sql.sources.default is used.

Parameters
- path : str, optional
the path in a Hadoop supported file system
- format : str, optional
the format used to save
- mode : str, optional
  specifies the behavior of the save operation when data already exists.
  - append: Append contents of this DataFrame to existing data.
  - overwrite: Overwrite existing data.
  - ignore: Silently ignore this operation if data already exists.
  - error or errorifexists (default case): Throw an exception if data already exists.
- partitionBy : list, optional
names of partitioning columns
- **options : dict
all other string options
Examples
>>> import os
>>> import tempfile
>>> df = spark.range(2)
>>> df.write.mode("append").save(os.path.join(tempfile.mkdtemp(), 'data'))