pyspark.sql.DataFrameWriter.parquet

DataFrameWriter.parquet(path: str, mode: Optional[str] = None, partitionBy: Union[str, List[str], None] = None, compression: Optional[str] = None) → None

Saves the content of the DataFrame in Parquet format at the specified path.

Parameters
path : str

the path in any Hadoop supported file system

mode : str, optional

specifies the behavior of the save operation when data already exists.

  • append: Append contents of this DataFrame to existing data.

  • overwrite: Overwrite existing data.

  • ignore: Silently ignore this operation if data already exists.

  • error or errorifexists (default case): Throw an exception if data already exists.

partitionBy : str or list, optional

names of partitioning columns

compression : str, optional

compression codec to use when saving to file. This can be one of the known case-insensitive shortened names (none, uncompressed, snappy, gzip, lzo, brotli, lz4, and zstd). This will override spark.sql.parquet.compression.codec.

Other Parameters
Extra options

For the extra options, refer to the Parquet Data Source Option documentation for the Spark version you use.

Examples

>>> import os
>>> import tempfile
>>> df.write.parquet(os.path.join(tempfile.mkdtemp(), 'data'))