pyspark.sql.DataFrameWriter.json
DataFrameWriter.json(path: str, mode: Optional[str] = None, compression: Optional[str] = None, dateFormat: Optional[str] = None, timestampFormat: Optional[str] = None, lineSep: Optional[str] = None, encoding: Optional[str] = None, ignoreNullFields: Union[bool, str, None] = None) → None

Saves the content of the DataFrame in JSON format (JSON Lines text format or newline-delimited JSON) at the specified path.

Parameters
- path : str
  the path in any Hadoop-supported file system
- mode : str, optional
  specifies the behavior of the save operation when data already exists.
  - append: Append contents of this DataFrame to existing data.
  - overwrite: Overwrite existing data.
  - ignore: Silently ignore this operation if data already exists.
  - error or errorifexists (default case): Throw an exception if data already exists.
Other Parameters
- Extra options
  For the extra options, refer to Data Source Option in the version you use.
Examples

>>> import os, tempfile
>>> df.write.json(os.path.join(tempfile.mkdtemp(), 'data'))
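The on-disk layout the writer produces is JSON Lines: one JSON object per row, separated by the line separator (a newline by default). As a rough sketch of that format, written here with the standard library rather than a live SparkSession (the part-file name below is illustrative, not what Spark would generate):

```python
import json
import os
import tempfile

# Two example rows, as Spark would serialize them: one JSON object per line.
rows = [{"age": 2, "name": "Alice"}, {"age": 5, "name": "Bob"}]

out_dir = tempfile.mkdtemp()
# Hypothetical file name; Spark writes its own part-* files inside the directory.
out_file = os.path.join(out_dir, "part-00000.json")
with open(out_file, "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")  # lineSep defaults to "\n"

with open(out_file, encoding="utf-8") as f:
    print(f.read(), end="")
```

Each line is an independent, complete JSON document, which is what makes the format splittable and easy to read back with spark.read.json.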