pyspark.sql.streaming.DataStreamReader.json

DataStreamReader.json(path: str, schema: Union[pyspark.sql.types.StructType, str, None] = None, primitivesAsString: Union[bool, str, None] = None, prefersDecimal: Union[bool, str, None] = None, allowComments: Union[bool, str, None] = None, allowUnquotedFieldNames: Union[bool, str, None] = None, allowSingleQuotes: Union[bool, str, None] = None, allowNumericLeadingZero: Union[bool, str, None] = None, allowBackslashEscapingAnyCharacter: Union[bool, str, None] = None, mode: Optional[str] = None, columnNameOfCorruptRecord: Optional[str] = None, dateFormat: Optional[str] = None, timestampFormat: Optional[str] = None, multiLine: Union[bool, str, None] = None, allowUnquotedControlChars: Union[bool, str, None] = None, lineSep: Optional[str] = None, locale: Optional[str] = None, dropFieldIfAllNull: Union[bool, str, None] = None, encoding: Optional[str] = None, pathGlobFilter: Union[bool, str, None] = None, recursiveFileLookup: Union[bool, str, None] = None, allowNonNumericNumbers: Union[bool, str, None] = None) → DataFrame

Loads a JSON file stream and returns the results as a DataFrame.

JSON Lines (newline-delimited JSON) is supported by default. For JSON (one record per file), set the multiLine parameter to true.
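
For example, a minimal sketch of reading whole-file JSON, assuming an active SparkSession named spark, a hypothetical input directory, and an illustrative DDL schema:

>>> whole_file_sdf = spark.readStream.json(
...     "/tmp/json_in", schema="name STRING, age INT", multiLine=True)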

If the schema parameter is not specified, this function goes through the input once to determine the input schema.
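
Note that streaming file sources typically require schema inference to be enabled explicitly before the schema can be inferred this way. A minimal sketch, assuming an active SparkSession named spark and a hypothetical input directory:

>>> spark.conf.set("spark.sql.streaming.schemaInference", "true")
>>> inferred_sdf = spark.readStream.json("/tmp/json_in")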

Parameters
path : str

string representing the path to the JSON dataset.

schema : pyspark.sql.types.StructType or str, optional

an optional pyspark.sql.types.StructType for the input schema, or a DDL-formatted string (for example, col0 INT, col1 DOUBLE).
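
A minimal sketch of both schema forms, assuming an active SparkSession named spark and a hypothetical input directory:

>>> from pyspark.sql.types import StructType, StructField, IntegerType, DoubleType
>>> struct_schema = StructType([
...     StructField("col0", IntegerType(), True),
...     StructField("col1", DoubleType(), True)])
>>> sdf1 = spark.readStream.json("/tmp/json_in", schema=struct_schema)
>>> sdf2 = spark.readStream.json("/tmp/json_in", schema="col0 INT, col1 DOUBLE")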

Other Parameters
Extra options

For the extra options, refer to the JSON Data Source Option documentation for the Spark version you use.
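
These options can be supplied either as keyword arguments to json() or through option() on the reader. A minimal sketch using pathGlobFilter, assuming an active SparkSession named spark and a hypothetical input directory:

>>> glob_sdf = (spark.readStream
...     .schema("value STRING")
...     .option("pathGlobFilter", "*.json")
...     .json("/tmp/json_in"))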

Notes

This API is evolving.

Examples

>>> import tempfile
>>> from pyspark.sql.types import StructType, StructField, StringType
>>> sdf_schema = StructType([StructField("data", StringType(), True)])
>>> json_sdf = spark.readStream.json(tempfile.mkdtemp(), schema=sdf_schema)
>>> json_sdf.isStreaming
True
>>> json_sdf.schema == sdf_schema
True