pyspark.sql.DataFrameReader.parquet

DataFrameReader.parquet(*paths: str, **options: OptionalPrimitiveType) → DataFrame

Loads Parquet files, returning the result as a DataFrame.

Parameters
paths : str
    One or more paths to Parquet files, or to directories containing Parquet files.

Other Parameters
**options
    For the extra options, refer to Data Source Option in the version you use.

Examples

>>> df = spark.read.parquet('python/test_support/sql/parquet_partitioned')
>>> df.dtypes
[('name', 'string'), ('year', 'int'), ('month', 'int'), ('day', 'int')]