pyspark.sql.functions.spark_partition_id
pyspark.sql.functions.spark_partition_id() → pyspark.sql.column.Column

A column for partition ID.
Notes
This is non-deterministic because it depends on data partitioning and task scheduling.
Examples
>>> from pyspark.sql.functions import spark_partition_id
>>> df = spark.range(2)
>>> df.repartition(1).select(spark_partition_id().alias("pid")).collect()
[Row(pid=0), Row(pid=0)]
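To illustrate the partitioning dependence mentioned in the notes, here is a minimal sketch (not part of the original example) that repartitions the same kind of data into two partitions; it assumes the running SparkSession `spark` of a standard doctest environment:

>>> df2 = spark.range(4).repartition(2)
>>> df2.select(spark_partition_id().alias("pid")).distinct().sort("pid").collect()
[Row(pid=0), Row(pid=1)]

Which rows land in which partition is scheduler-dependent, so only the set of distinct IDs, not the per-row assignment, is stable across runs.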