pyspark.sql.SparkSession.range
SparkSession.range(start: int, end: Optional[int] = None, step: int = 1, numPartitions: Optional[int] = None) → pyspark.sql.dataframe.DataFrame

Create a DataFrame with a single pyspark.sql.types.LongType column named id, containing elements in a range from start to end (exclusive) with step value step.

Parameters
start : int
    the start value
end : int, optional
    the end value (exclusive)
step : int, optional
    the incremental step (default: 1)
numPartitions : int, optional
    the number of partitions of the DataFrame
Returns
DataFrame
Examples
>>> spark.range(1, 7, 2).collect()
[Row(id=1), Row(id=3), Row(id=5)]
If only one argument is specified, it will be used as the end value.
>>> spark.range(3).collect()
[Row(id=0), Row(id=1), Row(id=2)]
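The id values produced follow the same semantics as Python's built-in range, including the one-argument form where the single argument is treated as the end value. A minimal sketch of that argument handling in plain Python (no Spark session needed; range_ids is a hypothetical helper, not part of the PySpark API):

```python
def range_ids(start, end=None, step=1):
    # Mirrors SparkSession.range's argument handling: when only one
    # argument is given, it is treated as `end` and the range starts at 0.
    # `end` is exclusive, as in the built-in range.
    if end is None:
        start, end = 0, start
    return list(range(start, end, step))

print(range_ids(1, 7, 2))  # [1, 3, 5]
print(range_ids(3))        # [0, 1, 2]
```

In actual Spark these ids arrive as a distributed LongType column, with numPartitions controlling how the range is split across partitions.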