pyspark.RDD.mapPartitions
RDD.mapPartitions(f: Callable[[Iterable[T]], Iterable[U]], preservesPartitioning: bool = False) → pyspark.rdd.RDD[U]

Return a new RDD by applying a function to each partition of this RDD.
Examples
>>> rdd = sc.parallelize([1, 2, 3, 4], 2)
>>> def f(iterator): yield sum(iterator)
>>> rdd.mapPartitions(f).collect()
[3, 7]
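Because the function receives the whole partition as an iterator and returns an iterator, per-partition setup work runs once per partition rather than once per element, and results can be produced lazily. The following is a minimal sketch assuming the same SparkContext sc as above; to_ints is an illustrative helper, not part of the API:

>>> rdd = sc.parallelize(["1", "2", "3", "4"], 2)
>>> def to_ints(iterator):
...     # Any per-partition setup (e.g. opening a connection) would go here,
...     # executing once per partition instead of once per element.
...     return (int(x) for x in iterator)
>>> rdd.mapPartitions(to_ints).collect()
[1, 2, 3, 4]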