pyspark.sql.DataFrame.randomSplit

DataFrame.randomSplit(weights: List[float], seed: Optional[int] = None) → List[pyspark.sql.dataframe.DataFrame]

Randomly splits this DataFrame with the provided weights.

Parameters
weights : list

List of doubles as weights with which to split the DataFrame. Weights will be normalized if they do not sum to 1.0.

seed : int, optional

The seed for sampling. Passing the same seed makes the split reproducible for the same input data and partitioning.

Examples

>>> splits = df4.randomSplit([1.0, 2.0], 24)
>>> splits[0].count()
2
>>> splits[1].count()
2