pyspark.sql.DataFrame.cube
DataFrame.cube(*cols: ColumnOrName) → GroupedData

Create a multi-dimensional cube for the current DataFrame using the specified columns, so we can run aggregations on them.

Examples
>>> df.cube("name", df.age).count().orderBy("name", "age").show()
+-----+----+-----+
| name| age|count|
+-----+----+-----+
| null|null|    2|
| null|   2|    1|
| null|   5|    1|
|Alice|null|    1|
|Alice|   2|    1|
|  Bob|null|    1|
|  Bob|   5|    1|
+-----+----+-----+
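The example above assumes a DataFrame df already exists. A minimal, self-contained sketch that reproduces the output is shown below; the SparkSession setup and the two-row dataset are illustrative assumptions inferred from the example output, not part of the original doc.

>>> from pyspark.sql import SparkSession
>>> # Assumed setup: a local session and the two rows the example output implies.
>>> spark = SparkSession.builder.master("local[1]").appName("cube-example").getOrCreate()
>>> df = spark.createDataFrame([("Alice", 2), ("Bob", 5)], ["name", "age"])
>>> # cube aggregates over every combination of the listed columns, including
>>> # subtotals: a null in a column means "aggregated across all values of it".
>>> df.cube("name", df.age).count().orderBy("name", "age").show()

Unlike groupBy, which produces one row per distinct (name, age) pair, cube also emits the (name, null), (null, age), and (null, null) subtotal rows visible in the table above.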