pyspark.sql.DataFrame.unpersist

DataFrame.unpersist(blocking: bool = False) → pyspark.sql.dataframe.DataFrame

Marks the DataFrame as non-persistent, and removes all blocks for it from memory and disk.
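
A minimal usage sketch (assuming an active SparkSession bound to the name spark, as in the PySpark shell):

>>> df = spark.range(100)
>>> df = df.persist()    # mark the DataFrame for caching; evaluation is lazy
>>> df.count()           # an action materializes the cached blocks
100
>>> df = df.unpersist()  # drop the cached blocks and return this DataFrame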

Notes

The default value of blocking changed to False in Spark 2.0 to match the Scala API. With blocking=True the call waits until all blocks are deleted; with blocking=False it returns immediately and the blocks are removed asynchronously.
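
For example, to wait until the storage is actually freed (a sketch, assuming df is a persisted DataFrame):

>>> df = df.unpersist(blocking=True)  # returns only after all blocks are deleted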

Cached data is shared by all SparkSessions that use the same SparkContext, i.e. all sessions within one application, so unpersisting a DataFrame affects every such session. Other Spark applications on the cluster have their own executors and storage and are not affected.
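
A sketch of that scope (assuming a local SparkSession; newSession() creates a session that shares the parent's SparkContext and cache):

>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.getOrCreate()
>>> df = spark.range(10).persist()
>>> df.count()                  # materialize the cached blocks
10
>>> other = spark.newSession()  # same SparkContext, same cache
>>> df = df.unpersist()         # frees the blocks for both sessions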