pyspark.sql.DataFrame.unpersist
DataFrame.unpersist(blocking: bool = False) → pyspark.sql.dataframe.DataFrame

Marks the DataFrame as non-persistent, and removes all blocks for it from memory and disk.

Notes
The blocking default has changed to False to match Scala in 2.0.

Cached data is shared across all Spark sessions within the same application (the same SparkContext), so unpersisting a DataFrame affects every session that references it.
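Examples

A minimal usage sketch, assuming an active SparkSession bound to the name spark; the toy DataFrame is illustrative and not part of the original page. It caches a DataFrame, materializes the cache with an action, then unpersists it.

>>> df = spark.range(1)
>>> df.persist()
DataFrame[id: bigint]
>>> df.count()  # an action, so the cache is actually populated
1
>>> df.unpersist(blocking=True)  # blocking=True waits until all blocks are deleted
DataFrame[id: bigint]
>>> df.is_cached
False

Passing blocking=True is useful in tests or benchmarks where subsequent work must not observe the cached blocks; with the default blocking=False the call returns immediately and the blocks are removed asynchronously.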