pyspark.sql.DataFrame.intersectAll

DataFrame.intersectAll(other: pyspark.sql.dataframe.DataFrame) → pyspark.sql.dataframe.DataFrame

Return a new DataFrame containing rows in both this DataFrame and another DataFrame while preserving duplicates.

This is equivalent to INTERSECT ALL in SQL. As standard in SQL, this function resolves columns by position (not by name); see the second example below.

Examples

>>> df1 = spark.createDataFrame([("a", 1), ("a", 1), ("b", 3), ("c", 4)], ["C1", "C2"])
>>> df2 = spark.createDataFrame([("a", 1), ("a", 1), ("b", 3)], ["C1", "C2"])
>>> df1.intersectAll(df2).sort("C1", "C2").show()
+---+---+
| C1| C2|
+---+---+
|  a|  1|
|  a|  1|
|  b|  3|
+---+---+
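
The sketch below is illustrative and not part of the original example set; df3 and its column names X1/X2 are assumptions. It shows positional resolution: rows match even though the column names differ, and the result keeps the column names of this DataFrame. Note that ("a", 1) appears only once here, because INTERSECT ALL keeps each row the minimum number of times it occurs in either DataFrame.

>>> df3 = spark.createDataFrame([("a", 1), ("b", 3)], ["X1", "X2"])  # same types by position, different names
>>> df1.intersectAll(df3).sort("C1", "C2").show()  # expected output under these assumptions
+---+---+
| C1| C2|
+---+---+
|  a|  1|
|  b|  3|
+---+---+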