pyspark.RDD.saveAsHadoopFile
RDD.saveAsHadoopFile(path: str, outputFormatClass: str, keyClass: Optional[str] = None, valueClass: Optional[str] = None, keyConverter: Optional[str] = None, valueConverter: Optional[str] = None, conf: Optional[Dict[str, str]] = None, compressionCodecClass: Optional[str] = None) → None

Output a Python RDD of key-value pairs (of form RDD[(K, V)]) to any Hadoop file system, using the old Hadoop OutputFormat API (mapred package). Key and value types will be inferred if not specified. Keys and values are converted for output using either user-specified converters or “org.apache.spark.api.python.JavaToWritableConverter”. The conf is applied on top of the base Hadoop conf associated with the SparkContext of this RDD to create a merged Hadoop MapReduce job configuration for saving the data.

Parameters
- path : str
path to Hadoop file
- outputFormatClass : str
fully qualified classname of Hadoop OutputFormat (e.g. “org.apache.hadoop.mapred.SequenceFileOutputFormat”)
- keyClass : str, optional
fully qualified classname of key Writable class (e.g. “org.apache.hadoop.io.IntWritable”, None by default)
- valueClass : str, optional
fully qualified classname of value Writable class (e.g. “org.apache.hadoop.io.Text”, None by default)
- keyConverter : str, optional
fully qualified classname of key converter (None by default)
- valueConverter : str, optional
fully qualified classname of value converter (None by default)
- conf : dict, optional
Hadoop job configuration, merged on top of the base Hadoop conf of the SparkContext (None by default)
- compressionCodecClass : str, optional
fully qualified classname of the compression codec class, e.g. “org.apache.hadoop.io.compress.GzipCodec” (None by default)
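Examples

A minimal sketch of a typical call, assuming an active SparkContext bound to sc (as in a PySpark shell); the output path “/tmp/sequence_output” is hypothetical. It saves a small pair RDD as a Gzip-compressed SequenceFile via the old mapred API, with keys and values written as IntWritable and Text.

>>> rdd = sc.parallelize([(1, "a"), (2, "b"), (3, "c")])
>>> rdd.saveAsHadoopFile(
...     "/tmp/sequence_output",
...     outputFormatClass="org.apache.hadoop.mapred.SequenceFileOutputFormat",
...     keyClass="org.apache.hadoop.io.IntWritable",
...     valueClass="org.apache.hadoop.io.Text",
...     compressionCodecClass="org.apache.hadoop.io.compress.GzipCodec",
... )

Since the key and value classes match what would be inferred from the Python int and str pairs, keyClass and valueClass could be omitted here; they are spelled out to show how the Writable types are specified explicitly.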