RobustScaler

class pyspark.ml.feature.RobustScaler(*, lower: float = 0.25, upper: float = 0.75, withCentering: bool = False, withScaling: bool = True, inputCol: Optional[str] = None, outputCol: Optional[str] = None, relativeError: float = 0.001)

RobustScaler removes the median and scales the data according to the quantile range. By default the quantile range is the IQR (interquartile range, the range between the 1st quartile = 25th percentile and the 3rd quartile = 75th percentile), but it can be configured. Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. The median and quantile range are then stored and applied to later data via the transform method. Note that NaN values are ignored in the computation of medians and ranges.
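
As a rough sketch of the arithmetic (not Spark's implementation, which estimates quantiles with an approximate algorithm), each feature value x becomes (x - median) / (Q_upper - Q_lower) when both centering and scaling are enabled; with the defaults (withCentering=False, withScaling=True) only the division by the quantile range is applied. A minimal pure-Python illustration, where the inclusive quantile method happens to match Spark's result on the small sample used in the examples below:

>>> import statistics
>>> xs = [0.0, 1.0, 2.0, 3.0, 4.0]  # first feature of the example below
>>> q1, med, q3 = statistics.quantiles(xs, n=4, method="inclusive")
>>> [x / (q3 - q1) for x in xs][1]  # scaling only (the default)
0.5
>>> [(x - med) / (q3 - q1) for x in xs][1]  # scaling plus centering
-0.5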

Examples

>>> from pyspark.ml.linalg import Vectors
>>> data = [(0, Vectors.dense([0.0, 0.0]),),
...         (1, Vectors.dense([1.0, -1.0]),),
...         (2, Vectors.dense([2.0, -2.0]),),
...         (3, Vectors.dense([3.0, -3.0]),),
...         (4, Vectors.dense([4.0, -4.0]),),]
>>> df = spark.createDataFrame(data, ["id", "features"])
>>> scaler = RobustScaler()
>>> scaler.setInputCol("features")
RobustScaler...
>>> scaler.setOutputCol("scaled")
RobustScaler...
>>> model = scaler.fit(df)
>>> model.setOutputCol("output")
RobustScalerModel...
>>> model.median
DenseVector([2.0, -2.0])
>>> model.range
DenseVector([2.0, 2.0])
>>> model.transform(df).collect()[1].output
DenseVector([0.5, -0.5])
>>> scalerPath = temp_path + "/robust-scaler"
>>> scaler.save(scalerPath)
>>> loadedScaler = RobustScaler.load(scalerPath)
>>> loadedScaler.getWithCentering() == scaler.getWithCentering()
True
>>> loadedScaler.getWithScaling() == scaler.getWithScaling()
True
>>> modelPath = temp_path + "/robust-scaler-model"
>>> model.save(modelPath)
>>> loadedModel = RobustScalerModel.load(modelPath)
>>> loadedModel.median == model.median
True
>>> loadedModel.range == model.range
True
>>> loadedModel.transform(df).take(1) == model.transform(df).take(1)
True
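
The doctest above leaves withCentering at its default (False). A short additional sketch, not part of the original example, showing centering enabled; the expected output assumes the same spark session and df, and that the quantile estimates on this small sample are exact (as the median and range shown above confirm):

>>> centeringScaler = RobustScaler(withCentering=True, inputCol="features", outputCol="centered")
>>> centeringModel = centeringScaler.fit(df)
>>> centeringModel.transform(df).collect()[1].centered
DenseVector([-0.5, 0.5])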

Methods

clear(param)

Clears a param from the param map if it has been explicitly set.

copy([extra])

Creates a copy of this instance with the same uid and some extra params.

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap([extra])

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used when there are conflicts, i.e., with ordering: default param values < user-supplied values < extra.

fit(dataset[, params])

Fits a model to the input dataset with optional parameters.

fitMultiple(dataset, paramMaps)

Fits a model to the input dataset for each param map in paramMaps.

getInputCol()

Gets the value of inputCol or its default value.

getLower()

Gets the value of lower or its default value.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value.

getOutputCol()

Gets the value of outputCol or its default value.

getParam(paramName)

Gets a param by its name.

getRelativeError()

Gets the value of relativeError or its default value.

getUpper()

Gets the value of upper or its default value.

getWithCentering()

Gets the value of withCentering or its default value.

getWithScaling()

Gets the value of withScaling or its default value.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of write().save(path).

set(param, value)

Sets a parameter in the embedded param map.

setInputCol(value)

Sets the value of inputCol.

setLower(value)

Sets the value of lower.

setOutputCol(value)

Sets the value of outputCol.

setParams(self, *[, lower, upper, …])

Sets params for this RobustScaler.

setRelativeError(value)

Sets the value of relativeError.

setUpper(value)

Sets the value of upper.

setWithCentering(value)

Sets the value of withCentering.

setWithScaling(value)

Sets the value of withScaling.

write()

Returns an MLWriter instance for this ML instance.

Attributes

inputCol

lower

outputCol

params

Returns all params ordered by name.

relativeError

upper

withCentering

withScaling

Methods Documentation

clear(param: pyspark.ml.param.Param) → None

Clears a param from the param map if it has been explicitly set.

copy(extra: Optional[ParamMap] = None) → JP

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params, so both the Python wrapper and the Java pipeline component get copied.

Parameters
extra : dict, optional

Extra parameters to copy to the new instance

Returns
JavaParams

Copy of this instance

explainParam(param: Union[str, pyspark.ml.param.Param]) → str

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams() → str

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap(extra: Optional[ParamMap] = None) → ParamMap

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used when there are conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters
extra : dict, optional

extra param values

Returns
dict

merged param map
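
A brief sketch of the precedence, assuming a fresh RobustScaler (note that outputCol also carries a generated default based on the instance uid):

>>> s = RobustScaler(inputCol="features")
>>> pm = s.extractParamMap({s.outputCol: "out"})
>>> pm[s.inputCol]   # user-supplied value beats the default
'features'
>>> pm[s.outputCol]  # extra beats both
'out'
>>> pm[s.lower]      # default value, nothing overrides it
0.25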

fit(dataset: pyspark.sql.dataframe.DataFrame, params: Union[ParamMap, List[ParamMap], Tuple[ParamMap], None] = None) → Union[M, List[M]]

Fits a model to the input dataset with optional parameters.

Parameters
dataset : pyspark.sql.DataFrame

input dataset.

params : dict or list or tuple, optional

an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.

Returns
Transformer or a list of Transformer

fitted model(s)
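
A sketch of the list form, assuming the scaler and df from the examples above:

>>> models = scaler.fit(df, [{scaler.lower: 0.1, scaler.upper: 0.9},
...                          {scaler.lower: 0.3, scaler.upper: 0.7}])
>>> len(models)  # one fitted model per param map
2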

fitMultiple(dataset: pyspark.sql.dataframe.DataFrame, paramMaps: Sequence[ParamMap]) → Iterator[Tuple[int, M]]

Fits a model to the input dataset for each param map in paramMaps.

Parameters
dataset : pyspark.sql.DataFrame

input dataset.

paramMaps : collections.abc.Sequence

A Sequence of param maps.

Returns
_FitMultipleIterator

A thread-safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.
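
A sketch of consuming the iterator, assuming the scaler and df from the examples above; because models may arrive out of order, collect them by index:

>>> paramMaps = [{scaler.lower: 0.1}, {scaler.lower: 0.3}]
>>> models = [None] * len(paramMaps)
>>> for index, model in scaler.fitMultiple(df, paramMaps):
...     models[index] = model  # index identifies the param map used for this model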

getInputCol() → str

Gets the value of inputCol or its default value.

getLower() → float

Gets the value of lower or its default value.

getOrDefault(param: Union[str, pyspark.ml.param.Param[T]]) → Union[Any, T]

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getOutputCol() → str

Gets the value of outputCol or its default value.

getParam(paramName: str) → pyspark.ml.param.Param

Gets a param by its name.

getRelativeError() → float

Gets the value of relativeError or its default value.

getUpper() → float

Gets the value of upper or its default value.

getWithCentering() → bool

Gets the value of withCentering or its default value.

getWithScaling() → bool

Gets the value of withScaling or its default value.

hasDefault(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param has a default value.

hasParam(paramName: str) → bool

Tests whether this instance contains a param with a given (string) name.

isDefined(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param is explicitly set by user or has a default value.

isSet(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param is explicitly set by user.

classmethod load(path: str) → RL

Reads an ML instance from the input path, a shortcut of read().load(path).

classmethod read() → pyspark.ml.util.JavaMLReader[RL]

Returns an MLReader instance for this class.

save(path: str) → None

Save this ML instance to the given path, a shortcut of write().save(path).

set(param: pyspark.ml.param.Param, value: Any) → None

Sets a parameter in the embedded param map.

setInputCol(value: str) → pyspark.ml.feature.RobustScaler

Sets the value of inputCol.

setLower(value: float) → pyspark.ml.feature.RobustScaler

Sets the value of lower.

setOutputCol(value: str) → pyspark.ml.feature.RobustScaler

Sets the value of outputCol.

setParams(self, *, lower=0.25, upper=0.75, withCentering=False, withScaling=True, inputCol=None, outputCol=None, relativeError=0.001)

Sets params for this RobustScaler.

setRelativeError(value: float) → pyspark.ml.feature.RobustScaler

Sets the value of relativeError.

setUpper(value: float) → pyspark.ml.feature.RobustScaler

Sets the value of upper.

setWithCentering(value: bool) → pyspark.ml.feature.RobustScaler

Sets the value of withCentering.

setWithScaling(value: bool) → pyspark.ml.feature.RobustScaler

Sets the value of withScaling.

write() → pyspark.ml.util.JavaMLWriter

Returns an MLWriter instance for this ML instance.

Attributes Documentation

inputCol = Param(parent='undefined', name='inputCol', doc='input column name.')
lower = Param(parent='undefined', name='lower', doc='Lower quantile to calculate quantile range')
outputCol = Param(parent='undefined', name='outputCol', doc='output column name.')
params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

relativeError = Param(parent='undefined', name='relativeError', doc='the relative target precision for the approximate quantile algorithm. Must be in the range [0, 1]')
upper = Param(parent='undefined', name='upper', doc='Upper quantile to calculate quantile range')
withCentering = Param(parent='undefined', name='withCentering', doc='Whether to center data with median')
withScaling = Param(parent='undefined', name='withScaling', doc='Whether to scale the data to quantile range')