KMeansModel

class pyspark.ml.clustering.KMeansModel(java_model: Optional[JavaObject] = None)

Model fitted by KMeans.
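A minimal fitting sketch, assuming an active SparkSession bound to spark; the toy data, k, and seed are illustrative:

>>> from pyspark.ml.clustering import KMeans
>>> from pyspark.ml.linalg import Vectors
>>> df = spark.createDataFrame(
...     [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),
...      (Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)],
...     ["features"])
>>> model = KMeans(k=2, seed=1).fit(df)  # fit() returns a KMeansModel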

Methods

clear(param)

Clears a param from the param map if it has been explicitly set.

clusterCenters()

Gets the cluster centers, represented as a list of NumPy arrays.

copy([extra])

Creates a copy of this instance with the same uid and some extra params.

explainParam(param)

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams()

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap([extra])

Extracts the embedded default param values and user-supplied values, and then merges them with the extra values from input into a flat param map, where the latter value takes precedence when conflicts exist, i.e., with ordering: default param values < user-supplied values < extra.

getDistanceMeasure()

Gets the value of distanceMeasure or its default value.

getFeaturesCol()

Gets the value of featuresCol or its default value.

getInitMode()

Gets the value of initMode or its default value.

getInitSteps()

Gets the value of initSteps or its default value.

getK()

Gets the value of k or its default value.

getMaxBlockSizeInMB()

Gets the value of maxBlockSizeInMB or its default value.

getMaxIter()

Gets the value of maxIter or its default value.

getOrDefault(param)

Gets the value of a param in the user-supplied param map or its default value.

getParam(paramName)

Gets a param by its name.

getPredictionCol()

Gets the value of predictionCol or its default value.

getSeed()

Gets the value of seed or its default value.

getSolver()

Gets the value of solver or its default value.

getTol()

Gets the value of tol or its default value.

getWeightCol()

Gets the value of weightCol or its default value.

hasDefault(param)

Checks whether a param has a default value.

hasParam(paramName)

Tests whether this instance contains a param with a given (string) name.

isDefined(param)

Checks whether a param is explicitly set by user or has a default value.

isSet(param)

Checks whether a param is explicitly set by user.

load(path)

Reads an ML instance from the input path, a shortcut of read().load(path).

predict(value)

Predicts the cluster index for the given feature vector.

read()

Returns an MLReader instance for this class.

save(path)

Save this ML instance to the given path, a shortcut of write().save(path).

set(param, value)

Sets a parameter in the embedded param map.

setFeaturesCol(value)

Sets the value of featuresCol.

setPredictionCol(value)

Sets the value of predictionCol.

transform(dataset[, params])

Transforms the input dataset with optional parameters.

write()

Returns a GeneralMLWriter instance for this ML instance.

Attributes

distanceMeasure

featuresCol

hasSummary

Indicates whether a training summary exists for this model instance.

initMode

initSteps

k

maxBlockSizeInMB

maxIter

params

Returns all params ordered by name.

predictionCol

seed

solver

summary

Gets summary (cluster assignments, cluster sizes) of the model trained on the training set.

tol

weightCol

Methods Documentation

clear(param: pyspark.ml.param.Param) → None

Clears a param from the param map if it has been explicitly set.

clusterCenters() → List[numpy.ndarray]

Gets the cluster centers, represented as a list of NumPy arrays.
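Continuing the fitting sketch above; the outputs shown are illustrative:

>>> centers = model.clusterCenters()
>>> len(centers)  # one center per cluster, i.e. k entries
2
>>> type(centers[0])
<class 'numpy.ndarray'>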

copy(extra: Optional[ParamMap] = None) → JP

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters
extra : dict, optional

Extra parameters to copy to the new instance

Returns
JavaParams

Copy of this instance

explainParam(param: Union[str, pyspark.ml.param.Param]) → str

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams() → str

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap(extra: Optional[ParamMap] = None) → ParamMap

Extracts the embedded default param values and user-supplied values, and then merges them with the extra values from input into a flat param map, where the latter value takes precedence when conflicts exist, i.e., with ordering: default param values < user-supplied values < extra.

Parameters
extra : dict, optional

extra param values

Returns
dict

merged param map

getDistanceMeasure() → str

Gets the value of distanceMeasure or its default value.

getFeaturesCol() → str

Gets the value of featuresCol or its default value.

getInitMode() → str

Gets the value of initMode or its default value.

getInitSteps() → int

Gets the value of initSteps or its default value.

getK() → int

Gets the value of k or its default value.

getMaxBlockSizeInMB() → float

Gets the value of maxBlockSizeInMB or its default value.

getMaxIter() → int

Gets the value of maxIter or its default value.

getOrDefault(param: Union[str, pyspark.ml.param.Param[T]]) → Union[Any, T]

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getParam(paramName: str) → pyspark.ml.param.Param

Gets a param by its name.

getPredictionCol() → str

Gets the value of predictionCol or its default value.

getSeed() → int

Gets the value of seed or its default value.

getSolver() → str

Gets the value of solver or its default value.

getTol() → float

Gets the value of tol or its default value.

getWeightCol() → str

Gets the value of weightCol or its default value.

hasDefault(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param has a default value.

hasParam(paramName: str) → bool

Tests whether this instance contains a param with a given (string) name.

isDefined(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param is explicitly set by user or has a default value.

isSet(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param is explicitly set by user.

classmethod load(path: str) → RL

Reads an ML instance from the input path, a shortcut of read().load(path).

predict(value: pyspark.ml.linalg.Vector) → int

Predicts the cluster index for the given feature vector.
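A single-point sketch against the model fitted above; the returned index is illustrative:

>>> from pyspark.ml.linalg import Vectors
>>> model.predict(Vectors.dense([0.5, 0.5]))  # cluster index for one vector
0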

classmethod read() → pyspark.ml.util.JavaMLReader[RL]

Returns an MLReader instance for this class.

save(path: str) → None

Save this ML instance to the given path, a shortcut of write().save(path).
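A save/load round trip under an illustrative path. The loaded model restores params and cluster centers but carries no training summary:

>>> model.save("/tmp/kmeans_model")  # path is illustrative
>>> restored = KMeansModel.load("/tmp/kmeans_model")
>>> restored.hasSummary
False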

set(param: pyspark.ml.param.Param, value: Any) → None

Sets a parameter in the embedded param map.

setFeaturesCol(value: str) → pyspark.ml.clustering.KMeansModel

Sets the value of featuresCol.

setPredictionCol(value: str) → pyspark.ml.clustering.KMeansModel

Sets the value of predictionCol.
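Both setters return the model itself, so calls can be chained; the column names are illustrative:

>>> model = model.setFeaturesCol("features").setPredictionCol("cluster")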

transform(dataset: pyspark.sql.dataframe.DataFrame, params: Optional[ParamMap] = None) → pyspark.sql.dataframe.DataFrame

Transforms the input dataset with optional parameters.

Parameters
dataset : pyspark.sql.DataFrame

input dataset

params : dict, optional

an optional param map that overrides embedded params.

Returns
pyspark.sql.DataFrame

transformed dataset
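A transform sketch on the training frame from above; the prediction column is named cluster by the setter example earlier:

>>> transformed = model.transform(df)
>>> transformed.columns  # input columns plus the prediction column
['features', 'cluster']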

write() → pyspark.ml.util.GeneralJavaMLWriter

Returns a GeneralMLWriter instance for this ML instance.

Attributes Documentation

distanceMeasure = Param(parent='undefined', name='distanceMeasure', doc="the distance measure. Supported options: 'euclidean' and 'cosine'.")
featuresCol = Param(parent='undefined', name='featuresCol', doc='features column name.')
hasSummary

Indicates whether a training summary exists for this model instance.

initMode: pyspark.ml.param.Param[str] = Param(parent='undefined', name='initMode', doc='The initialization algorithm. This can be either "random" to choose random points as initial cluster centers, or "k-means||" to use a parallel variant of k-means++')
initSteps: pyspark.ml.param.Param[int] = Param(parent='undefined', name='initSteps', doc='The number of steps for k-means|| initialization mode. Must be > 0.')
k: pyspark.ml.param.Param[int] = Param(parent='undefined', name='k', doc='The number of clusters to create. Must be > 1.')
maxBlockSizeInMB = Param(parent='undefined', name='maxBlockSizeInMB', doc='maximum memory in MB for stacking input data into blocks. Data is stacked within partitions. If more than remaining data size in a partition then it is adjusted to the data size. Default 0.0 represents choosing optimal value, depends on specific algorithm. Must be >= 0.')
maxIter = Param(parent='undefined', name='maxIter', doc='max number of iterations (>= 0).')
params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

predictionCol = Param(parent='undefined', name='predictionCol', doc='prediction column name.')
seed = Param(parent='undefined', name='seed', doc='random seed.')
solver: pyspark.ml.param.Param[str] = Param(parent='undefined', name='solver', doc='The solver algorithm for optimization. Supported options: auto, row, block.')
summary

Gets summary (cluster assignments, cluster sizes) of the model trained on the training set. An exception is thrown if no summary exists.
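A summary sketch for the freshly trained model above; the sizes shown are illustrative, and a model loaded from disk raises here instead:

>>> model.hasSummary
True
>>> model.summary.clusterSizes  # number of points assigned to each cluster
[2, 2]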

tol = Param(parent='undefined', name='tol', doc='the convergence tolerance for iterative algorithms (>= 0).')
weightCol = Param(parent='undefined', name='weightCol', doc='weight column name. If this is not set or empty, we treat all instance weights as 1.0.')