pyspark.pandas.groupby.GroupBy.std

GroupBy.std(ddof: int = 1) → FrameLike

Compute standard deviation of groups, excluding missing values.

Parameters
ddof : int, default 1

Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. The effect of ddof is illustrated in the Examples below.

Examples

>>> import pyspark.pandas as ps
>>> df = ps.DataFrame({"A": [1, 2, 1, 2], "B": [True, False, False, True],
...                    "C": [3, 4, 3, 4], "D": ["a", "b", "b", "a"]})
>>> df.groupby("A").std()
          B    C
A
1  0.707107  0.0
2  0.707107  0.0
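
With ddof=0 the divisor becomes N, so the population standard deviation is computed instead of the default sample standard deviation. A minimal sketch on the same frame, assuming the installed version accepts ddof=0 and returns the groups in sorted key order:

>>> df.groupby("A").std(ddof=0)
     B    C
A
1  0.5  0.0
2  0.5  0.0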
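
Missing values are excluded before the deviation is computed. A minimal sketch with a hypothetical frame df2 containing a NaN entry; the output shown assumes NaN is skipped as documented and that the groups come back in sorted key order:

>>> import numpy as np
>>> df2 = ps.DataFrame({"A": [1, 1, 1, 2, 2], "X": [1.0, 2.0, np.nan, 5.0, 5.0]})
>>> df2.groupby("A").std()
          X
A
1  0.707107
2  0.000000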