pyspark.pandas.DataFrame.sum
DataFrame.sum(axis: Union[int, str, None] = None, skipna: bool = True, numeric_only: bool = None, min_count: int = 0) → Union[int, float, bool, str, bytes, decimal.Decimal, datetime.date, datetime.datetime, None, Series]

Return the sum of the values.
Parameters

axis : {index (0), columns (1)}
    Axis for the function to be applied on.

skipna : bool, default True
    Exclude NA/null values when computing the result. Added skipna to exclude NA/null values (see the skipna sketch after the DataFrame examples below).
numeric_only : bool, default None
    Include only float, int, and boolean columns. False is not supported. This parameter is mainly for pandas compatibility. (See the numeric_only sketch at the end of the Examples section.)
- min_countint, default 0
- The required number of valid values to perform the operation. If fewer than
min_count
non-NA values are present the result will be NA.
Returns

sum : scalar for a Series, and a Series for a DataFrame.
Examples
>>> import numpy as np
>>> import pyspark.pandas as ps
>>> df = ps.DataFrame({'a': [1, 2, 3, np.nan], 'b': [0.1, np.nan, 0.3, np.nan]},
...                   columns=['a', 'b'])
On a DataFrame:
>>> df.sum()
a    6.0
b    0.4
dtype: float64
>>> df.sum(axis=1)
0    1.1
1    2.0
2    3.3
3    0.0
dtype: float64
>>> df.sum(min_count=3)
a    6.0
b    NaN
dtype: float64
>>> df.sum(axis=1, min_count=1)
0    1.1
1    2.0
2    3.3
3    NaN
dtype: float64
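The following skipna sketch is not part of the original doctest suite; it assumes the df defined above and pandas-compatible semantics, where any NA in a column makes that column's sum NaN once skipna=False:

>>> df.sum(skipna=False)
a   NaN
b   NaN
dtype: float64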
On a Series:
>>> df['a'].sum()
6.0
>>> df['a'].sum(min_count=3)
6.0

>>> df['b'].sum(min_count=3)
nan
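As a hedged illustration of numeric_only (not from the original examples; mdf is a hypothetical mixed-type frame, and the output assumes non-numeric columns are dropped as in pandas):

>>> mdf = ps.DataFrame({'x': [1, 2], 'y': ['a', 'b']})  # hypothetical mixed-type frame
>>> mdf.sum(numeric_only=True)
x    3
dtype: int64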