pyspark.pandas.Series.sort_index

Series.sort_index(axis: Union[int, str] = 0, level: Union[int, List[int], None] = None, ascending: bool = True, inplace: bool = False, kind: str = None, na_position: str = 'last', ignore_index: bool = False) → Optional[pyspark.pandas.series.Series]

Sort object by labels (along an axis).

Parameters
axis : index, columns to direct sorting. Currently, only axis = 0 is supported.
level : int or level name or list of ints or list of level names

if not None, sort on values in specified index level(s); levels may also be referred to by name (see the last example below)

ascending : boolean, default True

Sort ascending vs. descending

inplace : bool, default False

if True, perform operation in-place

kind : str, default None

pandas-on-Spark does not allow specifying the sorting algorithm at the moment; the default is None.

na_position : {‘first’, ‘last’}, default ‘last’

‘first’ puts NaNs at the beginning, ‘last’ puts NaNs at the end. Not implemented for MultiIndex.

ignore_index : bool, default False

If True, the resulting axis will be labeled 0, 1, …, n - 1.

Returns
sorted_obj : Series

Examples

>>> import numpy as np
>>> import pyspark.pandas as ps
>>> s = ps.Series([2, 1, np.nan], index=['b', 'a', np.nan])
>>> s.sort_index()  
a       1.0
b       2.0
None    NaN
dtype: float64
>>> s.sort_index(ignore_index=True)
0    1.0
1    2.0
2    NaN
dtype: float64
>>> s.sort_index(ascending=False)  
b       2.0
a       1.0
None    NaN
dtype: float64
>>> s.sort_index(na_position='first')  
None    NaN
a       1.0
b       2.0
dtype: float64
>>> s.sort_index(inplace=True)
>>> s  
a       1.0
b       2.0
None    NaN
dtype: float64

Multi-index series.

>>> s = ps.Series(range(4), index=[['b', 'b', 'a', 'a'], [1, 0, 1, 0]], name='0')
>>> s.sort_index()
a  0    3
   1    2
b  0    1
   1    0
Name: 0, dtype: int64
>>> s.sort_index(level=1)  
a  0    3
b  0    1
a  1    2
b  1    0
Name: 0, dtype: int64
>>> s.sort_index(level=[1, 0])
a  0    3
b  0    1
a  1    2
b  1    0
Name: 0, dtype: int64