class RankingMetrics[T] extends Logging with Serializable
Evaluator for ranking algorithms.
Java users should use RankingMetrics$.of to create a RankingMetrics instance.
- Annotations
- @Since( "1.2.0" )
Instance Constructors
-
new
RankingMetrics(predictionAndLabels: RDD[_ <: Product])(implicit arg0: ClassTag[T])
- predictionAndLabels
an RDD of (predicted ranking, ground truth set) pairs or (predicted ranking, ground truth set, relevance values of the ground truth set) triples. Since 3.4.0, it supports NDCG evaluation with relevance values.
- Annotations
- @Since( "1.2.0" )
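As a usage sketch (not part of this API page; it assumes a live SparkContext named sc and purely illustrative data), a RankingMetrics instance can be built from per-query (predicted ranking, ground truth set) pairs and then queried for the metrics listed below:

import org.apache.spark.mllib.evaluation.RankingMetrics
import org.apache.spark.rdd.RDD

// One element per query: (documents in predicted order, set of relevant documents).
val predictionAndLabels: RDD[(Array[Int], Array[Int])] = sc.parallelize(Seq(
  (Array(1, 6, 2, 7, 8, 3, 9, 10, 4, 5), Array(1, 2, 3, 4, 5)),
  (Array(4, 1, 5, 6, 2, 7, 3, 8, 9, 10), Array(1, 2, 3)),
  (Array(1, 2, 3, 4, 5), Array.empty[Int])   // empty ground truth: zero score plus a log warning
))

val metrics = new RankingMetrics[Int](predictionAndLabels)
metrics.meanAveragePrecision   // MAP over the three queries
metrics.precisionAt(5)         // precision truncated at ranking position 5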
Value Members
-
final
def
!=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
def
##(): Int
- Definition Classes
- AnyRef → Any
-
final
def
==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
def
asInstanceOf[T0]: T0
- Definition Classes
- Any
-
def
clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
-
final
def
eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
def
equals(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def
finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] )
-
final
def
getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
def
hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
def
initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
- Attributes
- protected
- Definition Classes
- Logging
-
def
initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
final
def
isInstanceOf[T0]: Boolean
- Definition Classes
- Any
-
def
isTraceEnabled(): Boolean
- Attributes
- protected
- Definition Classes
- Logging
-
def
log: Logger
- Attributes
- protected
- Definition Classes
- Logging
-
def
logDebug(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logDebug(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logError(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logError(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logInfo(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logInfo(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logName: String
- Attributes
- protected
- Definition Classes
- Logging
-
def
logTrace(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logTrace(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logWarning(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logWarning(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
lazy val
meanAveragePrecision: Double
Returns the mean average precision (MAP) of all the queries. If a query has an empty ground truth set, the average precision will be zero and a log warning is generated.
- Annotations
- @Since( "1.2.0" )
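As a worked illustration (hypothetical data, not from this page), the average precision of a single query can be reproduced locally; meanAveragePrecision is the mean of these per-query values:

// Predicted ranking [d1, d2, d3, d4] with ground truth set {d1, d3}:
// hits at ranks 1 and 3 give AP = (1/1 + 2/3) / 2 ≈ 0.833.
val predicted = Seq("d1", "d2", "d3", "d4")
val groundTruth = Set("d1", "d3")

val (precSum, _) = predicted.zipWithIndex.foldLeft((0.0, 0)) {
  case ((sum, hits), (doc, idx)) if groundTruth.contains(doc) =>
    (sum + (hits + 1).toDouble / (idx + 1), hits + 1)
  case (acc, _) => acc
}
val averagePrecision = precSum / groundTruth.size   // 0.8333...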
-
def
meanAveragePrecisionAt(k: Int): Double
Returns the mean average precision (MAP) at ranking position k of all the queries. If a query has an empty ground truth set, the average precision will be zero and a log warning is generated.
- k
the position to compute the truncated precision, must be positive
- returns
the mean average precision at first k ranking positions
- Annotations
- @Since( "3.0.0" )
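Continuing the usage sketch above (hypothetical), the truncated variant only scores each query's top k predictions:

metrics.meanAveragePrecisionAt(5)   // relevant items ranked below position 5 contribute nothing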
-
def
ndcgAt(k: Int): Double
Compute the average NDCG value of all the queries, truncated at ranking position k. The discounted cumulative gain at position k is computed as sum_{i=1}^{k} (2^{relevance of the i-th item} - 1) / log(i + 1), and the NDCG is obtained by dividing this DCG by the DCG of the ideal ordering of the ground truth set. In the current implementation, relevance is treated as binary if no relevance values are provided.
If a query has an empty ground truth set, zero will be used as the NDCG together with a log warning.
See the following paper for detail:
IR evaluation methods for retrieving highly relevant documents. K. Jarvelin and J. Kekalainen
- k
the position to compute the truncated ndcg, must be positive
- returns
the average ndcg at the first k ranking positions
- Annotations
- @Since( "1.2.0" )
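Continuing the usage sketch above (hypothetical data; the three-element form requires Spark 3.4.0 or later), graded relevance values can be attached to the ground truth set so that ndcgAt uses them instead of binary relevance:

// (predicted ranking, ground truth set, relevance of each ground truth item)
val withRelevance: RDD[(Array[String], Array[String], Array[Double])] =
  sc.parallelize(Seq(
    (Array("d1", "d2", "d3", "d4"), Array("d1", "d3"), Array(3.0, 1.0))
  ))

val gradedMetrics = new RankingMetrics[String](withRelevance)
gradedMetrics.ndcgAt(3)   // DCG@3 normalized by the ideal DCG@3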
-
final
def
ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
final
def
notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
final
def
notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
def
precisionAt(k: Int): Double
Compute the average precision of all the queries, truncated at ranking position k.
If, for a query, the ranking algorithm returns n results with n less than k, the precision value is computed as #(relevant items retrieved) / k. This formula also applies when the size of the ground truth set is less than k.
If a query has an empty ground truth set, zero will be used as precision together with a log warning.
See the following paper for detail:
IR evaluation methods for retrieving highly relevant documents. K. Jarvelin and J. Kekalainen
- k
the position to compute the truncated precision, must be positive
- returns
the average precision at the first k ranking positions
- Annotations
- @Since( "1.2.0" )
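For example (illustrative numbers): if k = 5, the ranking algorithm returns only the three items [a, b, c], and the ground truth set is {a, c, x, y}, then the precision at 5 is #(relevant items retrieved) / k = 2 / 5 = 0.4, even though fewer than five items were returned.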
-
def
recallAt(k: Int): Double
Compute the average recall of all the queries, truncated at ranking position k.
If, for a query, the ranking algorithm returns n results, the recall value is computed as #(relevant items retrieved) / #(ground truth set). This formula also applies when the size of the ground truth set is less than k.
If a query has an empty ground truth set, zero will be used as recall together with a log warning.
See the following paper for detail:
IR evaluation methods for retrieving highly relevant documents. K. Jarvelin and J. Kekalainen
- k
the position to compute the truncated recall, must be positive
- returns
the average recall at the first k ranking positions
- Annotations
- @Since( "3.0.0" )
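For example (same illustrative query as above): with returned items [a, b, c] and ground truth set {a, c, x, y}, the recall at 5 is #(relevant items retrieved) / #(ground truth set) = 2 / 4 = 0.5.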
-
final
def
synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
-
def
toString(): String
- Definition Classes
- AnyRef → Any
-
final
def
wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()