class LocalLDAModel extends LDAModel with Serializable
Local LDA model. This model stores only the inferred topics.
- Annotations
- @Since( "1.3.0" )
- Linear Supertypes
- Serializable
- Serializable
- LDAModel
- Saveable
- AnyRef
- Any
Value Members
-
final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final def ##(): Int
- Definition Classes
- AnyRef → Any
-
final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final def asInstanceOf[T0]: T0
- Definition Classes
- Any
-
def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
-
def describeTopics(maxTermsPerTopic: Int): Array[(Array[Int], Array[Double])]
Return the topics described by weighted terms. A usage sketch follows this entry.
- maxTermsPerTopic
Maximum number of terms to collect for each topic.
- returns
Array over topics. Each topic is represented as a pair of matching arrays: (term indices, term weights in topic). Each topic's terms are sorted in order of decreasing weight.
- Definition Classes
- LocalLDAModel → LDAModel
- Annotations
- @Since( "1.3.0" )
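For illustration, a minimal sketch of printing each topic's top terms; the trained `model` and the index-to-term array `vocab` are hypothetical names assumed to come from preprocessing:

```scala
import org.apache.spark.mllib.clustering.LocalLDAModel

// Print the top `n` weighted terms of every topic. `vocab` is a
// hypothetical index-to-term lookup built while creating the vocabulary.
def printTopics(model: LocalLDAModel, vocab: Array[String], n: Int): Unit = {
  model.describeTopics(maxTermsPerTopic = n).zipWithIndex.foreach {
    case ((termIndices, termWeights), topic) =>
      val terms = termIndices.zip(termWeights)
        .map { case (i, w) => f"${vocab(i)} ($w%.3f)" }
        .mkString(", ")
      println(s"Topic $topic: $terms")
  }
}
```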
-
def describeTopics(): Array[(Array[Int], Array[Double])]
Return the topics described by weighted terms.
WARNING: If vocabSize and k are large, this can return a large object!
- returns
Array over topics. Each topic is represented as a pair of matching arrays: (term indices, term weights in topic). Each topic's terms are sorted in order of decreasing weight.
- Definition Classes
- LDAModel
- Annotations
- @Since( "1.3.0" )
-
val docConcentration: Vector
Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").
This is the parameter to a Dirichlet distribution.
- Definition Classes
- LocalLDAModel → LDAModel
- Annotations
- @Since( "1.5.0" )
-
final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
def equals(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] )
-
val gammaShape: Double
Shape parameter for random initialization of the variational parameter gamma. Used in variational inference for perplexity and other test-time computations.
- Attributes
- protected[spark]
- Definition Classes
- LocalLDAModel → LDAModel
-
final def getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
def getSeed: Long
Random seed for cluster initialization.
- Annotations
- @Since( "2.4.0" )
-
def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
-
def k: Int
Number of topics
- Definition Classes
- LocalLDAModel → LDAModel
- Annotations
- @Since( "1.3.0" )
-
def logLikelihood(documents: JavaPairRDD[Long, Vector]): Double
Java-friendly version of logLikelihood
- Annotations
- @Since( "1.5.0" )
-
def logLikelihood(documents: RDD[(Long, Vector)]): Double
Calculates a lower bound on the log likelihood of the entire corpus.
See Equation (16) in the original Online LDA paper. A usage sketch follows this entry.
- documents
test corpus to use for calculating log likelihood
- returns
variational lower bound on the log likelihood of the entire corpus
- Annotations
- @Since( "1.5.0" )
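A minimal sketch, assuming only a SparkContext, that trains with the online optimizer (which produces a LocalLDAModel) and scores the toy corpus; all names and data here are illustrative:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.clustering.{LDA, LocalLDAModel}
import org.apache.spark.mllib.linalg.Vectors

// Train with the online optimizer (which yields a LocalLDAModel) and
// report the variational lower bound on the corpus's log likelihood.
def trainAndScore(sc: SparkContext): Double = {
  val docs = sc.parallelize(Seq(            // (document ID, term counts)
    (0L, Vectors.dense(1.0, 2.0, 0.0, 3.0)),
    (1L, Vectors.dense(0.0, 1.0, 4.0, 0.0))
  ))
  val model = new LDA().setK(2).setOptimizer("online").run(docs)
    .asInstanceOf[LocalLDAModel]
  model.logLikelihood(docs)
}
```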
-
def logPerplexity(documents: JavaPairRDD[Long, Vector]): Double
Java-friendly version of logPerplexity
- Annotations
- @Since( "1.5.0" )
-
def logPerplexity(documents: RDD[(Long, Vector)]): Double
Calculates an upper bound on perplexity. (Lower is better.) See Equation (16) in the original Online LDA paper. An example follows this entry.
- documents
test corpus to use for calculating perplexity
- returns
Variational upper bound on log perplexity per token.
- Annotations
- @Since( "1.5.0" )
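For example, a sketch (hypothetical names) of model selection by the perplexity bound on held-out documents:

```scala
import org.apache.spark.mllib.clustering.LocalLDAModel
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// Keep the candidate with the lowest per-token perplexity bound on a
// held-out corpus; lower is better under this metric.
def pickByPerplexity(candidates: Seq[LocalLDAModel],
                     heldOut: RDD[(Long, Vector)]): LocalLDAModel =
  candidates.minBy(_.logPerplexity(heldOut))
```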
-
final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
def save(sc: SparkContext, path: String): Unit
Save this model to the given path.
This saves:
- human-readable (JSON) model metadata to path/metadata/
- Parquet formatted data to path/data/
The model may be loaded using Loader.load; see the example below.
- sc
Spark context used to save model data.
- path
Path specifying the directory in which to save this model. If the directory already exists, this method throws an exception.
- Definition Classes
- LocalLDAModel → Saveable
- Annotations
- @Since( "1.5.0" )
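A minimal save-and-reload sketch; the path is hypothetical, and the target directory must not already exist:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.clustering.LocalLDAModel

// Persist the model, then read it back with the companion object's `load`.
def roundTrip(sc: SparkContext, model: LocalLDAModel): LocalLDAModel = {
  val path = "/tmp/localLDAModel"   // hypothetical output directory
  model.save(sc, path)
  LocalLDAModel.load(sc, path)
}
```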
-
def setSeed(seed: Long): LocalLDAModel.this.type
Set the random seed for cluster initialization.
- Annotations
- @Since( "2.4.0" )
-
final def synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
-
def toString(): String
- Definition Classes
- AnyRef → Any
-
val topicConcentration: Double
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.
This is the parameter to a symmetric Dirichlet distribution.
- Definition Classes
- LocalLDAModel → LDAModel
- Annotations
- @Since( "1.5.0" )
- Note
The topics' distributions over terms are called "beta" in the original LDA paper by Blei et al., but are called "phi" in many later papers such as Asuncion et al., 2009.
-
def topicDistribution(document: Vector): Vector
Predicts the topic mixture distribution for a document (often called "theta" in the literature). Returns a vector of zeros for an empty document.
Note: this method is intended for quick queries on a single document. For batches of documents, use topicDistributions() instead to avoid per-call overhead. A sketch follows this entry.
- document
document to predict topic mixture distributions for
- returns
topic mixture distribution for the document
- Annotations
- @Since( "2.0.0" )
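A sketch of a single-document query; the term indices and counts are hypothetical and assume the model's vocabSize is at least 4:

```scala
import org.apache.spark.mllib.clustering.LocalLDAModel
import org.apache.spark.mllib.linalg.{Vector, Vectors}

// Infer the topic mixture ("theta") of one document without an RDD.
// The sparse vector's size must equal the model's vocabSize; indices are
// term IDs and values are term counts (hypothetical, needs vocabSize >= 4).
def singleDocTheta(model: LocalLDAModel): Vector = {
  val doc = Vectors.sparse(model.vocabSize, Array(0, 3), Array(2.0, 1.0))
  model.topicDistribution(doc)  // length-k vector of topic proportions
}
```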
-
def topicDistributions(documents: JavaPairRDD[Long, Vector]): JavaPairRDD[Long, Vector]
Java-friendly version of topicDistributions
- Annotations
- @Since( "1.4.1" )
-
def topicDistributions(documents: RDD[(Long, Vector)]): RDD[(Long, Vector)]
Predicts the topic mixture distribution for each document (often called "theta" in the literature). Returns a vector of zeros for an empty document.
This uses a variational approximation following Hoffman et al. (2010), in which the approximate posterior is called "gamma"; technically, this method returns that approximation for each document. See the example below.
- documents
documents to predict topic mixture distributions for
- returns
An RDD of (document ID, topic mixture distribution for document)
- Annotations
- @Since( "1.3.0" )
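As a sketch (hypothetical names), batch inference followed by a hard assignment of each document to its most probable topic:

```scala
import org.apache.spark.mllib.clustering.LocalLDAModel
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// One distributed pass: infer each document's topic mixture, then keep
// the index of its most probable topic as a hard assignment.
def hardAssignments(model: LocalLDAModel,
                    docs: RDD[(Long, Vector)]): RDD[(Long, Int)] =
  model.topicDistributions(docs).mapValues(_.argmax)
```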
-
val topics: Matrix
- Annotations
- @Since( "1.3.0" )
-
def topicsMatrix: Matrix
Inferred topics, where each topic is represented by a distribution over terms. This is a matrix of size vocabSize x k, where each column is a topic. No guarantees are given about the ordering of the topics. See the example below.
- Definition Classes
- LocalLDAModel → LDAModel
- Annotations
- @Since( "1.3.0" )
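For example, a small sketch reading one entry of the term-topic matrix:

```scala
import org.apache.spark.mllib.clustering.LocalLDAModel

// Entry (term, topic) of the vocabSize x k matrix: the weight of `term`
// in `topic`'s distribution over terms. Columns correspond to topics.
def termWeight(model: LocalLDAModel, term: Int, topic: Int): Double =
  model.topicsMatrix(term, topic)
```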
-
def vocabSize: Int
Vocabulary size (the number of terms in the vocabulary)
- Definition Classes
- LocalLDAModel → LDAModel
- Annotations
- @Since( "1.3.0" )
-
final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()