
class DistributedLDAModel extends LDAModel

Distributed LDA model. This model stores the inferred topics, the full training dataset, and the topic distributions.

Annotations
@Since( "1.3.0" )
Linear Supertypes
LDAModel, Saveable, AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  6. def describeTopics(maxTermsPerTopic: Int): Array[(Array[Int], Array[Double])]

    Return the topics described by weighted terms.

    maxTermsPerTopic

    Maximum number of terms to collect for each topic.

    returns

    Array over topics. Each topic is represented as a pair of matching arrays: (term indices, term weights in topic). Each topic's terms are sorted in order of decreasing weight.

    Definition Classes
    DistributedLDAModel → LDAModel
    Annotations
    @Since( "1.3.0" )
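    The (term indices, term weights) pairing can be sketched in plain Scala, with no Spark required; the weights below are made up for illustration and `topTerms` is a hypothetical helper, not part of the API:

    ```scala
    // Sketch of the shape of one describeTopics entry, assuming a toy
    // per-topic term-weight array rather than a trained model.
    object DescribeTopicsShape {
      // For one topic, describeTopics yields (term indices, term weights),
      // sorted by decreasing weight and truncated to maxTermsPerTopic.
      def topTerms(weights: Array[Double], maxTermsPerTopic: Int): (Array[Int], Array[Double]) = {
        val sorted = weights.zipWithIndex.sortBy(-_._1).take(maxTermsPerTopic)
        (sorted.map(_._2), sorted.map(_._1))
      }
    }
    ```

    With hypothetical weights `Array(0.1, 0.4, 0.2, 0.3)` and `maxTermsPerTopic = 2`, this yields term indices `(1, 3)` and weights `(0.4, 0.3)` — the two heaviest terms in decreasing order, matching the documented return contract.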
  7. def describeTopics(): Array[(Array[Int], Array[Double])]

    Return the topics described by weighted terms.

    WARNING: If vocabSize and k are large, this can return a large object!

    returns

    Array over topics. Each topic is represented as a pair of matching arrays: (term indices, term weights in topic). Each topic's terms are sorted in order of decreasing weight.

    Definition Classes
    LDAModel
    Annotations
    @Since( "1.3.0" )
  8. val docConcentration: Vector

    Concentration parameter (commonly named "alpha") for the prior placed on documents' distributions over topics ("theta").

    This is the parameter to a Dirichlet distribution.

    Definition Classes
    DistributedLDAModel → LDAModel
    Annotations
    @Since( "1.5.0" )
  9. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  10. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  11. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  12. val gammaShape: Double

    Shape parameter for random initialization of variational parameter gamma. Used for variational inference for perplexity and other test-time computations.

    Attributes
    protected[clustering]
    Definition Classes
    DistributedLDAModel → LDAModel
  13. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  14. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  15. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  16. def javaTopTopicsPerDocument(k: Int): JavaRDD[(Long, Array[Int], Array[Double])]

    Java-friendly version of topTopicsPerDocument

    Annotations
    @Since( "1.5.0" )
  17. lazy val javaTopicAssignments: JavaRDD[(Long, Array[Int], Array[Int])]

    Java-friendly version of topicAssignments

    Annotations
    @Since( "1.5.0" )
  18. def javaTopicDistributions: JavaPairRDD[Long, Vector]

    Java-friendly version of topicDistributions

    Annotations
    @Since( "1.4.1" )
  19. val k: Int

    Number of topics

    Definition Classes
    DistributedLDAModel → LDAModel
    Annotations
    @Since( "1.3.0" )
  20. lazy val logLikelihood: Double

    Log likelihood of the observed tokens in the training set, given the current parameter estimates: log P(docs | topics, topic distributions for docs, alpha, eta)

    Note:

    • This excludes the prior; for that, use logPrior.
    • Even with logPrior, this is NOT the same as the data log likelihood given the hyperparameters.
    Annotations
    @Since( "1.3.0" )
  21. lazy val logPrior: Double

    Log probability of the current parameter estimate: log P(topics, topic distributions for docs | alpha, eta)

    Annotations
    @Since( "1.3.0" )
  22. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  23. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  24. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  25. def save(sc: SparkContext, path: String): Unit

    Save this model to the given path.

    This saves:

    • human-readable (JSON) model metadata to path/metadata/
    • Parquet formatted data to path/data/

    The model may be loaded using Loader.load.

    sc

    Spark context used to save model data.

    path

    Path specifying the directory in which to save this model. If the directory already exists, this method throws an exception.

    Definition Classes
    DistributedLDAModel → Saveable
    Annotations
    @Since( "1.5.0" )
  26. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  27. def toLocal: LocalLDAModel

    Convert model to a local model. The local model stores the inferred topics but not the topic distributions for training documents.

    Annotations
    @Since( "1.3.0" )
  28. def toString(): String
    Definition Classes
    AnyRef → Any
  29. def topDocumentsPerTopic(maxDocumentsPerTopic: Int): Array[(Array[Long], Array[Double])]

    Return the top documents for each topic.

    maxDocumentsPerTopic

    Maximum number of documents to collect for each topic.

    returns

    Array over topics. Each element is a pair of matching arrays: (IDs for the documents, weights of the topic in these documents). For each topic, documents are sorted in order of decreasing topic weight.

    Annotations
    @Since( "1.5.0" )
  30. def topTopicsPerDocument(k: Int): RDD[(Long, Array[Int], Array[Double])]

    For each document, return the top k weighted topics for that document and their weights.

    returns

    RDD of (doc ID, topic indices, topic weights)

    Annotations
    @Since( "1.5.0" )
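    The per-document record can be sketched in plain Scala, with no Spark required; the doc ID and topic distribution below are made up, and `topK` is a hypothetical helper, not part of the API:

    ```scala
    // Sketch of one element of the RDD returned by topTopicsPerDocument,
    // assuming a toy topic distribution ("theta") for a single document.
    object TopTopicsShape {
      // Mirrors the (Long, Array[Int], Array[Double]) element type:
      // (doc ID, top-k topic indices, their weights), weights descending.
      def topK(docId: Long, theta: Array[Double], k: Int): (Long, Array[Int], Array[Double]) = {
        val sorted = theta.zipWithIndex.sortBy(-_._1).take(k)
        (docId, sorted.map(_._2), sorted.map(_._1))
      }
    }
    ```

    For a hypothetical document 0 with theta `Array(0.2, 0.7, 0.1)` and `k = 2`, the record is `(0L, Array(1, 0), Array(0.7, 0.2))`: the two most heavily weighted topics, in decreasing order of weight.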
  31. lazy val topicAssignments: RDD[(Long, Array[Int], Array[Int])]

    Return the top topic for each (doc, term) pair. I.e., for each document, what is the most likely topic generating each term?

    returns

    RDD of (doc ID, assignment of top topic index for each term), where the assignment is specified via a pair of zippable arrays (term indices, topic indices). Note that terms will be omitted if not present in the document.

    Annotations
    @Since( "1.5.0" )
  32. val topicConcentration: Double

    Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics' distributions over terms.

    This is the parameter to a symmetric Dirichlet distribution.

    Definition Classes
    DistributedLDAModel → LDAModel
    Annotations
    @Since( "1.5.0" )
    Note

    The topics' distributions over terms are called "beta" in the original LDA paper by Blei et al., but are called "phi" in many later papers such as Asuncion et al., 2009.

  33. def topicDistributions: RDD[(Long, Vector)]

    For each document in the training set, return the distribution over topics for that document ("theta_doc").

    returns

    RDD of (document ID, topic distribution) pairs

    Annotations
    @Since( "1.3.0" )
  34. lazy val topicsMatrix: Matrix

    Inferred topics, where each topic is represented by a distribution over terms. This is a matrix of size vocabSize x k, where each column is a topic. No guarantees are given about the ordering of the topics.

    WARNING: This matrix is collected from an RDD. Beware memory usage when vocabSize, k are large.

    Definition Classes
    DistributedLDAModel → LDAModel
    Annotations
    @Since( "1.3.0" )
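    The vocabSize x k layout can be sketched in plain Scala, with no Spark required; the values below are made up, and dense column-major storage is assumed for illustration (as in MLlib's `DenseMatrix`):

    ```scala
    // Sketch of indexing a topics matrix of size vocabSize x k,
    // assuming column-major storage: column j is topic j's
    // distribution over terms.
    object TopicsMatrixShape {
      val vocabSize = 3 // toy vocabulary of 3 terms
      val k = 2         // toy model with 2 topics
      val values: Array[Double] = Array(
        0.5, 0.3, 0.2,  // column 0: topic 0 over terms 0..2
        0.1, 0.1, 0.8)  // column 1: topic 1 over terms 0..2

      // Weight of term i in topic j, i.e. topicsMatrix(i, j).
      def weight(i: Int, j: Int): Double = values(j * vocabSize + i)
    }
    ```

    For example, `weight(2, 1)` reads entry (term 2, topic 1), here 0.8. Because each column is one topic, a full column sums to 1 for a proper topic distribution; the per-topic warning above applies because all vocabSize x k entries are collected to the driver.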
  35. val vocabSize: Int

    Vocabulary size (number of unique terms in the vocabulary)

    Definition Classes
    DistributedLDAModel → LDAModel
    Annotations
    @Since( "1.3.0" )
  36. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  37. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  38. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
