5 Matching Annotations
  1. Jun 2023
  2. Nov 2022
    1. In recent years, the neural network based topic models have been proposed for many NLP tasks, such as information retrieval [11], aspect extraction [12] and sentiment classification [13]. The basic idea is to construct a neural network which aims to approximate the topic-word distribution in probabilistic topic models. Additional constraints, such as incorporating prior distribution [14], enforcing diversity among topics [15] or encouraging topic sparsity [16], have been explored for neural topic model learning and proved effective.

      Neural topic models are often trained to mimic the behaviours of probabilistic topic models (I've sketched the basic idea after the list) - I should come back and look at some of these works:

      • R. Das, M. Zaheer, and C. Dyer, “Gaussian LDA for topic models with word embeddings,”
      • P. Xie, J. Zhu, and E. P. Xing, “Diversity-promoting bayesian learning of latent variable models,”
      • M. Peng, Q. Xie, H. Wang, Y. Zhang, X. Zhang, J. Huang, and G. Tian, “Neural sparse topical coding,”
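
      To pin down what "approximate the topic-word distribution" means in practice, here is a minimal PyTorch sketch of a VAE-style neural topic model decoder. The names (`NTMDecoder`, the sizes, the toy data) are my own, not from the paper:

      ```python
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class NTMDecoder(nn.Module):
          """Minimal neural topic model decoder (hypothetical names).

          A latent document code z is softmaxed into topic proportions
          theta over K topics; a single linear layer beta then maps theta
          to a distribution over the vocabulary. beta plays the role of
          the K x V topic-word matrix in probabilistic topic models such
          as LDA, and the constraints the paper lists (priors, topic
          diversity, sparsity) would be regularisers attached to it.
          """

          def __init__(self, num_topics: int, vocab_size: int):
              super().__init__()
              self.beta = nn.Linear(num_topics, vocab_size, bias=False)

          def forward(self, z: torch.Tensor) -> torch.Tensor:
              theta = F.softmax(z, dim=-1)                    # document-topic proportions
              return F.log_softmax(self.beta(theta), dim=-1)  # log word probabilities

      # Toy usage: the reconstruction term of the ELBO on bag-of-words counts.
      decoder = NTMDecoder(num_topics=50, vocab_size=2000)
      z = torch.randn(8, 50)                                  # a batch of latent codes
      bow = torch.randint(0, 3, (8, 2000)).float()            # toy word counts
      recon_loss = -(bow * decoder(z)).sum(dim=-1).mean()
      ```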
    2. We argue that mutual learning would benefit sentiment classification since it enriches the information required for the training of the sentiment classifier (e.g., when the word “incredible” is used to describe “acting” or “movie”, the polarity should be positive)

      By training a topic model whose weights are "similar" to those of the word vector model, the sentiment task can also be improved (as per the example, "incredible" should be positive when used to describe "acting" or "movie" in this context).
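
      A rough sketch of what such a joint objective could look like, under my reading. The alignment term, names and weighting are my own assumptions, not necessarily the paper's actual formulation:

      ```python
      import torch
      import torch.nn.functional as F

      def mutual_learning_loss(logits, labels, topic_word, word_emb, lam=0.1):
          """Hypothetical joint objective (my own construction).

          logits:     sentiment classifier outputs, shape (B, num_classes)
          labels:     gold sentiment labels, shape (B,)
          topic_word: topic-word matrix from the topic model, shape (K, V)
          word_emb:   the classifier's word embeddings, shape (V, D)
          """
          cls_loss = F.cross_entropy(logits, labels)

          # Word-word similarity as the topic model sees it (shared topic mass):
          tw = F.normalize(topic_word.t(), dim=-1)            # (V, K)
          topic_sim = tw @ tw.t()                             # (V, V)
          # ...and as the classifier's embedding space sees it:
          we = F.normalize(word_emb, dim=-1)                  # (V, D)
          emb_sim = we @ we.t()                               # (V, V)

          # Pulling the two views together lets gradients flow into BOTH
          # models ("mutual"): words sharing topics (e.g. "incredible",
          # "acting", "movie") end up with similar embeddings, which is
          # extra signal for the sentiment classifier.
          align_loss = F.mse_loss(emb_sim, topic_sim)
          return cls_loss + lam * align_loss

      # Toy call with a small vocabulary (the V x V matrices are dense).
      B, C, K, V, D = 8, 2, 50, 500, 100
      loss = mutual_learning_loss(
          torch.randn(B, C), torch.randint(0, C, (B,)),
          torch.randn(K, V), torch.randn(V, D),
      )
      ```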

    3. However, such a framework is not applicable here since the learned latent topic representations in topic models can not be shared directly with word or sentence representations learned in classifiers, due to their different inherent meanings

      Latent topic representations and word/sentence vectors carry different inherent meanings (a distribution over topics vs. a point in embedding space), so they cannot simply be shared or tied directly between the two models

  3. May 2019