- Mar 2019
-
arxiv.org
-
BERT for Joint Intent Classification and Slot Filling
-
-
arxiv.org
-
For joint modeling of intent detection and slot filling, we add an additional decoder for intent detection (or intent classification) task that shares the same encoder with slot filling decoder.
To jointly model intent and slot filling, this paper adds an extra decoder for intent detection.
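A minimal sketch of this shared-encoder, two-decoder layout; the small BiLSTM encoder, layer names, and sizes here are my own illustrative assumptions (the paper itself uses a BERT encoder):

```python
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    """Toy joint model: one shared encoder, two task heads.

    Illustrative sketch only; the paper uses BERT as the encoder,
    while this uses a small BiLSTM so the shape of the idea is visible.
    """
    def __init__(self, vocab_size, num_intents, num_slots, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Shared encoder for both tasks
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        # Intent head: one label per utterance
        self.intent_head = nn.Linear(2 * dim, num_intents)
        # Slot head: one label per token
        self.slot_head = nn.Linear(2 * dim, num_slots)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))       # (B, T, 2*dim)
        intent_logits = self.intent_head(states.mean(dim=1))  # pooled, per utterance
        slot_logits = self.slot_head(states)                  # per token
        return intent_logits, slot_logits
```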
-
The attention mechanism later introduced in [12] enables the encoder-decoder model to learn a soft alignment and to decode at the same time.
The attention-RNN algorithm used in this paper.
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
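A minimal sketch of Bahdanau-style additive attention, score(s, h) = v^T tanh(W s + U h), which produces the soft alignment weights mentioned above; module names and dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Bahdanau-style additive attention: score(s, h) = v^T tanh(W s + U h)."""
    def __init__(self, dec_dim, enc_dim, attn_dim):
        super().__init__()
        self.W = nn.Linear(dec_dim, attn_dim, bias=False)
        self.U = nn.Linear(enc_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, dec_state, enc_states):
        # dec_state: (B, dec_dim); enc_states: (B, T, enc_dim)
        scores = self.v(torch.tanh(
            self.W(dec_state).unsqueeze(1) + self.U(enc_states)
        )).squeeze(-1)                           # (B, T)
        weights = torch.softmax(scores, dim=-1)  # soft alignment over inputs
        context = (weights.unsqueeze(-1) * enc_states).sum(dim=1)  # (B, enc_dim)
        return context, weights
```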
-
-
gitee.com
-
Dialogue State Tracking
Tracking the dialogue state is central to keeping a dialog system robust. The main goal is to predict the user's goal at each turn of the dialogue. The classic state structure is usually called a slot-filling or semantic frame (a toy sketch of such a frame follows the references below).
Traditional approach with hand-crafted rules: D. Goddeau, H. Meng, J. Polifroni, S. Seneff, and S. Busayapongchai. A form-based dialogue manager for spoken language applications. In Fourth International Conference on Spoken Language Processing (ICSLP 96), volume 2, pages 701–704. IEEE, 1996.
Rule-based methods are prone to common errors, and many of the results are not what is desired. J. D. Williams. Web-style ranking and SLU combination for dialog state tracking. In SIGDIAL Conference, pages 282–291, 2014.
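A toy sketch of what such a frame-style state can look like; the restaurant domain, slot names, and update rule are invented for illustration, not taken from the cited papers:

```python
# Toy frame-style dialogue state for a restaurant domain.
# Domain, slot names, values, and the update rule are invented for illustration.
state = {
    "intent": "find_restaurant",
    "slots": {
        "cuisine": "italian",
        "area": "downtown",
        "price_range": None,  # not yet provided by the user
    },
}

def update_state(state, turn_slots):
    """After each user turn, overwrite slots with newly observed values."""
    for slot, value in turn_slots.items():
        state["slots"][slot] = value
    return state

# Turn 2: the user says "something cheap, please"
state = update_state(state, {"price_range": "cheap"})
```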
-
Slot filling
Slot filling is mostly viewed as a sequence labeling problem: every word in the sentence is tagged with a semantic label. The input is a sentence made up of words, and the output is the slot/concept ID corresponding to each word.
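For concreteness, one such training example in the common BIO labeling convention (an ATIS-style illustration, not drawn from the papers cited below):

```python
# One slot-filling training example as sequence labeling (BIO tags).
# The sentence and labels are an ATIS-style illustration.
words = ["show", "flights", "from", "boston",    "to", "new",     "york",    "today"]
slots = ["O",    "O",       "O",    "B-fromloc", "O",  "B-toloc", "I-toloc", "B-date"]
assert len(words) == len(slots)  # exactly one slot/concept ID per word
```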
DBN-style approaches:
A. Deoras and R. Sarikaya. Deep belief network based semantic taggers for spoken language understanding.
L. Deng, G. Tur, X. He, and D. Hakkani-Tur. Use of kernel deep convex networks and end-to-end learning for spoken language understanding.
RNN:
- G. Mesnil, X. He, L. Deng, and Y. Bengio. Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding. In Interspeech, 2013.
- K. Yao, G. Zweig, M. Y. Hwang, Y. Shi, and D. Yu. Recurrent neural networks for language understanding. In Interspeech, 2013.
- R. Sarikaya, G. E. Hinton, and B. Ramabhadran. Deep belief nets for natural language call-routing.
- K. Yao, B. Peng, Y. Zhang, D. Yu, G. Zweig, and Y. Shi. Spoken language understanding using long short-term memory neural networks. In IEEE Spoken Language Technology Workshop (SLT), pages 189–194, 2014.
-
- Feb 2019
-
www.iro.umontreal.ca
-
For the slot filling task, the input is the sentence consisting of a sequence of words, L, and the output is a sequence of slot/concept IDs, S, one for each word. In the statistical SLU systems, the task is often formalized as a pattern recognition problem: Given the word sequence L, the goal of SLU is to find the semantic representation of the slot sequence S that has the maximum a posteriori probability P(S|L).
For the slot filling task, the input is a sentence consisting of a sequence of words, and the output is the slot/concept ID for each word. In a statistical SLU system, the task can be framed as: given the word sequence L, the goal of SLU is to find the slot sequence S that maximizes the posterior probability P(S|L).
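In symbols, this is the standard maximum a posteriori (MAP) decision rule:

```latex
\hat{S} = \operatorname*{arg\,max}_{S} \, P(S \mid L)
```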
-
bi-directional Jordan-type network that takes into account both past and future dependencies among slots works best
A bi-directional Jordan-type network works best for slots.
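A toy sketch of the Jordan-type recurrence, in which the previous output (rather than the hidden state) is fed back into the next step; the class, names, and sizes are my own illustrative assumptions, not Mesnil et al.'s exact architecture:

```python
import torch
import torch.nn as nn

class JordanRNNTagger(nn.Module):
    """Toy Jordan-type tagger: the recurrence feeds back the previous
    output distribution rather than the hidden state. Illustrative only.
    """
    def __init__(self, emb_dim, hid_dim, num_slots):
        super().__init__()
        self.in_proj = nn.Linear(emb_dim, hid_dim)
        self.out_feedback = nn.Linear(num_slots, hid_dim)
        self.out_proj = nn.Linear(hid_dim, num_slots)
        self.num_slots = num_slots

    def forward(self, x, reverse=False):
        # x: (T, emb_dim), one sentence of word embeddings
        steps = range(x.size(0) - 1, -1, -1) if reverse else range(x.size(0))
        y_prev = x.new_zeros(self.num_slots)
        outputs = [None] * x.size(0)
        for t in steps:
            h = torch.tanh(self.in_proj(x[t]) + self.out_feedback(y_prev))
            y_prev = torch.softmax(self.out_proj(h), dim=-1)
            outputs[t] = y_prev
        return torch.stack(outputs)  # (T, num_slots)

# "Bi-directional": run once forward and once backward, then combine the
# per-word predictions, so each label sees both past and future context:
#   tagger = JordanRNNTagger(emb_dim=50, hid_dim=100, num_slots=20)
#   y = (tagger(x) + tagger(x, reverse=True)) / 2
```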
-