
                    Robust Cross-lingual Dialogue System Based on Multi-granularity Adversarial Training

                    Xiang Lu, Zhu Jun-Nan, Zhou Yu, Zong Cheng-Qing

                    Citation: Xiang Lu, Zhu Jun-Nan, Zhou Yu, Zong Cheng-Qing. Robust cross-lingual dialogue system based on multi-granularity adversarial training. Acta Automatica Sinica, 2021, 47(8): 1855-1866. doi: 10.16383/j.aas.c200764

                    Robust Cross-lingual Dialogue System Based on Multi-granularity Adversarial Training

                    doi: 10.16383/j.aas.c200764
                    Funds: Supported by the National Key Research and Development Program of China (2017YFB1002103)
                    More Information
                      Author Bio:

                      XIANG Lu  Ph.D. candidate at the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences. Her research interest covers dialogue systems, text generation, and natural language processing. E-mail: lu.xiang@nlpr.ia.ac.cn

                      ZHU Jun-Nan  Assistant professor at the Institute of Automation, Chinese Academy of Sciences. His research interest covers summarization, text generation, and natural language processing. E-mail: junnan.zhu@nlpr.ia.ac.cn

                      ZHOU Yu  Professor at the Institute of Automation, Chinese Academy of Sciences. Her research interest covers summarization, machine translation, and natural language processing. Corresponding author of this paper. E-mail: yzhou@nlpr.ia.ac.cn

                      ZONG Cheng-Qing  Professor at the Institute of Automation, Chinese Academy of Sciences, adjunct professor at the University of Chinese Academy of Sciences, and a Fellow of the China Computer Federation (CCF) and the Chinese Association for Artificial Intelligence (CAAI). His research interest covers natural language processing and machine translation. E-mail: cqzong@nlpr.ia.ac.cn

                    • Abstract:

                      Cross-lingual dialogue systems are a current hotspot and a difficult problem in international research. When building practical systems, a machine translation engine usually serves as the bridge between the languages of a dialogue. However, translation engines are typically built from training samples whose domain and linguistic characteristics differ considerably from the actual application needs of the dialogue system, which degrades the robustness and response quality of the overall system. Strengthening the robustness of cross-lingual dialogue systems is therefore essential to making them practical. This paper proposes a method for building robust cross-lingual dialogue systems based on multi-granularity adversarial training. The method first constructs multi-granularity noisy data oriented toward machine translation, generating adversarial examples at the word, phrase, and sentence levels. It then performs adversarial training on the noisy data together with the clean data to update the dialogue system's parameters, guiding the system to learn noise-independent hidden representations and ultimately improving the performance of the cross-lingual dialogue system. Experiments on two languages over public dialogue datasets show that the proposed method significantly improves the performance, and especially the robustness, of cross-lingual dialogue systems.
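As a rough picture of what the simpler word-level perturbations look like (rows 1-3 of Table 2: random swap, stop words, synonyms), the sketch below applies each of them to a tokenized utterance. The word lists and the exact operations are illustrative assumptions, not the paper's actual noise-generation procedure, which targets the error behavior of the translation engine:

```python
import random

# Hypothetical word lists for illustration only; the paper derives its
# noise from machine-translation behavior, not from fixed toy lists.
STOP_WORDS = ["the", "a", "of", "to", "in"]
SYNONYMS = {"cheap": "inexpensive", "restaurant": "eatery", "closest": "nearest"}

def random_swap(tokens, rng):
    """Swap two adjacent tokens at a random position."""
    if len(tokens) < 2:
        return tokens[:]
    i = rng.randrange(len(tokens) - 1)
    out = tokens[:]
    out[i], out[i + 1] = out[i + 1], out[i]
    return out

def insert_stop_word(tokens, rng):
    """Insert a random stop word at a random position."""
    out = tokens[:]
    out.insert(rng.randrange(len(out) + 1), rng.choice(STOP_WORDS))
    return out

def substitute_synonyms(tokens):
    """Replace every token that has an entry in the synonym table."""
    return [SYNONYMS.get(t, t) for t in tokens]

rng = random.Random(0)
clean = "please suggest me some cheap restaurant in the south".split()
print(random_swap(clean, rng))
print(insert_stop_word(clean, rng))
print(substitute_synonyms(clean))
```

Each perturbed sentence is paired with the original clean sentence during training, so the dialogue model sees both forms of the same input.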

                    • Fig. 1  Machine translation based cross-lingual dialogue system

                      Fig. 2  TSCP framework

                      Fig. 3  The framework of word-level and phrase-level adversarial example generation

                      Fig. 4  An example of multi-granularity adversarial examples

                      Fig. 5  The structure of adversarial training

                      Fig. 6  Two kinds of tests
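Figure 5 depicts adversarial training over clean and noisy inputs. One generic way to encourage noise-independent hidden representations, sketched below with an assumed toy linear encoder and an assumed L2 consistency penalty (the paper's actual objective may differ), is to penalize the distance between the encoding of a clean sentence and that of its adversarial counterpart:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))  # toy linear "encoder" parameters

def encode(x):
    """Toy stand-in for the dialogue system's encoder."""
    return np.tanh(W @ x)

def adversarial_objective(x_clean, x_noisy, lam=1.0):
    """Task loss on the clean input plus an L2 penalty that pulls the
    noisy encoding toward the clean one. The penalty is a stand-in for
    'noise-independent representations'; the real objective differs."""
    h_clean, h_noisy = encode(x_clean), encode(x_noisy)
    task = float(np.sum(h_clean ** 2))                     # placeholder task loss
    consistency = float(np.sum((h_clean - h_noisy) ** 2))  # representation gap
    return task + lam * consistency

x = rng.normal(size=8)
noise = 0.1 * rng.normal(size=8)
print(adversarial_objective(x, x + noise))
```

Minimizing the second term over clean/noisy pairs is what makes the hidden vectors insensitive to translation noise; the weight lam trades it off against the task loss.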

                      Table 1  Statistics of datasets

                      Dataset      Size                                 Domain
                      CamRest676   Train: 405 / Dev: 135 / Test: 136    Restaurant reservation
                      KVRET        Train: 2425 / Dev: 302 / Test: 302   Calendar scheduling, weather query, navigation

                      Table 2  Experimental results on CamRest676
                      (Match = entity match rate; Succ. F1 = success F1; Comb. = combined score)

                      Adversarial examples    Cross-test: BLEU / Match / Succ. F1 / Comb.    Mono-test: BLEU / Match / Succ. F1 / Comb.
                      0 Baseline              0.1731 / 0.4776 / 0.6485 / 0.7361              0.2001 / 0.9328 / 0.8204 / 1.0767
                      1 Random swap           0.1759 / 0.4851 / 0.6599 / 0.7484              0.2159 / 0.9104 / 0.7639 / 1.0530
                      2 Stop words            0.1692 / 0.5000 / 0.6347 / 0.7365              0.2300 / 0.9179 / 0.7803 / 1.0791
                      3 Synonyms              0.1805 / 0.4403 / 0.7051 / 0.7532              0.2159 / 0.9030 / 0.7824 / 1.0586
                      4 Word-level            0.1941 / 0.4552 / 0.7503 / 0.7969              0.2056 / 0.8955 / 0.8227 / 1.0647
                      5 Phrase-level          0.2017 / 0.4478 / 0.7602 / 0.8057              0.2215 / 0.8507 / 0.7992 / 1.0465
                      6 Sentence-level        0.1937 / 0.4925 / 0.7662 / 0.8231              0.2127 / 0.8731 / 0.8121 / 1.0553
                      7 Multi-granularity     0.2178 / 0.5149 / 0.7925 / 0.8715              0.2343 / 0.8881 / 0.8269 / 1.0918
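The combined scores in Tables 2-4 and 9 are consistent with the common task-oriented dialogue convention of BLEU + 0.5 × (entity match rate + success F1); for example, the baseline Mono-test row of Table 2 gives 0.2001 + 0.5 × (0.9328 + 0.8204) = 1.0767. A one-line helper, assuming that convention:

```python
def combined_score(bleu, match, success_f1):
    """Combined score as used in Tables 2-4 and 9:
    BLEU + 0.5 * (entity match rate + success F1)."""
    return bleu + 0.5 * (match + success_f1)

# Baseline Mono-test row of Table 2:
print(round(combined_score(0.2001, 0.9328, 0.8204), 4))  # 1.0767
```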

                      Table 3  Experimental results on KVRET
                      (Match = entity match rate; Succ. F1 = success F1; Comb. = combined score)

                      Adversarial examples    Cross-test: BLEU / Match / Succ. F1 / Comb.    Mono-test: BLEU / Match / Succ. F1 / Comb.
                      0 Baseline              0.1737 / 0.4218 / 0.7073 / 0.7382              0.2096 / 0.7929 / 0.7948 / 1.0034
                      1 Random swap           0.1751 / 0.4436 / 0.7122 / 0.7531              0.2056 / 0.8400 / 0.8033 / 1.0273
                      2 Stop words            0.1676 / 0.4327 / 0.7183 / 0.7431              0.1961 / 0.8109 / 0.8016 / 1.0023
                      3 Synonyms              0.1680 / 0.4145 / 0.7234 / 0.7370              0.1944 / 0.8109 / 0.7898 / 0.9947
                      4 Word-level            0.1805 / 0.4436 / 0.7696 / 0.7871              0.2095 / 0.8109 / 0.8202 / 1.0251
                      5 Phrase-level          0.1933 / 0.4727 / 0.7603 / 0.8097              0.2219 / 0.8255 / 0.8170 / 1.0431
                      6 Sentence-level        0.1803 / 0.4727 / 0.7843 / 0.8088              0.1965 / 0.8218 / 0.8136 / 1.0142
                      7 Multi-granularity     0.1762 / 0.5236 / 0.7859 / 0.8309              0.1944 / 0.8218 / 0.8235 / 1.0171

                      Table 4  Cross-lingual experimental results from German to English on KVRET
                      (Match = entity match rate; Succ. F1 = success F1; Comb. = combined score)

                      Adversarial examples    Cross-test: BLEU / Match / Succ. F1 / Comb.    Mono-test: BLEU / Match / Succ. F1 / Comb.
                      0 Baseline              0.1641 / 0.3964 / 0.7162 / 0.7203              0.2096 / 0.7927 / 0.7948 / 1.0034
                      1 Word-level            0.1661 / 0.4618 / 0.7596 / 0.7768              0.2011 / 0.8109 / 0.8163 / 1.0147
                      2 Phrase-level          0.1842 / 0.5127 / 0.7922 / 0.8367              0.1975 / 0.8218 / 0.8248 / 1.0208
                      3 Sentence-level        0.1844 / 0.5091 / 0.7774 / 0.8277              0.2067 / 0.8291 / 0.8095 / 1.0260
                      4 Multi-granularity     0.1743 / 0.5309 / 0.8117 / 0.8456              0.1905 / 0.7964 / 0.8186 / 0.9981

                      Table 5  Examples of cross-lingual experimental results

                      Example 1 (CamRest676)
                        Cross-test input
                          User: 請推薦給我一些位于南方的價位適中的餐廳.
                          MT: please recommend me some restaurant at the appropriate price in the south.
                        Mono-test input
                          User: please suggest me some moderately priced restaurant in the south.
                        Gold_bspan: moderate south
                        Gold_response: i found two moderately priced restaurants in the south part of town. would you prefer food_SLOT or food_SLOT food?
                        Baseline, Cross-test:
                          generated_bspan: south
                          generated_response: there are no restaurants in the south part of town.
                        Baseline, Mono-test:
                          generated_bspan: moderate south
                          generated_response: there are two restaurants in the south part of town.
                        Multi-granularity, Cross-test:
                          generated_bspan: moderate south
                          generated_response: there are two moderately priced restaurants in the south part of town. would you like more information?
                        Multi-granularity, Mono-test:
                          generated_bspan: moderate south
                          generated_response: there are two moderately priced restaurants in the south part of town. would you like more information?

                      Example 2 (KVRET)
                        Cross-test input
                          Driver: 最近的停車場怎么走?
                          MT: how can the latest car park go?
                        Mono-test input
                          Driver: what are the direction to the closest parking garage?
                        Gold_bspan: parking garage
                        Gold_response: the closest parking garage is poi_SLOT, located distance_SLOT away at address_SLOT.
                        Baseline, Cross-test:
                          generated_bspan: ITEM_3
                          generated_response: there is a ITEM_0 one and poi_SLOT, distance_SLOT away with traffic_info_SLOT.
                        Baseline, Mono-test:
                          generated_bspan: parking garage
                          generated_response: the nearest parking garage is poi_SLOT, distance_SLOT away.
                        Multi-granularity, Cross-test:
                          generated_bspan: parking garage
                          generated_response: the closest parking garage is distance_SLOT away.
                        Multi-granularity, Mono-test:
                          generated_bspan: parking garage
                          generated_response: the closest parking garage is poi_SLOT.

                      Example 3 (KVRET)
                        Cross-test input
                          Driver: 安排 5 日上午 11 點與副會長會面.
                          MT: arranged for him to meet with the vice president at 11am on the 5th.
                        Mono-test input
                          Driver: schedule a conference with the vice president on the 5th at 11am.
                        Gold_bspan: 5th; vice president; conference; 11am
                        Gold_response: conference scheduled for the 5th at 11am with the vice president.
                        Baseline, Cross-test:
                          generated_bspan: 5th; vice president; vice president
                          generated_response: the 5th is set the president to discus the president.
                        Baseline, Mono-test:
                          generated_bspan: 5th; vice president; conference; 11am
                          generated_response: the 5th is with a vice president for the 5th at address_SLOT with a high of the 5th of the 5th of the 5th of the 5th of the 5th of the 5th is with the 5th.
                        Multi-granularity, Cross-test:
                          generated_bspan: 5th; vice president; conference; 11am
                          generated_response: your conference with vice president at this 5th at with the vice president at the 5th.
                        Multi-granularity, Mono-test:
                          generated_bspan: 5th; vice president; conference; 11am
                          generated_response: you have a conference with the vice president on the 5th at this 5th.

                      Table 6  Categories of translation phenomena

                      Category 1
                        Original monolingual sentence: I am looking for a moderately priced restaurant in the south part of town.
                        Chinese test set: 你知道鎮北部有什么價格適中的餐館嗎?
                        MT: I'm looking for a cheap restaurant in the south of the town.
                      Category 2
                        Original monolingual sentence: A restaurant in the moderately priced range, please.
                        Chinese test set: 請給我一家中等價位的餐館.
                        MT: Please give me a mid-priced restaurant.
                      Category 3
                        Original monolingual sentence: I would like a cheap restaurant that serves greek food.
                        Chinese test set: 我想要一家供應希臘食物的便宜餐館.
                        MT: I'd like a cheap restaurant to supply greek food.

                      Table 7  Noise type analysis of machine translation

                      Translation result category   Number of turns
                      Category 1                    27
                      Category 2                    72
                      Category 3                    23
                      Category 4                    55

                      Table 8  Experimental results on four translation phenomena
                      (Match = entity match rate; Succ. F1 = success F1)

                      Category   Cross-test: BLEU / Match / Succ. F1   Mono-test: BLEU / Match / Succ. F1
                      Baseline
                      1          0.1229 / 0.2632 / 0.3548              0.1987 / 1.0000 / 0.6571
                      2          0.1672 / 0.2879 / 0.4234              0.2093 / 0.9394 / 0.6239
                      3          0.1429 / 0.3500 / 0.5538              0.1588 / 0.8500 / 0.6757
                      4          0.1640 / 0.5909 / 0.5629              0.1891 / 0.8864 / 0.6595
                      Multi-granularity
                      1          0.1706 / 0.4737 / 0.5135              0.2301 / 1.0000 / 0.6835
                      2          0.2327 / 0.5000 / 0.6748              0.2594 / 0.8939 / 0.6935
                      3          0.1607 / 0.3000 / 0.5352              0.1801 / 0.7000 / 0.5278
                      4          0.2066 / 0.5909 / 0.5989              0.1924 / 0.8182 / 0.6448

                      Table 9  Cross-lingual experimental results using other monolingual baseline dialogue systems on CamRest676
                      (Match = entity match rate; Succ. F1 = success F1; Comb. = combined score)

                      Adversarial examples    Cross-test: BLEU / Match / Succ. F1 / Comb.    Mono-test: BLEU / Match / Succ. F1 / Comb.
                      SEDST
                      0 Baseline              0.1671 / 0.6455 / 0.7294 / 0.8545              0.2107 / 0.9545 / 0.8120 / 1.0940
                      1 Multi-granularity     0.2093 / 0.8333 / 0.8193 / 1.0356              0.2292 / 0.9259 / 0.8378 / 1.1111
                      LABES-S2S
                      2 Baseline              0.1910 / 0.7450 / 0.7260 / 0.9265              0.2350 / 0.9640 / 0.7990 / 1.1165
                      3 Multi-granularity     0.2300 / 0.8150 / 0.8290 / 1.0520              0.2400 / 0.9440 / 0.8580 / 1.1410
                    • [1] Li X J, Chen Y N, Li L H, Gao J F, Celikyilmaz A. End-to-end task-completion neural dialogue systems. In: Proceedings of the Eighth International Joint Conference on Natural Language Processing. Taipei, China: Asian Federation of Natural Language Processing, 2017. 733-743
                      [2] Liu B, Lane I. End-to-end learning of task-oriented dialogs. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop. New Orleans, Louisiana, USA: Association for Computational Linguistics, 2018. 67-73
                      [3] Wen T H, Vandyke D, Mrkšić N, Gašić M, Rojas-Barahona L M, Su P H, et al. A network-based end-to-end trainable task-oriented dialogue system. In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Valencia, Spain: Association for Computational Linguistics, 2017. 438-449
                      [4] Wang W K, Zhang J J, Li Q, Zong C Q, Li Z F. Are you for real? Detecting identity fraud via dialogue interactions. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Hong Kong, China: Association for Computational Linguistics, 2019. 1762-1771
                      [5] Wang W K, Zhang J J, Li Q, Hwang M Y, Zong C Q, Li Z F. Incremental learning from scratch for task-oriented dialogue systems. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence, Italy: Association for Computational Linguistics, 2019. 3710-3720
                      [6] Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples. In: Proceedings of the 3rd International Conference on Learning Representations. San Diego, California, USA: arXiv Press, 2015. 1412.6572
                      [7] Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I J, et al. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
                      [8] Dong Yin-Peng, Su Hang, Zhu Jun. Towards interpretable deep neural networks by leveraging adversarial examples. Acta Automatica Sinica, DOI: 10.16383/j.aas.c200317
                      [9] Kong Rui, Cai Jia-Chun, Huang Gang. Defense to adversarial attack with generative adversarial network. Acta Automatica Sinica, DOI: 10.16383/j.aas.c200033
                      [10] Young S, Gasic M, Thomson B, Williams J D. POMDP-based statistical spoken dialog systems: a review[J]. Proceedings of the IEEE, 2013, 101(5): 1160-1179. doi: 10.1109/JPROC.2012.2225812
                      [11] Williams J D, Young S. Partially observable markov decision processes for spoken dialog systems[J]. Computer Speech & Language, 2007, 21(2): 393-422.
                      [12] Mesnil G, Dauphin Y, Yao K, Bengio Y, Zweig G. Using recurrent neural networks for slot filling in spoken language understanding[J]. IEEE/ACM Transactions on Audio Speech & Language Processing, 2015, 23(3): 530-539.
                      [13] Bai H, Zhou Y, Zhang J J, Zong C Q. Memory consolidation for contextual spoken language understanding with dialogue logistic inference. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence, Italy: Association for Computational Linguistics, 2019. 5448-5453
                      [14] Lee S, Stent A. Task lineages: Dialog state tracking for flexible interaction. In: Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Los Angeles, California, USA: Association for Computational Linguistics, 2016. 11-21
                      [15] Zhong V, Xiong C, Socher R. Global-locally self-attentive encoder for dialogue state tracking. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Melbourne, Australia: Association for Computational Linguistics, 2018. 1458-1467
                      [16] Wang W K, Zhang J J, Zhang H, Hwang M Y, Zong C Q, Li Z F. A teacher-student framework for maintainable dialog manager. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Brussels, Belgium: Association for Computational Linguistics, 2018. 3803-3812
                      [17] Sharma S, He J, Suleman K, Schulz H, Bachman P. Natural language generation in dialogue using lexicalized and delexicalized data. In: Proceedings of the 5th International Conference on Learning Representations Workshop. Toulon, France: arXiv Press, 2017. 1606.03632v3
                      [18] Eric M, Manning C D. Key-value retrieval networks for task-oriented dialogue. In: Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue. Saarbrücken, Germany: Association for Computational Linguistics, 2017. 37-49
                      [19] Madotto A, Wu C S, Fung P. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Melbourne, Australia: Association for Computational Linguistics, 2018. 1468-1478
                      [20] Wu C S, Socher R, Xiong C. Global-to-local memory pointer networks for task-oriented dialogue. In: Proceedings of the 7th International Conference on Learning Representations. New Orleans, Louisiana, USA: arXiv Press, 2019. 1901.04713v2
                      [21] Lei W Q, Jin X S, Kan M Y, Ren Z C, He X N, Yin D W. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Melbourne, Australia: Association for Computational Linguistics, 2018. 1437-1447
                      [22] García F, Hurtado L F, Segarra E, Sanchis E, Riccardi G. Combining multiple translation systems for spoken language understanding portability. In: Proceedings of the 2012 IEEE Spoken Language Technology Workshop (SLT). Miami, FL, USA: IEEE, 2012. 194-198
                      [23] Calvo M, García F, Hurtado L F, Jiménez S, Sanchis E. Exploiting multiple hypotheses for multilingual spoken language understanding. In: Proceedings of the Seventeenth Conference on Computational Natural Language Learning. Sofia, Bulgaria: Association for Computational Linguistics, 2013. 193-201
                      [24] Calvo M, Hurtado L F, Garcia F, Sanchis E, Segarra E. Multilingual spoken language understanding using graphs and multiple translations[J]. Computer Speech & Language, 2016, 38: 86-103.
                      [25] Bai H, Zhou Y, Zhang J J, Zhao L, Hwang M Y, Zong C Q. Source critical reinforcement learning for transferring spoken language understanding to a new language. In: Proceedings of the 27th International Conference on Computational Linguistics. Santa Fe, New Mexico, USA: Association for Computational Linguistics, 2018. 3597-3607
                      [26] Chen W H, Chen J S, Su Y, Wang X, Yu D, Yan X F, et al. XL-NBT: A cross-lingual neural belief tracking framework. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Brussels, Belgium: Association for Computational Linguistics, 2018. 414-424
                      [27] Schuster S, Gupta S, Shah R, Lewis M. Cross-lingual transfer learning for multilingual task oriented dialog. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Minneapolis, Minnesota: Association for Computational Linguistics, 2019. 3795-3805
                      [28] Ebrahimi J, Rao A, Lowd D, Dou D J. HotFlip: White-box adversarial examples for text classification. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Melbourne, Australia: Association for Computational Linguistics, 2018. 31-36
                      [29] Miyato T, Dai A M, Goodfellow I. Adversarial training methods for semi-supervised text classification. In: Proceedings of the 5th International Conference on Learning Representations. Toulon, France: arXiv Press, 2017. 1605.07725
                      [30] Belinkov Y, Bisk Y. Synthetic and natural noise both break neural machine translation. In: Proceedings of the 6th International Conference on Learning Representations. Vancouver, BC, Canada: arXiv Press, 2018. 1711.02173
                      [31] Cheng Y, Jiang L, Macherey W. Robust neural machine translation with doubly adversarial inputs. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence, Italy: Association for Computational Linguistics, 2019. 4324-4333
                      [32] Cheng Y, Tu Z P, Meng F D, Zhai J J, Liu Y. Towards robust neural machine translation. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Melbourne, Australia: Association for Computational Linguistics, 2018. 1756-1766
                      [33] Li J W, Monroe W, Shi T L, Jean S, Ritter A, Jurafsky D. Adversarial learning for neural dialogue generation. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Copenhagen, Denmark: Association for Computational Linguistics, 2017. 2157-2169
                      [34] Niu T, Bansal M. Adversarial over-sensitivity and over-stability strategies for dialogue models. In: Proceedings of the 22nd Conference on Computational Natural Language Learning. Brussels, Belgium: Association for Computational Linguistics, 2018. 486-496
                      [35] Gu J T, Lu Z D, Li H, Li V O K. Incorporating copying mechanism in sequence-to-sequence learning. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Berlin, Germany: Association for Computational Linguistics, 2016. 1631-1640
                      [36] Och F J, Ney H. A systematic comparison of various statistical alignment models[J]. Computational Linguistics, 2003, 29(1): 19-51. doi: 10.1162/089120103321337421
                      [37] Koehn P, Hoang H, Birch A, Callison-Burch C, Federico M, Bertoldi N, et al. Moses: Open source toolkit for statistical machine translation. In: Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions. Prague, Czech Republic: Association for Computational Linguistics, 2007. 177-180
                      [38] Kingma D, Ba J. Adam: A method for stochastic optimization. In: Proceedings of the 3rd International Conference on Learning Representations. San Diego, California, USA: arXiv Press, 2015. 1412.6980
                      [39] Mehri S, Srinivasan T, Eskenazi M. Structured fusion networks for dialog. In: Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue. Stockholm, Sweden: Association for Computational Linguistics, 2019. 165-177
                      [40] Jin X S, Lei W Q, Ren Z C, Chen H S, Liang S S, Zhao Y H, et al. Explicit state tracking with semi-supervision for neural dialogue generation. In: Proceedings of the 27th ACM International Conference on Information and Knowledge Management. New York, USA: Association for Computing Machinery, 2018. 1403-1412
                      [41] Zhang Y C, Ou Z J, Wang H X, Feng J L. A probabilistic end-to-end task-oriented dialog model with latent belief states towards semi-supervised learning. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Online: Association for Computational Linguistics, 2020. 9207-9219
                    Figures (6) / Tables (9)
                    Publication history
                    • Received: 2020-09-16
                    • Accepted: 2021-01-15
                    • Available online: 2021-02-02
                    • Published in issue: 2021-08-20
