WANG Li-ya, LIU Chang-hui, CAI Dun-bo, ZHAO Tong-zhou, WANG Meng. Chinese text sentiment analysis based on joint network and attention model[J]. Microelectronics & Computer, 2020, 37(1): 80-86.

Chinese text sentiment analysis based on character-level joint network feature fusion

  • Abstract: In a traditional convolutional neural network (CNN), neurons in the same layer cannot exchange information, so feature information at the same level is not fully exploited and long-distance context-dependent features cannot be extracted. To address these problems, this paper proposes a character-level joint-network feature-fusion model for sentiment analysis of Chinese text. Working at the character level, the model extracts features with a parallel joint network consisting of a BiGRU branch and a CNN-BiGRU branch: the CNN's strong learning ability is used to extract deep features, which are then passed to a bidirectional gated recurrent unit network (BiGRU) to further strengthen the model's ability to learn from them, while the separate BiGRU branch extracts context-dependent features to enrich the feature information. Finally, an attention mechanism is applied to one of the branches to assign feature weights and suppress noise. In multiple comparison experiments on the dataset, the method achieves an F1 score of 92.36%, showing that the proposed model effectively improves text-classification accuracy.
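The architecture described in the abstract can be sketched as a small PyTorch module. This is only a minimal illustration: the vocabulary size, embedding and hidden dimensions, convolution kernel width, and the choice of which branch carries the attention layer are assumptions for the sketch, not the authors' reported configuration.

    # Minimal sketch of the character-level joint network (BiGRU || CNN-BiGRU with
    # attention on one branch). All sizes below are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JointCharSentimentModel(nn.Module):
        def __init__(self, vocab_size=5000, emb_dim=128, conv_channels=128,
                     hidden=64, num_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            # Branch 1 (CNN-BiGRU): CNN extracts deep local features,
            # then a BiGRU learns from them further.
            self.conv = nn.Conv1d(emb_dim, conv_channels, kernel_size=3, padding=1)
            self.cnn_bigru = nn.GRU(conv_channels, hidden,
                                    batch_first=True, bidirectional=True)
            # Branch 2 (BiGRU): captures long-distance contextual features.
            self.bigru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
            # Additive attention applied to one branch only (here: the BiGRU branch,
            # an assumption of this sketch).
            self.attn = nn.Linear(2 * hidden, 1)
            self.fc = nn.Linear(4 * hidden, num_classes)

        def forward(self, char_ids):                      # char_ids: (batch, seq_len)
            x = self.embed(char_ids)                      # (batch, seq_len, emb_dim)

            # CNN-BiGRU branch: use the final hidden states of both directions.
            c = F.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
            _, h1 = self.cnn_bigru(c)                     # h1: (2, batch, hidden)
            branch1 = torch.cat([h1[0], h1[1]], dim=-1)   # (batch, 2*hidden)

            # BiGRU branch: attention-weighted sum over the time steps.
            out2, _ = self.bigru(x)                       # (batch, seq_len, 2*hidden)
            weights = torch.softmax(self.attn(out2), dim=1)
            branch2 = (weights * out2).sum(dim=1)         # (batch, 2*hidden)

            # Feature fusion: concatenate both branches, then classify.
            fused = torch.cat([branch1, branch2], dim=-1)
            return self.fc(fused)

    # Usage example on random character indices (batch of 4, sequence length 50).
    logits = JointCharSentimentModel()(torch.randint(1, 5000, (4, 50)))
    print(logits.shape)  # torch.Size([4, 2])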

     
