LENG Y L,ZOU X Y. Network abnormal traffic detection model based on CNN-BiBASRU-AT[J]. Microelectronics & Computer,2024,41(1):93-99. doi: 10.19304/J.ISSN1000-7180.2022.0853

Network abnormal traffic detection model based on CNN-BiBASRU-AT

  • To address the low accuracy of network abnormal traffic identification, the limited feature-extraction ability of basic deep learning models, and the low training efficiency of recurrent neural networks, a network abnormal traffic detection model based on a Convolutional Neural Network (CNN), a Bidirectional Built-in self-Attention Simple Recurrent Unit (BiBASRU), and soft attention (CNN-BiBASRU-AT) is proposed. A deep one-dimensional convolution module extracts local feature representations of the traffic, reduces the dimensionality of high-dimensional traffic features, and learns salient classification features, enhancing the representational ability of the model. A Built-in self-Attention Simple Recurrent Unit (BASRU) then simultaneously captures the sequential dependencies and the medium- and long-distance interactions among internal features of the traffic, further mining the high-dimensional structural information in the traffic features. A soft attention mechanism identifies the key features that most influence the classification result and assigns them higher weights, avoiding interference from irrelevant information. Finally, a linear layer outputs the classification score distribution; after Softmax normalization, the label with the maximum probability is taken as the traffic identification result. Experimental results on UNSW-NB15, a public benchmark dataset for multi-class network abnormal traffic, show that the model achieves an F1 score of 92.81%, higher than the other advanced deep learning models in the comparison experiments. The feature-capture ability and training performance of the built-in self-attention simple recurrent unit also surpass those of other conventional recurrent neural networks, demonstrating the feasibility and effectiveness of the model.
