LI Yong-bo, WANG Qin, JIANG Jian-fei. Design of sparse convolutional neural network accelerator[J]. Microelectronics & Computer, 2020, 37(6): 30-34,39.

Design of sparse convolutional neural network accelerator

In order to reduce the latency and energy consumption of convolutional neural networks, dynamic network surgery is used to obtain sparse networks, and a high-energy-efficiency sparse convolutional neural network accelerator is designed. To address the problem of unbalanced computing load, a dataflow suited to sparse computation is proposed. To reduce the latency of the convolution operation, a 16×16 processing engine array is used to increase computational parallelism, index units are designed to skip invalid (zero-valued) operations, a systolic input structure is designed to enhance data reuse, and ping-pong buffers are introduced to reduce data waiting. Synthesis results show that, in a TSMC 28 nm process, the design reaches a frequency of 500 MHz with a power consumption of 139 mW, a peak performance of 221 GOPS, and an energy efficiency of 1.59 TOPS/W.
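To illustrate the zero-skipping idea behind the index units, the following is a minimal software sketch of a sparse convolution in which pruned (zero-valued) weights are stored in a compressed value/index form and never fetched or multiplied. All names, data layouts, and sizes here are illustrative assumptions for exposition, not the accelerator's actual microarchitecture or dataflow.

```c
/*
 * Hypothetical sketch of zero-skipping convolution, loosely analogous to
 * how an index unit lets a processing engine skip multiplications by
 * pruned (zero-valued) weights. Layouts, sizes, and names are assumed.
 */
#include <stdio.h>

#define K 3              /* kernel is K x K */
#define IN 6             /* input feature map is IN x IN */
#define OUT (IN - K + 1) /* "valid" convolution output size */

/* Pruned kernel in compressed form: only nonzero weights are kept,
 * each paired with its flattened position inside the kernel. */
typedef struct {
    int   count;         /* number of nonzero weights        */
    float value[K * K];  /* nonzero weight values            */
    int   index[K * K];  /* flattened position of each value */
} sparse_kernel_t;

static void sparse_conv2d(const float in[IN][IN],
                          const sparse_kernel_t *kern,
                          float out[OUT][OUT])
{
    for (int oy = 0; oy < OUT; ++oy) {
        for (int ox = 0; ox < OUT; ++ox) {
            float acc = 0.0f;
            /* Iterate only over nonzero weights: pruned positions
             * are never fetched or multiplied. */
            for (int n = 0; n < kern->count; ++n) {
                int ky = kern->index[n] / K;
                int kx = kern->index[n] % K;
                acc += kern->value[n] * in[oy + ky][ox + kx];
            }
            out[oy][ox] = acc;
        }
    }
}

int main(void)
{
    float in[IN][IN];
    float out[OUT][OUT];
    for (int y = 0; y < IN; ++y)
        for (int x = 0; x < IN; ++x)
            in[y][x] = (float)(y + x);

    /* Example pruned 3x3 kernel: only 3 of 9 weights are nonzero. */
    sparse_kernel_t kern = {
        .count = 3,
        .value = { 1.0f, -2.0f, 0.5f },
        .index = { 0, 4, 8 },   /* top-left, center, bottom-right */
    };

    sparse_conv2d(in, &kern, out);
    printf("out[0][0] = %f\n", out[0][0]);
    return 0;
}
```

In hardware, the same effect is achieved by having the index unit steer only the activations that correspond to nonzero weights into the multipliers, so no cycle is spent on a multiplication whose result is known to be zero.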
