Abstract:
To address the data-access conflicts caused by the large number of neural network parameters and the high computational complexity of CNNs, this paper proposes a buffer address scheduling method for a general-purpose CNN accelerator. The on-chip buffer is flexibly scheduled by an address controller, so that each convolutional layer can be implemented with one or two dedicated convolution units, reducing resource consumption. The input parameters can be configured for images and convolution kernels of different sizes, giving the design a degree of generality. Experimental results show that the address scheduling method supports convolution, pooling, and fully connected operations of varying numbers and sizes, and improves computing performance.