SHI Y J,YANG K X,LIU X D,et al. Design of tensor core based on "Ventus" GPGPU[J]. Microelectronics & Computer,2024,41(5):109-116. doi: 10.19304/J.ISSN1000-7180.2023.0214


Design of tensor core based on "Ventus" GPGPU


    Abstract: To meet the growing demands of neural networks for computational power and versatility, a tensor core is designed based on the open-source "Ventus" GPGPU project. The tensor core accelerates convolution and general matrix multiplication (GEMM). First, existing tensor core design schemes and their corresponding algorithms are analyzed, and their performance is compared against direct convolution. A tensor core design based on a three-dimensional multiplication-tree structure is then proposed and deployed on a Xilinx Virtex VCU128 development board, where it operates at 222 MHz. Additionally, an exponential operation unit is developed to support neural network computation; on the same board it operates at 159 MHz. Finally, the functional correctness of the tensor core is verified with hand-written assembly programs, and the results demonstrate a significant reduction in expected execution time once the tensor core is introduced. These findings contribute to the advancement of hardware acceleration for deep learning applications and provide a foundation for further research in this field.
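The abstract contrasts direct convolution with GEMM-based acceleration on a tensor core. As background, the standard im2col lowering illustrates how a convolution is rewritten as a single matrix multiply; this is a minimal single-channel, stride-1 sketch for illustration, not code from the paper:

```python
import numpy as np

def conv2d_direct(x, w):
    """Direct 2D convolution (valid padding, stride 1).
    x: (H, W) input, w: (kh, kw) kernel."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Multiply-accumulate over one kh*kw window per output element.
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def conv2d_gemm(x, w):
    """The same convolution lowered to a matrix multiply (im2col)."""
    H, W = x.shape
    kh, kw = w.shape
    oh, ow = H - kh + 1, W - kw + 1
    # Unfold every input window into one row: shape (oh*ow, kh*kw).
    cols = np.stack([x[i:i + kh, j:j + kw].ravel()
                     for i in range(oh) for j in range(ow)])
    # One GEMM replaces the nested loops; a tensor core would
    # execute exactly this matrix product in hardware.
    return (cols @ w.ravel()).reshape(oh, ow)
```

Both routines produce identical results; the GEMM form is what maps onto the multiplication-tree datapath described in the paper.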

     
