Software-Hardware Co-Design for Efficient Neural Network Acceleration
Artificial neural networks, which dominate artificial intelligence applications such as object recognition and speech recognition, are still evolving. To apply neural networks to a wider range of applications, customized hardware is necessary, since CPUs and GPUs are not efficient enough. FPGAs can be an ideal platform for accelerating neural network inference: they are programmable and achieve much higher energy efficiency than general-purpose processors. However, the long development cycle and insufficient performance of traditional FPGA acceleration have prevented its wide adoption.
First, we will survey current deep learning acceleration work, together with the chart we have compiled, and then present our solution: a complete design flow that achieves both fast deployment and high energy efficiency for accelerating neural networks on FPGAs [FPGA 16/17]. Deep compression and data quantization are employed to exploit the redundancy in the algorithms and reduce both computational and memory complexity. Two architecture designs, one for CNNs and one for DNNs/RNNs, are proposed together with a compilation environment. Evaluated on Xilinx Zynq 7000 and Kintex UltraScale series FPGAs with real-world neural networks, the flow achieves up to 15x higher energy efficiency than mobile and desktop GPUs.
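To make the data quantization step concrete, the sketch below shows one common form of it: linear fixed-point quantization with a per-tensor fractional length. This is an illustrative simplification, not the exact method of the FPGA 16/17 papers; the function name and the choice of 8-bit width are assumptions for the example.

```python
import numpy as np

def quantize_fixed_point(weights, bit_width=8):
    """Quantize float weights to signed fixed-point values.

    Illustrative sketch: pick the fractional length that keeps the
    largest magnitude representable, round to the nearest code, and
    return the dequantized weights plus the chosen fractional length.
    """
    # Integer bits needed to cover the largest magnitude.
    max_abs = float(np.max(np.abs(weights)))
    int_bits = max(0, int(np.ceil(np.log2(max_abs + 1e-12))))
    frac_bits = bit_width - 1 - int_bits  # one bit reserved for the sign
    scale = 2.0 ** frac_bits
    qmin, qmax = -(2 ** (bit_width - 1)), 2 ** (bit_width - 1) - 1
    q = np.clip(np.round(weights * scale), qmin, qmax)
    return q / scale, frac_bits
```

With 8 bits and weights in (-1, 1), the fractional length is 7, so each weight is rounded to a multiple of 1/128; the rounding error per weight is then bounded by 1/256, which is why low-bit fixed-point arithmetic can often replace floating point for inference with little accuracy loss.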
We will further describe our effort to turn this research into products from DeePhi Tech's perspective: results on additional test cases, application domains and products in surveillance, data centers, and automobiles, and the design of a deep learning inference chip.
Finally, we will briefly discuss the trend of adopting emerging non-volatile memory (NVM) technology in efficient learning systems to further improve energy efficiency, and our work in this direction over the past five years.
Yu Wang received his B.S. degree in 2002 and his Ph.D. degree (with honors) in 2007 from Tsinghua University, Beijing. Dr. Wang has authored and coauthored over 170 papers in refereed journals and conferences.
He currently serves as Co-Editor-in-Chief of the ACM SIGDA E-Newsletter, Special Issue Editor for the Elsevier Microelectronics Journal, and Associate Editor for IEEE Transactions on CAD and the Journal of Circuits, Systems, and Computers. He also serves as a guest editor for Integration, the VLSI Journal and IEEE Transactions on Multi-Scale Computing Systems. Yu Wang received the Natural Science Fund for Outstanding Youth in 2016, and is a co-founder of DeePhi Tech (valued at over 150M USD), a leading deep learning processing platform provider.