A New and Scalable Methodology for Fast Machine Learning Accelerator Design
Machine learning (ML) is having a phenomenal impact on human lives and society. This success, however, comes at an immense computational cost. Customized hardware is well known to be significantly more efficient than software running on general-purpose microprocessors, so hardware accelerators have become a necessity for many ML applications. Yet the growth of ML model complexity far outpaces the productivity of existing chip design flows, while the expanding range of ML applications demands fast design turnaround for market competitiveness. There is thus a compelling need for a new ML accelerator design methodology that is both fast and scalable to this growth in complexity. The goal of this research, led by Hans Fischer Senior Fellow Prof. Jiang Hu (Texas A&M University) and his host Prof. Ulf Schlichtmann (Electronic Design Automation, TUM), is to develop a methodology that reduces ML accelerator design turnaround time by an order of magnitude and scales to ML models with trillions of parameters. This goal will be achieved by developing a new design abstraction and algorithmic techniques that exploit the regularity inherent in ML computations.
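To make the notion of design regularity concrete, the sketch below is an illustration only, not the project's methodology; the function names are hypothetical. It shows how a core ML kernel, matrix multiplication, decomposes into a uniform grid of identical multiply-accumulate (MAC) operations. It is exactly this kind of repetition that a design abstraction could capture once and replicate at scale, rather than re-solving the same design problem for every operation.

def mac(acc, a, b):
    """One processing element: the only compute pattern in the whole kernel."""
    return acc + a * b

def matmul(A, B):
    """C = A @ B expressed purely as repeated MAC operations."""
    n, k = len(A), len(A[0])
    m = len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):          # every (i, j, p) point performs the
        for j in range(m):      # same MAC -- the regularity a hardware
            for p in range(k):  # generator could exploit
                C[i][j] = mac(C[i][j], A[i][p], B[p][j])
    return C

# Example: a 2x3 by 3x2 product built entirely from one PE pattern.
A = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
B = [[7.0, 8.0], [9.0, 10.0], [11.0, 12.0]]
print(matmul(A, B))  # [[58.0, 64.0], [139.0, 154.0]]

In hardware terms, the single mac pattern corresponds to one processing-element design that can be instantiated many times, for example as a systolic array; this is one common way such regularity is exploited, though the project's own techniques may differ.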
TUM-IAS funded doctoral candidate:
Benedikt Schaible, Electronic Design Automation, TUM