Title: Professor
University of California, San Diego
Artificial intelligence techniques driven by deep learning have made significant progress in the past decade. The use of deep learning has increased dramatically in practical application domains such as autonomous driving, healthcare, and robotics, where the utmost hardware resource efficiency, as well as strict hardware safety and reliability requirements, are often imposed. The increasing computational cost of deep learning models has traditionally been tackled through model compression and domain-specific accelerator design. Yet, as the cost of conventional fault tolerance methods is often prohibitive in consumer electronics, functional safety and reliability for deep learning hardware remain in their infancy. This talk outlines a novel approach that delivers dramatic boosts in hardware safety, reliability, and resource efficiency through a synergistic co-design paradigm.
We start by reviewing the unique algorithmic characteristics of deep neural networks, including plasticity in the design process, resiliency to small numerical perturbations, and their inherent redundancy, as well as the unique micro-architectural properties of deep learning accelerators, such as regularity. The advocated approaches strategically reshape deep neural networks and enhance deep neural network accelerators by prioritizing overall functional correctness and minimizing the associated costs through the statistical nature of deep neural networks. Experimental results demonstrate that deep neural networks equipped with the proposed techniques can maintain accuracy even at extreme hardware error rates. As a result, the described methodology can embed strong safety and reliability characteristics in mission-critical deep learning applications at a negligible cost.
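The kind of resilience study described above is commonly carried out by injecting bit-level faults into a trained network's weights and measuring the resulting accuracy drop. The sketch below is an illustrative, self-contained fault-injection routine, not the speaker's actual methodology; the function name, the single-bit-flip fault model, and the NumPy setting are all assumptions made for the example.

```python
import numpy as np

def inject_faults(weights: np.ndarray, error_rate: float, seed: int = 0) -> np.ndarray:
    """Return a copy of `weights` where each value is hit, with probability
    `error_rate`, by a single random bit flip in its float32 representation,
    emulating a hardware error (e.g., an SRAM or DRAM upset)."""
    rng = np.random.default_rng(seed)
    w = weights.astype(np.float32).copy()
    bits = w.view(np.uint32)                          # raw IEEE-754 bit patterns
    hit = rng.random(bits.shape) < error_rate         # which weights are faulty
    pos = rng.integers(0, 32, size=bits.shape)        # which bit flips in each
    bits[hit] ^= np.uint32(1) << pos[hit].astype(np.uint32)
    return w
```

A resilience curve is then obtained by sweeping `error_rate`, re-evaluating the network on a validation set at each point, and observing how gracefully accuracy degrades.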
The proposed strategies further offer promising avenues for handling the micro-architectural challenges of deep neural network accelerators and boosting resource efficiency through the synergistic co-design of deep neural networks and hardware micro-architectures. Practical data analysis techniques coupled with a novel feature elimination algorithm identify a minimal set of computation units that capture the information content of a layer and prune the rest. Linear transformations on the subsequent layer ensure accuracy retention despite the removal of a significant portion of the computation units. We further demonstrate that novel complementary sparsity patterns can offer high levels of expressiveness with inherent, hardware-exploitable regularity. A novel dynamic training method converts the expressiveness of such sparsity configurations into highly accurate and compact sparse neural networks.
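The pruning-with-compensation idea can be sketched in the purely linear case, where the compensation is exact: drop a subset of hidden units, fit a least-squares map that reconstructs the full activations from the kept ones, and fold that map into the next layer's weights. This is only a minimal illustration under assumed names and shapes (`prune_with_compensation`, `W1`, `W2`), not the speaker's feature elimination algorithm; with nonlinear activations between layers the compensation becomes approximate.

```python
import numpy as np

def prune_with_compensation(W1, W2, keep_idx, X):
    """Prune hidden units of a two-layer linear block y = W2 (W1 x).

    W1: (hidden, in), W2: (out, hidden), X: (n_samples, in) calibration data.
    Keeps only the units in keep_idx and folds a least-squares reconstruction
    of the dropped units into W2, so the block's output is preserved.
    """
    H = X @ W1.T                                  # full hidden activations, (n, hidden)
    Hk = H[:, keep_idx]                           # activations of the kept units
    # T maps kept activations back to all activations: Hk @ T ≈ H.
    T, *_ = np.linalg.lstsq(Hk, H, rcond=None)    # (k, hidden)
    W1_pruned = W1[keep_idx]                      # (k, in)
    W2_new = W2 @ T.T                             # absorb the reconstruction, (out, k)
    return W1_pruned, W2_new
```

When the dropped units are (near-)linear combinations of the kept ones, the compensated block reproduces the original outputs, which mirrors the abstract's claim that accuracy is retained despite removing a significant portion of the computation units.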
Prof. Alex Orailoglu is an expert in robust systems and designs. He has chaired many leading technical conferences, including the flagship conferences of both the VLSI reliability (IEEE VLSI Test Symposium) and embedded systems (IEEE/ACM CODES+ISSS) research domains. Many of his doctoral students have attained top-notch university research faculty positions in the United States and globally.
Prof. Orailoglu holds an S.B. degree cum laude in Applied Mathematics from Harvard College and M.S. and Ph.D. degrees in Computer Science from the University of Illinois, Urbana-Champaign. He is currently a Professor of Computer Science and Engineering at the University of California at San Diego, La Jolla, CA, USA, where he leads the Architecture, Reliability, Test, and Security (ARTS) Laboratory, focusing on VLSI test, reliability, security, embedded systems, and processor and neural network architectures.
Prof. Orailoglu has published more than 300 peer-reviewed articles. He has founded numerous technical conferences, including HLDVT, SASP, and NanoArch. Prof. Orailoglu has served as an IEEE Computer Society Distinguished Lecturer and is a Golden Core Member of the IEEE Computer Society.