Graduate Seminar: Towards On-Device Intelligence Through Deep Learning Compression

Wednesday, October 11, 2023
Event Time 03:30 p.m. - 04:30 p.m. PT
Location Thornton Hall 428
Contact Email cs-dept@sfsu.edu

Overview

Abstract

Boosted by revolutionary algorithms and advances in computing hardware, Artificial Intelligence (AI) has served as the main driving force of a new technology wave over the past decade. Among the many AI technologies, the Deep Neural Network (DNN) is considered the most representative: it stacks many layers with complex structures and/or multiple nonlinear transformations to model high-level data abstractions. The ability to learn tasks from data examples makes DNNs particularly powerful for cognitive applications such as computer vision and natural language processing. However, new problems have arisen. Conventionally, DNN models are trained on powerful server machines with centralized datasets for optimal efficiency. Such centrally trained models cannot adapt well to vast numbers of end users, who may have unique data domains, different cognitive tasks, or specific data privacy requirements. As a result, increasing attention is being directed toward "on-device machine learning," which aims to enable DNN models to adapt to the heterogeneity encountered in practical applications. In this talk, I will introduce our work on deep learning compression, which reduces the size and computational complexity of DNN models while preserving their performance. I will then discuss how we deploy compressed deep learning models on resource-constrained devices, including mobile devices, edge devices, and embedded systems. Finally, I will describe our recent work on TinyML and efficient deep learning, with a specific focus on applications in rehabilitation contexts.
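For readers unfamiliar with deep learning compression, one of the simplest techniques in this family is magnitude-based weight pruning: the smallest-magnitude weights in a layer are set to zero, shrinking the model's effective size while (after fine-tuning) largely preserving accuracy. The sketch below is a generic NumPy illustration of this idea, not code from the speaker's work; the function name and the 50% sparsity setting are illustrative choices.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight matrix.

    sparsity: fraction of weights to remove, between 0.0 and 1.0.
    Returns a new array; the original is left untouched.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune half of a random 4x4 weight matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, 0.5)
print(np.count_nonzero(pruned))  # about half of the 16 entries survive
```

In practice, pruning is combined with quantization, knowledge distillation, and sparse storage formats so that the zeroed weights translate into real memory and compute savings on mobile and embedded hardware.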

Biography of Speaker

Dr. Zhuwei Qin is currently an assistant professor in the School of Engineering at SFSU. His research interests span the broad areas of deep learning acceleration, interpretable deep learning, and edge computing. Dr. Qin serves as the director of the Mobile and Intelligent Computing Laboratory (MIC Lab) at SFSU. A central focus of his lab is accelerating deep learning computation on devices with limited computational resources. His research addresses the inherent challenges of efficiency and robustness in applying deep learning to real-world environments. His group actively collaborates with experts from various fields, including robotics, rehabilitation sciences, and industrial partners. These collaborations have produced efficient deep-learning algorithms with practical applications, advancing the technological landscape of mobile edge computing.
