Institute of Artificial Intelligence Academic Lecture Series, No. 2020-08-07


Title

Speeding Up Deep Learning From the Networking Perspective

Speaker & Affiliation

Dr. Ning Wang

Tenure-track Assistant Professor in the Department of Computer Science at Rowan University

Date & Time

8:00-9:00 am, Fri, Aug 07, 2020

Beijing time

Mode

Zoom ID: 404 626 0856

Password: 000321

Bio

Ning Wang is currently a tenure-track assistant professor in the Department of Computer Science at Rowan University, Glassboro, NJ. He received his Ph.D. degree from the Department of Computer and Information Sciences at Temple University, Philadelphia, PA, USA, in 2018, and his B.E. degree from the School of Physical Electronics at the University of Electronic Science and Technology of China, Chengdu, Sichuan, China, in 2013. His research focuses on communication and computation optimization in Internet-of-Things systems (resource allocation, scheduling, routing, etc.) and operation optimization in Smart City applications (spatial crowdsourcing, bike sharing, etc.). He has published nearly thirty papers in high-impact networking conferences and journals, including IEEE ICDCS, IEEE INFOCOM, IEEE/ACM IWQoS, IEEE Transactions on Big Data, and the Journal of Parallel and Distributed Computing. He has served as a program committee member for top international conferences such as IEEE ICDCS and IEEE WCNC, and as a reviewer for premier journals including IEEE TPDS, TWC, TMC, TITS, TOIT, and TSC.

Abstract

Deep learning has shown success in complex tasks, including computer vision, natural language processing, machine translation, and many others. However, deep learning's high accuracy comes at the expense of high computational requirements in both the training and inference phases. To enable deep learning in a wide range of resource-constrained Internet-of-Things systems, the speaker will discuss new approaches to speeding up deep learning from the networking perspective. In particular, he will discuss the challenges of distributing deep learning tasks across a networked system of multiple (possibly heterogeneous) devices. Drawing on two of his projects, he will examine the communication-computation trade-off in the training and inference phases and present his results on distributed machine learning and distributed inference.