Wireless Innovations Next-generation
Online Workshop (WiNOW)
3-6 November 2025 // Virtual

Yang Yang
The Shanghai Center, HKUST

Prof. Yang Yang is currently the Dean of the Shanghai Center, Hong Kong University of Science and Technology (HKUST), China. He is also an adjunct professor with the Department of Broadband Communication at Peng Cheng Laboratory, and the Chief Scientist of IoT at Terminus Group, China. Before joining HKUST, he held faculty positions at the Chinese University of Hong Kong, China; Brunel University, U.K.; University College London (UCL), U.K.; CAS-SIMIT, China; ShanghaiTech University, China; and HKUST (Guangzhou), China. Yang’s research interests include multi-tier computing networks, 5G/6G systems, AIoT technologies and applications, and advanced wireless testbeds. He has published more than 380 papers and filed more than 120 technical patents in these areas. He was the Chair of the Steering Committee of the Asia-Pacific Conference on Communications (APCC) from 2019 to 2021. He has served the IEEE Communications Society as Chair of the 5G Industry Community and as Chair for the Asia Region of the Fog/Edge Industry Community.

Talk Title: Collaborative Edge Computing for Large AI Models on Wireless Networks

Large AI models have emerged as a crucial element in various intelligent applications at the network edge, such as voice assistants in smart homes and autonomous robots in smart factories. Computing large AI models, e.g., for personalized fine-tuning and continual serving, poses significant challenges to edge devices due to the inherent conflict between their limited computing resources and the intensive workloads associated with training. Given the constraints of on-device training, traditional approaches usually resort to aggregating data and sending it to a remote cloud for centralized computation. However, this approach is neither sustainable, as it strains long-range backhaul links and relies on energy-hungry datacenters, nor privacy-preserving, as it exposes users’ raw data to remote infrastructure. To address these challenges, we observe instead that prevalent edge environments usually contain a diverse collection of trusted edge devices with untapped idle resources, which can be harnessed to accelerate edge training. Motivated by this, we propose edge collaboration, a novel mechanism that orchestrates a group of trusted edge devices as a shared resource pool, for expedited, sustainable large AI model computing at the edge. As an initial step, we present a comprehensive framework for building collaborative edge computing systems and analyze in depth its merits and sustainable scheduling choices along its workflow. To further investigate the impact of its parallelism design, we empirically study four typical parallelism schemes from the perspective of energy demand on realistic testbeds. Finally, we discuss open challenges for sustainable edge collaboration, pointing to future directions for edge-centric large AI model computing.
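
To make the idea of pooling heterogeneous edge devices more concrete, the sketch below illustrates one common parallelism style, pipeline parallelism, by assigning contiguous layer ranges to devices in proportion to their throughput and estimating per-stage energy as power times compute time. The device specifications, the linear energy model, and the partitioning heuristic are illustrative assumptions for this page, not the speaker's actual framework or measured results.

# Hypothetical sketch: pipeline-parallel partitioning of a large model's
# layers across a pool of trusted edge devices. Device specs and the
# energy model are made-up illustrative numbers, not measurements.
from dataclasses import dataclass

@dataclass
class EdgeDevice:
    name: str
    flops_per_sec: float   # sustained compute throughput
    watts: float           # average power draw under load

def partition_layers(layer_flops, devices):
    """Assign contiguous layer ranges to devices in proportion to
    their throughput (a simple heuristic, not an optimal schedule)."""
    total_speed = sum(d.flops_per_sec for d in devices)
    total_work = sum(layer_flops)
    assignment, start = [], 0
    for i, dev in enumerate(devices):
        if i == len(devices) - 1:          # last device takes the remainder
            end = len(layer_flops)
        else:
            budget = total_work * dev.flops_per_sec / total_speed
            end, acc = start, 0.0
            while end < len(layer_flops) and acc + layer_flops[end] <= budget:
                acc += layer_flops[end]
                end += 1
        assignment.append((dev, list(range(start, end))))
        start = end
    return assignment

def stage_energy(dev, layers, layer_flops):
    """Energy = power * time, where time = assigned FLOPs / throughput."""
    work = sum(layer_flops[i] for i in layers)
    return dev.watts * (work / dev.flops_per_sec)

if __name__ == "__main__":
    pool = [EdgeDevice("phone", 2e12, 5.0),     # toy heterogeneous pool
            EdgeDevice("edge-box", 8e12, 25.0),
            EdgeDevice("gateway", 4e12, 12.0)]
    flops = [3e12] * 24                         # 24 equally sized layers
    for dev, layers in partition_layers(flops, pool):
        if not layers:                          # device too slow for a full layer
            continue
        e = stage_energy(dev, layers, flops)
        print(f"{dev.name}: layers {layers[0]}-{layers[-1]}, ~{e:.1f} J")

In practice, per-device energy also depends on inter-stage communication and memory limits, and on which parallelism scheme is chosen (the abstract does not name the four schemes studied), which is exactly the kind of trade-off the talk's empirical study on realistic testbeds examines.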