Wireless Innovations Next-Generation
Online Workshop (WiNOW)
3–6 November 2025 // Virtual

Celimuge Wu
The University of Electro-Communications

Celimuge Wu (Senior Member, IEEE) received the Ph.D. degree from The University of Electro-Communications, Chofu, Japan, in 2010. He is currently a Professor and the Director of the Meta-Networking Research Center, The University of Electro-Communications. His research interests include vehicular networks, edge computing, IoT, and AI for wireless networking and computing.

Prof. Wu is a recipient of the 2021 IEEE Communications Society Outstanding Paper Award, the 2021 IEEE Internet of Things Journal Best Paper Award, the IEEE Computer Society 2020 Best Paper Award, and the IEEE Computer Society 2019 Best Paper Award Runner-Up. He serves as an Associate Editor for IEEE Transactions on Cognitive Communications and Networking, IEEE Transactions on Network Science and Engineering, and IEEE Transactions on Green Communications and Networking. He is the Vice Chair (Asia–Pacific) of the IEEE Technical Committee on Big Data and an IEEE Vehicular Technology Society Distinguished Lecturer.

Talk Title: Low-Latency Semantic Communications toward Efficient Remote Driving

The explosive growth of multimedia data, the continuous surge in the number of connected devices, and the increasing demand for real-time intelligent applications are posing unprecedented challenges to current communication infrastructures. Traditional communication systems that transmit raw or compressed data often suffer from excessive latency and bandwidth inefficiency, which can be critical in delay-sensitive applications such as remote driving. To overcome these limitations, semantic communications have recently emerged as a paradigm shift that focuses on transmitting the meaning of data rather than the raw data itself.

This talk introduces a novel low-latency video semantic communication framework tailored for remote driving scenarios. In contrast to conventional video transmission methods, the proposed system employs an asymmetric encoder–decoder architecture that transmits only a minimal number of bits by leveraging semantic feature extraction, while reconstructing high-quality video at the receiver through generative AI techniques. To validate its effectiveness, we design and implement a prototype system that seamlessly integrates semantic feature extraction, efficient transmission, and deep learning–based video reconstruction at the receiver side.
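The asymmetric encoder–decoder idea above can be illustrated with a toy sketch (assuming NumPy; the frame size, feature dimension, and the random-projection "encoder" and pseudo-inverse "decoder" are hypothetical stand-ins, not the system presented in the talk): the sender extracts a small semantic feature vector and transmits only those bits, while the receiver reconstructs an approximate frame.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video frame": a 64x64 grayscale image (stand-in for a real frame).
frame = rng.random((64, 64), dtype=np.float32)

# Lightweight sender-side "semantic encoder": a fixed random projection
# to a 32-dim feature vector (a real system would use a learned network).
proj = rng.standard_normal((64 * 64, 32)).astype(np.float32)
features = frame.reshape(-1) @ proj           # 32 floats instead of 4096

# Heavier receiver-side "decoder": pseudo-inverse reconstruction
# (a real system would synthesize the frame with a generative model).
recon = (features @ np.linalg.pinv(proj)).reshape(64, 64)

raw_bits = frame.size * 32                    # bits to send the raw frame
sem_bits = features.size * 32                 # bits actually transmitted
print(f"compression ratio: {raw_bits // sem_bits}x")  # → 128x
```

The asymmetry mirrors the remote-driving setting: the in-vehicle encoder stays cheap and fast, while the heavy generative reconstruction runs at the receiving side.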