TinyML

A Unified TinyML System for Multi-modal Edge Intelligence and Real-time Visual Perception.

Our research focuses on the software-hardware synergy of on-device learning techniques, spanning model-level neural network design, algorithm-level training optimization, and hardware-level compute acceleration.

Adaptive Quantization-aware Training and Model Compression.

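The quantization-aware training direction can be illustrated with a small sketch: weights are fake-quantized in the forward pass while gradients flow through a straight-through estimator. The 8-bit setting, per-tensor scaling, and the toy training loop below are illustrative assumptions, not the project's actual method.

```python
# Minimal QAT sketch: fake-quantize weights in forward, straight-through
# gradients in backward. Names (FakeQuant, QATLinear) and the 8-bit,
# per-tensor scheme are illustrative assumptions.
import torch
import torch.nn as nn

class FakeQuant(torch.autograd.Function):
    """Uniform fake-quantization; gradients pass straight through."""
    @staticmethod
    def forward(ctx, x, num_bits=8):
        qmin, qmax = 0, 2 ** num_bits - 1
        scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
        zero_point = qmin - torch.round(x.min() / scale)
        q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
        return (q - zero_point) * scale  # dequantize back to float

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # straight-through estimator

class QATLinear(nn.Linear):
    """Linear layer that trains against quantized weights."""
    def forward(self, x):
        w_q = FakeQuant.apply(self.weight, 8)
        return nn.functional.linear(x, w_q, self.bias)

model = nn.Sequential(QATLinear(16, 32), nn.ReLU(), QATLinear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(64, 16), torch.randn(64, 1)
for _ in range(10):  # toy regression loop
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```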

A Unified Contrastive Representation Learner for Cross-modal Federated Learning Systems.

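A minimal sketch of a cross-modal contrastive objective (symmetric InfoNCE, CLIP-style): paired embeddings from two modality encoders are pulled together, unpaired ones pushed apart. The encoder shapes, linear projections, and temperature are assumptions; the federated aggregation of client updates is elided.

```python
# Cross-modal contrastive sketch: two modality encoders project into a
# shared space; matched pairs sit on the diagonal of the similarity matrix.
# All dimensions and the temperature are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalContrast(nn.Module):
    def __init__(self, dim_a=128, dim_b=64, proj_dim=32, temperature=0.07):
        super().__init__()
        self.enc_a = nn.Linear(dim_a, proj_dim)  # e.g. image-feature encoder
        self.enc_b = nn.Linear(dim_b, proj_dim)  # e.g. audio/IMU encoder
        self.t = temperature

    def forward(self, xa, xb):
        za = F.normalize(self.enc_a(xa), dim=-1)
        zb = F.normalize(self.enc_b(xb), dim=-1)
        logits = za @ zb.t() / self.t            # pairwise cosine similarities
        labels = torch.arange(za.size(0))        # i-th a matches i-th b
        # symmetric InfoNCE: align a->b and b->a
        return 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.t(), labels))

loss = CrossModalContrast()(torch.randn(8, 128), torch.randn(8, 64))
```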

Progressive Network Sparsification and Latent Feature Compression for Scalable Collaborative Learning.

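Progressive sparsification can be sketched as magnitude pruning under a ramped schedule: the sparsity target grows over training and the smallest-magnitude weights are masked at each step. The cubic ramp (in the style of Zhu & Gupta, 2017) and the step counts are illustrative assumptions; latent feature compression, e.g. quantizing activations before transmission, is not shown.

```python
# Progressive magnitude pruning sketch: ramp sparsity from 0 to a final
# target and zero the smallest-magnitude weights. Schedule shape and step
# counts are illustrative assumptions.
import torch
import torch.nn as nn

def sparsity_at(step, total_steps, final_sparsity=0.9):
    """Cubic ramp from 0 to final_sparsity (Zhu & Gupta, 2017 style)."""
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1 - (1 - frac) ** 3)

def apply_magnitude_mask(module, sparsity):
    """Zero out the smallest-magnitude fraction of weights in-place."""
    w = module.weight.data
    k = int(sparsity * w.numel())
    if k == 0:
        return
    threshold = w.abs().flatten().kthvalue(k).values
    w.mul_((w.abs() > threshold).float())

layer = nn.Linear(256, 256)
for step in range(0, 1001, 100):
    s = sparsity_at(step, total_steps=1000)
    apply_magnitude_mask(layer, s)
    actual = (layer.weight == 0).float().mean().item()
    print(f"step {step:4d}  target {s:.2f}  actual {actual:.2f}")
```

Because pruned weights stay at zero magnitude, each step's mask contains the previous one, which is what makes the schedule progressive rather than a one-shot prune.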

Masked Autoencoders as Occlusion-aware Visual Learners.

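A minimal sketch of masked-autoencoder pretraining, where heavy random masking plays the role of synthetic occlusion: the encoder sees only the visible patches, and a light decoder reconstructs the pixels of the masked ones. The patch size, 75% mask ratio, tiny MLP encoder/decoder, and the omission of positional embeddings are all simplifying assumptions.

```python
# MAE-style sketch: encode visible patches only, fill masked slots with a
# learned mask token, reconstruct pixels, score loss on masked patches.
# Shapes and the MLP stand-ins for transformers are illustrative assumptions.
import torch
import torch.nn as nn

batch, num_patches, patch_dim, embed_dim = 4, 196, 16 * 16 * 3, 128
mask_ratio = 0.75

encoder = nn.Sequential(nn.Linear(patch_dim, embed_dim), nn.GELU(),
                        nn.Linear(embed_dim, embed_dim))
decoder = nn.Sequential(nn.Linear(embed_dim, embed_dim), nn.GELU(),
                        nn.Linear(embed_dim, patch_dim))
mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))

patches = torch.randn(batch, num_patches, patch_dim)  # flattened patches
num_keep = int(num_patches * (1 - mask_ratio))

# random per-sample shuffle; the first num_keep indices stay visible
ids_keep = torch.rand(batch, num_patches).argsort(dim=1)[:, :num_keep]
visible = torch.gather(patches, 1,
                       ids_keep.unsqueeze(-1).expand(-1, -1, patch_dim))

latent = encoder(visible)                  # encode visible patches only

# rebuild the full sequence: learned mask tokens fill the masked slots
full = mask_token.expand(batch, num_patches, -1).clone()
full.scatter_(1, ids_keep.unsqueeze(-1).expand(-1, -1, embed_dim), latent)
recon = decoder(full)

# reconstruction loss on masked patches only, as in MAE
mask = torch.ones(batch, num_patches)
mask.scatter_(1, ids_keep, 0.0)            # 0 = visible, 1 = masked
loss = (((recon - patches) ** 2).mean(dim=-1) * mask).sum() / mask.sum()
```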

Flexible Patch Skip for Real-time Visual Perception.

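Patch skipping can be sketched as a lightweight scorer that ranks patch tokens and routes only the top-k through the expensive transformer block, so compute falls roughly in proportion to the skip ratio. The scorer, keep ratio, and block sizes below are assumptions; a deployed real-time system might instead skip patches that changed little between consecutive frames.

```python
# Patch-skip sketch for a ViT-style backbone: a cheap per-token scorer
# selects the top-k patches; only those pass through the heavy block while
# the rest are carried through unchanged. All names and sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class PatchSkipBlock(nn.Module):
    def __init__(self, dim=128, keep_ratio=0.5):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)  # cheap per-token saliency score
        self.block = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.keep_ratio = keep_ratio

    def forward(self, tokens):                    # (batch, patches, dim)
        b, n, d = tokens.shape
        k = max(1, int(n * self.keep_ratio))
        scores = self.scorer(tokens).squeeze(-1)  # (batch, patches)
        keep = scores.topk(k, dim=1).indices      # indices of kept patches
        idx = keep.unsqueeze(-1).expand(-1, -1, d)
        selected = torch.gather(tokens, 1, idx)
        processed = self.block(selected)          # heavy compute on k << n
        out = tokens.clone()                      # skipped patches pass through
        out.scatter_(1, idx, processed)
        return out

out = PatchSkipBlock()(torch.randn(2, 196, 128))
```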