Something About Me

I am a first-year Ph.D. student in Computer Engineering at Virginia Tech, advised by Prof. Ruoxi Jia. My research focuses on trustworthy and safe Machine Learning (ML). ML sits at the heart of today's Artificial Intelligence (AI) systems, where it is expected to deliver accurate predictions and recognitions that downstream decision making can rely on. AI is now widely deployed in safety-critical settings such as autonomous driving, network intrusion detection, personal device authentication, and medical diagnosis, which makes the inherent safety issues in ML a pressing concern. Unfortunately, as my research has shown, even the most advanced deep neural networks are vulnerable to adversarial attacks. These attacks manipulate a model's output so that it produces adversary-specified results and thus directly causes system failures, such as tricking a self-driving car into mistaking a stop sign for a speed limit sign. In my recent research, I examine ML's inherent vulnerabilities from all angles, including its supply chains, training strategies, and inference policies.

Past Experience
  • [2019] Research Assistant on network security @ BUPT, Beijing, China. With Dr. Han Qiu.
  • [2018] Visiting Student Scholar on ad-hoc network security @ Columbia University, New York. With Prof. Meikang Qiu.
  • [2017] I feel fortunate to have started my research life early, during my bachelor's degree at XDU, under Prof. Huaxi Gu's supervision.

Selected Honors

  • [2020] Best Paper Award, ICA3PP. Check out the paper here.

Recent Work

Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques

Link to The Paper; Appendix

We focused on breaking the underlying assumptions of advanced interactive attacks, namely the BPDA and EOT attacks. The proposed defense takes advantage of stochastic affine transformation and DCT-based quantization, and it withstands all the existing attacks we evaluated (BPDA, EOT, C&W, L-BFGS, I-FGSM, etc.).
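As a rough, hypothetical sketch of this style of preprocessing defense (not the paper's exact implementation; the function names and parameter values below are placeholders), the pipeline can be pictured as a random affine transform followed by block-wise DCT quantization:

```python
# Illustrative sketch only: stochastic affine transform + block-wise DCT
# quantization applied to an image before it reaches the classifier.
import numpy as np
from scipy.fftpack import dct, idct
from scipy.ndimage import affine_transform

def random_affine(img, max_shift=4, max_scale=0.1, rng=None):
    """Apply a small random (stochastic) affine transform to an HxW image."""
    rng = rng or np.random.default_rng()
    scale = 1.0 + rng.uniform(-max_scale, max_scale)
    shift = rng.uniform(-max_shift, max_shift, size=2)
    matrix = np.eye(2) * scale          # isotropic scaling only, for brevity
    return affine_transform(img, matrix, offset=shift, mode="reflect")

def dct_quantize(img, block=8, step=16.0):
    """Block-wise 2D DCT, lossy coefficient quantization, inverse DCT."""
    h, w = img.shape
    out = img.astype(np.float64).copy()
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            patch = out[i:i + block, j:j + block]
            coeff = dct(dct(patch, axis=0, norm="ortho"), axis=1, norm="ortho")
            coeff = np.round(coeff / step) * step       # quantization step
            out[i:i + block, j:j + block] = idct(
                idct(coeff, axis=1, norm="ortho"), axis=0, norm="ortho")
    return out

def defend(img):
    """Full preprocessing pipeline applied before classification."""
    return dct_quantize(random_affine(img))

# Example: preprocess a random grayscale "image".
x = np.random.rand(32, 32) * 255
x_def = defend(x)
```

The intuition is that the randomness of the affine step and the lossiness of the quantization step are exactly what make straight-through gradient approximation (as assumed by BPDA and EOT) unreliable.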

Data Augmentation as Defenses on Adversarial Examples: A Comprehensive Case Study

We developed an open-source defense framework, termed FenceBox, which currently includes fifteen preprocessing-only defense methods. All of the included methods distort the gradient with respect to an input without significantly harming classification, so they can invalidate adversarial perturbations while maintaining high classification accuracy.
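To give a flavor of how such a framework might be plugged into an inference path (this is a hypothetical usage sketch, not FenceBox's actual API; the augmentation pool and the `classify` callable are placeholders), one can draw a random augmentation from a pool and apply it to every input before classification:

```python
# Hypothetical sketch: randomized preprocessing-only defense at inference time.
import numpy as np

def gaussian_noise(img, sigma=4.0):
    return img + np.random.normal(0.0, sigma, img.shape)

def horizontal_flip(img):
    return img[:, ::-1]

def random_pad_crop(img, pad=4):
    padded = np.pad(img, pad, mode="reflect")
    i, j = np.random.randint(0, 2 * pad + 1, size=2)
    return padded[i:i + img.shape[0], j:j + img.shape[1]]

AUGMENTATIONS = [gaussian_noise, horizontal_flip, random_pad_crop]

def defended_predict(classify, img):
    """Apply one randomly chosen augmentation, then run the classifier."""
    aug = np.random.choice(AUGMENTATIONS)
    return classify(aug(img))
```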

GYM: A Comprehensive Defense Approach against DNN Backdoor Attacks

Link to The Paper; Appendix

We developed a comprehensive and attack-agnostic defense framework against backdoor attacks. The defense consists of intensive preprocessing, fine-tuning, and inference-time defenses. The paper demonstrates that GYM can invalidate all the mainstream backdoor attacks, including BadNets, Trojan, and invisible backdoor attacks.
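As a rough illustration of the fine-tuning stage of such a defense (a generic sketch under assumed hyperparameters, not GYM's exact procedure), the possibly backdoored model is briefly retrained on a small trusted clean set so that the trigger behavior is overwritten:

```python
# Illustrative sketch: fine-tune a possibly backdoored model on clean data.
import torch
import torch.nn as nn

def finetune_on_clean_data(model, clean_loader, epochs=5, lr=1e-4):
    """Briefly retrain `model` on trusted clean (image, label) batches."""
    model.train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in clean_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```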

The Hidden Vulnerability of Watermarking for Deep Neural Networks

Link to The Paper

Regarding intellectual property protection in the deep learning domain, we highlighted a potential vulnerability of model watermarks. We proposed a preprocessing-based poisoning attack that successfully invalidates watermark functionality.

I have a very cute friend, Nemo! He has been keeping me company since this bloody pandemic forced my friend, his mommy, back to China. I love taking photos of him when he falls asleep during online courses! Follow me on my social media to get to know more about this one-year-old cute boy.