About me

Hi, welcome to my homepage! I am a fourth-year CS PhD student at The University of Texas at Austin, where I work with Prof. Qiang Liu. I received my bachelor's degree in Mathematics from Zhejiang University in 2017 and my master's degree in Statistics from Purdue University in 2019. I was an AI research intern at Cruise in the summer of 2022, a machine learning research intern at Waymo in the summer of 2021, and a machine learning research intern at Meta in 2020.

Selected Research Projects

I have broad research interests, including efficient deep learning models, causal inference, certified adversarial robustness, feature selection, and Monte Carlo methods.

Generative Diffusion Models

An improved diffusion bridge for learning distributions on structured domains.

Mao Ye, Lemeng Wu and Qiang Liu. First Hitting Diffusion Models. NeurIPS 2022
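
As a toy picture of the first-hitting mechanism, one can simulate a diffusion until it first reaches the structured domain and return the hitting point as the sample. A minimal sketch, assuming a plain Brownian motion with the unit sphere as a stand-in target domain (not the paper's learned bridge):

```python
# Toy first-hitting simulation (illustrative only): run a Brownian motion
# until it first hits the unit sphere and return the hitting point.
import numpy as np

def first_hitting_sample(dim=2, dt=1e-3, max_steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)  # start at the origin, inside the unit ball
    for _ in range(max_steps):
        x = x + np.sqrt(dt) * rng.standard_normal(dim)  # Brownian increment
        if np.linalg.norm(x) >= 1.0:       # first hitting time of the sphere
            return x / np.linalg.norm(x)   # snap the overshoot back to the sphere
    raise RuntimeError("no hit within max_steps")

print(first_hitting_sample())  # a point on the unit circle
```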

Bi-level Optimization

A simple, purely first-order gradient algorithm for bilevel optimization that requires no convexity assumptions on either the upper- or lower-level problem.

Mao Ye*, Bo Liu*, Stephen Wright and Qiang Liu. BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach. NeurIPS 2022
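
A minimal sketch of a BOME-style update on a toy quadratic bilevel problem; the objectives, hand-written gradients, and hyperparameters below are illustrative, not the paper's experimental setup:

```python
# BOME-style first-order bilevel update on a toy problem (illustrative):
#   upper level: f(x, y) = (x - 1)^2 + (y - 1)^2
#   lower level: g(x, y) = 0.5 * (y - x)^2, so y*(x) = x and the solution is x = y = 1.
import numpy as np

f_grad = lambda x, y: np.array([2 * (x - 1.0), 2 * (y - 1.0)])  # (df/dx, df/dy)
g_grad = lambda x, y: np.array([-(y - x), (y - x)])             # (dg/dx, dg/dy)

x, y = 0.0, 3.0
eta, alpha, xi, T = 0.5, 0.1, 0.05, 10  # illustrative hyperparameters
for _ in range(1000):
    # approximate the lower-level solution with T plain gradient steps
    y_hat = y
    for _ in range(T):
        y_hat -= alpha * g_grad(x, y_hat)[1]
    # gradient of the value-function gap q(x, y) = g(x, y) - g(x, y_hat)
    gq = g_grad(x, y) - np.array([g_grad(x, y_hat)[0], 0.0])
    gf = f_grad(x, y)
    # multiplier that keeps the combined step a descent direction for q
    lam = max(eta * (gq @ gq) - gf @ gq, 0.0) / (gq @ gq + 1e-12)
    x, y = np.array([x, y]) - xi * (gf + lam * gq)
print(x, y)  # both should approach 1.0
```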

3D Detection

I studied multi-class 3D object detection in the setting where each training sample is labeled for only a single class.

Mao Ye, Chenxi Liu, Maoqing Yao, Weiyue Wang, Zhaoqi Leng, Charles R. Qi and Dragomir Anguelov. Multi-Class 3D Object Detection with Single-Class Supervision. ICRA 2022

Causal Inference

I develop network architectures and optimization techniques for treatment effect estimation with deep neural networks.

Lizhen Nie*, Mao Ye*, Qiang Liu and Dan Nicolae. Varying Coefficient Neural Network with Functional Targeted Regularization for Estimating Continuous Treatment Effects. ICLR 2021 (Oral Presentation, acceptance rate 1.77%)
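
The varying-coefficient idea can be sketched as a layer whose weights are functions of the continuous treatment t. The polynomial basis and all sizes below are hypothetical simplifications (the paper uses spline bases inside a full network):

```python
# Varying-coefficient layer (simplified sketch): the weights are functions of
# the treatment t through a basis expansion.
import torch
import torch.nn as nn

class VaryingCoefficientLinear(nn.Module):
    def __init__(self, d_in, d_out, n_basis=3):
        super().__init__()
        # one weight matrix and bias per basis function
        self.W = nn.Parameter(0.1 * torch.randn(n_basis, d_out, d_in))
        self.b = nn.Parameter(torch.zeros(n_basis, d_out))
        self.n_basis = n_basis

    def forward(self, x, t):
        # basis phi(t) = (1, t, t^2, ...), shape (batch, n_basis)
        phi = torch.stack([t ** k for k in range(self.n_basis)], dim=-1)
        W = torch.einsum("bk,koi->boi", phi, self.W)  # treatment-dependent weights
        b = phi @ self.b
        return torch.einsum("boi,bi->bo", W, x) + b

layer = VaryingCoefficientLinear(d_in=5, d_out=1)
x, t = torch.randn(8, 5), torch.rand(8)  # covariates and continuous treatments
print(layer(x, t).shape)                 # torch.Size([8, 1])
```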

Learning Efficient Neural Networks

Network pruning is a successful technique for learning compact network models. I work on theory-oriented pruning algorithms based on greedy optimization. We show that:

  • The small network learned by our algorithm is guaranteed to be better than a small network of the same size trained directly.
  • Theoretically, it is important to fine-tune the pruned network rather than retrain it from scratch.

Mao Ye*, Lemeng Wu* and Qiang Liu. Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough. NeurIPS 2020

Mao Ye, Chengyue Gong*, Lizhen Nie*, Denny Zhou, Adam Klivans and Qiang Liu. Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection. ICML 2020
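
A minimal sketch of the greedy forward selection idea: rather than removing neurons from the full network, start from an empty subnetwork and repeatedly add the neuron that most reduces the loss. The toy least-squares setup below is illustrative only; the papers work with trained deep networks:

```python
# Greedy forward selection on a toy least-squares problem (illustrative).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))   # outputs of 50 candidate neurons on 200 inputs
y = A @ rng.standard_normal(50)      # target: the full network's output

def greedy_select(A, y, k):
    selected = []
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(A.shape[1]):
            if j in selected:
                continue
            cols = A[:, selected + [j]]
            # refit the output weights for the candidate subnetwork
            w, *_ = np.linalg.lstsq(cols, y, rcond=None)
            err = np.mean((cols @ w - y) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
    return selected

print(greedy_select(A, y, k=5))  # indices of the 5 greedily chosen neurons
```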

Certified Robustness

We propose a functional-optimization framework for certified adversarial defense via randomized smoothing: a general class of smoothing distributions for image classification, and a special discrete smoothing distribution for text classification.

Dinghuai Zhang*, Mao Ye*, Chengyue Gong* and Qiang Liu. Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework. NeurIPS 2020

Mao Ye*, Chengyue Gong* and Qiang Liu. SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions. ACL 2020
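
For the image side, the standard Gaussian instantiation of randomized smoothing can be sketched as follows; this follows the common Cohen-style certification recipe rather than the papers' exact procedures, and `base_classifier` is a hypothetical stand-in for any trained model:

```python
# Gaussian randomized smoothing certification (Cohen-style sketch).
import numpy as np
from scipy.stats import norm, binomtest

def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001, seed=0):
    rng = np.random.default_rng(seed)
    # votes of the base classifier under Gaussian input noise
    votes = np.array([base_classifier(x + sigma * rng.standard_normal(x.shape))
                      for _ in range(n)])
    top = np.bincount(votes).argmax()
    # one-sided lower confidence bound on the top-class probability
    p_low = binomtest(int((votes == top).sum()), n, alternative="greater") \
        .proportion_ci(confidence_level=1 - alpha).low
    if p_low <= 0.5:
        return None, 0.0                 # abstain: no certificate
    return top, sigma * norm.ppf(p_low)  # certified L2 radius

clf = lambda z: int(z.sum() > 0)         # toy base classifier on 2-d inputs
print(certify(clf, np.array([1.0, 1.0])))
```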

Uncertainty Estimation

I develop a simple surrogate loss for improving particle quality in the bootstrap.

Mao Ye and Qiang Liu. Centroid Approximation for Bootstrap: Improving Particle Quality at Inference. ICML 2022
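
As a naive illustration of the particle view of the bootstrap, one can draw many resampled estimates and compress them into a few centroids; the k-means summary below is a hypothetical baseline for intuition, not the paper's surrogate-loss method:

```python
# Particle view of the bootstrap: many resampled estimates approximate the
# sampling distribution; a few centroids can summarize them (naive baseline).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = rng.standard_normal(100) + 2.0

# standard bootstrap: one mean estimate per resampled dataset
particles = np.array([rng.choice(data, size=data.size, replace=True).mean()
                      for _ in range(1000)])

# compress 1000 particles into 5 centroids
centroids = KMeans(n_clusters=5, n_init=10, random_state=0).fit(
    particles.reshape(-1, 1)).cluster_centers_.ravel()
print(np.sort(centroids))
```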

Feature Selection

Selecting useful features is important for ML systems. We develop a new drop-out-one loss that accurately detects useful features on difficult tasks with highly correlated features. Our approach is easy to implement and, given enough data, is guaranteed to select all useful features and discard all useless ones.

Mao Ye* and Yan Sun*. Variable Selection via Penalized Neural Network: a Drop-Out-One Loss Approach. ICML 2018
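
A minimal sketch of the drop-out-one idea: compare the loss with one feature zeroed out against the full-feature loss, and keep the features whose removal hurts. The linear-regression setup and the threshold below are illustrative, not the paper's penalized neural network:

```python
# Drop-out-one feature scoring on a toy regression problem (illustrative).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(500)  # only features 0, 1 matter

w, *_ = np.linalg.lstsq(X, y, rcond=None)
full_loss = np.mean((X @ w - y) ** 2)

scores = []
for j in range(X.shape[1]):
    Xj = X.copy()
    Xj[:, j] = 0.0                             # drop feature j
    scores.append(np.mean((Xj @ w - y) ** 2) - full_loss)

selected = [j for j, s in enumerate(scores) if s > 0.1]  # illustrative threshold
print(selected)  # expected: [0, 1]
```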

Sampling Method

Sampling methods are important for Bayesian inference and RL. We develop a sampling dynamics that uses previously collected samples to steer the sampler toward unexplored regions, improving sample quality.

Mao Ye*, Tongzheng Ren* and Qiang Liu. Stein Self-Repulsive Dynamics: Benefits from Past Samples. NeurIPS 2020
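
A minimal sketch of the self-repulsive idea: ordinary Langevin updates plus a kernel term that pushes the chain away from its own past samples. The target mixture, RBF kernel, and all hyperparameters below are illustrative, not the paper's Stein-discrepancy formulation:

```python
# Self-repulsive Langevin sketch on a 1-d two-mode Gaussian mixture.
import numpy as np

def grad_log_p(x):
    # mixture of N(-2, 1) and N(+2, 1) with equal weights
    a = np.exp(-0.5 * (x - 2) ** 2)
    b = np.exp(-0.5 * (x + 2) ** 2)
    return (-(x - 2) * a - (x + 2) * b) / (a + b)

def repulsion(x, past, h=0.5):
    # negative RBF-kernel gradient, pushing x away from past samples
    if not past:
        return 0.0
    z = np.array(past)
    k = np.exp(-0.5 * (x - z) ** 2 / h)
    return np.mean((x - z) / h * k)

rng = np.random.default_rng(0)
x, eps, alpha, past = -2.0, 0.05, 2.0, []
for t in range(5000):
    drift = grad_log_p(x) + alpha * repulsion(x, past)
    x = x + eps * drift + np.sqrt(2 * eps) * rng.standard_normal()
    if t % 50 == 0:
        past.append(x)  # thinned history used for the repulsion term
print(np.mean(np.array(past) > 0))  # fraction of history near the +2 mode
```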

Services

Conference reviewer: ICML, NeurIPS, ICLR, CVPR

Journal reviewer: JMLR, TPAMI, Machine Learning, Neurocomputing, Canadian Journal of Statistics