Publications

(2022). Model Uncertainty-Aware Knowledge Amalgamation for Pre-Trained Language Models. arXiv preprint.

PDF

(2022). Rethinking the Openness of CLIP. arXiv preprint.

PDF

(2021). Dynamic Knowledge Distillation for Pre-trained Language Models. EMNLP 2021 (Main Conference).

PDF Code

(2021). CascadeBERT: Accelerating Inference of Pre-trained Language Models via Calibrated Complete Models Cascade. Findings of EMNLP 2021.

PDF Code

(2021). Leveraging Word-Formation Knowledge for Chinese Word Sense Disambiguation. Findings of EMNLP 2021.

PDF Code

(2021). Text AutoAugment: Learning Compositional Augmentation Policy for Text Classification. EMNLP 2021.

PDF Code

(2021). Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models. NAACL-HLT 2021.

PDF Code

(2021). Decompose, Fuse and Generate: A Formation-Informed Method for Chinese Definition Generation. NAACL-HLT 2021.

(2019). Enhancing Topic-to-Essay Generation with External Commonsense Knowledge. ACL 2019.

PDF Code