Selected Publications

Refer to Google Scholar for the full list.

More Publications

UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning. ACL 2022, 2022.

Quantifying Adaptability in Pre-trained Language Models with 500 Tasks. arXiv preprint arXiv:2112.03204, 2021.

Sparse Distillation: Speeding Up Text Classification by Using Bigger Models. arXiv preprint arXiv:2110.08536, 2021.

A Fistful of Words: Learning Transferable Visual Models from Bag-of-Words Supervision. arXiv preprint arXiv:2112.13884, 2021.

On the Influence of Masking Policies in Intermediate Pre-training. EMNLP 2021, 2021.

Towards Few-Shot Fact-Checking via Perplexity. NAACL 2021, 2021.

On Unifying Misinformation Detection. NAACL 2021, 2021.

Linformer: Self-Attention with Linear Complexity. arXiv preprint arXiv:2006.04768, 2020.

To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on Resource Rich Tasks. ACL 2020, 2020.

Language Models as Fact Checkers? Proceedings of the Third Workshop on Fact Extraction and VERification (FEVER), 2020.

Contact