
UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning

On the Influence of Masking Policies in Intermediate Pre-training

Towards Few-Shot Fact-Checking via Perplexity

On Unifying Misinformation Detection

To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on Resource Rich Tasks

Language Models as Fact Checkers?

Keeping Notes: Conditional Natural Language Generation with a Scratchpad Encoder

Adversarial Training for Community Question Answer Selection Based on Multi-scale Matching

Characterizing and Supporting Question Answering in Human-to-Human Communication

Identifying Task Boundaries in Digital Assistants