Toward a Better Understanding of Relational Knowledge in Language Models
Date:
This is an invited talk at the NLP Colloquium on relational knowledge representation in language models, covering the following papers:
- Ushio et al., "Distilling Relation Embeddings from Pretrained Language Models," 2021
- Ushio et al., "BERT is to NLP what AlexNet is to CV: Can Pre-Trained Language Models Identify Analogies?" 2021
The talk was recorded and is shared here.