Hello! I am a research engineer at Google working on multimedia generation. Before joining Google, I was an applied scientist at Amazon working on information retrieval and product search, and before that a Ph.D. student in the School of Computer Science and Informatics at Cardiff University, co-advised by Jose Camacho-Collados and Steven Schockaert. During my Ph.D., I studied relational knowledge representation in language models and the application of language models to tasks such as named-entity recognition (e.g., T-NER, TweetNER7) and question generation. I also worked on NLP for social media and am part of TweetNLP, where I develop the core library. Aside from my role at Google, I collaborate with Kotoba Technologies on research into a bilingual Japanese-English speech foundation model.
Representative Papers (see full publication list):
- Asahi Ushio, Jose Camacho-Collados, and Steven Schockaert
Distilling Relation Embeddings from Pre-trained Language Models
Proceedings of EMNLP 2021 Main Conference [pdf] [code] [slide] [acl anthology] [arxiv] [demo]
- Asahi Ushio, Luis Espinosa-Anke, Steven Schockaert, and Jose Camacho-Collados
BERT is to NLP what AlexNet is to CV: Can Pre-Trained Language Models Identify Analogies?
Proceedings of ACL-IJCNLP 2021 Main Conference [pdf] [code] [slide] [acl anthology] [arxiv]
Open Source Projects:
- T-NER: A Python library that facilitates named-entity recognition fine-tuning, evaluation, and inference via an API.
- LMQG: A web application for running multilingual question generation models.
- TweetNLP: A Python library of comprehensive NLP solutions tailored to Twitter (see the usage sketch below).
- KEX: A Python library of modern graph-based keyphrase extraction methods.
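As a quick illustration of the TweetNLP library mentioned above, here is a minimal usage sketch, assuming the `tweetnlp` package from PyPI and its `load_model` interface; task names and output format may vary by version.

```python
# Minimal sketch: load a TweetNLP task model and run it on a single tweet.
# Assumes `pip install tweetnlp`; exact task names and outputs may differ by version.
import tweetnlp

# Load a pre-trained sentiment model tuned for Twitter text.
model = tweetnlp.load_model("sentiment")

# Run inference on one tweet; returns a dict such as {"label": "positive"}.
print(model.sentiment("Finishing the core library release today!"))
```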
In 2023, I did a research internship at Google Research on the MusicLM team, supervised by Andrea Agostinelli. In 2021, I did research internships at Amazon, supervised by Danushka Bollegala, and at Snapchat, co-supervised by Francesco Barbieri, Vítor Silva Sousa, and Leonardo Neves. Before joining Cardiff University, I was a full-time research engineer at Cogent Labs from 2018 to 2020. Aside from NLP, I spend some time on computational art research (WikiART Face).