This project provides a comprehensive framework for exploring, analyzing, and visualizing contextualized word embeddings using modern transformer-based language models (such as BERT and RoBERTa). The main goals of the project are to:
- Extract contextualized embeddings for specific words as they appear in different sentence contexts (a minimal sketch follows this list);
- Compare static (non-contextual) and contextualized embeddings both visually and quantitatively;
- Analyze and visualize how a word's usage varies across different domains and models;
- Probe transformer-based embeddings to show how context, domain, and model choice influence word meaning representations;
- Build intuition for the advantages of contextualized embeddings over static ones.
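The snippet below is a minimal sketch of the first goal: extracting a contextualized vector for a target word in two different sentence contexts and comparing them with cosine similarity. It assumes the Hugging Face `transformers` library and `torch`; the model name and the `word_embedding` helper are illustrative, not this project's actual API.

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # any BERT/RoBERTa checkpoint should work
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def word_embedding(sentence: str, target: str) -> torch.Tensor:
    """Return the contextualized vector for `target`, averaged over its subword tokens."""
    enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0].tolist()  # drop before the forward pass
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # shape: (seq_len, hidden_dim)
    # Map subword tokens back to the target word via character offsets.
    start = sentence.lower().index(target.lower())
    end = start + len(target)
    idx = [i for i, (s, e) in enumerate(offsets) if s >= start and e <= end and e > s]
    return hidden[idx].mean(dim=0)

# Same surface form, two contexts: the vectors differ because the model sees the context.
v_river = word_embedding("She sat on the bank of the river.", "bank")
v_money = word_embedding("He deposited cash at the bank.", "bank")
print(f"cosine similarity between the two 'bank' contexts: "
      f"{torch.cosine_similarity(v_river, v_money, dim=0).item():.3f}")
```

A static embedding would assign both occurrences of "bank" the same vector; the contextualized vectors diverge, which is exactly the effect the project's visualizations are meant to expose.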