David Bamman, Berkeley

Much work in natural language processing has shifted toward exploring the affordances of contextual language models (such as the BERT and GPT families), which learn representations of words that are sensitive to the sentence context in which they are used. In this talk, I'll discuss Latin BERT, a contextual language model for the Latin language, trained on 642.7 million words from a variety of sources spanning the Classical era to the 21st century. In a series of case studies, I'll illustrate the uses of this language-specific model both for natural language processing in Latin and for traditional scholarship: Latin BERT achieves a new state of the art for part-of-speech tagging on all three Universal Dependencies datasets for Latin and can be used for predicting missing text (including critical emendations); on a new dataset for assessing word sense disambiguation in Latin, it outperforms static word embeddings; and it can be used for semantically informed search by querying contextual nearest neighbors. We publicly release trained models to help drive future work in this space.

This seminar will also be livestreamed at: