Course Description

Two recent trends in NLP---the application of deep neural networks and the use of transfer learning---have resulted in many models that achieve high performance on important tasks but whose behavior on those tasks is difficult to interpret. In this seminar, we will look at methods inspired by linguistics and cognitive science for analyzing what large neural language models have in fact learned: diagnostic/probing classifiers, adversarial test sets, and artificial languages, among others. Particular attention will be paid to probing these models' _semantic_ knowledge, which has received far less attention than their syntactic knowledge. Students will acquire relevant skills and (in small groups) design and execute a linguistically-informed analysis experiment, resulting in a report in the form of a publishable conference paper.
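To give a concrete sense of the probing-classifier method mentioned above, here is a minimal sketch: a pretrained model's representations are frozen, and a simple linear classifier is trained to predict a linguistic property from them. The model name, the toy sentences, and the "subject number" property below are illustrative assumptions only, not course materials.

```python
# Minimal probing-classifier sketch: freeze a pretrained language model,
# extract its sentence representations, and train a simple classifier to
# predict a linguistic property from them. The model, toy data, and the
# "subject number" property are illustrative assumptions, not course code.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Toy probing dataset: sentences labeled for subject number (0 = singular, 1 = plural).
sentences = ["The dog barks.", "The dogs bark.", "A cat sleeps.", "The cats sleep."]
labels = [0, 1, 0, 1]

# Encode each sentence and mean-pool the final-layer hidden states.
features = []
with torch.no_grad():
    for s in sentences:
        inputs = tokenizer(s, return_tensors="pt")
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_dim)
        features.append(hidden.mean(dim=1).squeeze(0).numpy())

# The probe itself is deliberately simple (linear), so that good accuracy
# suggests the property is encoded in the frozen representations.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("Probe accuracy on the toy data:", probe.score(features, labels))
```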

Days: Monday
Time: 3:30 - 5:50 PM
Location: https://washington.zoom.us/j/99141182318

Teaching Staff

Instructor: Shane Steinert-Threlkeld
Office: https://washington.zoom.us/my/shanest
Office Hours: Wednesday, 3-5 PM Pacific

Prerequisites

  • Mathematical background: linear algebra, multivariable calculus
  • LING 570 or 571
  • LING 572 recommended, but not required
  • One other linguistics course (not necessarily at UW)
  • Programming in Python
  • Linux/Unix Commands

Course Resources

Policies

Because this is a project-oriented, student-driven, seminar-style class, active participation---in the classroom or on Zoom, as well as on Canvas---is expected.

All student work will be carried out in small groups. Groups are free to divide up work as they see fit, but will be required to explain the division of labor in their final project. Except under rare circumstances, every member of a group will receive the same grade.

Grading

Final grades will be weighted as follows:

  • Final project paper: 50%
  • Project proposal: 10%
  • Special topic presentation: 30%
  • Class participation: 10%

Communication

Any questions concerning course content and logistics should be posted on the Canvas discussion board. If a more personal issue arises, you may email me directly; include "LING575" in the subject line. You can expect responses from the teaching staff within 24 hours during normal business hours (weekends excluded).

Religious Accommodation

Washington state law requires that UW develop a policy for accommodation of student absences or significant hardship due to reasons of faith or conscience, or for organized religious activities. The UW’s policy, including more information about how to request an accommodation, is available at Religious Accommodations Policy (https://registrar.washington.edu/staffandfaculty/religious-accommodations-policy/). Accommodations must be requested within the first two weeks of this course using the Religious Accommodations Request form (https://registrar.washington.edu/students/religious-accommodations-request/).

Access and Accommodations

Your experience in this class is important to me. If you have already established accommodations with Disability Resources for Students (DRS), please communicate your approved accommodations to me at your earliest convenience so we can discuss your needs in this course.

If you have not yet established services through DRS, but have a temporary health condition or permanent disability that requires accommodations (conditions include, but are not limited to: mental health, attention-related, learning, vision, hearing, physical, or health impacts), you are welcome to contact DRS at 206-543-8924, uwdrs@uw.edu, or disability.uw.edu. DRS offers resources and coordinates reasonable accommodations for students with disabilities and/or temporary health conditions. Reasonable accommodations are established through an interactive process between you, your instructor(s), and DRS. It is the policy and practice of the University of Washington to create inclusive and accessible learning environments consistent with federal and state law.

Safety

Call SafeCampus at 206-685-7233 anytime – no matter where you work or study – to anonymously discuss safety and well-being concerns for yourself or others. SafeCampus’s team of caring professionals will provide individualized support, while discussing short- and long-term solutions and connecting you with additional resources when requested.

Schedule


Mar 29
  Topics: Introduction to Transfer Learning in NLP; Course Overview
  Suggested Readings:
    • NLP's ImageNet Moment Has Arrived
    • NLP's Clever Hans Moment Has Arrived
  Additional info: HW1 (group formation) out

Apr 5
  Topics: Language Models
  Suggested Readings:
    • Deep contextualized word representations (ELMo paper)
    • Understanding LSTMs
    • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    • The Annotated Transformer
    • The Illustrated Transformer
  Additional info: HW1 due

Apr 12
  Topics: Analysis Methods
  Suggested Readings:
    • Belinkov and Glass, "Analysis Methods in Neural Language Processing: A Survey"
    • NAACL 2019 Tutorial on Transfer Learning in NLP (slides 73-96)
    • Rogers, Kovaleva, and Rumshisky, "A Primer in BERTology: What We Know About How BERT Works"
    • Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies (original linguistic task paper)
    • Linguistic Knowledge and Transferability of Contextual Representations (prototypical probing paper)
    • What Does BERT Look at? An Analysis of BERT’s Attention (prototypical attention paper)
  Additional info: Proposal guidelines out [slides]

Apr 19
  Topics: Guest lecture: Rachel Rudinger on the Universal Decompositional Semantics Initiative [slides, decomp.io]
  Suggested Readings:
    • Other datasets
    • The Universal Decompositional Semantics Dataset and Decomp Toolkit
  Additional info: Presentation sign-up

Apr 26
  Topics: Technical resources; How to write an NLP paper
  Suggested Readings:
    • AllenNLP BERT probing demo [NB: not up-to-date]
    • HuggingFace Transformers [paper, web]
    • AllenNLP [paper, web]
    • Using GPUs on the patas cluster
  Additional info: Proposal due

May 3
  Topics: Special Topic 1: Machine Translation and Pro-drop in English and Japanese (Cassie and Avani)
  Suggested Readings:
    • Effects of Empty Categories on Machine Translation
    • Translating Pro-Drop Languages With Reconstruction Models

May 10
  Topics: Special Topic 1: Evaluating Visual Knowledge in LMs via Cloze Tasks (Katya, Devin, Jessica)
  Suggested Readings:
    • Knowledge of animal appearance among sighted and blind adults
    • Inducing Relational Knowledge from BERT
    • Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models
  Topics: Special Topic 2: Gender Bias in Natural Language Processing: Detection and Mitigation (Amy, Sara, Will)
  Suggested Readings:
    • Mitigating Gender Bias in Natural Language Processing: Literature Review
    • Semantics derived automatically from language corpora necessarily contain human biases

May 17
  Topics: Special Topic 1: Linguistic knowledge and function words in LMs (Gregory, Dongqi)
  Suggested Readings:
    • Can neural networks acquire a structural bias from raw linguistic data?
    • Probing What Different NLP Tasks Teach Machines about Function Word Comprehension
  Topics: Special Topic 2: Probing Language Models for Temporal Awareness (Kunal, Christian, Shivin)
  Suggested Readings:
    • A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
    • Temporal Reasoning in Natural Language Inference

May 24
  Topics: Special Topic 1: Levels of Language Knowledge in Humans and NLP models (Ling, Andy, Jane)
  Suggested Readings:
    • What does BERT learn about the structure of language?
    • What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models
  Topics: Special Topic 2: Analyzing the performance of BERT-based models in Hindi (Ritika, Clevis, Dolapo)
  Suggested Readings:
    • Towards Emotion Recognition in Hindi-English Code-Mixed Data: A Transformer Based Approach
    • Unsupervised Cross-lingual Representation Learning at Scale
  Additional info: Final paper guidelines

May 31
  Memorial Day: no class

Reading List

This is a snapshot of some papers on interpretability / analysis of language models, reflecting my knowledge of the state of the field as of December 2019. NB: the field is large and very fast-growing, so this list is by no means exhaustive, and it has not been updated since December 2019. To find more literature, I recommend:

  • The references in these papers
  • BlackboxNLP proceedings: 2018, 2019, 2020
  • Search terms in Google Scholar/SemanticScholar: probing, analysis, diagnostic classifiers
