Course Description

Two recent trends in NLP---the application of deep neural networks and the use of transfer learning---have resulted in many models that achieve high performance on important tasks but whose behavior on those tasks is difficult to interpret. In this seminar, we will look at methods inspired by linguistics and cognitive science for analyzing what large neural language models have in fact learned: diagnostic/probing classifiers, adversarial test sets, and artificial languages, among others. Particular attention will be paid to probing these models' _semantic_ knowledge, which has received comparatively little attention relative to their syntactic knowledge. Students will acquire relevant skills and (in small groups) design and execute a linguistically-informed analysis experiment, resulting in a report in the form of a publishable conference paper.
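To give a concrete flavor of the diagnostic/probing-classifier method mentioned above, here is a minimal sketch (not course material; the synthetic data and setup are illustrative assumptions): freeze a model's representations, then train a simple classifier to predict a linguistic property from them.

```python
# Minimal sketch of a diagnostic/probing classifier.
# Synthetic "embeddings" stand in for frozen model states here;
# in practice these would come from a pretrained LM such as BERT.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these are frozen 64-d representations of 500 tokens,
# where dimension 3 happens to encode a binary linguistic
# property (e.g. singular vs. plural).
X = rng.normal(size=(500, 64))
y = (X[:, 3] > 0).astype(int)

# Train the probe on a held-in split, evaluate on a held-out split.
probe = LogisticRegression().fit(X[:400], y[:400])
acc = probe.score(X[400:], y[400:])

# High held-out accuracy suggests the property is linearly
# decodable from the (frozen) representations.
print(f"probe accuracy: {acc:.2f}")
```

The key design choice is that the probe is deliberately simple (here, linear): if a simple classifier can recover the property, the information was plausibly already encoded in the representations rather than computed by the probe itself.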

Days       Time          Location
Wednesday  3:30-5:50 PM  SMI 309
Zoom: https://washington.zoom.us/j/92378384460

Teaching Staff

Role        Name                      Office    Office Hours
Instructor  Shane Steinert-Threlkeld  GUG 418D  Monday, 3-5 PM Pacific
Zoom: https://washington.zoom.us/my/shanest

Prerequisites

  • Mathematical background: linear algebra, multivariable calculus
  • LING 570 or 571
  • LING 572 recommended, but not required
  • One other linguistics course (not necessarily at UW)
  • Programming in Python
  • Linux/Unix Commands

Course Resources

Policies

Because this is a project-oriented, student-driven, seminar-style class, active participation---in the classroom, on Zoom, and on Canvas---is expected.

All student work will be carried out in small groups. Groups are free to divide up the work as they see fit, but will be required to explain the division of labor in their final project. Except under rare circumstances, every member of a group will receive the same grade.

Grading

Final grades will be weighted as follows:

  • Final project paper: 50%
  • Project proposal: 10%
  • Special topic presentation: 30%
  • Class participation: 10%

Communication

Any questions concerning course content or logistics should be posted on the Canvas discussion board. If a more personal issue arises, you may email me directly; include "LING575" in the subject line. You can expect responses from the teaching staff within 24 hours during normal business hours, excluding weekends.

Religious Accommodation

Washington state law requires that UW develop a policy for accommodation of student absences or significant hardship due to reasons of faith or conscience, or for organized religious activities. The UW’s policy, including more information about how to request an accommodation, is available at Religious Accommodations Policy (https://registrar.washington.edu/staffandfaculty/religious-accommodations-policy/). Accommodations must be requested within the first two weeks of this course using the Religious Accommodations Request form (https://registrar.washington.edu/students/religious-accommodations-request/).

Access and Accommodations

Your experience in this class is important to me. If you have already established accommodations with Disability Resources for Students (DRS), please communicate your approved accommodations to me at your earliest convenience so we can discuss your needs in this course.

If you have not yet established services through DRS, but have a temporary health condition or permanent disability that requires accommodations (conditions include, but are not limited to: mental health, attention-related, learning, vision, hearing, physical, or health impacts), you are welcome to contact DRS at 206-543-8924, uwdrs@uw.edu, or disability.uw.edu. DRS offers resources and coordinates reasonable accommodations for students with disabilities and/or temporary health conditions. Reasonable accommodations are established through an interactive process between you, your instructor(s), and DRS. It is the policy and practice of the University of Washington to create inclusive and accessible learning environments consistent with federal and state law.

Safety

Call SafeCampus at 206-685-7233 anytime – no matter where you work or study – to anonymously discuss safety and well-being concerns for yourself or others. SafeCampus’s team of caring professionals will provide individualized support, while discussing short- and long-term solutions and connecting you with additional resources when requested.

Schedule


Mar 30: Introduction to Transfer Learning in NLP; Course Overview
  Suggested readings:
    • NLP's ImageNet Moment Has Arrived
    • NLP's Clever Hans Moment Has Arrived
  HW1 (group formation) out

Apr 6: Language Models
  Suggested readings:
    • Deep contextualized word representations (ELMo paper)
    • Understanding LSTMs
    • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    • The Annotated Transformer
    • The Illustrated Transformer
  HW1 due

Apr 13: Analysis Methods
  Suggested readings:
    • Belinkov and Glass, "Analysis Methods in Neural Language Processing: A Survey"
    • NAACL 2019 Tutorial on Transfer Learning in NLP (slides 73-96)
    • Rogers, Kovaleva, and Rumshisky, "A Primer in BERTology: What We Know About How BERT Works"
    • Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies (original linguistic task paper)
    • Linguistic Knowledge and Transferability of Contextual Representations (prototypical probing paper)
    • What Does BERT Look at? An Analysis of BERT’s Attention (prototypical attention paper)
  Proposal guidelines out [slides]

Apr 20: Overflow + Datasets
  Suggested readings:
    • The Universal Decompositional Semantics Dataset and Decomp Toolkit
  Presentation sign-up

Apr 27: Technical Resources
  Suggested readings:
    • How to write an NLP paper
    • HuggingFace Transformers [paper, web]
    • AllenNLP [paper, web]
    • Using GPUs on the patas cluster
    • AllenNLP BERT probing demo
  Proposal due

May 4:
  Special Topic 1: Analyzing Comprehension of Spatial Relations in Joint Text-Image Models (Group 4)
    • Learning Transferable Visual Models From Natural Language Supervision
    • Learning to Compose Visual Relations
    • DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generative Transformers
  Special Topic 2:

May 11:
  Special Topic 1: Amnesic Probing and INLP (Group 1)
    • Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
    • Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals
  Special Topic 2: Analyzing and Evaluating Pragmatic Knowledge in Open-Domain Dialogue Models (Group 8)
    • How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation
    • Learning to Write with Cooperative Discriminators
    • Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining

May 18: No Class

May 25:
  Special Topic 1: Methods to Evaluate Verb Argument Structure Knowledge in Language Models and Embeddings (Group 3)
    • The Spray-Load Alternation
    • Verb Argument Structure Alternations in Word and Sentence Embeddings
    • BLiMP: The Benchmark of Linguistic Minimal Pairs for English
  Special Topic 2: Probing Pre-trained Language Models: A Case Study of Coordination Using Causal Mediation Analysis (Group 6)
    • Investigating Gender Bias in Language Models Using Causal Mediation Analysis
    • CONJNLI: Natural Language Inference Over Conjunctive Sentences
  Final paper guidelines

Jun 1:
  Special Topic 1: Attention Heads and Negation Focus Detection (Group 7)
    • Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly
    • How does BERT’s attention change when you fine-tune? An analysis methodology and a case study in negation scope
  Special Topic 2: Social Bias (Group 5)
    • On Measuring Social Biases in Sentence Encoders
    • Word embeddings quantify 100 years of gender and ethnic stereotypes

Reading List

This is a snapshot of some papers on interpretability / analysis of language models, organized by keyword, reflecting my knowledge of the state of the field circa December 2019. NB: the field is large and very fast-growing, so this list is by no means exhaustive and has not been updated since December 2019. To find more literature, I recommend:

  • The references in these papers
  • BlackboxNLP proceedings: 2018, 2019, 2020
  • Search terms in Google Scholar/SemanticScholar: probing, analysis, diagnostic classifiers

NB: the list below is an iframe, so make sure to scroll to see everything.