Course Description

Two recent trends in NLP---the application of deep neural networks and the use of transfer learning---have resulted in many models that achieve high performance on important tasks but whose behavior on those tasks is difficult to interpret. In this seminar, we will look at methods inspired by linguistics and cognitive science for analyzing what large neural language models have in fact learned: diagnostic/probing classifiers, adversarial test sets, and artificial languages, among others. Particular attention will be paid to probing these models' _semantic_ knowledge, which has received far less attention than their syntactic knowledge. Students will acquire relevant skills and (in small groups) design and execute a linguistically-informed analysis experiment, resulting in a report in the form of a publishable conference paper.
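
To make the probing methodology concrete: a diagnostic/probing classifier is usually a simple model (e.g., logistic regression) trained to predict a linguistic property from a network's frozen representations. The sketch below is purely illustrative and not part of the course materials; it assumes the HuggingFace Transformers and scikit-learn libraries, and the negation-detection task and toy data are invented for the example.

    # Minimal probing-classifier sketch (illustrative only; assumes the
    # HuggingFace `transformers` and scikit-learn libraries). The probing
    # task (detecting negation) and the toy data are invented for this example.
    import torch
    from sklearn.linear_model import LogisticRegression
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def embed(sentence):
        # Frozen [CLS] representation of the sentence; no fine-tuning.
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            return model(**inputs).last_hidden_state[0, 0]

    sentences = ["The cat sat.", "The cat did not sit.",
                 "Birds fly.", "Birds do not fly."]
    labels = [0, 1, 0, 1]  # 1 = contains negation

    X = torch.stack([embed(s) for s in sentences]).numpy()
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    print(probe.score(X, labels))  # in practice, evaluate on held-out data

If such a probe classifies held-out sentences accurately, that is (defeasible) evidence that the representations encode the property in question; how to interpret such results is a central theme of the course.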

Days      Time             Location
Thursday  3:30 - 5:50 PM   Savery 137

Teaching Staff

Role        Name                      Office                        Office Hours
Instructor  Shane Steinert-Threlkeld  Guggenheim 418-D (and Zoom)   Tuesday, 2:30 - 4:30 PM

Prerequisites

  • Mathematical background: linear algebra, multivariable calculus
  • LING 570 or 571
  • LING 572 recommended, but not required
  • One other linguistics course (not necessarily at UW)
  • Programming in Python
  • Linux/Unix commands

Course Resources

Policies

Because this is a project-oriented, student-driven, seminar-style class, active participation---in the classroom or on Zoom, as well as on Canvas---is expected.

All student work will be carried out in small groups. Groups are free to divide up work as they see fit, but will be required to explain the division of labor in their final project. Except under rare circumstances, every member of a group will receive the same grades.

Grading

The final grade will be computed according to the following distribution:

  • Final project paper: 50%
  • Project proposal: 10%
  • Special topic presentation: 20%
  • Final project presentation: 10%
  • Class participation: 10%

Communication

Any questions concerning course content and logistics should be posted on the Canvas discussion board. If a more personal issue arises, you can email me directly; include "LING575" in the subject line. You can expect responses from the teaching staff within 24 hours, during normal business hours (excluding weekends).

Religious Accommodation

Washington state law requires that UW develop a policy for accommodation of student absences or significant hardship due to reasons of faith or conscience, or for organized religious activities. The UW’s policy, including more information about how to request an accommodation, is available at Religious Accommodations Policy (https://registrar.washington.edu/staffandfaculty/religious-accommodations-policy/). Accommodations must be requested within the first two weeks of this course using the Religious Accommodations Request form (https://registrar.washington.edu/students/religious-accommodations-request/).

Access and Accommodations

Your experience in this class is important to me. If you have already established accommodations with Disability Resources for Students (DRS), please communicate your approved accommodations to me at your earliest convenience so we can discuss your needs in this course.

If you have not yet established services through DRS, but have a temporary health condition or permanent disability that requires accommodations (conditions include, but are not limited to: mental health, attention-related, learning, vision, hearing, physical, or health impacts), you are welcome to contact DRS at 206-543-8924, uwdrs@uw.edu, or disability.uw.edu. DRS offers resources and coordinates reasonable accommodations for students with disabilities and/or temporary health conditions. Reasonable accommodations are established through an interactive process between you, your instructor(s), and DRS. It is the policy and practice of the University of Washington to create inclusive and accessible learning environments consistent with federal and state law.

Safety

Call SafeCampus at 206-685-7233 anytime – no matter where you work or study – to anonymously discuss safety and well-being concerns for yourself or others. SafeCampus’s team of caring professionals will provide individualized support, while discussing short- and long-term solutions and connecting you with additional resources when requested.

Schedule


Jan 9: Introduction to Transfer Learning in NLP; Course Overview
  Suggested readings:
  • NLP's ImageNet Moment Has Arrived
  • NLP's Clever Hans Moment Has Arrived
  Additional info: HW1 (group formation) out

Jan 16: Language Models
  Suggested readings:
  • Deep contextualized word representations (ELMo paper)
  • Understanding LSTMs
  • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  • The Annotated Transformer
  • The Illustrated Transformer
  Additional info: HW1 due

Jan 23: Analysis Methods
  Suggested readings:
  • Belinkov and Glass, "Analysis Methods in Neural Language Processing: A Survey"
  • NAACL 2019 Tutorial on Transfer Learning in NLP (slides 73-96)
  • Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies (original linguistic task paper)
  • Linguistic Knowledge and Transferability of Contextual Representations (prototypical probing paper)
  • What Does BERT Look at? An Analysis of BERT’s Attention (prototypical attention paper)
  Additional info: Proposal guidelines out [slides]

Jan 30: Guest lecture: Rachel Rudinger on the Universal Decompositional Semantics Initiative [slides, decomp.io]
  Suggested readings:
  • Other datasets
  • The Universal Decompositional Semantics Dataset and Decomp Toolkit
  Additional info: Presentation sign-up

Feb 6: Technical resources
  Suggested readings:
  • How to write an NLP paper
  • AllenNLP BERT probing demo
  • HuggingFace Transformers [paper, web]
  • AllenNLP [paper, web]
  • Using GPUs on the patas cluster
  Additional info: Proposal due

Feb 13:
  Special Topic 1: Hate speech classification using BERT (Courtney and David; Group 2)
  • Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter
  • MC-BERT4HATE: Hate Speech Detection using Multi-channel BERT for Different Languages and Translations
  • (Optional) What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models
  Special Topic 2: Evaluating NLI models using formal logic (Group 5)
  • Probing Natural Language Inference Models through Semantic Fragments
  • A logical-based corpus for cross-lingual evaluation

Feb 20:
  Special Topic 1: Robust Natural Language Understanding (Xuhui and Shenghuo; Group 3)
  • Build it Break it Fix it for Dialogue Safety: Robustness from Adversarial Human Attack
  • Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment
  • (Optional) Evaluating Common Sense in Pre-trained Language Models
  • (Optional) Annotation Artifacts in Natural Language Inference Data
  • (Optional) SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference
  • (Optional) WinoGrande: An Adversarial Winograd Schema Challenge at Scale
  Special Topic 2: Idioms (Wes and Daniel; Group 1)
  • Still a Pain in the Neck: Evaluating Text Representations on Lexical Composition
  • What do Neural Networks Actually Learn, When They Learn to Identify Idioms?

Feb 27:
  Special Topic 1: Analysis of positional embeddings (Group 9)
  • Towards Understanding Position Embeddings (poster)
  • Do We Need Word Order Information for Cross-lingual Sequence Labeling
  • Revealing the Dark Secrets of BERT
  Special Topic 2: Multilingual Syntactic/Semantic Probes (Group 4)
  • Are All Languages Equally Hard to Language-Model?
  • What do you learn from context? Probing for sentence structure in contextualized word representations
  • Probing for semantic evidence of composition by means of simple classification tasks
  Special Topic 3: Implicature Discernment in NLI (Group 7)
  • A large annotated corpus for learning natural language inference
  • Joint Inference and Disambiguation of Implicit Sentiments via Implicature Constraints

Mar 5:
  Special Topic 1: NER using BERT and ELMo (Group 8)
  • Neural Architectures for Nested NER through Linearization
  • Transfer Learning in Biomedical Natural Language Processing
  • BioBERT Based Named Entity Recognition in Electronic Medical Record
  • (Optional) Neural Architectures for Named Entity Recognition
  Special Topic 2: Mighty Morpheme Tagging Rangers (Group 6)
  • Exploring BERT's Vocabulary
  • Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages
  Additional info: Final paper and presentation guidelines

Mar 12: Project presentation fest!
  Reception

Reading List

This reading list is a snapshot of papers on the interpretability and analysis of language models, reflecting my knowledge of the state of the field circa December 2019. The field is large and fast-growing, so the list is by no means exhaustive. To find more literature, I recommend:

  • The references in these papers
  • BlackboxNLP proceedings: 2018, 2019
  • Search terms in Google Scholar/SemanticScholar: probing, analysis, diagnostic classifiers
