# Papers of CogSci 2025, pt 3
Welcome to part three of my round-up of CogSci papers: since I didn't attend in person this year, here is an unordered list of papers that jumped out at me from the proceedings in one way or another.
- “Learning about Inductive Potential from Generic Statements”: I came for the title, but, as a Subaru-driving climber, felt instantly attacked by the first sentence of the abstract: “Generic statements (e.g., ‘Climbers drive Subarus’) shape what categories people take as meaningful bases for generalization.”
- “The origin of the possible: 12-month-olds’ understanding of certain, likely, and unlikely events”: infants can distinguish 66%-likely events from 33%-likely ones, but cannot distinguish them from 100%-certain ones.
- “Representations of what’s possible reflect others’ epistemic states”: another modal cognition paper, this one focusing on epistemic effects on non-epistemic modal spaces.
- “Iterated language learning is shaped by a drive for optimizing lossy compression”: a powerhouse team, led by my former master's student and long-time collaborator Nathaniel Imel, analyzes adult artificial language learning data, showing that learned languages tend toward optimal efficiency in the information-bottleneck sense (see the first sketch after this list).
- “Re-examining the tradeoff between lexicon size and average morphosyntactic complexity in recursive numeral systems”: re-analyzes results from Denić and Szymanik on numeral systems and provides a few interesting additional analyses (the second sketch after this list illustrates the tradeoff in question).
- “Developmental evidence for sensitivity to hierarchical structure in the noun phrase”: my work with Naomi Tachikawa Shapiro on artificial language learning was inspired by earlier work from this group. It's exciting to see them now using the same method to run studies with children. Spoiler alert: they find the same preference for scope-isomorphic ordering!
- “The role of contrast in category learning”: clever experimental manipulation of contrastiveness in category learning, looking at whether the “same” category is learned differently when presented positively or merely in contrast with other ones.
- “Neglect zero: evidence from priming across constructions”: cross-construction priming suggests that neglect-zero phenomena (where empty sets and the like are ruled out in interpretation) are in fact a unified category.
- “Efficient communication drives the semantic structure of kinship terminology”: I need to read this one more closely, but it looks like a clever method of using topographic similarity to reverse-engineer which semantic features matter in different languages (the last sketch after this list shows the measure itself).
- “PACE: Procedural Abstractions for Communicating Efficiently”: I also saw this one as a talk at a different event and thought it was a clever combination of ideas from abstraction-learning and RL-based communication. I want to revisit the paper to get more detail.
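
Since a few of these bullets hinge on technical notions, here are some toy illustrations. First, a minimal sketch of what “optimal efficiency in the information-bottleneck sense” amounts to: a language trades off the complexity of its lexicon, I(M;W), against its informativeness, I(W;U). The toy meanings, prior, and two-word language below are made up for illustration, not the paper's data or model.

```python
# Minimal information-bottleneck bookkeeping for a toy naming system.
# Everything here (meanings, prior, language) is a hypothetical illustration.
import numpy as np

def mi(p_xy: np.ndarray) -> float:
    """Mutual information I(X;Y) in bits from a joint distribution."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x * p_y)[nz])).sum())

p_m = np.array([0.5, 0.3, 0.2])        # prior over meanings M
p_u_given_m = np.array([               # each meaning is a distribution over referents U
    [0.7, 0.3, 0.0, 0.0],
    [0.0, 0.2, 0.8, 0.0],
    [0.0, 0.0, 0.1, 0.9],
])
q_w_given_m = np.array([               # a two-word language: q(W | M)
    [1.0, 0.0],
    [1.0, 0.0],
    [0.0, 1.0],
])

complexity = mi(p_m[:, None] * q_w_given_m)           # I(M;W): cost of the lexicon
p_wu = q_w_given_m.T @ (p_m[:, None] * p_u_given_m)   # joint p(W,U)
accuracy = mi(p_wu)                                   # I(W;U): informativeness

beta = 1.1  # tradeoff parameter; the IB objective is I(M;W) - beta * I(W;U)
print(f"I(M;W) = {complexity:.3f} bits, I(W;U) = {accuracy:.3f} bits, "
      f"objective = {complexity - beta * accuracy:.3f}")
```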
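
Second, a rough illustration of the tradeoff for recursive numeral systems: a larger base buys shorter expressions at the cost of a larger lexicon. The toy base-b grammar and the uniform weighting over 1–99 are my crude stand-ins for the more careful measures in the paper.

```python
# Toy base-b numeral grammar: express n as (multiplier, base word, remainder).
# The grammar and the uniform weighting over 1-99 are simplifying assumptions.

def morpheme_count(n: int, base: int) -> int:
    """Count morphemes needed to express n, very roughly."""
    if n < base:
        return 1                               # atomic numeral word
    q, r = divmod(n, base)
    count = morpheme_count(q, base) + 1        # multiplier + base word
    return count + (morpheme_count(r, base) if r else 0)

for base in (5, 10, 20):
    lexicon_size = base                        # words for 1..base-1, plus the base word
    avg = sum(morpheme_count(n, base) for n in range(1, 100)) / 99
    print(f"base {base:2d}: lexicon size = {lexicon_size:2d}, "
          f"avg morphemes over 1-99 = {avg:.2f}")
```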
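
Last, a sketch of topographic similarity as I understand its role in the kinship paper: correlate pairwise distances in meaning space with pairwise edit distances between the corresponding word forms. The tiny kin feature space and invented forms below are hypothetical.

```python
# Topographic similarity: correlation between meaning-space distances and
# form-space (edit) distances. The feature space and forms are invented.
from itertools import combinations
from scipy.stats import spearmanr

def levenshtein(a: str, b: str) -> int:
    """Edit distance between two word forms."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def hamming(m1, m2) -> int:
    """Meaning-space distance: number of differing semantic features."""
    return sum(f1 != f2 for f1, f2 in zip(m1, m2))

# Meanings as (generation, gender) feature tuples; forms are made-up strings.
lexicon = {
    ("parent", "male"): "papa",
    ("parent", "female"): "mama",
    ("sibling", "male"): "bobo",
    ("sibling", "female"): "bibi",
}

pairs = list(combinations(lexicon.items(), 2))
meaning_d = [hamming(m1, m2) for (m1, _), (m2, _) in pairs]
form_d = [levenshtein(w1, w2) for (_, w1), (_, w2) in pairs]

rho, p = spearmanr(meaning_d, form_d)
print(f"topographic similarity (Spearman rho) = {rho:.2f}, p = {p:.2f}")
```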