# Papers of CogSci 2025, pt 2
Welcome to part two of my round-up of CogSci papers: a totally unordered list of papers that jumped out at me from the proceedings in one way or another, since I didn’t attend the conference in person. While I’m enjoying finally going through and seeing what I missed, this exercise also highlights that what’s best about conferences is meeting new and old colleagues, discussing ideas in person, and the serendipity of finding these things in not-always-expected places. It also seems like there may need to be a lot of these round-ups, but I’ll try not to make every post this week one of them.
- “Studying Cross-linguistic Structural Transfer in Second Language Learning”: large-scale analysis of L1->L2 transfer, focused on morphosyntax. Should we start calling some of these things Hartshornian-scale?
- “Thinking through syntax: Expanding the scope of ‘thinking for speaking’”: learning an artificial language with different syntactic structures for a simple domain may affect similarity judgments in that domain (colored objects)
- “Teasing Apart Architecture and Initial Weights as Sources of Inductive Bias in Neural Networks”: initial weights may matter as much as architecture as a source of inductive bias, and all models fail at generalization outside the meta-learning domain
- “Dimensions of Identity-Representing Belief”: I’ve recently developed, thanks to a student of mine, a small side-interest in “believe” versus “think”. These non-epistemic kinds of belief seem relevant to the difference between these verbs.
- “Reinforcement learning produces efficient case-marking systems”: I’ve thought about case-marking as a candidate domain for efficient communication analyses, and have also used RL in similar scenarios in the past. Curious to read more and see how much communication is in this model.
- “Interactions Between Linear Order and Lexical Distributions in Artificial Language Learning”: in addition to manipulating the frequency of types and tokens, the authors find an effect of prefixing vs. suffixing in artificial language learning (ALL), which is relevant to one of my projects.
- “Testing counterintuitive predictions about cost-based inferences in learning from the Rational Speech Act model”: little evidence for RSA’s prediction that costly signals should preferentially be ambiguous, with a suggestion that RSA may be a better model of communication than of learning.