We examine the verification of simple quantifiers in natural language from a computational model perspective. We refer to previous neuropsychological investigations of the same pr...
Words are the essence of communication: they are the building blocks of any language. Learning the meaning of words is thus one of the most important aspects of language acquisiti...
With human language, the same utterance can have different meanings in different contexts. Nevertheless, listeners almost invariably converge upon the correct intended meaning. Th...
This paper reports the results of a self-paced reading experiment in Japanese in which the materials consisted of four versions of successively more nested syntactic structures. I...
We created paired moral dilemmas with minimal contrasts in wording, a research strategy that has been advocated as a way to empirically establish principles operative in a domain-...
Previous research indicates that mental representations of word meanings are distributed along both semantic and syntactic dimensions such that nouns and verbs are relatively dist...
Daniel Mirman, Ted J. Strauss, James A. Dixon, Jam...
Computational models of meaning trained on naturally occurring text successfully model human performance on tasks involving simple similarity measures, but they characterize meani...
Marco Baroni, Brian Murphy, Eduard Barbu, Massimo ...
We used a new method to assess how people can infer unobserved causal structure from patterns of observed events. Participants were taught to draw causal graphs, and then shown a ...
Tamar Kushnir, Alison Gopnik, Chris Lucas, Laura S...
With only two to five slots of visual working memory (VWM), humans are able to quickly solve complex visual problems to near-optimal solutions. To explain the paradox between tigh...
Xiaohui Kong, Christian D. Schunn, Garrick L. Wall...