Acceptability ratings cannot be taken at face value
Carson T. Schütze
July 2019

This chapter addresses how linguists’ empirical (syntax) claims should be tested with non-linguists. Recent experimental work attempts to measure rates of convergence between data presented in journal articles and the results of large surveys. The chapter presents three follow-up experiments to one such study (Sprouse, Schütze, and Almeida 2013), arguing that this method may underestimate the true rate of convergence because it leaves considerable room for naïve subjects to give ratings that do not reflect their true acceptability judgments of the relevant structures. To understand what can go wrong, each experiment was conducted in two parts: in the first, subjects rated visually presented sentences on a computer, replicating previous work; in the second, the experimenter interviewed each subject about the ratings they gave to particular items, in order to determine what interpretation/parse they had assigned, whether they had missed any critical words, and so on.
Reference: lingbuzz/004862
Published in: To appear in "Linguistic intuitions: Evidence and method", eds. Samuel Schindler, Anna Drożdżowicz & Karen Brøcker, OUP.
keywords: acceptability judgments, syntax, linguists vs. non-linguists, rate of convergence, interview

