Can neural networks acquire a structural bias from raw linguistic data?
Alex Warstadt, Samuel Bowman
May 2020
 

We evaluate whether BERT, a widely used neural network for sentence processing, acquires an inductive bias towards forming structural generalizations through pretraining on raw data. We conduct four experiments testing its preference for structural vs. linear generalizations in different structure-dependent phenomena. We find that BERT makes a structural generalization in 3 out of 4 empirical domains---subject-auxiliary inversion, reflexive binding, and verb tense detection in embedded clauses---but makes a linear generalization when tested on NPI licensing. We argue that these results are the strongest evidence so far from artificial learners supporting the proposition that a structural bias can be acquired from raw data. If this conclusion is correct, it is tentative evidence that some linguistic universals can be acquired by learners without innate biases. However, the precise implications for human language acquisition are unclear, as humans learn language from significantly less data than BERT.
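As a rough illustration of the kind of probe the abstract describes, the sketch below shows how one might check whether a fine-tuned BERT classifier generalizes structurally or linearly: evaluate it on disambiguating test items where a structural rule and a linear rule predict different labels, and count which rule its predictions agree with. This is not the paper's actual code or stimuli; the sentences, labels, and checkpoint are hypothetical, and it assumes the Hugging Face transformers library.

# Hypothetical sketch of a structural-vs-linear generalization probe.
# Assumes the classifier was already fine-tuned on training data that is
# ambiguous between a structural rule and a linear rule.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Placeholder checkpoint; in practice this would be the fine-tuned model.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

# Disambiguating items: the structural rule and the linear rule assign
# different labels to each sentence (labels here are illustrative only).
test_items = [
    # (sentence, label under structural rule, label under linear rule)
    ("the cat that is hungry has eaten", 1, 0),
    ("the dogs that are sleeping can bark", 1, 0),
]

structural_matches, linear_matches = 0, 0
with torch.no_grad():
    for sentence, struct_label, lin_label in test_items:
        inputs = tokenizer(sentence, return_tensors="pt")
        pred = model(**inputs).logits.argmax(dim=-1).item()
        structural_matches += int(pred == struct_label)
        linear_matches += int(pred == lin_label)

print(f"structural-rule agreement: {structural_matches}/{len(test_items)}")
print(f"linear-rule agreement:     {linear_matches}/{len(test_items)}")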
Format: [ pdf ]
Reference: lingbuzz/005312
(please use that when you cite this article)
Published in: To appear in Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society
keywords: inductive bias, structure dependence, bert, learnability of grammar, poverty of the stimulus, neural network, self-supervised learning, syntax