Neural Language Models and Human Linguistic Knowledge.
Record type:
Bibliographic - language material, manuscript : Monograph/item
Title/Author:
Neural Language Models and Human Linguistic Knowledge.
Author:
Hu, Jennifer.
Physical description:
1 online resource (156 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 85-10, Section: B.
Contained By:
Dissertations Abstracts International, 85-10B.
Subject:
Cognitive psychology.
Electronic resource:
click for full text (PQDT)
ISBN:
9798381955033
LDR    03115ntm a22003977 4500
001    1152750
005    20241213095551.5
006    m o d
007    cr mn ---uuuuu
008    250605s2023 xx obm 000 0 eng d
020    $a 9798381955033
035    $a (MiAaPQ)AAI31091159
035    $a (MiAaPQ)MIT1721_1_152578
035    $a AAI31091159
040    $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Hu, Jennifer. $3 1479833
245 10 $a Neural Language Models and Human Linguistic Knowledge.
264  0 $c 2023
300    $a 1 online resource (156 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertations Abstracts International, Volume: 85-10, Section: B.
500    $a Advisor: Levy, Roger P.
502    $a Thesis (Ph.D.)--Massachusetts Institute of Technology, 2023.
504    $a Includes bibliographical references.
520    $a Language is one of the hallmarks of intelligence, demanding explanation in a theory of human cognition. However, language presents unique practical challenges for quantitative empirical research, making many linguistic theories difficult to test at naturalistic scales. Artificial neural network language models (LMs) provide a new tool for studying language with mathematical precision and control, as they exhibit remarkably sophisticated linguistic behaviors while being fully intervenable. While LMs differ from humans in many ways, the learning outcomes of these models can reveal the behaviors that may emerge through expressive statistical learning algorithms applied to linguistic input. In this thesis, I demonstrate this approach through three case studies using LMs to investigate open questions in language acquisition and comprehension. First, I use LMs to perform controlled manipulations of language learning, and find that syntactic generalizations depend more on a learner's inductive bias than on training data size. Second, I use LMs to explain systematic variation in scalar inferences by approximating human listeners' expectations over unspoken alternative sentences (e.g., "The bill was supported overwhelmingly" implies that the bill was not supported unanimously). Finally, I show that LMs and humans exhibit similar behaviors on a set of non-literal comprehension tasks which are hypothesized to require social reasoning (e.g., inferring a speaker's intended meaning from ironic statements). These findings suggest that certain aspects of linguistic knowledge could emerge through domain-general prediction mechanisms, while other aspects may require specific inductive biases and conceptual structures.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2024
538    $a Mode of access: World Wide Web
650  4 $a Cognitive psychology. $3 556029
650  4 $a Linguistics. $3 557829
650  4 $a Language. $3 571568
653    $a Language models
653    $a Human cognition
653    $a Artificial neural network
653    $a Conceptual structures
655  7 $a Electronic books. $2 local $3 554714
690    $a 0679
690    $a 0290
690    $a 0633
710 2  $a Massachusetts Institute of Technology. $b Department of Brain and Cognitive Sciences. $3 1471866
710 2  $a ProQuest Information and Learning Co. $3 1178819
773 0  $t Dissertations Abstracts International $g 85-10B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31091159 $z click for full text (PQDT)