Comparison and Fine-Grained Analysis of Sequence Encoders for Natural Language Processing.
Record type: Bibliographic - Language material, manuscript : Monograph/item
Title/Author: Comparison and Fine-Grained Analysis of Sequence Encoders for Natural Language Processing.
Author: Keller, Thomas Anderson.
Extent: 1 online resource (87 pages)
Notes: Source: Masters Abstracts International, Volume: 56-05.
Subject: Artificial intelligence.
Electronic resource: click for full text (PQDT)
ISBN: 9780355102765
Comparison and Fine-Grained Analysis of Sequence Encoders for Natural Language Processing.
Keller, Thomas Anderson.
1 online resource (87 pages)
Source: Masters Abstracts International, Volume: 56-05.
Thesis (M.S.)--University of California, San Diego, 2017.
Includes bibliographical references
Most machine learning algorithms require a fixed length input to be able to perform commonly desired tasks such as classification, clustering, and regression. For natural language processing, the inherently unbounded and recursive nature of the input poses a unique challenge when deriving such fixed length representations. Although today there is a general consensus on how to generate fixed length representations of individual words which preserve their meaning, the same cannot be said for sequences of words in sentences, paragraphs, or documents. In this work, we study the encoders commonly used to generate fixed length representations of natural language sequences, and analyze their effectiveness across a variety of high and low level tasks including sentence classification and question answering. Additionally, we propose novel improvements to the existing Skip-Thought and End-to-End Memory Network architectures and study their performance on both the original and auxiliary tasks. Ultimately, we show that the setting in which the encoders are trained, and the corpus used for training, have a greater influence on the final learned representation than the underlying sequence encoders themselves.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Mode of access: World Wide Web.
ISBN: 9780355102765
Subjects--Topical Terms: Artificial intelligence.
Index Terms--Genre/Form: Electronic books.
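The abstract's central notion, mapping a variable-length token sequence to a single fixed-length vector that classifiers, clusterers, or regressors can consume, can be illustrated with a small sketch. The mean-pooling baseline below is an assumption chosen for brevity (the thesis itself studies richer encoders such as Skip-Thought and End-to-End Memory Networks); all names, dimensions, and data are illustrative and are not drawn from the thesis code.

# Minimal sketch: collapse variable-length token sequences into one
# fixed-length vector by mean-pooling word embeddings.
import numpy as np

EMBED_DIM = 300  # assumed word-embedding size

# Toy embedding table: one random vector per known word (stands in for
# pretrained embeddings such as word2vec or GloVe).
vocab = ["most", "machine", "learning", "algorithms", "require", "a",
         "fixed", "length", "input"]
embeddings = {w: np.random.randn(EMBED_DIM) for w in vocab}

def encode_mean_pool(tokens):
    """Map a variable-length token sequence to one EMBED_DIM vector."""
    vectors = [embeddings[t] for t in tokens if t in embeddings]
    if not vectors:
        return np.zeros(EMBED_DIM)
    return np.mean(vectors, axis=0)

# Sequences of different lengths yield vectors of the same shape,
# which is the fixed-length input downstream models require.
short = encode_mean_pool(["fixed", "length", "input"])
long_ = encode_mean_pool("most machine learning algorithms require a fixed length input".split())
assert short.shape == long_.shape == (EMBED_DIM,)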
LDR    02424ntm a2200349K 4500
001    913834
005    20180628103545.5
006    m o u
007    cr mn||||a|a||
008    190606s2017 xx obm 000 0 eng d
020    $a 9780355102765
035    $a (MiAaPQ)AAI10599339
035    $a (MiAaPQ)ucsd:16707
035    $a AAI10599339
040    $a MiAaPQ $b eng $c MiAaPQ
100 1  $a Keller, Thomas Anderson. $3 1186839
245 10 $a Comparison and Fine-Grained Analysis of Sequence Encoders for Natural Language Processing.
264  0 $c 2017
300    $a 1 online resource (87 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Masters Abstracts International, Volume: 56-05.
500    $a Adviser: Garrison W. Cottrell.
502    $a Thesis (M.S.)--University of California, San Diego, 2017.
504    $a Includes bibliographical references
520    $a Most machine learning algorithms require a fixed length input to be able to perform commonly desired tasks such as classification, clustering, and regression. For natural language processing, the inherently unbounded and recursive nature of the input poses a unique challenge when deriving such fixed length representations. Although today there is a general consensus on how to generate fixed length representations of individual words which preserve their meaning, the same cannot be said for sequences of words in sentences, paragraphs, or documents. In this work, we study the encoders commonly used to generate fixed length representations of natural language sequences, and analyze their effectiveness across a variety of high and low level tasks including sentence classification and question answering. Additionally, we propose novel improvements to the existing Skip-Thought and End-to-End Memory Network architectures and study their performance on both the original and auxiliary tasks. Ultimately, we show that the setting in which the encoders are trained, and the corpus used for training, have a greater influence on the final learned representation than the underlying sequence encoders themselves.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538    $a Mode of access: World Wide Web
650  4 $a Artificial intelligence. $3 559380
650  4 $a Computer science. $3 573171
650  4 $a Linguistics. $3 557829
650  4 $a Language. $3 571568
655  7 $a Electronic books. $2 local $3 554714
690    $a 0800
690    $a 0984
690    $a 0290
690    $a 0679
710 2  $a ProQuest Information and Learning Co. $3 1178819
710 2  $a University of California, San Diego. $b Computer Science. $3 1182161
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10599339 $z click for full text (PQDT)