University of Washington.
Annotating and Modeling Shallow Semantics Directly from Text.
Record type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
Annotating and Modeling Shallow Semantics Directly from Text./
Author:
He, Luheng.
Description:
1 online resource (103 pages)
Notes:
Source: Dissertation Abstracts International, Volume: 79-12(E), Section: B.
Contained by:
Dissertation Abstracts International, 79-12B(E).
Subject:
Computer science.
Artificial intelligence.
Electronic resource:
click for full text (PQDT)
ISBN:
9780438174009
LDR 03882ntm a2200373Ki 4500
001 916918
005 20180928111503.5
006 m o u
007 cr mn||||a|a||
008 190606s2018 xx obm 000 0 eng d
020 $a 9780438174009
035 $a (MiAaPQ)AAI10822766
035 $a (MiAaPQ)washington:18475
035 $a AAI10822766
040 $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1 $a He, Luheng. $3 1190788
245 10 $a Annotating and Modeling Shallow Semantics Directly from Text.
264 0 $c 2018
300 $a 1 online resource (103 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Dissertation Abstracts International, Volume: 79-12(E), Section: B.
500 $a Adviser: Luke S. Zettlemoyer.
502 $a Thesis (Ph.D.)--University of Washington, 2018.
504 $a Includes bibliographical references.
520 $a One key challenge in understanding human language is recovering word-to-word semantic relations such as "who does what to whom", "when", and "where". Semantic role labeling (SRL) is the widely studied task of recovering such predicate-argument structure. SRL is designed to be consistent across syntactic alternations, which can benefit downstream applications such as information extraction, machine translation, and summarization. However, the performance of SRL systems is limited by the amount of training data and by their dependence on an intermediate syntactic representation, further impeding their use in downstream applications. In this thesis, our goal is to develop annotation frameworks and learning models for recovering semantic structures directly from text, in an end-to-end manner.
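The predicate-argument structure the abstract describes can be made concrete with a toy example. This sketch is purely illustrative (it is not code from the thesis); the sentence and role assignments are invented, and the role labels follow PropBank conventions (ARG0 = agent, ARG1 = patient, ARGM-TMP = temporal modifier):

```python
# Toy illustration of SRL output: each predicate gets a set of labeled
# argument spans answering "who does what to whom / when / where".
sentence = "The chef cooked dinner yesterday".split()

# Frame for the predicate at token index 2 ("cooked"):
# list of (start, end, role) spans with inclusive end indices.
frames = {
    2: [
        (0, 1, "ARG0"),      # who:  "The chef"
        (3, 3, "ARG1"),      # what: "dinner"
        (4, 4, "ARGM-TMP"),  # when: "yesterday"
    ],
}

def realize(frame):
    """Render a frame as human-readable (role, text) pairs."""
    return [(role, " ".join(sentence[s:e + 1])) for s, e, role in frame]

print(realize(frames[2]))
```

An SRL system's job is to recover such frames for every predicate in a sentence, without the role labels changing when the sentence is, say, passivized.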
520 $a We first introduce question-answer driven semantic role labeling (QA-SRL), an annotation framework that allows us to gather SRL information from non-expert annotators. Unlike traditional SRL formalisms (e.g. PropBank), this new task does not depend on a predefined syntactic structure or frame ontology. It is simple and intuitive enough that we can train any native speaker to provide annotations, as long as they can understand the meanings of sentences.
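To illustrate how an annotation scheme can replace role labels with questions, here is a hypothetical sketch of a QA-SRL-style record; the sentence and question wordings are invented for illustration, not drawn from the actual corpus:

```python
# In QA-SRL-style annotation, arguments are collected as answers to simple
# wh-questions about each verb, so annotators need no frame ontology or
# syntactic training. All data below is illustrative.
sentence = "The chef cooked dinner yesterday"

qa_srl = {
    "predicate": "cooked",
    "qa_pairs": [
        ("Who cooked something?", "The chef"),
        ("What did someone cook?", "dinner"),
        ("When did someone cook something?", "yesterday"),
    ],
}

# A basic validity check any native speaker can perform:
# every answer must be a span of the original sentence.
assert all(ans in sentence for _, ans in qa_srl["qa_pairs"])
```

The key design choice is that the questions themselves carry the role information ("Who…?" plays the part of an agent label), so no expert labeling guidelines are required.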
520 $a We also develop two general-purpose, syntax-independent neural models that lead to significant performance gains, including an over 40% error reduction over long-standing pre-neural performance levels on PropBank. Our first model, DeepSRL, uses highway BiLSTMs to make local BIO-tagging decisions for each token. While significantly outperforming previous systems, DeepSRL cannot jointly process multiple predicates or incorporate span-level features.
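The per-token BIO-tagging formulation mentioned above can be sketched at the output end. The model internals (highway BiLSTMs) are omitted here; this is only the standard step of decoding a predicted tag sequence into labeled argument spans, written as a minimal illustration rather than the thesis implementation:

```python
# Decode a BIO tag sequence (one tag per token, as DeepSRL-style models
# predict) into (start, end, label) argument spans with inclusive ends.
def bio_to_spans(tags):
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if label is not None:          # close the currently open span
                spans.append((start, i - 1, label))
                start, label = None, None
            if tag.startswith("B-"):       # open a new span
                start, label = i, tag[2:]
        # "I-" tags simply extend the open span
    return spans

tags = ["B-ARG0", "I-ARG0", "O", "B-ARG1", "B-ARGM-TMP"]
print(bio_to_spans(tags))
```

Because each token's tag is chosen locally, nothing in this representation ties together the frames of different predicates or scores whole spans at once, which is exactly the limitation the next model addresses.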
520 $a To address these limitations, we further introduce a span-based neural model called the Labeled Span Graph Network (LSGN). Inspired by a recent state-of-the-art coreference resolution model, LSGNs build contextualized representations for all spans in the input text, and use lightweight classifiers to make independent edge labeling decisions. With LSGNs, we are able to model all predicate words and argument spans jointly, the first end-to-end result we know of. LSGNs also lead to a unified view for many NLP structures involving span-labeling or span-span relations. In addition to SRL and coreference resolution, LSGNs also achieve state-of-the-art performance when applied to named entity recognition (NER) without any feature engineering. This opens up exciting future directions to build a single, unified model for end-to-end, document-level semantic analysis.
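The span-graph idea — enumerate candidate spans, then label edges independently — can be sketched with a stand-in scorer. This is an illustrative skeleton under stated assumptions, not the published LSGN code: the neural span representations and classifier are replaced by a toy `score` function, and the maximum span width is an arbitrary example value:

```python
# Skeleton of the span-graph formulation: enumerate all spans up to a
# maximum width, then make an independent labeling decision for every
# (predicate, argument-span) edge. The scorer is a toy stand-in.
from itertools import product

def enumerate_spans(n_tokens, max_width=3):
    """All (start, end) spans with inclusive ends, at most max_width tokens."""
    return [(i, j) for i in range(n_tokens)
            for j in range(i, min(i + max_width, n_tokens))]

def label_edges(n_tokens, score, threshold=0.5):
    """score(pred, span) -> (label, prob); keep edges scored above threshold."""
    edges = []
    for pred, span in product(range(n_tokens), enumerate_spans(n_tokens)):
        label, p = score(pred, span)
        if p > threshold:
            edges.append((pred, span, label))
    return edges

# Toy scorer standing in for the neural classifier: one confident edge.
toy = lambda pred, span: ("ARG0", 0.9) if (pred, span) == (2, (0, 1)) else ("O", 0.0)
print(label_edges(5, toy))
```

Because every structure is expressed as labeled edges over the same span inventory, the same skeleton covers SRL, coreference, and NER by swapping the label set and scorer.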
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538 $a Mode of access: World Wide Web.
650  4 $a Computer science. $3 573171
650  4 $a Artificial intelligence. $3 559380
655  7 $a Electronic books. $2 local $3 554714
690 $a 0984
690 $a 0800
710 2 $a ProQuest Information and Learning Co. $3 1178819
710 2 $a University of Washington. $b Computer Science and Engineering. $3 1182238
773 0 $t Dissertation Abstracts International $g 79-12B(E).
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10822766 $z click for full text (PQDT)