A Deep Learning Approach to Extractive Text Summarization Using Knowledge Graph and Language Model.
Record Type: Bibliographic - Language materials, manuscript : Monograph/item
Title/Author: A Deep Learning Approach to Extractive Text Summarization Using Knowledge Graph and Language Model.
Author: Gao, Yichen.
Description: 1 online resource (49 pages)
Notes: Source: Masters Abstracts International, Volume: 85-01.
Contained By: Masters Abstracts International, 85-01.
Subject: Computer science.
Electronic Resources: click for full text (PQDT)
ISBN: 9798379767563
Gao, Yichen.
A Deep Learning Approach to Extractive Text Summarization Using Knowledge Graph and Language Model. - 1 online resource (49 pages)
Source: Masters Abstracts International, Volume: 85-01.
Thesis (M.S.)--Illinois State University, 2023.
Includes bibliographical references
Extractive summarization has been widely studied, but the summaries generated by most current extractive summarization methods disregard the article structure of the source document, and the selected sentences are sometimes not the most representative ones in the article. In this thesis, we propose a knowledge-graph-enhanced extractive summarization algorithm that leverages both the source document and a knowledge graph to predict the most representative sentences for the summary. The knowledge graph allows deep learning models built on pre-trained language models to attend to article-structure information while generating extractive summaries. Our proposed method consists of an encoder and a classifier: the encoder encodes the source document and the knowledge graph separately, and the classifier inter-encodes the two representations through a cross-attention mechanism and then determines whether each sentence belongs to the summary. On the CNN/Daily Mail dataset, our model achieves higher ROUGE scores than the corresponding extractive summarization model based on the pre-trained language model alone, without knowledge-graph assistance.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2024.
Mode of access: World Wide Web.
ISBN: 9798379767563
Subjects--Topical Terms: Computer science.
Subjects--Index Terms: Deep learning models
Index Terms--Genre/Form: Electronic books.
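The abstract above describes the model as two encoders plus a cross-attention sentence classifier. The following is a minimal PyTorch sketch of that shape, not the thesis implementation: all class names, layer counts, and sizes are illustrative assumptions, and a plain Transformer encoder stands in for the pre-trained language model.

import torch
import torch.nn as nn

class KGEnhancedExtractiveSummarizer(nn.Module):  # hypothetical name
    def __init__(self, hidden=768, heads=8):
        super().__init__()
        # Stand-in for the pre-trained language model encoding the document.
        self.doc_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, heads, batch_first=True),
            num_layers=2)
        # Separate encoder for the knowledge-graph node embeddings.
        self.kg_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, heads, batch_first=True),
            num_layers=2)
        # Cross-attention: sentence representations attend to graph nodes,
        # "inter-encoding" the document and the knowledge graph.
        self.cross_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        # Binary classifier: does this sentence belong to the summary?
        self.classifier = nn.Linear(hidden, 1)

    def forward(self, sent_embs, kg_embs):
        # sent_embs: (batch, n_sentences, hidden); kg_embs: (batch, n_nodes, hidden)
        docs = self.doc_encoder(sent_embs)
        graph = self.kg_encoder(kg_embs)
        fused, _ = self.cross_attn(query=docs, key=graph, value=graph)
        # Per-sentence probability of being a summary sentence.
        return torch.sigmoid(self.classifier(fused)).squeeze(-1)

# Toy usage: score 20 sentences against 15 graph nodes, keep the top 3.
model = KGEnhancedExtractiveSummarizer()
scores = model(torch.randn(1, 20, 768), torch.randn(1, 15, 768))
summary_idx = scores.topk(3, dim=-1).indices

In this sketch the classifier scores every sentence and the highest-scoring sentences would form the extractive summary, mirroring the sentence-level binary classification the abstract describes.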
LDR  02611ntm a22003857 4500
001  1142927
005  20240513061026.5
006  m o d
007  cr mn ---uuuuu
008  250605s2023 xx obm 000 0 eng d
020  $a 9798379767563
035  $a (MiAaPQ)AAI30418748
035  $a AAI30418748
040  $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1 $a Gao, Yichen. $3 1467457
245 12 $a A Deep Learning Approach to Extractive Text Summarization Using Knowledge Graph and Language Model.
264 0 $c 2023
300  $a 1 online resource (49 pages)
336  $a text $b txt $2 rdacontent
337  $a computer $b c $2 rdamedia
338  $a online resource $b cr $2 rdacarrier
500  $a Source: Masters Abstracts International, Volume: 85-01.
500  $a Advisor: Han, Hoyil.
502  $a Thesis (M.S.)--Illinois State University, 2023.
504  $a Includes bibliographical references
520  $a Extractive summarization has been widely studied, but the summaries generated by most current extractive summarization methods disregard the article structure of the source document, and the selected sentences are sometimes not the most representative ones in the article. In this thesis, we propose a knowledge-graph-enhanced extractive summarization algorithm that leverages both the source document and a knowledge graph to predict the most representative sentences for the summary. The knowledge graph allows deep learning models built on pre-trained language models to attend to article-structure information while generating extractive summaries. Our proposed method consists of an encoder and a classifier: the encoder encodes the source document and the knowledge graph separately, and the classifier inter-encodes the two representations through a cross-attention mechanism and then determines whether each sentence belongs to the summary. On the CNN/Daily Mail dataset, our model achieves higher ROUGE scores than the corresponding extractive summarization model based on the pre-trained language model alone, without knowledge-graph assistance.
533  $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2024
538  $a Mode of access: World Wide Web
650 4 $a Computer science. $3 573171
650 4 $a Information technology. $3 559429
653  $a Deep learning models
653  $a Extractive summarization
653  $a Knowledge graph
653  $a Language model
655 7 $a Electronic books. $2 local $3 554714
690  $a 0984
690  $a 0800
690  $a 0489
710 2 $a ProQuest Information and Learning Co. $3 1178819
710 2 $a Illinois State University. $b School of Information Technology: Information Systems. $3 1467458
773 0 $t Masters Abstracts International $g 85-01.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30418748 $z click for full text (PQDT)
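To work with the MARC record above programmatically, the following is a minimal sketch using the pymarc library, assuming the record has been exported as a binary MARC file; the filename record.mrc is hypothetical.

from pymarc import MARCReader

with open('record.mrc', 'rb') as fh:        # hypothetical export of the record above
    for record in MARCReader(fh):
        print(record['245']['a'])           # title proper
        print(record['100']['a'])           # main entry (author)
        for field in record.get_fields('653'):
            print('keyword:', field['a'])   # uncontrolled index terms
        print(record['856']['u'])           # full-text URL (PQDT)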