Exploration of Word Embeddings With Graph-Based Context Adaptation for Enhanced Word Vectors.
Record type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
Exploration of Word Embeddings With Graph-Based Context Adaptation for Enhanced Word Vectors.
Author:
Sandhu, Tanvi.
Description:
1 online resource (73 pages)
Notes:
Source: Masters Abstracts International, Volume: 85-12.
Contained By:
Masters Abstracts International, 85-12.
Subject:
Computer science.
Electronic resource:
click for full text (PQDT)
ISBN:
9798382903248
LDR     02849ntm a22004097 4500
001     1150218
005     20241022111616.5
006     m o d
007     cr bn ---uuuuu
008     250605s2024 xx obm 000 0 eng d
020     $a 9798382903248
035     $a (MiAaPQ)AAI31331434
035     $a AAI31331434
040     $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1   $a Sandhu, Tanvi. $3 1476661
245 10  $a Exploration of Word Embeddings With Graph-Based Context Adaptation for Enhanced Word Vectors.
264  0  $c 2024
300     $a 1 online resource (73 pages)
336     $a text $b txt $2 rdacontent
337     $a computer $b c $2 rdamedia
338     $a online resource $b cr $2 rdacarrier
500     $a Source: Masters Abstracts International, Volume: 85-12.
500     $a Advisor: Kobti, Z.
502     $a Thesis (M.Sc.)--University of Windsor (Canada), 2024.
504     $a Includes bibliographical references
520     $a Text plays a central role in information storage, necessitating streamlined and effective methods for swift retrieval. Among the various text representations, the vector form stands out for its efficiency, especially when dealing with large datasets. Placing words that are similar in meaning close to each other in the vector space improves system performance on a range of Natural Language Processing (NLP) tasks. Previous methods, primarily centered on capturing word context through neural language models, have fallen short of delivering high scores on word similarity problems. This thesis investigates the connection between vector representations of words and the performance and accuracy observed in NLP tasks. It introduces a method that represents words as a graph in which their first-order and second-order proximity are preserved, aiming to enhance semantic representation. Experimental deployment of this technique across diverse text corpora underscores its superiority over conventional word embedding approaches: it outperforms traditional word-embedding methods by 2.7% on multiple intrinsic and extrinsic tasks. The findings not only contribute to the evolving landscape of semantic representation learning but also illuminate their implications for text classification tasks, especially within the context of dynamic embedding models.
533     $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2024
538     $a Mode of access: World Wide Web
650  4  $a Computer science. $3 573171
650  4  $a Computer engineering. $3 569006
653     $a Graph
653     $a Natural Language Processing
653     $a Semantic similarity
653     $a Word Embedding
653     $a Word similarity
653     $a Word vectors
655  7  $a Electronic books. $2 local $3 554714
690     $a 0800
690     $a 0984
690     $a 0464
710 2   $a University of Windsor (Canada). $b COMPUTER SCIENCE. $3 1182526
710 2   $a ProQuest Information and Learning Co. $3 1178819
773 0   $t Masters Abstracts International $g 85-12.
856 40  $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31331434 $z click for full text (PQDT)
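
Note on the 520 abstract above: the thesis describes representing words as a graph whose first-order and second-order proximity are preserved. The following is a minimal sketch of that general idea only, not the thesis's actual algorithm; the toy corpus, the co-occurrence window, the function names, and the cosine-over-neighbor-weights definition of second-order proximity are all illustrative assumptions.

# Minimal sketch (assumptions noted above): build a word co-occurrence
# graph, then measure first- and second-order proximity on it.
from collections import defaultdict
import math

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat and a dog played",
]
WINDOW = 2  # co-occurrence window size (assumed for illustration)

# Undirected weighted word graph: edge weight = co-occurrence count.
graph = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + WINDOW, len(tokens))):
            if word != tokens[j]:
                graph[word][tokens[j]] += 1
                graph[tokens[j]][word] += 1

def first_order(u, v):
    # First-order proximity: strength of the direct edge between u and v.
    return graph[u].get(v, 0)

def second_order(u, v):
    # Second-order proximity: cosine similarity of the two words'
    # neighbor-weight vectors, i.e. how similar their contexts are.
    nu, nv = graph[u], graph[v]
    dot = sum(w * nv[x] for x, w in nu.items() if x in nv)
    norm_u = math.sqrt(sum(w * w for w in nu.values()))
    norm_v = math.sqrt(sum(w * w for w in nv.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

print(first_order("cat", "dog"))             # 0: never co-occur directly
print(round(second_order("cat", "dog"), 3))  # > 0: they share contexts

In embedding terms, a LINE-style objective would learn vectors whose inner products fit these two proximities; the sketch only computes the proximities themselves on the raw graph.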