Pretrained Transformers for Text Ranking = BERT and Beyond /
Record Type:
Bibliographic - Language material, printed : Monograph/item
Title/Author:
Pretrained Transformers for Text Ranking / by Jimmy Lin, Rodrigo Nogueira, Andrew Yates.
Other Title:
BERT and Beyond /
Author:
Lin, Jimmy.
Other Authors:
Yates, Andrew.
Physical Description:
XVII, 307 p. online resource.
Contained By:
Springer Nature eBook
Subject:
Computational Linguistics.
Electronic Resource:
https://doi.org/10.1007/978-3-031-02181-7
ISBN:
9783031021817
Pretrained Transformers for Text Ranking = BERT and Beyond /
Lin, Jimmy.
Pretrained Transformers for Text Ranking [electronic resource] : BERT and Beyond / by Jimmy Lin, Rodrigo Nogueira, Andrew Yates. - 1st ed. 2022. - XVII, 307 p. online resource. - (Synthesis Lectures on Human Language Technologies, 1947-4059).
Preface -- Acknowledgments -- Introduction -- Setting the Stage -- Multi-Stage Architectures for Reranking -- Refining Query and Document Representations -- Learned Dense Representations for Ranking -- Future Directions and Conclusions -- Bibliography -- Authors' Biographies.
The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing (NLP) applications. This book provides an overview of text ranking with neural network architectures known as transformers, of which BERT (Bidirectional Encoder Representations from Transformers) is the best-known example. The combination of transformers and self-supervised pretraining has been responsible for a paradigm shift in NLP, information retrieval (IR), and beyond. This book provides a synthesis of existing work as a single point of entry for practitioners who wish to gain a better understanding of how to apply transformers to text ranking problems and researchers who wish to pursue work in this area. It covers a wide range of modern techniques, grouped into two high-level categories: transformer models that perform reranking in multi-stage architectures and dense retrieval techniques that perform ranking directly. Two themes pervade the book: techniques for handling long documents, beyond typical sentence-by-sentence processing in NLP, and techniques for addressing the tradeoff between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size). Although transformer architectures and pretraining techniques are recent innovations, many aspects of how they are applied to text ranking are relatively well understood and represent mature techniques. However, there remain many open research questions, and thus in addition to laying out the foundations of pretrained transformers for text ranking, this book also attempts to prognosticate where the field is heading.
ISBN: 9783031021817
Standard No.: 10.1007/978-3-031-02181-7 doi
Subjects--Topical Terms:
Computational Linguistics.
LC Class. No.: Q334-342
Dewey Class. No.: 006.3
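
The abstract above groups the book's techniques into two families: transformer models that rerank candidates in multi-stage architectures, and dense retrieval models that rank directly. As a rough illustration of the first family only, the Python sketch below shows the retrieve-then-rerank control flow: a cheap lexical first stage selects candidates, and a second-stage scorer reorders them. The toy corpus, the overlap-based first stage, and the stand-in rerank_score function are illustrative assumptions, not material from the record or the book; in the setting the book describes, rerank_score would be a pretrained transformer cross-encoder such as BERT scoring each (query, document) pair.

# Toy retrieve-then-rerank sketch (illustrative only; not from the book).
from collections import Counter

corpus = {
    "d1": "BERT is a pretrained transformer applied to text ranking and reranking.",
    "d2": "Dense retrieval encodes queries and documents as vectors for direct ranking.",
    "d3": "Gardening tips for growing tomatoes in small spaces.",
}

def first_stage(query, docs, k=2):
    """First stage: rank documents by raw term overlap and keep the top k candidates."""
    q_terms = Counter(query.lower().split())
    scored = sorted(
        docs,
        key=lambda doc_id: sum((q_terms & Counter(docs[doc_id].lower().split())).values()),
        reverse=True,
    )
    return scored[:k]

def rerank_score(query, text):
    """Stand-in for a transformer cross-encoder: a real reranker would feed the
    (query, text) pair through a pretrained model and return its relevance score.
    Here we just reuse normalized term overlap as a placeholder."""
    q, d = set(query.lower().split()), set(text.lower().split())
    return len(q & d) / max(len(q), 1)

query = "pretrained transformers for text ranking"
candidates = first_stage(query, corpus)        # stage one: cheap recall over the corpus
reranked = sorted(candidates,                  # stage two: expensive scoring of few candidates
                  key=lambda doc_id: rerank_score(query, corpus[doc_id]),
                  reverse=True)
print(reranked)

Splitting the work this way reflects the effectiveness/efficiency tradeoff the abstract mentions: the first stage keeps query latency low, while the expensive model only scores a short candidate list.
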
Pretrained Transformers for Text Ranking = BERT and Beyond /
LDR 03438nam a22003975i 4500
001 1086951
003 DE-He213
005 20220601134352.0
007 cr nn 008mamaa
008 221228s2022 sz | s |||| 0|eng d
020 $a 9783031021817 $9 978-3-031-02181-7
024 7 $a 10.1007/978-3-031-02181-7 $2 doi
035 $a 978-3-031-02181-7
050 4 $a Q334-342
050 4 $a TA347.A78
072 7 $a UYQ $2 bicssc
072 7 $a COM004000 $2 bisacsh
072 7 $a UYQ $2 thema
082 04 $a 006.3 $2 23
100 1 $a Lin, Jimmy. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1393837
245 10 $a Pretrained Transformers for Text Ranking $h [electronic resource] : $b BERT and Beyond / $c by Jimmy Lin, Rodrigo Nogueira, Andrew Yates.
250 $a 1st ed. 2022.
264 1 $a Cham : $b Springer International Publishing : $b Imprint: Springer, $c 2022.
300 $a XVII, 307 p. $b online resource.
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
347 $a text file $b PDF $2 rda
490 1 $a Synthesis Lectures on Human Language Technologies, $x 1947-4059
505 0 $a Preface -- Acknowledgments -- Introduction -- Setting the Stage -- Multi-Stage Architectures for Reranking -- Refining Query and Document Representations -- Learned Dense Representations for Ranking -- Future Directions and Conclusions -- Bibliography -- Authors' Biographies.
520 $a The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing (NLP) applications. This book provides an overview of text ranking with neural network architectures known as transformers, of which BERT (Bidirectional Encoder Representations from Transformers) is the best-known example. The combination of transformers and self-supervised pretraining has been responsible for a paradigm shift in NLP, information retrieval (IR), and beyond. This book provides a synthesis of existing work as a single point of entry for practitioners who wish to gain a better understanding of how to apply transformers to text ranking problems and researchers who wish to pursue work in this area. It covers a wide range of modern techniques, grouped into two high-level categories: transformer models that perform reranking in multi-stage architectures and dense retrieval techniques that perform ranking directly. Two themes pervade the book: techniques for handling long documents, beyond typical sentence-by-sentence processing in NLP, and techniques for addressing the tradeoff between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size). Although transformer architectures and pretraining techniques are recent innovations, many aspects of how they are applied to text ranking are relatively well understood and represent mature techniques. However, there remain many open research questions, and thus in addition to laying out the foundations of pretrained transformers for text ranking, this book also attempts to prognosticate where the field is heading.
650 24 $a Computational Linguistics. $3 670080
650 24 $a Natural Language Processing (NLP). $3 1254293
650 14 $a Artificial Intelligence. $3 646849
650 0 $a Computational linguistics. $3 555811
650 0 $a Natural language processing (Computer science). $3 802180
650 0 $a Artificial intelligence. $3 559380
700 1 $a Yates, Andrew. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1187095
700 1 $a Nogueira, Rodrigo. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1393838
710 2 $a SpringerLink (Online service) $3 593884
773 0 $t Springer Nature eBook
776 08 $i Printed edition: $z 9783031001925
776 08 $i Printed edition: $z 9783031010538
776 08 $i Printed edition: $z 9783031033094
830 0 $a Synthesis Lectures on Human Language Technologies, $x 1947-4059 $3 1389817
856 40 $u https://doi.org/10.1007/978-3-031-02181-7
912 $a ZDB-2-SXSC
950 $a Synthesis Collection of Technology (R0) (SpringerNature-85007)
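
For anyone who wants to work with the MARC view above programmatically, here is a minimal, assumption-laden Python sketch (standard library only) that parses display lines in the "TAG INDICATORS $a value $b value" layout used on this page into (tag, indicators, subfields) tuples. It is not a full MARC parser: control fields such as 001-008 carry no subfields and are simply reported as None, and blank indicator positions are not distinguished.

import re

# Matches "TAG [indicators] $..." as laid out in the record above,
# e.g. "650 24 $a Computational Linguistics. $3 670080".
FIELD_RE = re.compile(r"^(\d{3})\s+([0-9 ]{0,2})\s*(?=\$)")

def parse_field(line):
    """Return (tag, indicators, [(code, value), ...]) for one display line,
    or None for control fields / lines without $-delimited subfields."""
    m = FIELD_RE.match(line)
    if not m:
        return None
    tag, indicators = m.group(1), m.group(2).strip()
    rest = line[m.end():]
    # re.split with a capturing group yields ['', code1, value1, code2, value2, ...]
    parts = re.split(r"\$(\w)\s*", rest)
    subfields = [(code, value.strip()) for code, value in zip(parts[1::2], parts[2::2])]
    return tag, indicators, subfields

print(parse_field("650 24 $a Computational Linguistics. $3 670080"))
# -> ('650', '24', [('a', 'Computational Linguistics.'), ('3', '670080')])
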