Pretrained Transformers for Text Ranking = BERT and Beyond /
Record Type:
Language materials, printed : Monograph/item
Title/Author:
Pretrained Transformers for Text Ranking / by Jimmy Lin, Rodrigo Nogueira, Andrew Yates.
Remainder of title:
BERT and Beyond /
Author:
Lin, Jimmy.
Other author:
Nogueira, Rodrigo.
Description:
XVII, 307 p. online resource.
Contained By:
Springer Nature eBook
Subject:
Artificial intelligence.
Online resource:
https://doi.org/10.1007/978-3-031-02181-7
ISBN:
9783031021817
Pretrained Transformers for Text Ranking = BERT and Beyond /
Lin, Jimmy.
Pretrained Transformers for Text Ranking [electronic resource] : BERT and Beyond / by Jimmy Lin, Rodrigo Nogueira, Andrew Yates. - 1st ed. 2022. - XVII, 307 p. online resource. - (Synthesis Lectures on Human Language Technologies, 1947-4059).
Preface -- Acknowledgments -- Introduction -- Setting the Stage -- Multi-Stage Architectures for Reranking -- Refining Query and Document Representations -- Learned Dense Representations for Ranking -- Future Directions and Conclusions -- Bibliography -- Authors' Biographies.
The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing (NLP) applications. This book provides an overview of text ranking with neural network architectures known as transformers, of which BERT (Bidirectional Encoder Representations from Transformers) is the best-known example. The combination of transformers and self-supervised pretraining has been responsible for a paradigm shift in NLP, information retrieval (IR), and beyond. This book provides a synthesis of existing work as a single point of entry for practitioners who wish to gain a better understanding of how to apply transformers to text ranking problems and researchers who wish to pursue work in this area. It covers a wide range of modern techniques, grouped into two high-level categories: transformer models that perform reranking in multi-stage architectures and dense retrieval techniques that perform ranking directly. Two themes pervade the book: techniques for handling long documents, beyond typical sentence-by-sentence processing in NLP, and techniques for addressing the tradeoff between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size). Although transformer architectures and pretraining techniques are recent innovations, many aspects of how they are applied to text ranking are relatively well understood and represent mature techniques. However, there remain many open research questions, and thus in addition to laying out the foundations of pretrained transformers for text ranking, this book also attempts to prognosticate where the field is heading.
ISBN: 9783031021817
Standard No.: 10.1007/978-3-031-02181-7 doi
Subjects--Topical Terms:
Artificial intelligence.
LC Class. No.: Q334-342
Dewey Class. No.: 006.3
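
The abstract above groups the book's techniques into two families: transformer models that rerank candidates in multi-stage architectures, and dense retrieval models that rank directly over learned representations. A minimal sketch of that distinction, assuming the sentence-transformers library and publicly available MS MARCO models; the model names and example texts are illustrative assumptions, not taken from this record or the book.

# Illustrative sketch only: contrasting the two technique families named in
# the abstract. Model names below are assumptions chosen for illustration.
from sentence_transformers import CrossEncoder, SentenceTransformer, util

query = "pretrained transformers for text ranking"
docs = [
    "BERT-style rerankers score each query-document pair jointly.",
    "Dense retrievers encode queries and documents into vectors independently.",
    "An unrelated passage about gardening.",
]

# (1) Multi-stage reranking: a cross-encoder reads the query and a candidate
#     document together and outputs a relevance score; in practice it rescores
#     a shortlist produced by a cheaper first-stage retriever.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
rerank_scores = reranker.predict([(query, d) for d in docs])

# (2) Dense retrieval: a bi-encoder embeds the query and each document
#     separately, so document vectors can be indexed ahead of time and
#     ranking reduces to a vector similarity search.
encoder = SentenceTransformer("sentence-transformers/msmarco-MiniLM-L-6-v3")
query_vec = encoder.encode(query, convert_to_tensor=True)
doc_vecs = encoder.encode(docs, convert_to_tensor=True)
dense_scores = util.cos_sim(query_vec, doc_vecs)[0]

for doc, ce, bi in zip(docs, rerank_scores, dense_scores.tolist()):
    print(f"cross-encoder={ce:7.3f}  bi-encoder={bi:6.3f}  {doc}")

The cross-encoder must score every query-document pair at query time, while the bi-encoder lets document vectors be precomputed and indexed; this is one instance of the effectiveness/efficiency tradeoff the abstract highlights.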
Pretrained Transformers for Text Ranking = BERT and Beyond /
LDR     03438nam a22003975i 4500
001     1086951
003     DE-He213
005     20220601134352.0
007     cr nn 008mamaa
008     221228s2022 sz | s |||| 0|eng d
020     $a 9783031021817 $9 978-3-031-02181-7
024 7   $a 10.1007/978-3-031-02181-7 $2 doi
035     $a 978-3-031-02181-7
050 4   $a Q334-342
050 4   $a TA347.A78
072 7   $a UYQ $2 bicssc
072 7   $a COM004000 $2 bisacsh
072 7   $a UYQ $2 thema
082 0 4 $a 006.3 $2 23
100 1   $a Lin, Jimmy. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1393837
245 1 0 $a Pretrained Transformers for Text Ranking $h [electronic resource] : $b BERT and Beyond / $c by Jimmy Lin, Rodrigo Nogueira, Andrew Yates.
250     $a 1st ed. 2022.
264 1   $a Cham : $b Springer International Publishing : $b Imprint: Springer, $c 2022.
300     $a XVII, 307 p. $b online resource.
336     $a text $b txt $2 rdacontent
337     $a computer $b c $2 rdamedia
338     $a online resource $b cr $2 rdacarrier
347     $a text file $b PDF $2 rda
490 1   $a Synthesis Lectures on Human Language Technologies, $x 1947-4059
505 0   $a Preface -- Acknowledgments -- Introduction -- Setting the Stage -- Multi-Stage Architectures for Reranking -- Refining Query and Document Representations -- Learned Dense Representations for Ranking -- Future Directions and Conclusions -- Bibliography -- Authors' Biographies.
520     $a The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing (NLP) applications. This book provides an overview of text ranking with neural network architectures known as transformers, of which BERT (Bidirectional Encoder Representations from Transformers) is the best-known example. The combination of transformers and self-supervised pretraining has been responsible for a paradigm shift in NLP, information retrieval (IR), and beyond. This book provides a synthesis of existing work as a single point of entry for practitioners who wish to gain a better understanding of how to apply transformers to text ranking problems and researchers who wish to pursue work in this area. It covers a wide range of modern techniques, grouped into two high-level categories: transformer models that perform reranking in multi-stage architectures and dense retrieval techniques that perform ranking directly. Two themes pervade the book: techniques for handling long documents, beyond typical sentence-by-sentence processing in NLP, and techniques for addressing the tradeoff between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size). Although transformer architectures and pretraining techniques are recent innovations, many aspects of how they are applied to text ranking are relatively well understood and represent mature techniques. However, there remain many open research questions, and thus in addition to laying out the foundations of pretrained transformers for text ranking, this book also attempts to prognosticate where the field is heading.
650 0   $a Artificial intelligence. $3 559380
650 0   $a Natural language processing (Computer science). $3 802180
650 0   $a Computational linguistics. $3 555811
650 1 4 $a Artificial Intelligence. $3 646849
650 2 4 $a Natural Language Processing (NLP). $3 1254293
650 2 4 $a Computational Linguistics. $3 670080
700 1   $a Nogueira, Rodrigo. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1393838
700 1   $a Yates, Andrew. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1187095
710 2   $a SpringerLink (Online service) $3 593884
773 0   $t Springer Nature eBook
776 0 8 $i Printed edition: $z 9783031001925
776 0 8 $i Printed edition: $z 9783031010538
776 0 8 $i Printed edition: $z 9783031033094
830 0   $a Synthesis Lectures on Human Language Technologies, $x 1947-4059 $3 1389817
856 4 0 $u https://doi.org/10.1007/978-3-031-02181-7
912     $a ZDB-2-SXSC
950     $a Synthesis Collection of Technology (R0) (SpringerNature-85007)