Foundation models for natural language processing = pre-trained language models integrating media
Record type:
Bibliographic - language material, printed : Monograph/item
Title / Author:
Foundation models for natural language processing / by Gerhard Paass, Sven Giesselbach.
Other title:
pre-trained language models integrating media
Author:
Paass, Gerhard.
Added author:
Giesselbach, Sven.
Publisher:
Cham : Springer International Publishing, 2023.
Description:
xviii, 436 p. : ill. (some col.), digital ; 24 cm.
Contained By:
Springer Nature eBook
Subject:
Natural language processing (Computer science)
Electronic resource:
https://doi.org/10.1007/978-3-031-23190-2
ISBN:
9783031231902
LDR
03359nam a2200349 a 4500
001
1121266
003
DE-He213
005
20240312140100.0
006
m d
007
cr nn 008maaau
008
240618s2023 sz s 0 eng d
020
$a
9783031231902
$q
(electronic bk.)
020
$a
9783031231896
$q
(paper)
024
7
$a
10.1007/978-3-031-23190-2
$2
doi
035
$a
978-3-031-23190-2
040
$a
GP
$c
GP
041
0
$a
eng
050
4
$a
QA76.9.N38
072
7
$a
UYQL
$2
bicssc
072
7
$a
COM073000
$2
bisacsh
072
7
$a
UYQL
$2
thema
082
0 4
$a
006.35
$2
23
090
$a
QA76.9.N38
$b
P111 2023
100
1
$a
Paass, Gerhard.
$3
1437008
245
1 0
$a
Foundation models for natural language processing
$h
[electronic resource] :
$b
pre-trained language models integrating media /
$c
by Gerhard Paass, Sven Giesselbach.
260
$a
Cham :
$b
Springer International Publishing :
$b
Imprint: Springer,
$c
2023.
300
$a
xviii, 436 p. :
$b
ill. (some col.), digital ;
$c
24 cm.
490
1
$a
Artificial intelligence: foundations, theory, and algorithms,
$x
2365-306X
505
0
$a
1. Introduction -- 2. Pre-trained Language Models -- 3. Improving Pre-trained Language Models -- 4. Knowledge Acquired by Foundation Models -- 5. Foundation Models for Information Extraction -- 6. Foundation Models for Text Generation -- 7. Foundation Models for Speech, Images, Videos, and Control -- 8. Summary and Outlook.
506
$a
Open access.
520
$a
This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts. In recent years, a revolutionary new paradigm has been developed for training models for NLP. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. Then, they are fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of different media and problem domains, ranging from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models. After a brief introduction to basic NLP models, the main pre-trained language models BERT, GPT, and the sequence-to-sequence transformer are described, as well as the concepts of self-attention and context-sensitive embedding. Then, different approaches to improving these models are discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas is then presented, e.g., question answering, translation, story generation, dialog systems, generating images from text, etc. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI. (A minimal code sketch of this pre-train, fine-tune, and prompt workflow follows the record below.)
650
0
$a
Natural language processing (Computer science)
$3
641811
650
0
$a
Artificial intelligence.
$3
559380
650
1 4
$a
Natural Language Processing (NLP)
$3
1211064
650
2 4
$a
Computational Linguistics.
$3
670080
650
2 4
$a
Artificial Intelligence.
$3
646849
650
2 4
$a
Knowledge Based Systems.
$3
1365951
650
2 4
$a
Machine Learning.
$3
1137723
700
1
$a
Giesselbach, Sven.
$3
1437009
710
2
$a
SpringerLink (Online service)
$3
593884
773
0
$t
Springer Nature eBook
830
0
$a
Artificial intelligence: foundations, theory, and algorithms.
$3
1067745
856
4 0
$u
https://doi.org/10.1007/978-3-031-23190-2
950
$a
Computer Science (SpringerNature-11645)
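
The 520 abstract above describes the paradigm the book covers: models are first pre-trained, then either fine-tuned for a task or, when large enough, instructed by prompts with no fine-tuning. As a minimal sketch (not part of the catalog record), the following Python example illustrates both uses with the Hugging Face transformers library; the model checkpoints and example inputs are illustrative assumptions, not drawn from the book.

# Minimal sketch of the pre-train / fine-tune / prompt paradigm described
# in the 520 abstract. Assumes the `transformers` library is installed;
# the model names below are illustrative public checkpoints, not examples
# taken from the book.
from transformers import pipeline

# (1) Pre-trained, then fine-tuned: a DistilBERT checkpoint fine-tuned
#     for sentiment classification on SST-2.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Foundation models solve many NLP tasks surprisingly well."))

# (2) Prompt-style use without task-specific fine-tuning: zero-shot
#     classification, where the candidate labels act as the instruction.
zero_shot = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)
print(zero_shot(
    "This book surveys BERT, GPT, and sequence-to-sequence transformers.",
    candidate_labels=["natural language processing", "cooking"],
))

Both calls return label/score predictions; the point is only that the same pre-trained backbone supports either a task-specific fine-tuned head or instruction-style, zero-shot use.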