Multi-armed bandits : theory and applications to online learning in networks /
Zhao, Qing (Ph.D. in electrical engineering)
Record Type:
Bibliographic - Language material, printed : Monograph/item
Title/Author:
Multi-armed bandits : / Qing Zhao.
Other Title:
theory and applications to online learning in networks /
Author:
Zhao, Qing
Description:
1 PDF (xviii, 147 pages) : illustrations.
Notes:
Part of: Synthesis digital library of engineering and computer science.
Subject:
Machine learning.
Electronic Resource:
https://doi.org/10.2200/S00941ED2V01Y201907CNT022
Electronic Resource:
https://ieeexplore.ieee.org/servlet/opac?bknumber=8910671
ISBN:
9781627058711
Multi-armed bandits : theory and applications to online learning in networks /
Zhao, Qing (Ph.D. in electrical engineering),
Multi-armed bandits :
theory and applications to online learning in networks / Qing Zhao. - 1 PDF (xviii, 147 pages) : illustrations. - (Synthesis lectures on communication networks, 1935-4193 ; #22). - (Synthesis digital library of engineering and computer science).
Part of: Synthesis digital library of engineering and computer science.
Includes bibliographical references (pages 127-145).
1. Introduction -- 1.1. Multi-armed bandit problems -- 1.2. An essential conflict : exploration vs. exploitation -- 1.3. Two formulations : Bayesian and frequentist -- 1.4. Notation
Abstract freely available; full-text restricted to subscribers or individual document purchasers.
Compendex
Multi-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments. Since the first bandit problem posed by Thompson in 1933 for the application of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains. This book covers classic results and recent development on both Bayesian and frequentist bandit problems. We start in Chapter 1 with a brief overview on the history of bandit problems, contrasting the two schools--Bayesian and frequentist--of approaches and highlighting foundational results and key applications. Chapters 2 and 4 cover, respectively, the canonical Bayesian and frequentist bandit models. In Chapters 3 and 5, we discuss major variants of the canonical bandit models that lead to new directions, bring in new techniques, and broaden the applications of this classical problem. In Chapter 6, we present several representative application examples in communication networks and social-economic systems, aiming to illuminate the connections between the Bayesian and the frequentist formulations of bandit problems and how structural results pertaining to one may be leveraged to obtain solutions under the other.
Mode of access: World Wide Web.
ISBN: 9781627058711
Standard No.: 10.2200/S00941ED2V01Y201907CNT022 doi
Subjects--Topical Terms:
Machine learning.
Subjects--Index Terms:
multi-armed bandit
Index Terms--Genre/Form:
Electronic books.
LC Class. No.: Q325.5 / .Z536 2020eb
Dewey Class. No.: 006.3/1
Multi-armed bandits : theory and applications to online learning in networks /
LDR 04801nam 2200601 i 4500
001 959783
003 IEEE
005 20191127190106.0
006 m eo d
007 cr bn |||m|||a
008 201209s2020 caua fob 000 0 eng d
020 $a 9781627058711 $q electronic
020 $z 9781681736372 $q hardcover
020 $z 9781627056380 $q paperback
024 7 $a 10.2200/S00941ED2V01Y201907CNT022 $2 doi
035 $a (CaBNVSL)thg00979755
035 $a (OCoLC)1129092706
035 $a 8910671
040 $a CaBNVSL $b eng $e rda $c CaBNVSL $d CaBNVSL
050 4 $a Q325.5 $b .Z536 2020eb
082 0 4 $a 006.3/1 $2 23
100 1 $a Zhao, Qing $c (Ph.D. in electrical engineering), $e author. $3 1253150
245 1 0 $a Multi-armed bandits : $b theory and applications to online learning in networks / $c Qing Zhao.
264 1 $a [San Rafael, California] : $b Morgan & Claypool, $c [2020]
300 $a 1 PDF (xviii, 147 pages) : $b illustrations.
336 $a text $2 rdacontent
337 $a electronic $2 isbdmedia
338 $a online resource $2 rdacarrier
490 1 $a Synthesis lectures on communication networks, $x 1935-4193 ; $v #22
500 $a Part of: Synthesis digital library of engineering and computer science.
504 $a Includes bibliographical references (pages 127-145).
505 0 $a 1. Introduction -- 1.1. Multi-armed bandit problems -- 1.2. An essential conflict : exploration vs. exploitation -- 1.3. Two formulations : Bayesian and frequentist -- 1.4. Notation
505 8 $a 2. Bayesian bandit model and Gittins index -- 2.1. Markov decision processes -- 2.2. The Bayesian bandit model -- 2.3. Gittins index -- 2.4. Optimality of the Gittins index policy -- 2.5. Computing Gittins index -- 2.6. Semi-Markov bandit processes
505 8 $a 3. Variants of the Bayesian bandit model -- 3.1. Necessary assumptions for the index theorem -- 3.2. Variations in the action space -- 3.3. Variations in the system dynamics -- 3.4. Variations in the reward structure -- 3.5. Variations in performance measure
505 8 $a 4. Frequentist bandit model -- 4.1. Basic formulations and regret measures -- 4.2. Lower bounds on regret -- 4.3. Online learning algorithms -- 4.4. Connections between Bayesian and frequentist bandit models
505 8 $a 5. Variants of the frequentist bandit model -- 5.1. Variations in the reward model -- 5.2. Variations in the action space -- 5.3. Variations in the observation model -- 5.4. Variations in the performance measure -- 5.5. Learning in context : bandits with side information -- 5.6. Learning under competition : bandits with multiple players
505 8 $a 6. Application examples -- 6.1. Communication and computer networks -- 6.2. Social-economic networks.
506 $a Abstract freely available; full-text restricted to subscribers or individual document purchasers.
510 0 $a Compendex
510 0 $a INSPEC
510 0 $a Google scholar
510 0 $a Google book search
520 $a Multi-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments. Since the first bandit problem posed by Thompson in 1933 for the application of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains. This book covers classic results and recent development on both Bayesian and frequentist bandit problems. We start in Chapter 1 with a brief overview on the history of bandit problems, contrasting the two schools--Bayesian and frequentist--of approaches and highlighting foundational results and key applications. Chapters 2 and 4 cover, respectively, the canonical Bayesian and frequentist bandit models. In Chapters 3 and 5, we discuss major variants of the canonical bandit models that lead to new directions, bring in new techniques, and broaden the applications of this classical problem. In Chapter 6, we present several representative application examples in communication networks and social-economic systems, aiming to illuminate the connections between the Bayesian and the frequentist formulations of bandit problems and how structural results pertaining to one may be leveraged to obtain solutions under the other.
530 $a Also available in print.
538 $a Mode of access: World Wide Web.
538 $a System requirements: Adobe Acrobat Reader.
588 $a Title from PDF title page (viewed on November 27, 2019).
650 0 $a Machine learning. $3 561253
650 0 $a Reinforcement learning. $3 815404
653 $a multi-armed bandit
653 $a machine learning
653 $a online learning
653 $a reinforcement learning
653 $a Markov decision processes
655 0 $a Electronic books. $2 local $3 554714
776 0 8 $i Print version: $z 9781627056380 $z 9781681736372
830 0 $a Synthesis digital library of engineering and computer science. $3 598254
830 0 $a Synthesis lectures on communication networks ; $v #14 $3 931433
856 4 0 $3 Abstract with links to full text $u https://doi.org/10.2200/S00941ED2V01Y201907CNT022
856 4 2 $3 Abstract with links to resource $u https://ieeexplore.ieee.org/servlet/opac?bknumber=8910671
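
To make the record's subject matter concrete: the 520 abstract above traces bandit problems to Thompson's 1933 clinical-trial formulation and frames them as a conflict between exploration and exploitation. Below is a minimal Python sketch of Thompson sampling for a Bernoulli bandit, a representative algorithm of the kind the book surveys; the arm success probabilities and the horizon are illustrative assumptions, not values taken from the book.

    import random

    # Thompson sampling for a Bernoulli multi-armed bandit -- a sketch of
    # the 1933 scheme named in the abstract, under assumed arm means.
    def thompson_sampling(true_means, horizon):
        n_arms = len(true_means)
        successes = [1] * n_arms   # Beta(1, 1) uniform prior on each arm
        failures = [1] * n_arms
        total_reward = 0

        for _ in range(horizon):
            # Draw a plausible mean for each arm from its Beta posterior
            # and play the arm with the largest draw; the randomness of
            # the draws balances exploration against exploitation.
            arm = max(range(n_arms),
                      key=lambda a: random.betavariate(successes[a],
                                                       failures[a]))
            reward = 1 if random.random() < true_means[arm] else 0
            successes[arm] += reward
            failures[arm] += 1 - reward
            total_reward += reward

        # Empirical regret: shortfall versus always playing the best arm.
        return horizon * max(true_means) - total_reward

    # Illustrative arms and horizon (assumed, not from the book).
    print("empirical regret:",
          round(thompson_sampling([0.3, 0.5, 0.7], 10000), 1))

Averaged over runs, the empirical regret of this scheme grows only logarithmically with the horizon, the benchmark behavior that the regret lower bounds treated in Chapter 4 make precise.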