Bandits and Preference Learning.
Aniruddha, Bhargava.
Record type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
Bandits and Preference Learning.
Author:
Aniruddha, Bhargava.
Physical description:
1 online resource (168 pages)
Notes:
Source: Dissertation Abstracts International, Volume: 79-03(E), Section: B.
Contained By:
Dissertation Abstracts International, 79-03B(E).
Subject:
Electrical engineering. - Computer science. - Artificial intelligence.
Electronic resource:
click for full text (PQDT)
ISBN:
9780355524703
Bandits and Preference Learning. / Aniruddha, Bhargava.
- 1 online resource (168 pages)
Source: Dissertation Abstracts International, Volume: 79-03(E), Section: B.
Thesis (Ph.D.)--The University of Wisconsin - Madison, 2017.
Includes bibliographical references
The internet revolution has brought a large population access to a vast array of information since the mid 1990s. More recently, with the advent of smartphones, it has become an essential part of our everyday life. This has led to, among many other developments, the personalization of the online experience, with great benefits to all involved. Companies have particular interest in showing products and advertisements that match what particular users are looking for, and users desire personalized recommendations from the internet for entertainment and consumer goods that suit them as individuals. In machine learning, this is popularly achieved using the theory of multi-armed bandits, methods which allow us to zero in on the consumer's personal preferences.
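The multi-armed bandit approach the abstract invokes can be illustrated with a minimal epsilon-greedy learner: explore a random arm with small probability, otherwise exploit the arm with the best empirical mean reward. This is a generic sketch of the textbook algorithm, not code from the thesis; the reward probabilities and function names here are illustrative assumptions.

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, rounds=5000, seed=0):
    """Minimal epsilon-greedy multi-armed bandit (generic sketch, not thesis code).

    true_means: per-arm Bernoulli reward probabilities (hypothetical values).
    Returns the index of the arm with the highest empirical mean after `rounds` pulls.
    """
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k      # number of pulls per arm
    means = [0.0] * k     # empirical mean reward per arm
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(k)                        # explore: random arm
        else:
            arm = max(range(k), key=lambda a: means[a])   # exploit: best arm so far
        reward = 1.0 if rng.random() < true_means[arm] else 0.0  # Bernoulli reward
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # incremental running mean
    return max(range(k), key=lambda a: means[a])

# With enough rounds, the learner converges on the arm with the highest true mean.
best_arm = epsilon_greedy([0.2, 0.5, 0.8])
```

The contextual and generalized-linear variants discussed in the thesis replace the per-arm running means with a reward model over arm features, but the explore/exploit loop has the same shape.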
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Mode of access: World Wide Web.
ISBN: 9780355524703
Subjects--Topical Terms:
Electrical engineering.
Index Terms--Genre/Form:
Electronic books.
LDR
:03018ntm a2200361Ki 4500
001
916849
005
20180928111502.5
006
m o u
007
cr mn||||a|a||
008
190606s2017 xx obm 000 0 eng d
020
$a
9780355524703
035
$a
(MiAaPQ)AAI10688213
035
$a
(MiAaPQ)wisc:14999
035
$a
AAI10688213
040
$a
MiAaPQ
$b
eng
$c
MiAaPQ
$d
NTU
100
1
$a
Aniruddha, Bhargava.
$3
1190700
245
1 0
$a
Bandits and Preference Learning.
264
0
$c
2017
300
$a
1 online resource (168 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertation Abstracts International, Volume: 79-03(E), Section: B.
500
$a
Adviser: Robert D. Nowak.
502
$a
Thesis (Ph.D.)--The University of Wisconsin - Madison, 2017.
504
$a
Includes bibliographical references
520
$a
The internet revolution has brought a large population access to a vast array of information since the mid 1990s. More recently, with the advent of smartphones, it has become an essential part of our everyday life. This has led to, among many other developments, the personalization of the online experience, with great benefits to all involved. Companies have particular interest in showing products and advertisements that match what particular users are looking for, and users desire personalized recommendations from the internet for entertainment and consumer goods that suit them as individuals. In machine learning, this is popularly achieved using the theory of multi-armed bandits, methods which allow us to zero in on the consumer's personal preferences.
520
$a
The last few decades have seen great advances in the theory and practice of multi-armed bandits exploiting either the context of the user, the context of the objects, or both. Great theoretical improvements have brought algorithms' performance close to the theoretical optimum. However, various challenges exist in the practical use of multi-armed bandits. In this thesis, we explore some of these challenges and endeavor to overcome them. First, we examine how multiple populations can be catered to simultaneously. We then address the issue of scaling multi-armed bandits to situations where there are many arms. We also look at how to incorporate generalized linear reward models while maintaining computational efficiency. Finally, we address how we can use feature feedback to focus the bandit's exploration on a limited subset of features. This leads to algorithms that remain tractable for high-dimensional datasets where the preferences of the user are explained by a sparse subset of features.
533
$a
Electronic reproduction.
$b
Ann Arbor, Mich. :
$c
ProQuest,
$d
2018
538
$a
Mode of access: World Wide Web
650
4
$a
Electrical engineering.
$3
596380
650
4
$a
Computer science.
$3
573171
650
4
$a
Artificial intelligence.
$3
559380
655
7
$a
Electronic books.
$2
local
$3
554714
690
$a
0544
690
$a
0984
690
$a
0800
710
2
$a
ProQuest Information and Learning Co.
$3
1178819
710
2
$a
The University of Wisconsin - Madison.
$b
Electrical Engineering.
$3
1178889
773
0
$t
Dissertation Abstracts International
$g
79-03B(E).
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10688213
$z
click for full text (PQDT)