University of California, Berkeley.
Parallel Machine Learning Using Concurrency Control.
Record type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
Parallel Machine Learning Using Concurrency Control.
Author:
Pan, Xinghao.
Physical description:
1 online resource (162 pages)
Notes:
Source: Dissertation Abstracts International, Volume: 79-05(E), Section: B.
Contained By:
Dissertation Abstracts International, 79-05B(E).
Subject:
Computer science.
Electronic resource:
click for full text (PQDT)
ISBN:
9780355574517
Thesis (Ph.D.)--University of California, Berkeley, 2017.
Includes bibliographical references.
Many machine learning algorithms iteratively process datapoints and transform global model parameters. It has become increasingly impractical to serially execute such iterative algorithms as processor speeds fail to catch up to the growth in dataset sizes.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Mode of access: World Wide Web.
ISBN: 9780355574517
Subjects--Topical Terms: Computer science.
Index Terms--Genre/Form: Electronic books.
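The concurrency-control strategy summarized in the abstract — coordinating only conflicting transformations so that the parallel algorithm produces exactly the serial algorithm's output — can be illustrated with a minimal sketch. This is not code from the dissertation: the function names and the per-coordinate locking scheme are illustrative assumptions, and the updates are commutative integer additions so that any serialization of conflicting updates yields the same final state.

```python
import threading

def serial_update(params, data):
    """Reference serial algorithm: apply each datapoint's sparse update in turn."""
    for idx, delta in data:
        for i, d in zip(idx, delta):
            params[i] += d
    return params

def parallel_update(params, data, n_workers=4):
    """Concurrency-controlled parallel version: one lock per model coordinate.

    Updates touching disjoint coordinates run concurrently; conflicting
    updates serialize on the shared locks, so the output matches the
    serial algorithm exactly (the updates here are commutative additions).
    Python's GIL limits true parallelism, but the correctness argument stands.
    """
    locks = [threading.Lock() for _ in params]

    def worker(chunk):
        for idx, delta in chunk:
            held = sorted(idx)            # acquire in global order: no deadlock
            for i in held:
                locks[i].acquire()
            try:
                for i, d in zip(idx, delta):
                    params[i] += d
            finally:
                for i in held:
                    locks[i].release()

    chunks = [data[k::n_workers] for k in range(n_workers)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return params
```

With commutative updates, every lock-protected interleaving produces the same final parameters, which is the serial-equivalence property the abstract describes; the dissertation's actual mechanisms (conflict detection and resolution tailored to each algorithm) are more sophisticated than this sketch.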
LDR
:03447ntm a2200385Ki 4500
001
910823
005
20180517112611.5
006
m o u
007
cr mn||||a|a||
008
190606s2017 xx obm 000 0 eng d
020
$a
9780355574517
035
$a
(MiAaPQ)AAI10619895
035
$a
(MiAaPQ)berkeley:17288
035
$a
AAI10619895
040
$a
MiAaPQ
$b
eng
$c
MiAaPQ
099
$a
TUL
$f
hyy
$c
available through World Wide Web
100
1
$a
Pan, Xinghao.
$3
1182297
245
1 0
$a
Parallel Machine Learning Using Concurrency Control.
264
0
$c
2017
300
$a
1 online resource (162 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertation Abstracts International, Volume: 79-05(E), Section: B.
500
$a
Adviser: Michael I. Jordan.
502
$a
Thesis (Ph.D.)
$c
University of California, Berkeley
$d
2017.
504
$a
Includes bibliographical references
520
$a
Many machine learning algorithms iteratively process datapoints and transform global model parameters. It has become increasingly impractical to serially execute such iterative algorithms as processor speeds fail to catch up to the growth in dataset sizes.
520
$a
To address these problems, the machine learning community has turned to two parallelization strategies: bulk synchronous parallel (BSP), and coordination-free. BSP algorithms partition computational work among workers, with occasional synchronization at global barriers, but has only been applied to 'embarrassingly parallel' problems where work is trivially factorizable. Coordination-free algorithms simply allow concurrent processors to execute in parallel, interleaving transformations and possibly introducing inconsistencies. Theoretical analysis is then required to prove that the coordination-free algorithm produces a reasonable approximation to the desired outcome, under assumptions on the problem and system.
520
$a
In this dissertation, we propose and explore a third approach by applying concurrency control to manage parallel transformations in machine learning algorithms. We identify points of possible interference between parallel iterations by examining the semantics of the serial algorithm. Coordination is then introduced to either avoid or resolve such conflicts, whereas non-conflicting transformations are allowed to execute concurrently. Our parallel algorithms are thus engineered to produce the same exact output as the serial machine learning algorithm, preserving the serial algorithm's theoretical guarantees of correctness while maximizing concurrency.
520
$a
We demonstrate the feasibility of our approach to parallelizing a variety of machine learning algorithms, including nonparametric unsupervised learning, graph clustering, discrete optimization, and sparse convex optimization. We theoretically prove and empirically verify that our parallel algorithms produce equivalent output to their serial counterparts. We also theoretically analyze the expected concurrency of our parallel algorithms, and empirically demonstrate their scalability.
533
$a
Electronic reproduction.
$b
Ann Arbor, Mich. :
$c
ProQuest,
$d
2018
538
$a
Mode of access: World Wide Web
650
4
$a
Computer science.
$3
573171
650
4
$a
Artificial intelligence.
$3
559380
655
7
$a
Electronic books.
$2
local
$3
554714
690
$a
0984
690
$a
0800
710
2
$a
ProQuest Information and Learning Co.
$3
1178819
710
2
$a
University of California, Berkeley.
$b
Electrical Engineering & Computer Sciences.
$3
845564
773
0
$t
Dissertation Abstracts International
$g
79-05B(E).
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10619895
$z
click for full text (PQDT)
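The 520 abstract fields above contrast the concurrency-control approach with bulk synchronous parallel (BSP) execution, which partitions work among workers and synchronizes at global barriers. As a point of reference, a minimal BSP-style sketch for an "embarrassingly parallel" reduction (the helper name is hypothetical, not from the dissertation):

```python
from concurrent.futures import ThreadPoolExecutor

def bsp_sum(data, n_workers=4):
    """BSP pattern: partition, compute locally, barrier, then combine."""
    # Superstep 1: each worker computes a partial sum over its partition.
    chunks = [data[k::n_workers] for k in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(sum, chunks))  # exiting the map is the barrier
    # Superstep 2: combine the partial results after the barrier.
    return sum(partials)
```

This works because summation factorizes trivially across partitions; the abstract's point is that many iterative algorithms do not factorize this way, which motivates the concurrency-control alternative.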