Optimization Algorithms for Machine Learning Designed for Parallel and Distributed Environments.
Record type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
Optimization Algorithms for Machine Learning Designed for Parallel and Distributed Environments. /
Author:
Yektamaram, Seyedalireza.
Physical description:
1 online resource (173 pages)
Notes:
Source: Dissertation Abstracts International, Volume: 79-07(E), Section: B.
Adviser: Katya Scheinberg.
Dissertation note:
Thesis (Ph.D.)--Lehigh University, 2018.
Bibliography note:
Includes bibliographical references
Abstract:
This thesis proposes several optimization methods that utilize parallel algorithms for large-scale machine learning problems. The overall theme is network-based machine learning algorithms; in particular, we consider two machine learning models: graphical models and neural networks. Graphical models are methods categorized under unsupervised machine learning, aiming at recovering conditional dependencies among random variables from observed samples of a multivariable distribution. Neural networks, on the other hand, are methods that learn an implicit approximation to underlying true nonlinear functions based on sample data and utilize that information to generalize to validation data. The goal of finding the best methods relies on an optimization problem tasked with training such models. Improvements in current methods of solving the optimization problem for graphical models are obtained by parallelization and the use of a new update and a new step-size selection rule in the coordinate descent algorithms designed for large-scale problems. For training deep neural networks, we consider the second-order optimization algorithms within trust-region-like optimization frameworks. Deep networks are represented using large-scale vectors of weights and are trained based on very large datasets. Hence, obtaining second-order information is very expensive for these networks. In this thesis, we undertake an extensive exploration of algorithms that use a small number of curvature evaluations and are hence faster than other existing methods.
Reproduction note:
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Access note:
Mode of access: World Wide Web.
Subjects--Topical Terms:
Operations research.
Artificial intelligence.
Index Terms--Genre/Form:
Electronic books.
Electronic resource:
click for full text (PQDT)
ISBN:
9780355669633
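The abstract attributes the graphical-model speed-ups to parallelized coordinate descent with a new update and a new step-size selection rule. Those rules are not spelled out in this record, so the sketch below is only a generic randomized coordinate-descent loop on a strongly convex quadratic, with the textbook 1/A[i,i] coordinate step standing in for the thesis's rule; the problem data and every name in it are illustrative.

# A minimal sketch, NOT the thesis's algorithm: randomized coordinate
# descent on f(x) = 0.5*x'Ax - b'x. The thesis's new update and
# step-size selection rule would replace the `step` line below.
import numpy as np

def coordinate_descent(A, b, iters=20000, seed=0):
    """Minimize 0.5*x'Ax - b'x one coordinate at a time."""
    rng = np.random.default_rng(seed)
    n = b.size
    x = np.zeros(n)
    grad = A @ x - b                    # maintained incrementally below
    for _ in range(iters):
        i = rng.integers(n)             # uniformly random coordinate
        step = grad[i] / A[i, i]        # exact line minimizer along e_i
        x[i] -= step
        grad -= step * A[:, i]          # O(n) update instead of full A @ x
    return x

# Usage on a random symmetric positive definite system (synthetic data);
# the result should agree with a direct solve.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)
b = rng.standard_normal(50)
x = coordinate_descent(A, b)
print("error vs. direct solve:", np.linalg.norm(x - np.linalg.solve(A, b)))

Because each coordinate update touches only one column of A, updates to disjoint coordinates can in principle be distributed across workers, which is the parallelization angle the abstract alludes to.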
MARC record:
LDR    02728ntm a2200325K 4500
001    912191
005    20180608102941.5
006    m o u
007    cr mn||||a|a||
008    190606s2018 xx obm 000 0 eng d
020    $a 9780355669633
035    $a (MiAaPQ)AAI10685697
035    $a (MiAaPQ)lehigh:11842
035    $a AAI10685697
040    $a MiAaPQ $b eng $c MiAaPQ
100 1  $a Yektamaram, Seyedalireza. $3 1184439
245 10 $a Optimization Algorithms for Machine Learning Designed for Parallel and Distributed Environments.
264  0 $c 2018
300    $a 1 online resource (173 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertation Abstracts International, Volume: 79-07(E), Section: B.
500    $a Adviser: Katya Scheinberg.
502    $a Thesis (Ph.D.)--Lehigh University, 2018.
504    $a Includes bibliographical references
520    $a This thesis proposes several optimization methods that utilize parallel algorithms for large-scale machine learning problems. The overall theme is network-based machine learning algorithms; in particular, we consider two machine learning models: graphical models and neural networks. Graphical models are methods categorized under unsupervised machine learning, aiming at recovering conditional dependencies among random variables from observed samples of a multivariable distribution. Neural networks, on the other hand, are methods that learn an implicit approximation to underlying true nonlinear functions based on sample data and utilize that information to generalize to validation data. The goal of finding the best methods relies on an optimization problem tasked with training such models. Improvements in current methods of solving the optimization problem for graphical models are obtained by parallelization and the use of a new update and a new step-size selection rule in the coordinate descent algorithms designed for large-scale problems. For training deep neural networks, we consider the second-order optimization algorithms within trust-region-like optimization frameworks. Deep networks are represented using large-scale vectors of weights and are trained based on very large datasets. Hence, obtaining second-order information is very expensive for these networks. In this thesis, we undertake an extensive exploration of algorithms that use a small number of curvature evaluations and are hence faster than other existing methods.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538    $a Mode of access: World Wide Web
650  4 $a Operations research. $3 573517
650  4 $a Artificial intelligence. $3 559380
655  7 $a Electronic books. $2 local $3 554714
690    $a 0796
690    $a 0800
710 2  $a ProQuest Information and Learning Co. $3 1178819
710 2  $a Lehigh University. $b Industrial Engineering. $3 1182234
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10685697 $z click for full text (PQDT)
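The abstract's deep-learning contribution centers on trust-region-like second-order methods that need only a small number of curvature evaluations. One standard way to realize that idea, shown below purely as an assumed illustration and not as the thesis's algorithm, is Steihaug-Toint truncated CG, where curvature enters only through Hessian-vector products obtained from two extra gradient calls; the logistic-regression test problem and every parameter value are made up for the example.

# Hedged sketch: trust-region Newton-CG where curvature is touched only
# through Hessian-vector products (no full Hessian). Generic scheme,
# not the thesis's method; problem data below are synthetic.
import numpy as np

def hvp(grad_fn, w, v, eps=1e-5):
    """Finite-difference Hessian-vector product H(w) @ v."""
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(v)
    s = eps / nv                        # scale the probe to the direction
    return (grad_fn(w + s * v) - grad_fn(w - s * v)) / (2.0 * s)

def steihaug_cg(g, Hv, radius, max_iter=20, tol=1e-8):
    """Truncated CG for: min_p g'p + 0.5 p'Hp  s.t. ||p|| <= radius."""
    p, r, d = np.zeros_like(g), g.copy(), -g.copy()
    if np.linalg.norm(g) < tol:
        return p
    for _ in range(max_iter):
        Hd = Hv(d)
        dHd = d @ Hd
        if dHd <= 0.0:                  # negative curvature: go to boundary
            return _to_boundary(p, d, radius)
        alpha = (r @ r) / dHd
        p_new = p + alpha * d
        if np.linalg.norm(p_new) >= radius:
            return _to_boundary(p, d, radius)
        r_new = r + alpha * Hd
        if np.linalg.norm(r_new) < tol:
            return p_new
        d = -r_new + ((r_new @ r_new) / (r @ r)) * d
        p, r = p_new, r_new
    return p

def _to_boundary(p, d, radius):
    """Positive tau with ||p + tau*d|| = radius."""
    a, b, c = d @ d, 2.0 * (p @ d), p @ p - radius**2
    return p + ((-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)) * d

def trust_region(f, grad_fn, w, iters=30, radius=1.0):
    for _ in range(iters):
        g = grad_fn(w)
        Hv = lambda v: hvp(grad_fn, w, v)
        p = steihaug_cg(g, Hv, radius)
        predicted = -(g @ p + 0.5 * (p @ Hv(p)))
        rho = (f(w) - f(w + p)) / max(predicted, 1e-12)
        if rho > 0.1:
            w = w + p                   # accept the step
        radius = 2.0 * radius if rho > 0.75 else (0.25 * radius if rho < 0.1 else radius)
    return w

# Illustrative problem: logistic regression on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = (X @ w_true + 0.1 * rng.standard_normal(200) > 0).astype(float)

def f(w):
    z = X @ w
    return float(np.mean(np.logaddexp(0.0, z) - y * z))

def grad_fn(w):
    z = X @ w
    return X.T @ (1.0 / (1.0 + np.exp(-z)) - y) / len(y)

w_fit = trust_region(f, grad_fn, np.zeros(5))
print("final loss:", f(w_fit))

Each CG iteration spends one Hessian-vector product, i.e., two gradient evaluations, which is what keeps the per-step curvature budget small in the sense the abstract describes.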