High Dimensional Learning with Structure Inducing Constraints and Regularizers.
Record Type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
High Dimensional Learning with Structure Inducing Constraints and Regularizers.
Author:
Asiaeetaheri, Amir Asiaee.
Description:
1 online resource (138 pages)
Notes:
Source: Dissertation Abstracts International, Volume: 79-04(E), Section: B.
Contained By:
Dissertation Abstracts International, 79-04B(E).
Subject:
Computer science.
Electronic Resource:
click for full text (PQDT)
ISBN:
9780355415926
High Dimensional Learning with Structure Inducing Constraints and Regularizers.
Asiaeetaheri, Amir Asiaee.
High Dimensional Learning with Structure Inducing Constraints and Regularizers.
- 1 online resource (138 pages)
Source: Dissertation Abstracts International, Volume: 79-04(E), Section: B.
Thesis (Ph.D.)--University of Minnesota, 2017.
Includes bibliographical references
Explosive growth in data generation through science and technology calls for new computational and analytical tools. For the statistical machine learning community, one major challenge is data sets whose dimension exceeds the number of samples. This low-sample, high-dimension regime violates the core assumptions of most traditional learning methods. To address this challenge, many high-dimensional learning algorithms have been developed over the past decade.
Electronic reproduction.
Ann Arbor, Mich. :
ProQuest,
2018
Mode of access: World Wide Web
ISBN: 9780355415926
Subjects--Topical Terms:
573171
Computer science.
Index Terms--Genre/Form:
554714
Electronic books.
High Dimensional Learning with Structure Inducing Constraints and Regularizers.
LDR
:03533ntm a2200373Ki 4500
001
920661
005
20181203094031.5
006
m o u
007
cr mn||||a|a||
008
190606s2017 xx obm 000 0 eng d
020
$a
9780355415926
035
$a
(MiAaPQ)AAI10624438
035
$a
(MiAaPQ)umn:18583
035
$a
AAI10624438
040
$a
MiAaPQ
$b
eng
$c
MiAaPQ
$d
NTU
100
1
$a
Asiaeetaheri, Amir Asiaee.
$3
1195527
245
1 0
$a
High Dimensional Learning with Structure Inducing Constraints and Regularizers.
264
0
$c
2017
300
$a
1 online resource (138 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertation Abstracts International, Volume: 79-04(E), Section: B.
500
$a
Adviser: Arindam Banerjee.
502
$a
Thesis (Ph.D.)--University of Minnesota, 2017.
504
$a
Includes bibliographical references
520
$a
Explosive growth in data generation through science and technology calls for new computational and analytical tools. For the statistical machine learning community, one major challenge is data sets whose dimension exceeds the number of samples. This low-sample, high-dimension regime violates the core assumptions of most traditional learning methods. To address this challenge, many high-dimensional learning algorithms have been developed over the past decade.
520
$a
One of the significant high-dimensional problems in machine learning is linear regression where the number of features exceeds the number of samples. Initially, the primary focus of the high-dimensional linear regression literature was on estimating sparse coefficients through l1-norm regularization. In a more general framework, one can assume that the underlying parameter has an intrinsic "low-dimensional complexity" or structure. Recently, researchers have looked at structures beyond sparsity that are induced by an arbitrary norm used as the regularizer or constraint.
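[As an illustration of the l1-regularized regression described in this abstract, the following is a minimal sketch of Lasso coordinate descent in pure Python. It is not the thesis's own code; all names and data are illustrative.]

```python
def soft_threshold(x, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def lasso_cd(X, y, lam, iters=100):
    """Coordinate descent for min_w 0.5*||y - X w||^2 + lam*||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Residual with feature j's contribution removed.
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / z if z else 0.0
    return w
```

[On an orthogonal design each coordinate update reduces to a single soft-thresholding step, which is why the l1 penalty produces exactly sparse coefficient vectors.]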
520
$a
In this thesis, we focus on two variants of the high-dimensional linear model, namely data sharing and errors-in-variables, where the structure of the parameter is captured by a suitable norm. We introduce estimators for these models and study their theoretical properties: we characterize the sample complexity of our estimators and establish non-asymptotic, high-probability error bounds for them. Finally, we utilize dictionary learning and sparse coding to perform Twitter sentiment analysis as an application of high-dimensional learning.
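[The sparse-coding step mentioned above is commonly solved with ISTA (iterative soft-thresholding) against a fixed dictionary. The sketch below is an illustrative assumption, not the thesis's implementation; `D`, `x`, and the step size are made up for the example.]

```python
def soft_threshold(x, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def ista_sparse_code(D, x, lam, step=0.5, iters=200):
    """ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1 with dictionary D."""
    n, p = len(D), len(D[0])
    a = [0.0] * p
    for _ in range(iters):
        # Gradient of the smooth term: D^T (D a - x).
        Da = [sum(D[i][j] * a[j] for j in range(p)) for i in range(n)]
        g = [sum(D[i][j] * (Da[i] - x[i]) for i in range(n)) for j in range(p)]
        # Gradient step followed by the l1 proximal map.
        a = [soft_threshold(a[j] - step * g[j], step * lam) for j in range(p)]
    return a
```

[The resulting sparse codes `a` serve as features; in the sentiment-analysis application, each tweet's representation over the learned dictionary would play this role.]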
520
$a
Some discrete machine learning problems can also be posed as constrained set-function optimization, where the constraints induce a structure over the solution set. In the second part of the thesis, we investigate a prominent set-function optimization problem, social influence maximization, under the novel "heat conduction" influence propagation model. We formulate the problem as submodular maximization with a cardinality constraint and provide an efficient algorithm for it. Through extensive experiments on several large real and synthetic networks, we show that our algorithm outperforms well-studied methods from the influence maximization literature.
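[Cardinality-constrained monotone submodular maximization, as described above, is classically attacked with the (1 - 1/e)-approximate greedy algorithm. The sketch below uses plain set coverage as a stand-in influence function; the heat-conduction propagation model itself is not reproduced here, and the `reach` data is illustrative.]

```python
def greedy_submodular(ground, f, k):
    """Greedy maximization of a monotone submodular f subject to |S| <= k."""
    chosen = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for v in sorted(ground - chosen):  # sorted for deterministic tie-breaks
            gain = f(chosen | {v}) - f(chosen)
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:  # no remaining element has positive marginal gain
            break
        chosen.add(best)
    return chosen

# Toy "influence": each seed covers a fixed node set; f(S) = |union of covers|.
reach = {1: {1, 2, 3}, 2: {3, 4, 5}, 3: {1, 2}}
coverage = lambda S: len(set().union(*(reach[v] for v in S))) if S else 0
```

[Each round adds the element with the largest marginal gain; submodularity (diminishing returns) is what guarantees the (1 - 1/e) approximation for this greedy rule.]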
533
$a
Electronic reproduction.
$b
Ann Arbor, Mich. :
$c
ProQuest,
$d
2018
538
$a
Mode of access: World Wide Web
650
4
$a
Computer science.
$3
573171
650
4
$a
Statistics.
$3
556824
655
7
$a
Electronic books.
$2
local
$3
554714
690
$a
0984
690
$a
0463
710
2
$a
ProQuest Information and Learning Co.
$3
1178819
710
2
$a
University of Minnesota.
$b
Computer Science.
$3
1180176
773
0
$t
Dissertation Abstracts International
$g
79-04B(E).
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10624438
$z
click for full text (PQDT)