State University of New York at Buffalo.
Methodologies for Learning Robust Feature Representations.
Record type:
Bibliographic - language material, manuscript : Monograph/item
Title / Author:
Methodologies for Learning Robust Feature Representations. /
Author:
Arpit, Devansh.
Physical description:
1 online resource (159 pages)
Notes:
Source: Dissertation Abstracts International, Volume: 78-07(E), Section: B.
Subject:
Computer engineering.
Electronic resource:
click for full text (PQDT)
ISBN:
9781369592634
Methodologies for Learning Robust Feature Representations.
Arpit, Devansh.
Methodologies for Learning Robust Feature Representations. - 1 online resource (159 pages)
Source: Dissertation Abstracts International, Volume: 78-07(E), Section: B.
Thesis (Ph.D.)--State University of New York at Buffalo, 2017.
Includes bibliographical references
In order to accurately draw inferences and make predictions based on a given set of data samples, one needs to find a suitable feature representation that efficiently models the underlying data manifold. The model should reflect the compact global structure, capture the behavior of data, and be robust in the presence of noise. While learning the manifold is not feasible if the data belongs to an arbitrary distribution, most real world data does have a rich structure, and thus lends itself to being modeled in a compact form. However, given that data sampling is always finite and tends to be noisy, this calls for addressing the technical challenge of selecting an appropriate manifold model and designing suitable regularizations. This thesis focuses on studying these problems in the context of data with independent subspace structure and deep learning algorithms.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018
Mode of access: World Wide Web
ISBN: 9781369592634
Subjects--Topical Terms: Computer engineering.
Index Terms--Genre/Form: Electronic books.
Methodologies for Learning Robust Feature Representations.
LDR 05184ntm a2200373K 4500
001 915261
005 20180727125211.5
006 m o u
007 cr mn||||a|a||
008 190606s2017 xx obm 000 0 eng d
020 $a 9781369592634
035 $a (MiAaPQ)AAI10196307
035 $a (MiAaPQ)buffalo:14896
035 $a AAI10196307
040 $a MiAaPQ $b eng $c MiAaPQ
100 1 $a Arpit, Devansh. $3 1188567
245 1 0 $a Methodologies for Learning Robust Feature Representations.
264 0 $c 2017
300 $a 1 online resource (159 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Dissertation Abstracts International, Volume: 78-07(E), Section: B.
500 $a Adviser: Venu Govindaraju.
502 $a Thesis (Ph.D.)--State University of New York at Buffalo, 2017.
504 $a Includes bibliographical references
520 $a In order to accurately draw inferences and make predictions based on a given set of data samples, one needs to find a suitable feature representation that efficiently models the underlying data manifold. The model should reflect the compact global structure, capture the behavior of data, and be robust in the presence of noise. While learning the manifold is not feasible if the data belongs to an arbitrary distribution, most real world data does have a rich structure, and thus lends itself to being modeled in a compact form. However, given that data sampling is always finite and tends to be noisy, this calls for addressing the technical challenge of selecting an appropriate manifold model and designing suitable regularizations. This thesis focuses on studying these problems in the context of data with independent subspace structure and deep learning algorithms.
520 $a Extant literature has predominantly analyzed data with independent subspace structure, i.e., given a K class problem, each class lies near a linear subspace independent of the others, making each pair of subspaces disjoint. Face images and motion in videos have been found to exhibit this property of subspace linearity. In this thesis, we propose three different dimensionality reduction algorithms that deal with data which approximately satisfy this independent subspace property: 1) we show that random projections preserve the independence between subspaces even without knowledge of the actual data; 2) we develop an efficient supervised algorithm that preserves the subspace structure of data with K classes using just 2K projection vectors; 3) we develop an algorithm that learns an embedding for labeled data such that samples from each class lie in a low-dimensional subspace.
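To make contribution 1) above concrete, the short Python sketch below (illustrative only; the dimensions, the Gaussian projection, and the principal-angle check are my own choices, not taken from the thesis) projects two independently drawn low-dimensional subspaces with a data-agnostic random matrix and checks that they remain well separated:

import numpy as np

rng = np.random.default_rng(0)

def smallest_principal_angle_deg(A, B):
    # Smallest principal angle between the column spans of A and B.
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.degrees(np.arccos(np.clip(cosines.max(), -1.0, 1.0)))

D, d, m = 1000, 5, 60                         # ambient dim, subspace dim, projected dim (arbitrary)
U1 = rng.standard_normal((D, d))              # basis of subspace 1
U2 = rng.standard_normal((D, d))              # basis of subspace 2, drawn independently
R = rng.standard_normal((m, D)) / np.sqrt(m)  # random projection chosen without seeing the data

print("smallest angle before projection:", smallest_principal_angle_deg(U1, U2))
print("smallest angle after projection: ", smallest_principal_angle_deg(R @ U1, R @ U2))

A nonzero smallest principal angle after projection means the two projected subspaces still intersect only at the origin, i.e., their independence is (approximately) preserved.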
520 $a However, the independent subspace structure assumption has a restricted range of applications. Thus, in this thesis we also study a class of algorithms that have wider applicability and can model data sampled from more general distributions. Deep learning algorithms have recently become popular for automatically learning useful features from given data, removing the need to design hand-crafted features. We provide novel analysis and algorithms in this direction.
520 $a Auto-Encoders (AE) are a sub-class of algorithms commonly used by the deep learning community and have recently become popular for learning data distributions. While doing so, they exploit the sparse distributed structure (a many-to-many relationship between the original and latent feature spaces) present in the data distribution. In this thesis, we analytically show the conditions on activation functions and regularizations that encourage sparsity in the hidden representation of AEs. Our analysis shows that multiple regularized AEs and activation functions share similar underlying properties that encourage sparsity.
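As a rough illustration of the sparsity claim above, the toy Python sketch below (all sizes, synthetic data, and hyperparameters are arbitrary assumptions, not from the thesis) trains a one-hidden-layer ReLU auto-encoder with and without an L1 penalty on the hidden activations and reports the fraction of zero activations:

import numpy as np

rng = np.random.default_rng(0)
N, n, h_dim = 512, 20, 40
X = rng.standard_normal((N, n))               # synthetic data

def train_ae(lam, steps=2000, lr=0.05):
    W1 = rng.standard_normal((h_dim, n)) * 0.1; b1 = np.zeros(h_dim)
    W2 = rng.standard_normal((n, h_dim)) * 0.1; b2 = np.zeros(n)
    for _ in range(steps):
        Z = X @ W1.T + b1                     # pre-activations
        H = np.maximum(Z, 0.0)                # ReLU hidden code
        Xhat = H @ W2.T + b2                  # linear decoder
        E = Xhat - X
        dXhat = E / N                         # grad of 0.5 * mean squared error
        dW2 = dXhat.T @ H; db2 = dXhat.sum(0)
        dH = dXhat @ W2 + (lam / N) * np.sign(H)   # L1 subgradient on the hidden code
        dZ = dH * (Z > 0)
        dW1 = dZ.T @ X; db1 = dZ.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    H = np.maximum(X @ W1.T + b1, 0.0)
    return (H <= 1e-6).mean()                 # fraction of (near-)zero hidden activations

print("sparsity with lam=0.0:", train_ae(0.0))
print("sparsity with lam=0.5:", train_ae(0.5))

The regularized run typically yields a noticeably larger fraction of zero activations, which is the kind of behavior the thesis analyzes for a range of activation functions and regularizers.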
520 $a We also study the first layer of neural networks with rectified linear and sigmoid activations in this thesis. We show that if we assume the observed data is generated from the true first-layer hidden representation, and if the distribution of that hidden representation is bounded, independent, non-negative, and sparse (BINS), then this representation can be recovered for every corresponding data sample, under a PAC bound, by forward propagating the data. We show that this view unifies multiple existing but disparate techniques in the deep learning community.
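The following hedged Python sketch illustrates the flavor of such a recovery result; the generative model, dictionary, and threshold are assumptions made here for illustration, not the thesis construction. A sparse, non-negative, bounded code is drawn, a sample is generated from it, and a single forward pass through a ReLU approximately recovers the code's support:

import numpy as np

rng = np.random.default_rng(1)
D, H, k = 500, 50, 5                          # data dim, hidden dim, number of active units

W = rng.standard_normal((D, H))
W /= np.linalg.norm(W, axis=0)                # unit-norm, nearly orthogonal columns

h = np.zeros(H)
active = rng.choice(H, size=k, replace=False)
h[active] = rng.uniform(0.5, 1.0, size=k)     # sparse, non-negative, bounded code

x = W @ h                                     # observed sample generated from h
h_hat = np.maximum(W.T @ x - 0.25, 0.0)       # one forward pass with a fixed threshold

print("true support:     ", np.flatnonzero(h))
print("recovered support:", np.flatnonzero(h_hat))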
520 $a Finally, we propose a novel technique called Normalization Propagation for avoiding the problem of internal covariate shift (ICS), which has been shown to slow down convergence while training deep neural networks. Since ICS is caused by the shifting distribution of the inputs to hidden layers during training, our algorithm propagates the normalization done at the data level to all higher layers and ensures that each hidden layer's input follows a standard Normal distribution. We show that our proposed algorithm achieves state-of-the-art results on multiple benchmark datasets.
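A hedged sketch of this idea follows (the constants and layer setup are illustrative, not a restatement of the thesis algorithm): if the data is standardized and each weight row is scaled to unit norm, every unit's pre-activation stays roughly N(0, 1); rescaling the ReLU output by the closed-form mean and standard deviation of a rectified standard Normal then carries the normalization forward to the next layer without using batch statistics.

import numpy as np

rng = np.random.default_rng(0)

RELU_MEAN = 1.0 / np.sqrt(2.0 * np.pi)           # E[max(0, Z)] for Z ~ N(0, 1)
RELU_STD = np.sqrt(0.5 * (1.0 - 1.0 / np.pi))    # Std[max(0, Z)] for Z ~ N(0, 1)

def normprop_layer(x, W):
    # One ReLU layer with propagated, data-independent normalization.
    W_unit = W / np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm rows
    pre = x @ W_unit.T                                      # ~ N(0, 1) per unit if x is
    return (np.maximum(pre, 0.0) - RELU_MEAN) / RELU_STD    # re-standardize the output

# Standard-Normal "data" propagated through a few layers: mean/std stay near 0/1.
x = rng.standard_normal((10000, 64))
for width in (128, 128, 64):
    x = normprop_layer(x, rng.standard_normal((width, x.shape[1])))
    print(f"layer output: mean={x.mean():+.3f}, std={x.std():.3f}")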
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538 $a Mode of access: World Wide Web
650 4 $a Computer engineering. $3 569006
655 7 $a Electronic books. $2 local $3 554714
690 $a 0464
710 2 $a ProQuest Information and Learning Co. $3 1178819
710 2 $a State University of New York at Buffalo. $b Computer Science and Engineering. $3 1180201
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10196307 $z click for full text (PQDT)