Duke University.
Deep Generative Models for Image Representation Learning.
Record Type: Bibliographic - language material, manuscript : Monograph/item
Title/Author: Deep Generative Models for Image Representation Learning.
Author: Pu, Yunchen.
Physical Description: 1 online resource (115 pages)
Notes: Source: Dissertation Abstracts International, Volume: 79-09(E), Section: B.
Contained By: Dissertation Abstracts International 79-09B(E).
Subject: Artificial intelligence.
Electronic Resources: click for full text (PQDT)
ISBN: 9780355872774
Pu, Yunchen. Deep Generative Models for Image Representation Learning. - 1 online resource (115 pages)
Source: Dissertation Abstracts International, Volume: 79-09(E), Section: B.
Thesis (Ph.D.)--Duke University, 2018.
Includes bibliographical references
Recently there has been increasing interest in developing generative models of data, offering the promise of learning based on the often vast quantity of unlabeled data. With such learning, one typically seeks to build rich, hierarchical probabilistic models that are able to fit the distribution of complex real data and are also capable of realistic data synthesis. In this dissertation, novel models and learning algorithms are proposed for deep generative models. This dissertation consists of three main parts.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018
Mode of access: World Wide Web
ISBN: 9780355872774
Subjects--Topical Terms: Artificial intelligence.
Index Terms--Genre/Form: Electronic books.
LDR  04584ntm a2200373Ki 4500
001  916861
005  20180928111502.5
006  m o u
007  cr mn||||a|a||
008  190606s2018 xx obm 000 0 eng d
020  $a 9780355872774
035  $a (MiAaPQ)AAI10745361
035  $a (MiAaPQ)duke:14409
035  $a AAI10745361
040  $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Pu, Yunchen. $3 1190715
245 1 0  $a Deep Generative Models for Image Representation Learning.
264 0  $c 2018
300  $a 1 online resource (115 pages)
336  $a text $b txt $2 rdacontent
337  $a computer $b c $2 rdamedia
338  $a online resource $b cr $2 rdacarrier
500  $a Source: Dissertation Abstracts International, Volume: 79-09(E), Section: B.
500  $a Adviser: Lawrence Carin.
502  $a Thesis (Ph.D.)--Duke University, 2018.
504  $a Includes bibliographical references
520  $a Recently there has been increasing interest in developing generative models of data, offering the promise of learning based on the often vast quantity of unlabeled data. With such learning, one typically seeks to build rich, hierarchical probabilistic models that are able to fit the distribution of complex real data and are also capable of realistic data synthesis. In this dissertation, novel models and learning algorithms are proposed for deep generative models. This dissertation consists of three main parts.
520  $a The first part developed a deep generative model for joint analysis of images and associated labels or captions. The model is efficiently learned using a variational autoencoder. A multilayered (deep) convolutional dictionary representation (a deep generative deconvolutional network, DGDN) is employed as a decoder of the latent image features. Stochastic unpooling is employed to link consecutive layers in the image model, yielding top-down image generation. A deep convolutional neural network (CNN) is used as an image encoder; the CNN is used to approximate a distribution for the latent DGDN features/code. The latent code is also linked to generative models for labels (a Bayesian support vector machine) or captions (a recurrent neural network). When predicting a label/caption for a new image at test time, averaging is performed across the distribution of latent codes; this is computationally efficient as a consequence of the learned CNN-based encoder. Since the framework is capable of modeling the image in the presence/absence of associated labels/captions, a new semi-supervised setting is manifested for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone. Excellent results are obtained on several benchmark datasets, including ImageNet, demonstrating that the proposed model achieves results that are highly competitive with similarly sized convolutional neural networks.
520  $a The second part developed a new method for learning variational autoencoders (VAEs), based on Stein variational gradient descent. A key advantage of this approach is that one need not make parametric assumptions about the form of the encoder distribution. Performance is further enhanced by integrating the proposed encoder with importance sampling. Excellent performance is demonstrated across multiple unsupervised and semi-supervised problems, including semi-supervised analysis of the ImageNet data, demonstrating the scalability of the model to large datasets.
520  $a The third part developed a new form of variational autoencoder, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for the marginal log-likelihoods of the observed data and the latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence between the joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets.
533  $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538  $a Mode of access: World Wide Web
650 4  $a Artificial intelligence. $3 559380
650 4  $a Canadian history. $3 1183479
655 7  $a Electronic books. $2 local $3 554714
690  $a 0800
690  $a 0334
710 2  $a ProQuest Information and Learning Co. $3 1178819
710 2  $a Duke University. $b Electrical and Computer Engineering. $3 845695
773 0  $t Dissertation Abstracts International $g 79-09B(E).
856 4 0  $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10745361 $z click for full text (PQDT)
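
The following is an illustrative aid for the first part of the abstract above; it is not material from the catalog record or the dissertation. It is a minimal convolutional variational autoencoder sketch in Python (PyTorch) showing only the generic pairing of a CNN encoder with a deconvolutional decoder and the ELBO objective. The 28x28 single-channel input, the layer sizes, and the names ConvVAE and negative_elbo are assumptions made for illustration; the dissertation's DGDN decoder, stochastic unpooling, and label/caption models are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    # Generic convolutional VAE: a CNN encoder approximates q(z|x); a
    # deconvolutional decoder generates the image top-down from the code z.
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(                       # 1x28x28 -> 64x7x7
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
        self.fc_mu = nn.Linear(64 * 7 * 7, latent_dim)
        self.fc_logvar = nn.Linear(64 * 7 * 7, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 7 * 7)
        self.dec = nn.Sequential(                       # 64x7x7 -> 1x28x28
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))

    def forward(self, x):
        h = self.enc(x).flatten(1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization
        logits = self.dec(self.fc_dec(z).view(-1, 64, 7, 7))
        return logits, mu, logvar

def negative_elbo(x, logits, mu, logvar):
    # Bernoulli reconstruction term plus KL(q(z|x) || N(0, I)), averaged over the batch.
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return (recon + kl) / x.size(0)

Training would minimize negative_elbo over batches of images scaled to [0, 1]; attaching a label or caption model to z, as the abstract describes, would add further terms to this loss.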
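
As a similar illustrative aid for the second part, the sketch below implements one generic Stein variational gradient descent (SVGD) particle update with an RBF kernel and the common median bandwidth heuristic, in Python/NumPy. It is not code from the dissertation; the names rbf_kernel and svgd_step and the bandwidth choice are assumptions.

import numpy as np

def rbf_kernel(particles):
    # Pairwise kernel matrix and its gradients, with a median bandwidth heuristic.
    diffs = particles[:, None, :] - particles[None, :, :]   # diffs[j, i] = x_j - x_i
    sq_dists = np.sum(diffs ** 2, axis=-1)                   # (n, n)
    n = particles.shape[0]
    h = np.median(sq_dists) / max(np.log(n + 1), 1e-8) + 1e-8
    k = np.exp(-sq_dists / h)                                 # k[j, i] = k(x_j, x_i)
    grad_k = -2.0 / h * diffs * k[:, :, None]                 # grad wrt x_j of k(x_j, x_i)
    return k, grad_k

def svgd_step(particles, score, step_size=1e-2):
    # One SVGD update:
    #   phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * score(x_j) + grad_{x_j} k(x_j, x_i) ]
    # where score(x) = grad_x log p(x) for the target density p.
    k, grad_k = rbf_kernel(particles)
    grads = score(particles)                                  # (n, d)
    phi = (k @ grads + grad_k.sum(axis=0)) / particles.shape[0]
    return particles + step_size * phi

# Example: transport particles toward a standard normal, whose score is -z.
# particles = np.random.randn(100, 2) + 5.0
# for _ in range(500):
#     particles = svgd_step(particles, lambda z: -z)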
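
For the third part, the symmetric treatment of data and codes can be written compactly. In notation not used in the record itself (q_phi for the encoder, p_theta for the decoder, q(x) the empirical data distribution, p(z) the prior over codes), the objective described in the abstract is roughly

\[
\min_{\theta,\phi}\;
\mathrm{KL}\big(q_\phi(x,z)\,\|\,p_\theta(x,z)\big)
+\mathrm{KL}\big(p_\theta(x,z)\,\|\,q_\phi(x,z)\big),
\qquad
q_\phi(x,z)=q(x)\,q_\phi(z\mid x),\quad
p_\theta(x,z)=p(z)\,p_\theta(x\mid z),
\]

with variational lower bounds on the two marginal log-likelihoods, \(\mathbb{E}_{q(x)}[\log p_\theta(x)]\) and \(\mathbb{E}_{p(z)}[\log q_\phi(z)]\), maximized at the same time, and the intractable terms handled by the adversarial training the abstract mentions.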