Srivastava, Nitish.
Deep Learning Models for Unsupervised and Transfer Learning.
Record type: Bibliographic - language material, manuscript : Monograph/item
Title: Deep Learning Models for Unsupervised and Transfer Learning.
Author: Srivastava, Nitish.
Description: 1 online resource (124 pages)
Note: Source: Dissertation Abstracts International, Volume: 79-04(E), Section: B.
Subject: Computer science.
Electronic resource: click for full text (PQDT)
ISBN: 9780355530469
Srivastava, Nitish.
Deep Learning Models for Unsupervised and Transfer Learning. - 1 online resource (124 pages)
Source: Dissertation Abstracts International, Volume: 79-04(E), Section: B.
Thesis (Ph.D.)--University of Toronto (Canada), 2017.
Includes bibliographical references.
This thesis is a compilation of five research contributions whose goal is to do unsupervised and transfer learning by designing models that learn distributed representations using deep neural networks. First, we describe a Deep Boltzmann Machine model applied to image-text and audio-video multi-modal data. We show that the learned generative probabilistic model can jointly model both modalities and also produce good conditional distributions on each modality given the other. We use this model to infer fused high-level representations and evaluate them using retrieval and classification tasks.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Mode of access: World Wide Web.
ISBN: 9780355530469
Subjects--Topical Terms: Computer science.
Index Terms--Genre/Form: Electronic books.
LDR  03513ntm a2200361K 4500
001  914367
005  20180703084808.5
006  m o u
007  cr mn||||a|a||
008  190606s2017 xx obm 000 0 eng d
020     $a 9780355530469
035     $a (MiAaPQ)AAI10287978
035     $a (MiAaPQ)toronto:15749
035     $a AAI10287978
040     $a MiAaPQ $b eng $c MiAaPQ
100  1  $a Srivastava, Nitish. $3 1187599
245  10 $a Deep Learning Models for Unsupervised and Transfer Learning.
264   0 $c 2017
300     $a 1 online resource (124 pages)
336     $a text $b txt $2 rdacontent
337     $a computer $b c $2 rdamedia
338     $a online resource $b cr $2 rdacarrier
500     $a Source: Dissertation Abstracts International, Volume: 79-04(E), Section: B.
500     $a Advisers: Geoffrey E. Hinton; Ruslan R. Salakhutdinov.
502     $a Thesis (Ph.D.)--University of Toronto (Canada), 2017.
504     $a Includes bibliographical references
520     $a This thesis is a compilation of five research contributions whose goal is to do unsupervised and transfer learning by designing models that learn distributed representations using deep neural networks. First, we describe a Deep Boltzmann Machine model applied to image-text and audio-video multi-modal data. We show that the learned generative probabilistic model can jointly model both modalities and also produce good conditional distributions on each modality given the other. We use this model to infer fused high-level representations and evaluate them using retrieval and classification tasks.
520     $a Second, we propose a Boltzmann Machine based topic model for modeling bag-of-words documents. This model augments the Replicated Softmax Model with a second hidden layer of latent words without sacrificing RBM-like inference and training. We describe how this can be viewed as a beneficial modification of the otherwise rigid, complementary prior that is implicit in RBM-like models.
520     $a Third, we describe an RNN-based encoder-decoder model that learns to represent video sequences. This model is inspired by sequence-to-sequence learning for machine translation. We train an RNN encoder to come up with a representation of the input sequence that can be used to both decode the input back, and predict the future sequence. This representation is evaluated using action recognition benchmarks.
520     $a Fourth, we develop a theory of directional units and use them to construct Boltzmann Machines and Autoencoders. A directional unit is a structured, vector-valued hidden unit which represents a continuous space of features. The magnitude and direction of a directional unit represent the strength and pose of a feature within this space, respectively. Networks of these units can potentially do better coincidence detection and learn general equivariance classes. Temporal coherence based learning can be used with these units to factor out the dynamic properties of a feature, part, or object from static properties such as identity.
520     $a Last, we describe a contribution to transfer learning. We show how a deep convolutional net trained to classify among a given set of categories can transfer its knowledge to new categories even when very few labelled examples are available for the new categories.
533     $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538     $a Mode of access: World Wide Web
650   4 $a Computer science. $3 573171
655   7 $a Electronic books. $2 local $3 554714
690     $a 0984
710  2  $a ProQuest Information and Learning Co. $3 1178819
710  2  $a University of Toronto (Canada). $b Computer Science. $3 845521
856  40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10287978 $z click for full text (PQDT)