Unsupervised Learning under Uncertainty.
New York University.
Record type: Bibliographic - Language material, manuscript : Monograph/item
Title/Author: Unsupervised Learning under Uncertainty.
Author: Mathieu, Michael.
Physical description: 1 online resource (159 pages)
Notes: Source: Dissertation Abstracts International, Volume: 79-02(E), Section: B.
Subject: Computer science.
Electronic resource: click for full text (PQDT)
ISBN: 9780355406771
LDR  03150ntm a2200349K 4500
001  913674
005  20180622095235.5
006  m     o  u
007  cr mn||||a|a||
008  190606s2017    xx obm   000 0 eng d
020    $a 9780355406771
035    $a (MiAaPQ)AAI10261120
035    $a (MiAaPQ)nyu:12919
035    $a AAI10261120
040    $a MiAaPQ $b eng $c MiAaPQ
100 1  $a Mathieu, Michael. $3 1186606
245 10 $a Unsupervised Learning under Uncertainty.
264  0 $c 2017
300    $a 1 online resource (159 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertation Abstracts International, Volume: 79-02(E), Section: B.
500    $a Adviser: Yann LeCun.
502    $a Thesis (Ph.D.)--New York University, 2017.
504    $a Includes bibliographical references.
520    $a Deep learning, in particular neural networks, has achieved remarkable success in recent years. However, most of this success is based on supervised learning, which relies on ever larger datasets and immense computing power. One step towards general artificial intelligence is to build a model of the world with enough knowledge to acquire a kind of "common sense". Representations learned by such a model could be reused in a number of other tasks; this would reduce the need for labelled samples and possibly yield a deeper understanding of the problem. The vast quantity of knowledge required to build common sense precludes the use of supervised learning and suggests relying on unsupervised learning instead.
520    $a The concept of uncertainty is central to unsupervised learning. The task is usually to learn a complex, multimodal distribution. Density estimation and generative models aim at representing the whole distribution of the data, while predictive learning consists of predicting the state of the world given the context; more often than not, the prediction is not unique. That may be because the model lacks the capacity or the computing power to make a definite prediction, or because the future depends on parameters that are not part of the observation. Finally, the world can be chaotic or truly stochastic. Representing complex, multimodal continuous distributions with deep neural networks is still an open problem.
520    $a In this thesis, we first assess the difficulties of representing probabilities in high-dimensional spaces and review the related work in this domain. We then introduce two methods to address the problem of video prediction, first using a novel form of linearizing auto-encoders and latent variables, and second using Generative Adversarial Networks (GANs). We show how GANs can be seen as trainable loss functions to represent uncertainty, then how they can be used to disentangle factors of variation. Finally, we explore a new non-probabilistic framework for GANs.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538    $a Mode of access: World Wide Web.
650  4 $a Computer science. $3 573171
650  4 $a Artificial intelligence. $3 559380
655  7 $a Electronic books. $2 local $3 554714
690    $a 0984
690    $a 0800
710 2  $a ProQuest Information and Learning Co. $3 1178819
710 2  $a New York University. $b Computer Science. $3 1184382
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10261120 $z click for full text (PQDT)
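
The second abstract paragraph (520) turns on mode-averaging: when the future is genuinely multimodal, a predictor trained with a unimodal loss such as squared error converges to the conditional mean, which may lie in none of the modes. The following is a minimal, hypothetical numpy sketch of that effect, not code from the thesis:

```python
import numpy as np

# Toy illustration of mode-averaging under an L2 loss (not from the thesis).
# Given the same context, the "future" y is -1 or +1 with equal probability,
# i.e. a bimodal conditional distribution.
rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=10_000)

# Search over constant predictions c and score each by mean squared error.
candidates = np.linspace(-1.5, 1.5, 301)
mse = np.array([np.mean((y - c) ** 2) for c in candidates])
best = candidates[mse.argmin()]

print(f"L2-optimal prediction: {best:.3f}")  # ~0.0, the mean of the two modes
# The optimum sits between the modes and matches neither possible future;
# in video prediction this averaging shows up as blurry frames.
```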
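
The third paragraph describes GANs used as trainable loss functions: instead of a fixed per-pixel error, a discriminator learns to tell real samples from generated ones, and its judgment becomes the training signal for the generator. Below is a minimal PyTorch sketch of that generic setup on 1-D data; the architectures, data, and hyperparameters are illustrative assumptions, not those of the thesis:

```python
import torch
import torch.nn as nn

# Hypothetical minimal GAN on 1-D Gaussian data (a sketch of the generic
# adversarial setup, not the video-prediction models from the thesis).
torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # "Data" distribution the generator should match: N(3.0, 0.5)
    return 3.0 + 0.5 * torch.randn(n, 1)

for step in range(2000):
    # Discriminator step: classify real vs generated samples.
    x_real = real_batch()
    x_fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(x_real), torch.ones(64, 1)) + bce(D(x_fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: D acts as a learned loss, and G tries to fool it.
    x_fake = G(torch.randn(64, 8))
    loss_g = bce(D(x_fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())  # should approach ~3.0, ~0.5
```

Because the discriminator only asks whether a sample could have come from the data, the generator is free to commit to one plausible mode rather than average over all of them, which is what makes the adversarial signal useful for the multimodal prediction problems the abstract describes.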