Deep Non-Negative Matrix Factorization.
Flenner, Jennifer.
Record type: Bibliographic - Language material, manuscript : Monograph/item
Title/Author: Deep Non-Negative Matrix Factorization.
Author: Flenner, Jennifer.
Physical description: 1 online resource (182 pages)
Notes: Source: Dissertation Abstracts International, Volume: 78-10(E), Section: B.
Electronic resource: click for full text (PQDT)
Thesis (Ph.D.)--The Claremont Graduate University, 2017.
Includes bibliographical references
Machine learning and artificial intelligence form a field of study that learns structure from data and is an important application area of statistics and optimization. Representation learning is one of the oldest machine learning techniques, and Karl Pearson's principal component analysis (PCA) is one of its oldest methods. Recently, deep neural network algorithms have emerged as one of the most successful representation learning strategies, obtaining state-of-the-art results for classification of large data sets. Their success is due to advances in computing power and the development of new computational and regularization techniques. The drawbacks of these deep neural networks are that they often perform well only on large data sets, they are not always convergent, it is not well understood mathematically how and when they will work, and their output classifications can fail randomly without warning. Other strategies for data classification and feature extraction, such as those based on topic modeling, have also progressed recently; it is now possible to perform topic modeling quickly on large data sets. These topic models combine data modeling with optimization to learn interpretable and consistent feature structures in data. We illustrate that it is possible to combine the interpretability and predictability of representations learned by topic modeling with some of the attributes of deep neural networks by introducing a deep non-negative matrix factorization (NMF). This framework is capable of producing reliable, interpretable, predictable hierarchical classifications of many types of sensor data. Furthermore, we make a connection between sparse representations and deep representations by empirically demonstrating that connecting multiple representations through a non-linear function promotes a sparser representation.
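
The abstract describes stacking several non-negative factorizations, with a non-linear map between layers that encourages sparser representations. The following is a minimal illustrative sketch of that idea in Python, using scikit-learn's NMF to fit one layer at a time; the deep_nmf helper, the layer ranks, and the soft-threshold non-linearity are assumptions chosen for illustration, not the dissertation's exact algorithm.

import numpy as np
from sklearn.decomposition import NMF

def deep_nmf(X, layer_ranks, threshold=0.0, random_state=0):
    """Greedily factor X ~ W1 W2 ... Wk Hk with all factors non-negative.

    layer_ranks: one rank per layer (typically decreasing).
    threshold:   soft-threshold applied between layers; an illustrative
                 stand-in for the non-linear map the abstract mentions.
    """
    Ws, H = [], X
    for r in layer_ranks:
        model = NMF(n_components=r, init="nndsvda",
                    max_iter=500, random_state=random_state)
        W = model.fit_transform(H)          # H ~ W @ model.components_
        H = model.components_
        H = np.maximum(H - threshold, 0.0)  # non-linearity; keeps H >= 0 and sparser
        Ws.append(W)
    return Ws, H

rng = np.random.default_rng(0)
X = rng.random((100, 40))                   # toy non-negative data matrix
Ws, H = deep_nmf(X, layer_ranks=[20, 10, 5], threshold=0.01)

# Reconstruct through all layers: X ~ W1 @ W2 @ W3 @ H
X_hat = Ws[0] @ Ws[1] @ Ws[2] @ H
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))

Each layer here is fit greedily against the previous layer's coefficient matrix, so the product of the W factors acts as a hierarchy of parts-based features; a joint fine-tuning pass over all layers is omitted to keep the sketch short.
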
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018
Mode of access: World Wide Web
ISBN: 9781369781199
Subjects--Topical Terms: Mathematics.
Index Terms--Genre/Form: Electronic books.
MARC record:
LDR 02978ntm a2200313K 4500
001 915290
005 20180727125212.5
006 m o u
007 cr mn||||a|a||
008 190606s2017 xx obm 000 0 eng d
020 $a 9781369781199
035 $a (MiAaPQ)AAI10265148
035 $a (MiAaPQ)cgu:11027
035 $a AAI10265148
040 $a MiAaPQ $b eng $c MiAaPQ
100 1 $a Flenner, Jennifer. $3 1188603
245 10 $a Deep Non-Negative Matrix Factorization.
264 0 $c 2017
300 $a 1 online resource (182 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Dissertation Abstracts International, Volume: 78-10(E), Section: B.
500 $a Adviser: Blake Hunter.
502 $a Thesis (Ph.D.)--The Claremont Graduate University, 2017.
504 $a Includes bibliographical references
520 $a Machine learning and artificial intelligence form a field of study that learns structure from data and is an important application area of statistics and optimization. Representation learning is one of the oldest machine learning techniques, and Karl Pearson's principal component analysis (PCA) is one of its oldest methods. Recently, deep neural network algorithms have emerged as one of the most successful representation learning strategies, obtaining state-of-the-art results for classification of large data sets. Their success is due to advances in computing power and the development of new computational and regularization techniques. The drawbacks of these deep neural networks are that they often perform well only on large data sets, they are not always convergent, it is not well understood mathematically how and when they will work, and their output classifications can fail randomly without warning. Other strategies for data classification and feature extraction, such as those based on topic modeling, have also progressed recently; it is now possible to perform topic modeling quickly on large data sets. These topic models combine data modeling with optimization to learn interpretable and consistent feature structures in data. We illustrate that it is possible to combine the interpretability and predictability of representations learned by topic modeling with some of the attributes of deep neural networks by introducing a deep non-negative matrix factorization (NMF). This framework is capable of producing reliable, interpretable, predictable hierarchical classifications of many types of sensor data. Furthermore, we make a connection between sparse representations and deep representations by empirically demonstrating that connecting multiple representations through a non-linear function promotes a sparser representation.
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538 $a Mode of access: World Wide Web
650 4 $a Mathematics. $3 527692
655 7 $a Electronic books. $2 local $3 554714
690 $a 0405
710 2 $a ProQuest Information and Learning Co. $3 1178819
710 2 $a The Claremont Graduate University. $b Mathematical Sciences. $3 1188604
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10265148 $z click for full text (PQDT)