Rochester Institute of Technology.
Hierarchical Decomposition of Large Deep Networks.
Record type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
Hierarchical Decomposition of Large Deep Networks.
Author:
Chennupati, Sumanth.
Description:
1 online resource (89 pages)
Notes:
Source: Masters Abstracts International, Volume: 56-02.
Subject:
Computer engineering.
Electronic resource:
click for full text (PQDT)
ISBN:
9781369443462
Hierarchical Decomposition of Large Deep Networks / Chennupati, Sumanth. - 1 online resource (89 pages)
Source: Masters Abstracts International, Volume: 56-02.
Thesis (M.S.)--Rochester Institute of Technology, 2016.
Includes bibliographical references
Teaching computers to recognize people and objects from visual cues in images and videos is an interesting challenge. The computer vision and pattern recognition communities have already demonstrated the ability of intelligent algorithms to detect and classify objects under difficult conditions such as pose variation, occlusion and low image fidelity. Recent deep learning approaches in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) are built on very large and deep convolutional neural network architectures. In 2015, such architectures surpassed human performance (94.9% human vs. 95.06% machine) in top-5 validation accuracy on the ImageNet dataset, and earlier this year deep learning approaches demonstrated a remarkable 96.43% accuracy. These successes have been made possible by deep architectures such as VGG, GoogLeNet and, most recently, deep residual models with as many as 152 weight layers. Training these deep models is difficult because of the compute-intensive learning of millions of parameters. To keep the parameter count of very deep networks manageable, very small 3x3 filters are used in the convolutional layers. On the other hand, deep networks generalize well and perform strongly even on complex datasets with fewer features or images.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Mode of access: World Wide Web.
ISBN: 9781369443462
Subjects--Topical Terms: Computer engineering.
Index Terms--Genre/Form: Electronic books.
LDR 03359ntm a2200337K 4500
001 915266
005 20180727125211.5
006 m o u
007 cr mn||||a|a||
008 190606s2016 xx obm 000 0 eng d
020 $a 9781369443462
035 $a (MiAaPQ)AAI10244927
035 $a (MiAaPQ)rit:12493
035 $a AAI10244927
040 $a MiAaPQ $b eng $c MiAaPQ
100 1 $a Chennupati, Sumanth. $3 1188572
245 10 $a Hierarchical Decomposition of Large Deep Networks.
264 0 $c 2016
300 $a 1 online resource (89 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Masters Abstracts International, Volume: 56-02.
500 $a Adviser: Raymond W. Ptucha.
502 $a Thesis (M.S.)--Rochester Institute of Technology, 2016.
504 $a Includes bibliographical references
520 $a Teaching computers to recognize people and objects from visual cues in images and videos is an interesting challenge. The computer vision and pattern recognition communities have already demonstrated the ability of intelligent algorithms to detect and classify objects under difficult conditions such as pose variation, occlusion and low image fidelity. Recent deep learning approaches in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) are built on very large and deep convolutional neural network architectures. In 2015, such architectures surpassed human performance (94.9% human vs. 95.06% machine) in top-5 validation accuracy on the ImageNet dataset, and earlier this year deep learning approaches demonstrated a remarkable 96.43% accuracy. These successes have been made possible by deep architectures such as VGG, GoogLeNet and, most recently, deep residual models with as many as 152 weight layers. Training these deep models is difficult because of the compute-intensive learning of millions of parameters. To keep the parameter count of very deep networks manageable, very small 3x3 filters are used in the convolutional layers. On the other hand, deep networks generalize well and perform strongly even on complex datasets with fewer features or images.
520 $a This thesis proposes a robust approach to large-scale visual recognition, introducing a framework that automatically analyses the similarity between the classes in a dataset and configures a family of smaller networks to replace a single larger network. Similar classes are grouped together and learnt by a smaller network. This divides and conquers the large classification problem: a class is identified first by its coarse label and then by its fine label, using two or more stages of networks. In this way the proposed framework learns the natural class hierarchy and exploits it for classification. A comprehensive analysis of the proposed methods shows that hierarchical models outperform traditional models in accuracy, reduce computation, and expand the ability to learn large-scale visual information effectively.
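The coarse-to-fine scheme described in the abstract above can be sketched in a few lines. This is only an illustrative toy, not the thesis's implementation: the similarity matrix, the greedy grouping threshold, and the stand-in "network" score vectors are all assumptions made for the example.

```python
import numpy as np

def group_classes(similarity, threshold=0.5):
    """Greedily group classes whose pairwise similarity exceeds threshold."""
    n = similarity.shape[0]
    groups, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        group = [i]
        assigned.add(i)
        for j in range(i + 1, n):
            if j not in assigned and similarity[i, j] > threshold:
                group.append(j)
                assigned.add(j)
        groups.append(group)
    return groups

# Toy similarity matrix for 4 classes: 0/1 are similar, 2/3 are similar.
sim = np.array([
    [1.0, 0.8, 0.1, 0.1],
    [0.8, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.7],
    [0.1, 0.1, 0.7, 1.0],
])
groups = group_classes(sim)

def classify(coarse_scores, fine_scores_per_group):
    """Two-stage prediction: a coarse network selects a group, then that
    group's smaller fine network selects the class within the group."""
    g = int(np.argmax(coarse_scores))
    local = int(np.argmax(fine_scores_per_group[g]))
    return groups[g][local]

# Coarse stage picks group 1; group 1's fine stage picks its second member.
pred = classify(np.array([0.2, 0.8]),
                [np.array([0.5, 0.5]), np.array([0.1, 0.9])])
print(groups)  # [[0, 1], [2, 3]]
print(pred)    # 3
```

In the thesis's setting each score vector would come from a trained network, and the class similarity would be estimated from the data rather than supplied by hand; the routing logic, however, has this two-stage shape.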
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538 $a Mode of access: World Wide Web
650 4 $a Computer engineering. $3 569006
650 4 $a Computer science. $3 573171
655 7 $a Electronic books. $2 local $3 554714
690 $a 0464
690 $a 0984
710 2 $a ProQuest Information and Learning Co. $3 1178819
710 2 $a Rochester Institute of Technology. $b Computer Engineering. $3 1184443
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10244927 $z click for full text (PQDT)