Qassim, Hussam.
Compressed Deep Supervision and Residual Learning Network for Scene Recognition.
Record type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
Compressed Deep Supervision and Residual Learning Network for Scene Recognition.
Author:
Qassim, Hussam.
Physical description:
1 online resource (65 pages)
Notes:
Source: Masters Abstracts International, Volume: 57-06.
Contained By:
Masters Abstracts International 57-06(E).
Subject:
Computer science.
Electronic resource:
click for full text (PQDT)
ISBN:
9780438055445
Thesis (M.S.)--California State University, Fullerton, 2018.
Includes bibliographical references
One promising way to raise the accuracy of convolutional neural networks is to increase their depth. However, a deeper network has more layers and therefore more parameters, which makes it slow to converge during backpropagation and prone to overfitting and degradation. We combined two techniques, residual learning and deep supervision, to build our models, and trained them to classify the large-scale scene datasets MIT Places 205 and MIT Places 365-Standard. The experimental results show that the proposed models, named Residual-CNDS, address the problems of overfitting, slow convergence, and degradation. The proposed architecture comes in two variants, Residual-CNDS8 and Residual-CNDS10, with eight and ten convolutional layers respectively. Furthermore, we refined Residual-CNDS8 by applying a compression method to reduce its size and training time. The result, Residual Squeeze CNDS, addresses speed and size while still addressing overfitting, slow convergence, and degradation. While matching the accuracy of Residual-CNDS8 on the MIT Places 365-Standard scene dataset, Residual Squeeze CNDS is 87.64% smaller and trains 13.33% faster.
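The two ideas the abstract combines can be sketched in miniature: a residual connection adds a block's input back to its output, and deep supervision adds down-weighted auxiliary losses from intermediate layers to the main loss. This is a hypothetical illustration of the general techniques, not code from the thesis; the function names and the auxiliary weight are assumptions.

```python
def residual_block(x, transform):
    """Residual learning: output = F(x) + x (identity shortcut).

    `transform` stands in for a stack of convolutional layers; here it is
    any function mapping a feature list to a feature list of equal length.
    """
    return [fx + xi for fx, xi in zip(transform(x), x)]

def total_loss(main_loss, aux_losses, aux_weight=0.3):
    """Deep supervision: main loss plus down-weighted auxiliary losses
    attached to intermediate layers (the weight 0.3 is illustrative)."""
    return main_loss + aux_weight * sum(aux_losses)

# Toy usage: a "transform" that halves features, standing in for F(x).
features = [1.0, 2.0, 3.0]
out = residual_block(features, lambda v: [0.5 * xi for xi in v])
# out == [1.5, 3.0, 4.5]: the input is carried through the shortcut,
# so the block only has to learn the residual F(x) = y - x.

loss = total_loss(0.9, [1.2, 1.4])  # 0.9 + 0.3 * (1.2 + 1.4) == 1.68
```

Because the shortcut passes gradients straight through and the auxiliary classifiers inject loss signal into early layers, both mechanisms ease the slow-convergence and degradation problems the abstract describes.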
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Mode of access: World Wide Web
ISBN: 9780438055445
Subjects--Topical Terms: Computer science.
Index Terms--Genre/Form: Electronic books.
LDR  02731ntm a2200337Ki 4500
001  916906
005  20180928111503.5
006  m o u
007  cr mn||||a|a||
008  190606s2018 xx obm 000 0 eng d
020  $a 9780438055445
035  $a (MiAaPQ)AAI10810175
035  $a (MiAaPQ)fullerton:10525
035  $a AAI10810175
040  $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Qassim, Hussam. $3 1190773
245 10 $a Compressed Deep Supervision and Residual Learning Network for Scene Recognition.
264  0 $c 2018
300  $a 1 online resource (65 pages)
336  $a text $b txt $2 rdacontent
337  $a computer $b c $2 rdamedia
338  $a online resource $b cr $2 rdacarrier
500  $a Source: Masters Abstracts International, Volume: 57-06.
500  $a Adviser: Michael Shafae.
502  $a Thesis (M.S.)--California State University, Fullerton, 2018.
504  $a Includes bibliographical references
520  $a One promising way to raise the accuracy of convolutional neural networks is to increase their depth. However, a deeper network has more layers and therefore more parameters, which makes it slow to converge during backpropagation and prone to overfitting and degradation. We combined two techniques, residual learning and deep supervision, to build our models, and trained them to classify the large-scale scene datasets MIT Places 205 and MIT Places 365-Standard. The experimental results show that the proposed models, named Residual-CNDS, address the problems of overfitting, slow convergence, and degradation. The proposed architecture comes in two variants, Residual-CNDS8 and Residual-CNDS10, with eight and ten convolutional layers respectively. Furthermore, we refined Residual-CNDS8 by applying a compression method to reduce its size and training time. The result, Residual Squeeze CNDS, addresses speed and size while still addressing overfitting, slow convergence, and degradation. While matching the accuracy of Residual-CNDS8 on the MIT Places 365-Standard scene dataset, Residual Squeeze CNDS is 87.64% smaller and trains 13.33% faster.
533  $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538  $a Mode of access: World Wide Web
650  4 $a Computer science. $3 573171
650  4 $a Artificial intelligence. $3 559380
655  7 $a Electronic books. $2 local $3 554714
690  $a 0984
690  $a 0800
710 2  $a ProQuest Information and Learning Co. $3 1178819
710 2  $a California State University, Fullerton. $b Computer Science. $3 1190774
773 0  $t Masters Abstracts International $g 57-06(E).
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10810175 $z click for full text (PQDT)