Improving Facial Action Unit Recognition Using Convolutional Neural Networks.
Record type: Bibliographic - Language material, manuscript : Monograph/item
Title/Author: Improving Facial Action Unit Recognition Using Convolutional Neural Networks.
Author: Han, Shizhong.
Description: 1 online resource (70 pages)
Notes: Source: Dissertation Abstracts International, Volume: 79-08(E), Section: B.
Contained By: Dissertation Abstracts International, 79-08B(E).
Subject: Computer science.
Electronic resources: click for full text (PQDT)
ISBN: 9780355666175
LDR  04015ntm a2200361Ki 4500
001  916812
005  20180928111501.5
006  m o u
007  cr mn||||a|a||
008  190606s2017 xx obm 000 0 eng d
020  $a 9780355666175
035  $a (MiAaPQ)AAI10635287
035  $a (MiAaPQ)sc:15294
035  $a AAI10635287
040  $a MiAaPQ $b eng $c MiAaPQ $d NTU
100  1 $a Han, Shizhong. $3 1190655
245  1 0 $a Improving Facial Action Unit Recognition Using Convolutional Neural Networks.
264  0 $c 2017
300  $a 1 online resource (70 pages)
336  $a text $b txt $2 rdacontent
337  $a computer $b c $2 rdamedia
338  $a online resource $b cr $2 rdacarrier
500  $a Source: Dissertation Abstracts International, Volume: 79-08(E), Section: B.
500  $a Adviser: Yan Tong.
502  $a Thesis (Ph.D.)--University of South Carolina, 2017.
504  $a Includes bibliographical references
520  $a Recognizing facial action units (AUs) from spontaneous facial expressions is a challenging problem because of subtle facial appearance changes, free head movements, occlusions, and limited AU-coded training data. Recently, convolutional neural networks (CNNs) have shown promise for facial AU recognition. However, CNNs often overfit and do not generalize well to unseen subjects because of the limited AU-coded training images. To improve the performance of facial AU recognition, we developed two novel CNN frameworks that replace the traditional decision layer and convolutional layer with an incremental boosting layer and an adaptive convolutional layer, respectively, to recognize AUs from static images.
520  $a First, to handle the limited AU-coded training data and reduce overfitting, we proposed a novel Incremental Boosting CNN (IB-CNN) that integrates boosting into the CNN via an incremental boosting layer, which selects discriminative neurons from the lower layer and is incrementally updated on successive mini-batches. In addition, a novel loss function that accounts for errors from both the incremental boosted classifier and the individual weak classifiers was proposed to fine-tune the IB-CNN. Experimental results on four benchmark AU databases demonstrate that the IB-CNN yields significant improvement over the traditional CNN and a boosting CNN without incremental learning, and outperforms state-of-the-art CNN-based methods in AU recognition. The improvement is more pronounced for the AUs with the lowest frequencies in the databases.
520  $a Second, current CNNs use predefined, fixed convolutional filter sizes. However, AUs activated by different facial muscles cause facial appearance changes at different scales and thus favor different filter sizes. The traditional strategy is to experimentally select the best filter size for each AU in each convolutional layer, but this incurs an expensive training cost, especially as networks become deeper. We proposed a novel Optimized Filter Size CNN (OFS-CNN), in which the filter sizes and weights of all convolutional layers are learned simultaneously from the training data. Specifically, the filter size is defined as a continuous variable that is optimized by minimizing the training loss. Experimental results on four AU-coded databases and one spontaneous facial expression database show that the OFS-CNN outperforms traditional CNNs with fixed filter sizes and achieves state-of-the-art recognition performance. Furthermore, the OFS-CNN also beats traditional CNNs using the best filter size obtained by exhaustive search and is capable of estimating the optimal filter size for varying image resolutions.
533  $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538  $a Mode of access: World Wide Web
650  4 $a Computer science. $3 573171
650  4 $a Artificial intelligence. $3 559380
655  7 $a Electronic books. $2 local $3 554714
690  $a 0984
690  $a 0800
710  2 $a ProQuest Information and Learning Co. $3 1178819
710  2 $a University of South Carolina. $b Computer Science and Engineering. $3 1190656
773  0 $t Dissertation Abstracts International $g 79-08B(E).
856  4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10635287 $z click for full text (PQDT)
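
The second 520 note above describes an incremental boosting layer that picks discriminative neurons from the lower layer as weak classifiers, folds them into a boosted ensemble updated over successive mini-batches, and uses a loss that penalizes errors of both the boosted classifier and the individual weak classifiers. The Python/PyTorch fragment below is only a minimal sketch of that idea; the class name IncrementalBoostingHead, the sign-agreement scoring rule, the hinge-style losses, and all hyperparameters are illustrative assumptions, not the dissertation's implementation (labels are assumed to be +/-1).

import torch

class IncrementalBoostingHead:
    def __init__(self, num_neurons, top_k=8, momentum=0.9):
        self.alpha = torch.zeros(num_neurons)   # accumulated ensemble weights across batches
        self.top_k = top_k                      # number of weak classifiers kept per mini-batch
        self.momentum = momentum                # fraction of the previous ensemble retained

    def update(self, activations, labels):
        """activations: (batch, num_neurons) floats; labels: (batch,) floats in {-1, +1}."""
        # Score each neuron as a weak classifier: agreement of its sign with the labels.
        scores = (torch.sign(activations) * labels.unsqueeze(1)).mean(dim=0)
        # Keep only the most discriminative neurons for this mini-batch.
        batch_alpha = torch.zeros_like(self.alpha)
        top = torch.topk(scores, self.top_k).indices
        batch_alpha[top] = scores[top].clamp(min=0.0)
        # Incremental update: blend this batch's weak classifiers into the running ensemble.
        self.alpha = self.momentum * self.alpha + (1.0 - self.momentum) * batch_alpha

    def ensemble_loss(self, activations, labels):
        # A loss in the spirit of "errors of the boosted classifier plus errors of the
        # individual weak classifiers"; the exact form in the dissertation may differ.
        strong = activations @ self.alpha
        strong_err = torch.clamp(1.0 - labels * strong, min=0.0).mean()
        weak_err = torch.clamp(1.0 - labels.unsqueeze(1) * activations, min=0.0).mean()
        return strong_err + weak_err

# Toy usage with random activations standing in for a CNN's hidden layer.
head = IncrementalBoostingHead(num_neurons=64)
acts = torch.randn(32, 64)
labels = torch.randint(0, 2, (32,)).float() * 2 - 1
head.update(acts, labels)
print(head.ensemble_loss(acts, labels))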
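
The third 520 note treats the convolutional filter size as a continuous variable optimized, together with the filter weights, by minimizing the training loss. One way to sketch such a differentiable filter size is to modulate a fixed maximum-size kernel with a Gaussian envelope whose width is a trainable parameter; the envelope construction, the ContinuousSizeConv name, and the toy loss below are assumptions for illustration, not necessarily the parameterization used in the OFS-CNN.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousSizeConv(nn.Module):
    """Convolution whose effective filter size is a continuous, trainable quantity."""
    def __init__(self, in_ch, out_ch, max_size=9, init_sigma=2.0):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_ch, in_ch, max_size, max_size))
        self.log_sigma = nn.Parameter(torch.tensor(float(init_sigma)).log())
        coords = torch.arange(max_size, dtype=torch.float32) - (max_size - 1) / 2.0
        # Squared distance of every kernel cell from the kernel centre.
        self.register_buffer("r2", coords[:, None] ** 2 + coords[None, :] ** 2)

    def forward(self, x):
        sigma = self.log_sigma.exp()
        # Gaussian envelope: cells far from the centre are suppressed, so the
        # effective receptive field grows or shrinks smoothly with sigma.
        envelope = torch.exp(-self.r2 / (2.0 * sigma ** 2))
        return F.conv2d(x, self.weight * envelope, padding=self.weight.shape[-1] // 2)

# The envelope width is trained by the same gradient descent that trains the weights.
layer = ContinuousSizeConv(in_ch=1, out_ch=4)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
out = layer(torch.randn(2, 1, 32, 32))
loss = out.pow(2).mean()   # stand-in for the real AU-recognition training loss
loss.backward()
opt.step()                 # updates both the filter weights and sigma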