Liu, Chang.
Human Motion Detection and Action Recognition.
Record type:
Bibliographic - language material, manuscript : Monograph/item
Title/Author:
Human Motion Detection and Action Recognition.
Author:
Liu, Chang.
Description:
1 online resource (179 pages)
Notes:
Source: Dissertation Abstracts International, Volume: 72-02, Section: B, page: 9790.
Contained By:
Dissertation Abstracts International, 72-02B.
Subject:
Computer science.
Electronic resource:
click for full text (PQDT)
ISBN:
9781124417264
Liu, Chang.
Human Motion Detection and Action Recognition. - 1 online resource (179 pages)
Source: Dissertation Abstracts International, Volume: 72-02, Section: B, page: 9790.
Thesis (Ph.D.)--Hong Kong Baptist University (Hong Kong), 2010.
Includes bibliographical references
Human action analysis has received increasing attention from researchers over the last decade. The objective of human action analysis is to detect and recognize human actions from videos so that a computer system can understand human behaviors and produce a further semantic description of the scene. A computer system understands human actions in a scene through two major steps: human motion detection and human action recognition. There are challenges in both of these research areas. Generally speaking, the main challenge of human motion detection from video is to detect humans moving at different speeds against cluttered backgrounds and under illumination changes. For human action recognition, a number of action classifiers have been proposed, but the crucial factors are how to give effective and efficient representations of high-dimensional human actions for categorization or recognition, and how to employ unlabeled video data to train and enhance the performance of the action recognition system. In this thesis, new algorithms based on a spatio-temporal approach are proposed to solve these problems in human action detection and recognition.
Electronic reproduction.
Ann Arbor, Mich. :
ProQuest,
2018
Mode of access: World Wide Web
ISBN: 9781124417264
Subjects--Topical Terms:
Computer science.
Index Terms--Genre/Form:
Electronic books.
Human Motion Detection and Action Recognition.
LDR
:06335ntm a2200361Ki 4500
001
916018
005
20180907134546.5
006
m o u
007
cr mn||||a|a||
008
190606s2010 xx obm 000 0 eng d
020
$a
9781124417264
035
$a
(MiAaPQ)AAI3438365
035
$a
AAI3438365
040
$a
MiAaPQ
$b
eng
$c
MiAaPQ
$d
NTU
100
1
$a
Liu, Chang.
$3
1179760
245
1 0
$a
Human Motion Detection and Action Recognition.
264
0
$c
2010
300
$a
1 online resource (179 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertation Abstracts International, Volume: 72-02, Section: B, page: 9790.
500
$a
Adviser: Pong Chi Yuen.
502
$a
Thesis (Ph.D.)--Hong Kong Baptist University (Hong Kong), 2010.
504
$a
Includes bibliographical references
520
$a
Human action analysis has received increasing attention from researchers over the last decade. The objective of human action analysis is to detect and recognize human actions from videos so that a computer system can understand human behaviors and produce a further semantic description of the scene. A computer system understands human actions in a scene through two major steps: human motion detection and human action recognition. There are challenges in both of these research areas. Generally speaking, the main challenge of human motion detection from video is to detect humans moving at different speeds against cluttered backgrounds and under illumination changes. For human action recognition, a number of action classifiers have been proposed, but the crucial factors are how to give effective and efficient representations of high-dimensional human actions for categorization or recognition, and how to employ unlabeled video data to train and enhance the performance of the action recognition system. In this thesis, new algorithms based on a spatio-temporal approach are proposed to solve these problems in human action detection and recognition.
520
$a
This thesis proposes to employ visual saliency for human motion detection via direct analysis of videos. Object saliency is represented by an Information Saliency Map (ISM), which is calculated from spatio-temporal volumes. Both spatial saliency and temporal saliency are calculated, and a dynamic fusion method is developed to incorporate them. Principal component analysis and kernel density estimation are used to develop an efficient information-theoretic procedure for constructing the ISM. The ISM is then used for measuring visual saliency and detecting foreground objects. Experimental results on publicly available video surveillance databases show that the proposed method is robust for detecting both fast- and slow-moving objects under illumination changes.
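The idea behind the ISM can be sketched in a minimal 1-D form: estimate the probability of an observation by kernel density estimation, take its self-information as the saliency score, and fuse spatial and temporal scores. The sample data, bandwidth, and the fixed fusion weight below are illustrative assumptions, not the thesis's parameters (the thesis also applies PCA for dimensionality reduction and fuses the two channels dynamically):

```python
import math

def gaussian_kde(samples, x, bandwidth=1.0):
    """Kernel density estimate of p(x) from a list of 1-D samples."""
    n = len(samples)
    coef = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    return coef * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)

def information_saliency(samples, x, bandwidth=1.0):
    """Shannon self-information -log p(x): the rarer an observation,
    the more salient it is (the small constant guards against log 0)."""
    return -math.log(gaussian_kde(samples, x, bandwidth) + 1e-12)

def fused_saliency(spatial, temporal, w=0.5):
    """Combine spatial and temporal saliency; a fixed weight w is used
    here only for illustration, in place of dynamic fusion."""
    return w * spatial + (1.0 - w) * temporal

# A pixel that stands out both from its spatial neighbourhood and from its
# own temporal history receives a high fused information saliency.
neighbourhood = [10, 11, 9, 10, 12, 10]  # intensities around the pixel at time t
history = [10, 10, 11, 9, 10, 10]        # the pixel's intensities over time
outlier, typical = 40, 10
print(fused_saliency(information_saliency(neighbourhood, outlier),
                     information_saliency(history, outlier)))
print(fused_saliency(information_saliency(neighbourhood, typical),
                     information_saliency(history, typical)))
```

An intensity far from both distributions scores much higher than a typical one, which is what lets thresholding the ISM separate foreground motion from background.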
520
$a
This thesis further explores the use of the ISM for human action recognition. A Boosting EigenAction algorithm is proposed to recognize human actions from video. A human action is segmented into a set of primitive periodic motion cycles from the information saliency curve. Each cycle of motion is represented by a Saliency Action Unit (SAU), which is used to determine the EigenAction using principal component analysis. A human action classifier is developed using a multi-class AdaBoost algorithm with a Bayesian hypothesis as the weak classifier. Given a human action video sequence, the proposed method effectively and efficiently locates the SAU(s) in the video, trains an action classifier, and recognizes the human actions by categorizing these SAU(s).
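The segmentation step above — cutting the information saliency curve into primitive periodic cycles, one SAU per cycle — can be illustrated with a toy curve. The boundary rule (cut at local minima) and the sample curve are simplifying assumptions for illustration:

```python
def segment_cycles(curve, min_len=3):
    """Split an information saliency curve into primitive motion cycles.

    A cycle boundary is placed at each local minimum of the curve; one
    Saliency Action Unit (SAU) would then be extracted per cycle.
    """
    minima = [i for i in range(1, len(curve) - 1)
              if curve[i] < curve[i - 1] and curve[i] <= curve[i + 1]]
    bounds = [0] + minima + [len(curve)]
    cycles = []
    for a, b in zip(bounds, bounds[1:]):
        if b - a >= min_len:          # discard fragments too short to be a cycle
            cycles.append(curve[a:b])
    return cycles

# Two periods of a waving-like saliency curve -> two candidate SAU cycles.
curve = [1, 4, 7, 4, 1, 4, 7, 4, 1]
print(len(segment_cycles(curve)))  # prints 2
```

Each returned cycle would then be projected by PCA to form the EigenAction representation fed to the boosted classifier.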
520
$a
This thesis develops a semi-supervised algorithm for human action recognition, as labeled data are costly to obtain whereas unlabeled data are abundantly available. A boosted Co-Training algorithm for human action recognition is proposed. Two confidence measures, namely inter-view confidence and intra-view confidence, are proposed and estimated to solve the two main problems in the Co-Training method, namely view dependency and view insufficiency, and are dynamically fused into one semi-supervised learning process. A mutual information measure is employed to quantify the inter-view uncertainty and measure the independence among the respective views. Intra-view confidence is estimated from boosted hypotheses to measure the total data inconsistency between labeled and unlabeled data. Two discriminative views drawn from the temporal and spatial information of the video, namely an action saliency view and an action eigen-projection view, are proposed as input data in practice. Given a small set of labeled videos and a large set of unlabeled videos, the proposed semi-supervised learning algorithm trains a classifier by maximizing the inter-view and intra-view confidence and dynamically incorporating unlabeled data into the labeled data set, so that the performance of the classifier improves in each iteration. The final classifier is able to classify different human actions with action video clips as the input data.
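The core co-training loop — two views that each confidently label unlabeled samples for the other — can be sketched as follows. The 1-D threshold classifiers, the distance margin used as a confidence proxy, and the toy feature values are all stand-in assumptions; the thesis uses boosted classifiers and the mutual-information-based inter-/intra-view confidence measures instead:

```python
def train_threshold(labeled):
    """Fit a 1-D threshold classifier: midpoint between the class means.
    Assumes positive samples have larger feature values than negatives."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def co_train(view_a, view_b, labels, unlabeled_idx, rounds=3, margin=2.0):
    """Minimal co-training loop: each view labels the unlabeled samples it
    is confident about (far from its threshold) and feeds them to the
    other view's labeled pool."""
    labeled_a = [(view_a[i], y) for i, y in labels.items()]
    labeled_b = [(view_b[i], y) for i, y in labels.items()]
    pool = set(unlabeled_idx)
    for _ in range(rounds):
        ta, tb = train_threshold(labeled_a), train_threshold(labeled_b)
        for i in list(pool):
            if abs(view_a[i] - ta) > margin:      # view A is confident
                labeled_b.append((view_b[i], 1 if view_a[i] > ta else 0))
                pool.discard(i)
            elif abs(view_b[i] - tb) > margin:    # view B is confident
                labeled_a.append((view_a[i], 1 if view_b[i] > tb else 0))
                pool.discard(i)
    return train_threshold(labeled_a), train_threshold(labeled_b)

# Two "views" of six action clips (e.g. a saliency feature and an
# eigen-projection feature); only clips 0 and 3 start out labeled.
view_a = [0.0, 1.0, 9.0, 10.0, 0.5, 9.5]
view_b = [0.2, 0.8, 9.2, 9.8, 0.1, 9.9]
ta, tb = co_train(view_a, view_b, {0: 0, 3: 1}, [1, 2, 4, 5])
print(ta, tb)
```

After the loop, both views separate the two action classes even though only two clips were labeled initially, which is the practical payoff of exploiting unlabeled data.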
520
$a
The proposed methods have been extensively evaluated using publicly available databases, including the CAVIAR, PETS, and OTCBVS-Bench video surveillance databases and the Weizmann and KTH human action recognition databases. Comparisons between the proposed methods and existing state-of-the-art methods are also reported in this thesis. In short, the major contributions of this thesis are summarized as follows: (1) An Information Saliency Map (ISM) is proposed to detect human motions; the ISM is robust for object detection under illumination changes. (2) The Saliency Action Unit (SAU) is proposed to represent primitive human actions; the SAU can be efficiently extracted from the information saliency curve and used for training the human action classifier. (3) A boosted Co-Training algorithm for human action recognition is proposed; inter-view confidence and intra-view confidence are proposed and estimated to solve the view dependency and view insufficiency problems in Co-Training.
533
$a
Electronic reproduction.
$b
Ann Arbor, Mich. :
$c
ProQuest,
$d
2018
538
$a
Mode of access: World Wide Web
650
4
$a
Computer science.
$3
573171
655
7
$a
Electronic books.
$2
local
$3
554714
690
$a
0984
710
2
$a
ProQuest Information and Learning Co.
$3
1178819
710
2
$a
Hong Kong Baptist University (Hong Kong).
$3
1189587
773
0
$t
Dissertation Abstracts International
$g
72-02B.
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3438365
$z
click for full text (PQDT)