Monocular human pose tracking and action recognition in dynamic environments.
University of Southern California.
Record type:
Bibliographic, language material, manuscript : Monograph/item
Title/Author:
Monocular human pose tracking and action recognition in dynamic environments.
Author:
Singh, Vivek Kumar.
Description:
1 online resource (137 pages)
Notes:
Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: 2318.
Contained by:
Dissertation Abstracts International, 73-04B.
Subject:
Computer science.
Electronic resource:
click for full text (PQDT)
ISBN:
9781267077523
LDR 04295ntm a2200373Ki 4500
001 910669
005 20180517123959.5
006 m o u
007 cr mn||||a|a||
008 190606s2011 xx obm 000 0 eng d
020 $a 9781267077523
035 $a (MiAaPQ)AAI3487994
035 $a (MiAaPQ)usc:12756
035 $a AAI3487994
040 $a MiAaPQ $b eng $c MiAaPQ
099 $a TUL $f hyy $c available through World Wide Web
100 1 $a Singh, Vivek Kumar. $3 1182078
245 1 0 $a Monocular human pose tracking and action recognition in dynamic environments.
264 0 $c 2011
300 $a 1 online resource (137 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: 2318.
500 $a Adviser: Ramakant Nevatia.
502 $a Thesis (Ph.D.) $c University of Southern California $d 2011.
504 $a Includes bibliographical references
520 $a The objective of this work is to develop an efficient method to find humans in videos captured from a single camera and recognize the actions being performed. Automatic detection of humans in a scene and understanding of the ongoing activities have been extensively studied, as solutions to this problem find applications in diverse areas such as surveillance, video summarization, content mining, and human-computer interaction, among others.
520 $a Though significant advances have been made toward finding humans in specific poses, such as the upright pose in cluttered scenes, the problem of finding a human in an arbitrary pose in an unknown environment is still a challenge. We address the problem of estimating human pose using a part-based approach that first finds body part candidates using part detectors and then enforces kinematic constraints using a tree-structured graphical model. For inference, we present a collaborative branch-and-bound algorithm that uses a branch-and-bound method to search for each part, using kinematics from neighboring parts to guide the branching behavior and to compute bounds on the best part estimate. We use multiple heterogeneous part detectors with varying accuracy and computation requirements, ordered in a hierarchy, to achieve more accurate and efficient pose estimation.
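The abstract above describes inference over a tree-structured part model. As a rough illustration only, the following minimal Python sketch shows exact max-sum inference on a toy star-shaped (depth-1) part tree, the pictorial-structures-style baseline the abstract builds on; the dissertation's collaborative branch-and-bound prunes this same search. All part names, candidate coordinates, scores, and offsets here are invented for illustration and are not taken from the dissertation.

```python
# Minimal sketch: exact MAP inference on a toy tree-structured part model.
# Each part has a few candidate detections (x, y, detector score); the best
# pose maximizes the sum of detector scores minus kinematic deformation
# costs along tree edges. All values below are illustrative placeholders.

# tree: child -> parent ("torso" is the root)
PARENT = {"head": "torso", "l_arm": "torso", "r_arm": "torso",
          "l_leg": "torso", "r_leg": "torso"}

# candidates[part] = list of (x, y, detector_score) -- toy values
candidates = {
    "torso": [(50, 50, 1.2), (80, 60, 0.4)],
    "head":  [(50, 20, 0.9), (82, 30, 0.8)],
    "l_arm": [(30, 55, 0.7)],
    "r_arm": [(70, 55, 0.6), (95, 65, 0.5)],
    "l_leg": [(45, 90, 0.8)],
    "r_leg": [(55, 90, 0.7)],
}

# preferred offset of each child relative to its parent (toy kinematics)
OFFSET = {"head": (0, -30), "l_arm": (-20, 5), "r_arm": (20, 5),
          "l_leg": (-5, 40), "r_leg": (5, 40)}

def deform_cost(child, c_xy, p_xy, w=0.01):
    """Quadratic penalty for deviating from the preferred offset."""
    dx = c_xy[0] - p_xy[0] - OFFSET[child][0]
    dy = c_xy[1] - p_xy[1] - OFFSET[child][1]
    return w * (dx * dx + dy * dy)

def best_pose():
    """Max-sum on a depth-1 tree: given each torso candidate, each child's
    best candidate can be chosen independently; then take the best torso."""
    best_score, best_assign = float("-inf"), None
    for tx, ty, ts in candidates["torso"]:
        score, assign = ts, {"torso": (tx, ty)}
        for child in PARENT:
            cbest, cxy = float("-inf"), None
            for cx, cy, cs in candidates[child]:
                s = cs - deform_cost(child, (cx, cy), (tx, ty))
                if s > cbest:
                    cbest, cxy = s, (cx, cy)
            score += cbest
            assign[child] = cxy
        if score > best_score:
            best_score, best_assign = score, assign
    return best_score, best_assign

score, pose = best_pose()
print(f"pose score {score:.2f}: {pose}")
```

In this toy setting the branch-and-bound machinery is unnecessary; its point in the dissertation is to avoid scoring every candidate with every (possibly expensive) detector by bounding how good a part's best estimate can be.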
520 $a While the above approach deals well with pose articulations, it still fails to find humans in poses with heavy self-occlusion, such as a crouch, as it does not model inter-part occlusion; recognizing actions from such inferred poses would therefore be unreliable. To deal with this issue, we propose a joint tracking and recognition approach that tracks the actor's pose by sampling from 3D action models and localizing each pose sample; this also allows view-invariant action recognition. We model an action as a sequence of transformations between keyposes. These action models can be obtained by annotating only a few keyposes in 2D, which avoids the need for large training sets and motion capture data. To efficiently localize a sampled pose, we generate a Pose-Specific Part Model (PSPM) that captures the appropriate kinematic and occlusion constraints in a tree structure. In addition, our approach does not require pose silhouettes and thus works well in the presence of background motion. We show improvements over previous results on two publicly available datasets, as well as on a novel augmented dataset with dynamic backgrounds.
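To make the keypose idea concrete, here is a minimal sketch of an action modeled as a few annotated keyposes, with intermediate pose hypotheses generated by interpolating between consecutive keyposes. The joint names, 2D coordinates, and the simple linear interpolation are illustrative assumptions; the dissertation's models are 3D and use richer transformations between keyposes.

```python
import numpy as np

# Minimal sketch: an action as a sequence of keyposes; pose hypotheses are
# drawn by interpolating between the two keyposes surrounding a phase value.
# Joint names and coordinates are made up for illustration.

# each keypose: joint -> (x, y); a toy "wave" action with 3 keyposes
WAVE = [
    {"shoulder": (0.0, 0.0), "elbow": (0.2, 0.1),   "wrist": (0.35, 0.0)},
    {"shoulder": (0.0, 0.0), "elbow": (0.2, -0.15), "wrist": (0.3, -0.35)},
    {"shoulder": (0.0, 0.0), "elbow": (0.2, 0.1),   "wrist": (0.35, 0.0)},
]

def sample_pose(keyposes, phase):
    """Pose at phase in [0, 1): linear interpolation between the two
    surrounding keyposes (a stand-in for the model's transformations)."""
    seg = phase * (len(keyposes) - 1)
    i = int(seg)
    t = seg - i
    a, b = keyposes[i], keyposes[i + 1]
    return {j: tuple((1 - t) * np.array(a[j]) + t * np.array(b[j])) for j in a}

# draw pose hypotheses around the tracker's current phase estimate
rng = np.random.default_rng(0)
current_phase = 0.4
hypotheses = [
    sample_pose(WAVE, float(np.clip(current_phase + rng.normal(0, 0.05), 0.0, 0.999)))
    for _ in range(5)
]
for h in hypotheses:
    print({j: (round(x, 2), round(y, 2)) for j, (x, y) in h.items()})
```

Each sampled hypothesis would then be localized in the image; in the dissertation this is done with the Pose-Specific Part Model rather than the direct evaluation sketched here.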
520 $a Since the poses are sampled from action models, the above activity-driven approach works well only when the actor performs actions for which models are available, and it does not generalize well to unseen poses and actions. We address this by proposing an activity-assisted tracking framework that combines activity-driven tracking with bottom-up pose estimation, using pose samples obtained from part models in addition to those sampled from action models. We demonstrate the effectiveness of our approach on long video sequences with hand gestures.
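The combination described above amounts to pooling pose proposals from two sources and scoring them against the image. The skeleton below sketches that idea only; every function name and the stub scoring used in the demo are placeholders, not the dissertation's API.

```python
# Minimal sketch of activity-assisted tracking: at each frame, pool pose
# hypotheses from two proposal sources -- action-model samples (top-down)
# and part-model estimates (bottom-up, covering unseen actions) -- then
# keep the hypothesis that best explains the image evidence.

def track_frame(frame, action_models, tracker_state,
                sample_from_actions, sample_from_parts, image_likelihood):
    proposals = []
    # top-down: poses consistent with the known action models
    for model in action_models:
        proposals.extend(sample_from_actions(model, tracker_state))
    # bottom-up: poses from generic part detectors
    proposals.extend(sample_from_parts(frame))
    # pick the best-scoring hypothesis
    return max(proposals, key=lambda pose: image_likelihood(frame, pose))

# toy demo with stub proposal/likelihood functions
if __name__ == "__main__":
    acts = ["wave"]
    sample_a = lambda m, s: [("action_model_pose", 0.7)]
    sample_p = lambda f: [("bottom_up_pose", 0.9)]
    lik = lambda f, p: p[1]
    print(track_frame(None, acts, None, sample_a, sample_p, lik))
```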
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538 $a Mode of access: World Wide Web
650 4 $a Computer science. $3 573171
655 7 $a Electronic books. $2 local $3 554714
690 $a 0984
710 2 $a ProQuest Information and Learning Co. $3 1178819
710 2 $a University of Southern California. $b Computer Science. $3 1182079
773 0 $t Dissertation Abstracts International $g 73-04B.
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3487994 $z click for full text (PQDT)