Ghewari, Rishikesh Sanjay.
Action Recognition from Videos using Deep Neural Networks.
Record type: Bibliographic - language material, manuscript : Monograph/item
Title: Action Recognition from Videos using Deep Neural Networks.
Author: Ghewari, Rishikesh Sanjay.
Extent: 1 online resource (48 pages)
Notes: Source: Masters Abstracts International, Volume: 56-05.
Subject: Artificial intelligence.
Electronic resource: click for full text (PQDT)
ISBN: 9780355068559
Ghewari, Rishikesh Sanjay.
Action Recognition from Videos using Deep Neural Networks.
- 1 online resource (48 pages)
Source: Masters Abstracts International, Volume: 56-05.
Thesis (M.S.)--University of California, San Diego, 2017.
Includes bibliographical references
Convolutional neural network (CNN) models have been used extensively in recent years for image understanding, giving state-of-the-art results in tasks such as classification, recognition, retrieval, segmentation, and object detection. Motivated by this success, there have been several attempts to extend convolutional neural networks to video understanding and classification. An important distinction between images and videos is the temporal information encoded by the sequence of frames; most CNN models fail to capture it. Recurrent neural networks have shown promising results in modelling sequences.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Mode of access: World Wide Web
ISBN: 9780355068559
Subjects--Topical Terms: Artificial intelligence.
Index Terms--Genre/Form: Electronic books.
LDR 02826ntm a2200337K 4500
001 912162
005 20180608102940.5
006 m o u
007 cr mn||||a|a||
008 190606s2017 xx obm 000 0 eng d
020 $a 9780355068559
035 $a (MiAaPQ)AAI10285805
035 $a (MiAaPQ)ucsd:16571
035 $a AAI10285805
040 $a MiAaPQ $b eng $c MiAaPQ
100 1 $a Ghewari, Rishikesh Sanjay. $3 1184393
245 10 $a Action Recognition from Videos using Deep Neural Networks.
264 0 $c 2017
300 $a 1 online resource (48 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Masters Abstracts International, Volume: 56-05.
500 $a Adviser: Garrison W. Cottrell.
502 $a Thesis (M.S.)--University of California, San Diego, 2017.
504 $a Includes bibliographical references
520 $a Convolutional neural network (CNN) models have been used extensively in recent years for image understanding, giving state-of-the-art results in tasks such as classification, recognition, retrieval, segmentation, and object detection. Motivated by this success, there have been several attempts to extend convolutional neural networks to video understanding and classification. An important distinction between images and videos is the temporal information encoded by the sequence of frames; most CNN models fail to capture it. Recurrent neural networks have shown promising results in modelling sequences.
520 $a In this work we present a neural network model that combines convolutional and recurrent neural networks. We first evaluate the effect of the convolutional network used for understanding static frames on action recognition. We then explore properties inherent in the dataset. We combine the representation from the convolutional network, the temporal information from the sequence of video frames, and other properties of the dataset into a unified model trained on the UCF-101 dataset for action recognition. Evaluated on the pre-defined test splits of UCF-101, our model achieves an improvement over the baseline, and we compare it with models proposed in related work on the same dataset. We observe that a good model for action recognition must not only understand static frames but also encode the temporal information across a sequence of frames.
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538 $a Mode of access: World Wide Web
650 4 $a Artificial intelligence. $3 559380
650 4 $a Computer science. $3 573171
655 7 $a Electronic books. $2 local $3 554714
690 $a 0800
690 $a 0984
710 2 $a ProQuest Information and Learning Co. $3 1178819
710 2 $a University of California, San Diego. $b Computer Science. $3 1182161
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10285805 $z click for full text (PQDT)
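The 520 abstract fields describe combining per-frame CNN features with a recurrent network that aggregates information across the frame sequence. As a rough illustration only — this is not the thesis's actual architecture; the shapes, random weights, and vanilla-RNN cell are all stand-ins — the temporal aggregation step can be sketched in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for per-frame CNN features: T frames, D-dimensional each.
T, D, H, n_classes = 16, 512, 256, 101  # UCF-101 has 101 action classes

frame_features = rng.standard_normal((T, D))

# A single vanilla-RNN layer stepping through the frame sequence.
W_xh = rng.standard_normal((D, H)) * 0.01   # input-to-hidden weights
W_hh = rng.standard_normal((H, H)) * 0.01   # hidden-to-hidden (temporal) weights
b_h = np.zeros(H)
W_hy = rng.standard_normal((H, n_classes)) * 0.01  # hidden-to-logits weights

h = np.zeros(H)
for x_t in frame_features:  # process frames in temporal order
    h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)

# Classify the whole clip from the final hidden state.
logits = h @ W_hy
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

The hidden state `h` is the only thing carried across time steps, which is exactly what lets the recurrent layer encode the temporal information that a frame-level CNN alone misses.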