CortexNet : A Robust Predictive Deep Neural Network Trained on Videos.
Record Type:
Language materials, manuscript : Monograph/item
Title/Author:
CortexNet :
Remainder of title:
A Robust Predictive Deep Neural Network Trained on Videos.
Author:
Canziani, Alfredo.
Description:
1 online resource (96 pages)
Notes:
Source: Dissertation Abstracts International, Volume: 79-03(E), Section: B.
Subject:
Artificial intelligence. - Neurosciences. - Biomedical engineering.
Online resource:
click for full text (PQDT)
ISBN:
9780355254938
CortexNet : A Robust Predictive Deep Neural Network Trained on Videos.
Canziani, Alfredo.
CortexNet :
A Robust Predictive Deep Neural Network Trained on Videos. - 1 online resource (96 pages)
Source: Dissertation Abstracts International, Volume: 79-03(E), Section: B.
Thesis (Ph.D.)--Purdue University, 2017.
Includes bibliographical references
Over the past five years we have observed the rise of remarkably well-performing feed-forward neural networks trained with supervision for vision-related tasks. These models have achieved super-human performance on object recognition, localisation, and detection in still images. However, there is a need to identify the best strategy for employing these networks with temporal visual inputs and obtaining a robust and stable representation of video data. Inspired by the human visual system, I propose a family of deep neural networks, CortexNet, which features not only bottom-up feed-forward connections but also models the abundant top-down feedback and lateral connections present in our visual cortex. I introduce two training schemes, the unsupervised MatchNet mode and the weakly supervised TempoNet mode, in which a network learns to correctly anticipate the subsequent frame in a video clip or the identity of its predominant subject, by learning egomotion cues and automatically tracking several objects in the current scene. The project website is at tinyurl.com/CortexNet.
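The recurrent structure the abstract describes, bottom-up drive combined with top-down feedback, with the next frame as the unsupervised (MatchNet-mode) target, can be sketched as follows. This is a minimal illustrative sketch only: it uses fully connected layers as a stand-in for the dissertation's convolutional discriminative/generative pairs, and all names, sizes, and the single-layer structure are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 64  # flattened frame size (illustrative)
H = 32  # hidden "cortical layer" size (illustrative)

# Bottom-up (feed-forward), top-down/lateral (feedback), and output weights.
W_ff = rng.standard_normal((H, D)) * 0.1   # frame -> hidden (feed-forward)
W_fb = rng.standard_normal((H, H)) * 0.1   # previous hidden state -> hidden (feedback)
W_out = rng.standard_normal((D, H)) * 0.1  # hidden -> predicted next frame

def step(frame, h_prev):
    """One time step: combine bottom-up input with feedback from the
    previous state, then emit a prediction of the next frame."""
    h = np.tanh(W_ff @ frame + W_fb @ h_prev)  # recurrent state update
    pred = W_out @ h                           # generative prediction
    return h, pred

# Run the network over a short clip of random stand-in "frames".
clip = rng.standard_normal((5, D))
h = np.zeros(H)
preds = []
for t in range(len(clip) - 1):
    h, pred = step(clip[t], h)
    preds.append(pred)

# MatchNet-style unsupervised objective: match each prediction
# against the actual next frame of the clip.
loss = np.mean([(p - clip[t + 1]) ** 2 for t, p in enumerate(preds)])
```

In TempoNet mode the same recurrent state would instead feed a classifier of the clip's predominant subject; only the prediction loss above is specific to the MatchNet sketch.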
Electronic reproduction.
Ann Arbor, Mich. :
ProQuest,
2018
Mode of access: World Wide Web
ISBN: 9780355254938
Subjects--Topical Terms:
Artificial intelligence.
Index Terms--Genre/Form:
Electronic books.
CortexNet : A Robust Predictive Deep Neural Network Trained on Videos.
LDR
:02264ntm a2200337K 4500
001
912167
005
20180608102940.5
006
m o u
007
cr mn||||a|a||
008
190606s2017 xx obm 000 0 eng d
020
$a
9780355254938
035
$a
(MiAaPQ)AAI10607656
035
$a
(MiAaPQ)purdue:21744
035
$a
AAI10607656
040
$a
MiAaPQ
$b
eng
$c
MiAaPQ
100
1
$a
Canziani, Alfredo.
$3
1184402
245
1 0
$a
CortexNet :
$b
A Robust Predictive Deep Neural Network Trained on Videos.
264
0
$c
2017
300
$a
1 online resource (96 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertation Abstracts International, Volume: 79-03(E), Section: B.
500
$a
Adviser: Eugenio Culurciello.
502
$a
Thesis (Ph.D.)--Purdue University, 2017.
504
$a
Includes bibliographical references
520
$a
Over the past five years we have observed the rise of remarkably well-performing feed-forward neural networks trained with supervision for vision-related tasks. These models have achieved super-human performance on object recognition, localisation, and detection in still images. However, there is a need to identify the best strategy for employing these networks with temporal visual inputs and obtaining a robust and stable representation of video data. Inspired by the human visual system, I propose a family of deep neural networks, CortexNet, which features not only bottom-up feed-forward connections but also models the abundant top-down feedback and lateral connections present in our visual cortex. I introduce two training schemes, the unsupervised MatchNet mode and the weakly supervised TempoNet mode, in which a network learns to correctly anticipate the subsequent frame in a video clip or the identity of its predominant subject, by learning egomotion cues and automatically tracking several objects in the current scene. The project website is at tinyurl.com/CortexNet.
533
$a
Electronic reproduction.
$b
Ann Arbor, Mich. :
$c
ProQuest,
$d
2018
538
$a
Mode of access: World Wide Web
650
4
$a
Artificial intelligence.
$3
559380
650
4
$a
Neurosciences.
$3
593561
650
4
$a
Biomedical engineering.
$3
588770
655
7
$a
Electronic books.
$2
local
$3
554714
690
$a
0800
690
$a
0317
690
$a
0541
710
2
$a
ProQuest Information and Learning Co.
$3
1178819
710
2
$a
Purdue University.
$b
Biomedical Engineering.
$3
1184403
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10607656
$z
click for full text (PQDT)