Sur, Indranil.
Robots that Anticipate Pain : = Anticipating Physical Perturbations from Visual Cues through Deep Predictive Models.
Record type: Language material, manuscript : Monograph/item
Title/Author: Robots that Anticipate Pain :/
Other title: Anticipating Physical Perturbations from Visual Cues through Deep Predictive Models.
Author: Sur, Indranil.
Description: 1 online resource (49 pages)
Note: Source: Masters Abstracts International, Volume: 56-04.
Subject: Robotics.
Electronic resource: click for full text (PQDT)
ISBN: 9781369743524
Sur, Indranil. Robots that Anticipate Pain : Anticipating Physical Perturbations from Visual Cues through Deep Predictive Models. - 1 online resource (49 pages)
Source: Masters Abstracts International, Volume: 56-04.
Thesis (M.S.)--Arizona State University, 2017.
Includes bibliographical references
To ensure system integrity, robots need to proactively avoid any unwanted physical perturbation that may cause damage to the underlying hardware. In this thesis work, we investigate a machine learning approach that allows robots to anticipate impending physical perturbations from perceptual cues. In contrast to other approaches that require knowledge about sources of perturbation to be encoded before deployment, our method is based on experiential learning. Robots learn to associate visual cues with subsequent physical perturbations and contacts. In turn, these extracted visual cues are then used to predict potential future perturbations acting on the robot. To this end, we introduce a novel deep network architecture which combines multiple subnetworks for dealing with robot dynamics and perceptual input from the environment. We present a self-supervised approach for training the system that does not require any labeling of training data. Extensive experiments in a human-robot interaction task show that a robot can learn to predict physical contact by a human interaction partner without any prior information or labeling. Furthermore, the network is able to successfully predict physical contact from either depth stream input or traditional video input or using both modalities as input.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Mode of access: World Wide Web.
ISBN: 9781369743524
Subjects--Topical Terms: Robotics.
Index Terms--Genre/Form: Electronic books.
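The self-supervised scheme described in the abstract can be illustrated with a small sketch. The thesis does not publish code here; the function below is a hypothetical illustration of the core idea only: contact events the robot detects with its own sensors become labels for the visual frames that preceded them, so no manual annotation is required. The names `label_frames` and `horizon` are our own, not the author's.

```python
def label_frames(num_frames, contact_frames, horizon=5):
    """Self-label visual frames: a frame is positive if a physical
    contact (sensed by the robot itself) occurs within `horizon`
    future frames. No human annotation is involved."""
    contacts = set(contact_frames)
    labels = []
    for t in range(num_frames):
        # Look ahead: does any of the next `horizon` frames carry a contact?
        positive = any((t + k) in contacts for k in range(1, horizon + 1))
        labels.append(1 if positive else 0)
    return labels

# Frames 0..9, contacts sensed at frames 4 and 9:
print(label_frames(10, [4, 9], horizon=2))
```

Pairs of (frame, label) produced this way could then train a predictive network on depth, video, or both modalities, as the abstract describes.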
LDR    02478ntm a2200325K 4500
001    912157
005    20180608102940.5
006    m o u
007    cr mn||||a|a||
008    190606s2017 xx obm 000 0 eng d
020    $a 9781369743524
035    $a (MiAaPQ)AAI10272754
035    $a (MiAaPQ)asu:16839
035    $a AAI10272754
040    $a MiAaPQ $b eng $c MiAaPQ
100 1  $a Sur, Indranil. $3 1184388
245 10 $a Robots that Anticipate Pain : $b Anticipating Physical Perturbations from Visual Cues through Deep Predictive Models.
264  0 $c 2017
300    $a 1 online resource (49 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Masters Abstracts International, Volume: 56-04.
500    $a Adviser: Heni B. Amor.
502    $a Thesis (M.S.)--Arizona State University, 2017.
504    $a Includes bibliographical references
520    $a To ensure system integrity, robots need to proactively avoid any unwanted physical perturbation that may cause damage to the underlying hardware. In this thesis work, we investigate a machine learning approach that allows robots to anticipate impending physical perturbations from perceptual cues. In contrast to other approaches that require knowledge about sources of perturbation to be encoded before deployment, our method is based on experiential learning. Robots learn to associate visual cues with subsequent physical perturbations and contacts. In turn, these extracted visual cues are then used to predict potential future perturbations acting on the robot. To this end, we introduce a novel deep network architecture which combines multiple subnetworks for dealing with robot dynamics and perceptual input from the environment. We present a self-supervised approach for training the system that does not require any labeling of training data. Extensive experiments in a human-robot interaction task show that a robot can learn to predict physical contact by a human interaction partner without any prior information or labeling. Furthermore, the network is able to successfully predict physical contact from either depth stream input or traditional video input or using both modalities as input.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538    $a Mode of access: World Wide Web
650  4 $a Robotics. $3 561941
650  4 $a Artificial intelligence. $3 559380
655  7 $a Electronic books. $2 local $3 554714
690    $a 0771
690    $a 0800
710 2  $a ProQuest Information and Learning Co. $3 1178819
710 2  $a Arizona State University. $b Computer Science. $3 845377
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10272754 $z click for full text (PQDT)
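As a quick sanity check on the record above, a field body in this `$`-delimited display convention can be split into subfields with a few lines of Python. This is a hypothetical helper for this human-readable display format only, not a parser for binary ISO 2709 MARC; the name `split_subfields` is our own.

```python
def split_subfields(field_body):
    """Split the body of a MARC display line such as
    '$a Sur, Indranil. $3 1184388' into (code, value) pairs."""
    pairs = []
    for chunk in field_body.split("$")[1:]:  # text before the first '$' is ignored
        code, _, value = chunk.partition(" ")  # first token is the subfield code
        pairs.append((code, value.strip()))
    return pairs

print(split_subfields("$a Sur, Indranil. $3 1184388"))
```

A real catalog export would instead be handled with a MARC library, but the split above is enough to pull, say, the `$u` URL out of the 856 field.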