Rochester Institute of Technology.
Antipodal Robotic Grasping using Deep Learning.
Record type:
Bibliographic - language material, printed : Monograph/item
Title / Author:
Antipodal Robotic Grasping using Deep Learning.
Author:
Joshi, Shirin.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2020
Pagination:
72 p.
Note:
Source: Masters Abstracts International, Volume: 82-03.
Contained by:
Masters Abstracts International, 82-03.
Subject:
Artificial intelligence.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28088105
ISBN:
9798664739336
LDR
:02829nam a2200373 4500
001
1038023
005
20210910100657.5
008
211029s2020 ||||||||||||||||| ||eng d
020
$a
9798664739336
035
$a
(MiAaPQ)AAI28088105
035
$a
AAI28088105
040
$a
MiAaPQ
$c
MiAaPQ
100
1
$a
Joshi, Shirin.
$3
1335354
245
1 0
$a
Antipodal Robotic Grasping using Deep Learning.
260
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2020
300
$a
72 p.
500
$a
Source: Masters Abstracts International, Volume: 82-03.
500
$a
Advisor: Sahin, Ferat.
502
$a
Thesis (M.S.)--Rochester Institute of Technology, 2020.
506
$a
This item must not be sold to any third party vendors.
520
$a
In this work, we discuss two implementations that predict antipodal grasps for novel objects: a deep Q-learning approach and a Generative Residual Convolutional Neural Network approach. We present a deep reinforcement learning-based method to solve the problem of robotic grasping using visuo-motor feedback. The use of a deep learning-based approach reduces the complexity caused by hand-designed features. Our method uses an off-policy reinforcement learning framework to learn the grasping policy. We use the double deep Q-learning framework along with a novel Grasp-Q-Network to output grasp probabilities used to learn grasps that maximize pick success. We propose a visual servoing mechanism that uses a multi-view camera setup to observe the scene containing the objects of interest. We performed experiments in a Baxter Gazebo simulated environment as well as on the actual robot. The results show that our proposed method outperforms the baseline Q-learning framework, and that adopting a multi-view model increases grasping accuracy in comparison to a single-view model. The second method tackles the problem of generating antipodal robotic grasps for unknown objects from an n-channel image of the scene. We propose a novel Generative Residual Convolutional Neural Network (GR-ConvNet) model that can generate robust antipodal grasps from n-channel input at real-time speeds (20 ms). We evaluate the proposed model architecture on a standard dataset and on previously unseen household objects. We achieved state-of-the-art accuracy of 97.7% on the Cornell grasp dataset. We also demonstrate a 93.5% grasp success rate on previously unseen real-world objects.
590
$a
School code: 0465.
650
4
$a
Artificial intelligence.
$3
559380
650
4
$a
Robotics.
$3
561941
653
$a
Antipodal grasping
653
$a
Convolutional neural network
653
$a
Deep learning
653
$a
Deep reinforcement learning
653
$a
Machine learning
653
$a
Robotic grasping
690
$a
0771
690
$a
0800
710
2
$a
Rochester Institute of Technology.
$b
Electrical Engineering.
$3
1184409
773
0
$t
Masters Abstracts International
$g
82-03.
790
$a
0465
791
$a
M.S.
792
$a
2020
793
$a
English
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28088105
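The 520 abstract above mentions a double deep Q-learning framework whose Grasp-Q-Network outputs grasp probabilities. As a minimal illustrative sketch of the double DQN idea (not code from the thesis; the function name and values are hypothetical), the regression target decouples action selection from action evaluation: the online network picks the best next action, while the target network scores it, which reduces Q-value overestimation.

```python
# Illustrative sketch of the double deep Q-learning target for one
# transition. Per-action Q-values stand in for network outputs; all
# names and numbers here are hypothetical, not from the dissertation.

def double_q_target(reward, gamma, online_q_next, target_q_next, done):
    """Return the double DQN regression target y for a single transition.

    online_q_next / target_q_next: Q-values over candidate (grasp) actions
    in the next state, from the online and target networks respectively.
    """
    if done:
        # Terminal transition: no bootstrapped future value.
        return reward
    # Action selected by the online network ...
    best_action = max(range(len(online_q_next)), key=lambda a: online_q_next[a])
    # ... but evaluated by the target network.
    return reward + gamma * target_q_next[best_action]

# Hypothetical example with three candidate grasp actions:
y = double_q_target(1.0, 0.9, [0.2, 0.8, 0.5], [0.3, 0.6, 0.9], False)
print(y)
```

In standard (non-double) deep Q-learning, the target network would both select and evaluate the action (here it would use its own maximum, 0.9, instead of the value of the online network's choice), which is the overestimation the double estimator avoids.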