A Deep Reinforcement Learning Approach for Robotic Bicycle Stabilization.
Record type:
Bibliographic - language materials, printed : Monograph/item
Title/Author:
A Deep Reinforcement Learning Approach for Robotic Bicycle Stabilization.
Author:
Turakhia, Shubham.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2020
Pagination:
68 p.
Notes:
Source: Masters Abstracts International, Volume: 82-07.
Contained By:
Masters Abstracts International, 82-07.
Subject:
Robotics.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28157038
ISBN:
9798557028929
Turakhia, Shubham. A Deep Reinforcement Learning Approach for Robotic Bicycle Stabilization. - Ann Arbor : ProQuest Dissertations & Theses, 2020. - 68 p.
Source: Masters Abstracts International, Volume: 82-07.
Thesis (M.S.)--Arizona State University, 2020.
This item must not be sold to any third party vendors.
Bicycle stabilization has become a popular topic because of its complex dynamic behavior and the large body of bicycle modeling research. Riding a bicycle requires accurately performing several tasks, such as balancing and navigation, which may be difficult for disabled people. These difficulties could be partially alleviated by providing steering assistance. Many control techniques have been applied to stabilize these highly maneuverable and efficient machines, achieving promising results but with limitations, including strict environmental requirements. This thesis expands on the work of Randlov and Alstrom, using reinforcement learning for bicycle self-stabilization with robotic steering. It applies the deep deterministic policy gradient algorithm, which can handle continuous action spaces, unlike the Q-learning technique. The research involved training the algorithm in virtual environments, followed by simulations to assess its results. Hardware testing was also conducted on Arizona State University's RISE lab Smart bicycle platform to evaluate its self-balancing performance. A detailed analysis of the bicycle trial runs is presented. Testing was validated by plotting the real-time states and actions collected during outdoor testing, including the roll angle of the bicycle. Further improvements regarding model training and hardware testing are also presented.
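The abstract's key algorithmic point is that DDPG emits a continuous action directly, whereas Q-learning selects from a finite action set via argmax. A minimal sketch of that distinction follows; the state layout, weights, and actuator bound are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bicycle state: [roll angle, roll rate, steering angle] (rad, rad/s, rad)
state = np.array([0.05, -0.1, 0.02])

# Q-learning style: choose among a few discrete steering commands (rad).
discrete_actions = np.array([-0.3, -0.1, 0.0, 0.1, 0.3])
q_values = rng.normal(size=discrete_actions.shape)   # stand-in for a learned Q-table row
q_action = discrete_actions[np.argmax(q_values)]     # argmax only works over a finite set

# DDPG style: a deterministic policy mu(s) emits a continuous command directly.
# A random linear layer plus tanh squashing stands in for the trained actor network.
W = rng.normal(scale=0.5, size=(1, 3))
max_steer = 0.3                                      # assumed actuator limit (rad)
ddpg_action = max_steer * np.tanh(W @ state)         # any value in [-0.3, 0.3]

print(q_action, float(ddpg_action[0]))
```

In full DDPG the actor is trained by ascending the critic's gradient with respect to the action, which requires a differentiable, continuous action output; that is why argmax-based Q-learning cannot be applied directly to a continuous steering command.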
ISBN: 9798557028929
Subjects--Topical Terms:
Robotics.
Subjects--Index Terms:
Bicycle
LDR 02595nam a2200385 4500
001 1067152
005 20220823142302.5
008 221020s2020 ||||||||||||||||| ||eng d
020 $a 9798557028929
035 $a (MiAaPQ)AAI28157038
035 $a AAI28157038
040 $a MiAaPQ $c MiAaPQ
100 1 $a Turakhia, Shubham. $3 1372501
245 1 0 $a A Deep Reinforcement Learning Approach for Robotic Bicycle Stabilization.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300 $a 68 p.
500 $a Source: Masters Abstracts International, Volume: 82-07.
500 $a Advisor: Zhang, Wenlong.
502 $a Thesis (M.S.)--Arizona State University, 2020.
506 $a This item must not be sold to any third party vendors.
520 $a Bicycle stabilization has become a popular topic because of its complex dynamic behavior and the large body of bicycle modeling research. Riding a bicycle requires accurately performing several tasks, such as balancing and navigation, which may be difficult for disabled people. These difficulties could be partially alleviated by providing steering assistance. Many control techniques have been applied to stabilize these highly maneuverable and efficient machines, achieving promising results but with limitations, including strict environmental requirements. This thesis expands on the work of Randlov and Alstrom, using reinforcement learning for bicycle self-stabilization with robotic steering. It applies the deep deterministic policy gradient algorithm, which can handle continuous action spaces, unlike the Q-learning technique. The research involved training the algorithm in virtual environments, followed by simulations to assess its results. Hardware testing was also conducted on Arizona State University's RISE lab Smart bicycle platform to evaluate its self-balancing performance. A detailed analysis of the bicycle trial runs is presented. Testing was validated by plotting the real-time states and actions collected during outdoor testing, including the roll angle of the bicycle. Further improvements regarding model training and hardware testing are also presented.
590 $a School code: 0010.
650 4 $a Robotics. $3 561941
650 4 $a Automotive engineering. $3 1104081
650 4 $a Computer engineering. $3 569006
650 4 $a Mechanical engineering. $3 557493
653 $a Bicycle
653 $a Hardware testing
653 $a Reinforcement Learning
653 $a Self balancing
653 $a Stabilization
690 $a 0548
690 $a 0464
690 $a 0540
690 $a 0771
710 2 $a Arizona State University. $b Mechanical Engineering. $3 845641
773 0 $t Masters Abstracts International $g 82-07.
790 $a 0010
791 $a M.S.
792 $a 2020
793 $a English
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28157038