Reinforcement Learning for Optimal Feedback Control = A Lyapunov-Based Approach /
Record Type:
Bibliographic - Language material, printed : Monograph/item
Title / Author:
Reinforcement Learning for Optimal Feedback Control / by Rushikesh Kamalapurkar, Patrick Walters, Joel Rosenfeld, Warren Dixon.
Other Title:
A Lyapunov-Based Approach
Author:
Kamalapurkar, Rushikesh.
Other Author:
Walters, Patrick.
Physical Description:
XVI, 293 p. : online resource.
Contained By:
Springer Nature eBook
Subject:
Control engineering.
Electronic Resource:
https://doi.org/10.1007/978-3-319-78384-0
ISBN:
9783319783840
Reinforcement Learning for Optimal Feedback Control [electronic resource] : A Lyapunov-Based Approach / by Rushikesh Kamalapurkar, Patrick Walters, Joel Rosenfeld, Warren Dixon. - 1st ed. 2018. - XVI, 293 p. : online resource. - (Communications and Control Engineering, 0178-5354).
Chapter 1. Optimal control -- Chapter 2. Approximate dynamic programming -- Chapter 3. Excitation-based online approximate optimal control -- Chapter 4. Model-based reinforcement learning for approximate optimal control -- Chapter 5. Differential Graphical Games -- Chapter 6. Applications -- Chapter 7. Computational considerations -- References -- Index.
Reinforcement Learning for Optimal Feedback Control develops model-based and data-driven reinforcement learning methods for solving optimal control problems in nonlinear deterministic dynamical systems. To achieve learning under uncertainty, data-driven methods for identifying system models in real time are also developed. Through simulations and experiments, the book illustrates the advantages gained from the use of a model and from the use of previous experience in the form of recorded data. Its focus on deterministic systems allows an in-depth Lyapunov-based analysis of the performance of the described methods, both during the learning phase and during execution. To yield an approximate optimal controller, the authors focus on theories and methods that fall under the umbrella of actor-critic methods for machine learning, concentrating on establishing stability during both the learning and execution phases and on adaptive model-based and data-driven reinforcement learning, which typically relies on instantaneous input-output measurements. This monograph offers academic researchers with backgrounds in disciplines ranging from aerospace engineering to computer science, and with interests in optimal control, reinforcement learning, functional analysis, and function approximation theory, a good introduction to the use of model-based methods. Its thorough treatment of an advanced approach to control will also interest practitioners working in the chemical-process and power-supply industries.
ISBN: 9783319783840
Standard No.: 10.1007/978-3-319-78384-0 (doi)
Subjects--Topical Terms:
Control engineering.
LC Class. No.: TJ212-225
Dewey Class. No.: 629.8
LDR  03375nam a22004095i 4500
001  989284
003  DE-He213
005  20200701185031.0
007  cr nn 008mamaa
008  201225s2018 gw | s |||| 0|eng d
020  __ $a 9783319783840 $9 978-3-319-78384-0
024  7_ $a 10.1007/978-3-319-78384-0 $2 doi
035  __ $a 978-3-319-78384-0
050  _4 $a TJ212-225
072  _7 $a TJFM $2 bicssc
072  _7 $a TEC004000 $2 bisacsh
072  _7 $a TJFM $2 thema
082  04 $a 629.8 $2 23
100  1_ $a Kamalapurkar, Rushikesh. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1204836
245  10 $a Reinforcement Learning for Optimal Feedback Control $h [electronic resource] : $b A Lyapunov-Based Approach / $c by Rushikesh Kamalapurkar, Patrick Walters, Joel Rosenfeld, Warren Dixon.
250  __ $a 1st ed. 2018.
264  _1 $a Cham : $b Springer International Publishing : $b Imprint: Springer, $c 2018.
300  __ $a XVI, 293 p. $b online resource.
336  __ $a text $b txt $2 rdacontent
337  __ $a computer $b c $2 rdamedia
338  __ $a online resource $b cr $2 rdacarrier
347  __ $a text file $b PDF $2 rda
490  1_ $a Communications and Control Engineering, $x 0178-5354
505  0_ $a Chapter 1. Optimal control -- Chapter 2. Approximate dynamic programming -- Chapter 3. Excitation-based online approximate optimal control -- Chapter 4. Model-based reinforcement learning for approximate optimal control -- Chapter 5. Differential Graphical Games -- Chapter 6. Applications -- Chapter 7. Computational considerations -- References -- Index.
520  __ $a Reinforcement Learning for Optimal Feedback Control develops model-based and data-driven reinforcement learning methods for solving optimal control problems in nonlinear deterministic dynamical systems. To achieve learning under uncertainty, data-driven methods for identifying system models in real time are also developed. Through simulations and experiments, the book illustrates the advantages gained from the use of a model and from the use of previous experience in the form of recorded data. Its focus on deterministic systems allows an in-depth Lyapunov-based analysis of the performance of the described methods, both during the learning phase and during execution. To yield an approximate optimal controller, the authors focus on theories and methods that fall under the umbrella of actor-critic methods for machine learning, concentrating on establishing stability during both the learning and execution phases and on adaptive model-based and data-driven reinforcement learning, which typically relies on instantaneous input-output measurements. This monograph offers academic researchers with backgrounds in disciplines ranging from aerospace engineering to computer science, and with interests in optimal control, reinforcement learning, functional analysis, and function approximation theory, a good introduction to the use of model-based methods. Its thorough treatment of an advanced approach to control will also interest practitioners working in the chemical-process and power-supply industries.
650  _0 $a Control engineering. $3 1249728
650  _0 $a Calculus of variations. $3 527927
650  _0 $a System theory. $3 566168
650  _0 $a Electrical engineering. $3 596380
650  14 $a Control and Systems Theory. $3 1211358
650  24 $a Calculus of Variations and Optimal Control; Optimization. $3 593942
650  24 $a Systems Theory, Control. $3 669337
650  24 $a Communications Engineering, Networks. $3 669809
700  1_ $a Walters, Patrick. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1281211
700  1_ $a Rosenfeld, Joel. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1281212
700  1_ $a Dixon, Warren. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 842743
710  2_ $a SpringerLink (Online service) $3 593884
773  0_ $t Springer Nature eBook
776  08 $i Printed edition: $z 9783319783833
776  08 $i Printed edition: $z 9783319783857
776  08 $i Printed edition: $z 9783030086893
830  _0 $a Communications and Control Engineering, $x 0178-5354 $3 1254247
856  40 $u https://doi.org/10.1007/978-3-319-78384-0
912  __ $a ZDB-2-ENG
912  __ $a ZDB-2-SXE
950  __ $a Engineering (SpringerNature-11647)
950  __ $a Engineering (R0) (SpringerNature-43712)