Gosavi, Abhijit.
Simulation-Based Optimization = Parametric Optimization Techniques and Reinforcement Learning /
Record Type:
Bibliographic record - Language material, printed : Monograph/item
Title/Author:
Simulation-Based Optimization / by Abhijit Gosavi.
Other Title:
Parametric Optimization Techniques and Reinforcement Learning /
Author:
Gosavi, Abhijit.
Description:
XXVI, 508 p. 42 illus. online resource.
Contained By:
Springer Nature eBook
Subject:
Operations research.
Electronic Resource:
https://doi.org/10.1007/978-1-4899-7491-4
ISBN:
9781489974914
Simulation-Based Optimization [electronic resource] : Parametric Optimization Techniques and Reinforcement Learning / by Abhijit Gosavi. - 2nd ed. 2015. - XXVI, 508 p. 42 illus. online resource. - (Operations Research/Computer Science Interfaces Series, 1387-666X ; 55). - (Operations Research/Computer Science Interfaces Series, 59).
Background -- Simulation basics -- Simulation optimization: an overview -- Response surfaces and neural nets -- Parametric optimization -- Dynamic programming -- Reinforcement learning -- Stochastic search for controls -- Convergence: background material -- Convergence: parametric optimization -- Convergence: control optimization -- Case studies.
Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of static and dynamic simulation-based optimization. Covered in detail are model-free optimization techniques – especially designed for those discrete-event, stochastic systems which can be simulated but whose analytical models are difficult to find in closed mathematical forms. Key features of this revised and improved Second Edition include: · Extensive coverage, via step-by-step recipes, of powerful new algorithms for static simulation optimization, including simultaneous perturbation, backtracking adaptive search, and nested partitions, in addition to traditional methods, such as response surfaces, Nelder-Mead search, and meta-heuristics (simulated annealing, tabu search, and genetic algorithms) · Detailed coverage of the Bellman equation framework for Markov Decision Processes (MDPs), along with dynamic programming (value and policy iteration) for discounted, average, and total reward performance metrics · An in-depth consideration of dynamic simulation optimization via temporal differences and Reinforcement Learning: Q-Learning, SARSA, and R-SMART algorithms, and policy search, via API, Q-P-Learning, actor-critics, and learning automata · A special examination of neural-network-based function approximation for Reinforcement Learning, semi-Markov decision processes (SMDPs), finite-horizon problems, two time scales, case studies for industrial tasks, computer codes (placed online), and convergence proofs, via Banach fixed point theory and Ordinary Differential Equations Themed around three areas in separate sets of chapters – Static Simulation Optimization, Reinforcement Learning, and Convergence Analysis – this book is written for researchers and students in the fields of engineering (industrial, systems, electrical, and computer), operations research, computer science, and applied mathematics.
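The Reinforcement Learning material summarized above centers on temporal-difference methods such as Q-Learning. As a purely illustrative sketch (not code from the book, and using a made-up toy problem), tabular Q-learning with epsilon-greedy exploration on a hypothetical two-state MDP looks like:

```python
# Illustrative only: tabular Q-learning on a toy two-state, two-action MDP.
import random

random.seed(0)

# Hypothetical toy MDP: states 0/1, actions 0 ("stay") / 1 ("switch").
def transition(s, a):
    if a == 0:                                 # stay: small certain reward
        return s, 1.0
    return 1 - s, (2.0 if s == 0 else 0.0)     # switch: reward depends on state

gamma, alpha, eps = 0.9, 0.1, 0.2              # discount, step size, exploration rate
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

s = 0
for _ in range(20000):
    # epsilon-greedy action selection
    if random.random() < eps:
        a = random.choice((0, 1))
    else:
        a = max((0, 1), key=lambda a: Q[(s, a)])
    s2, r = transition(s, a)
    # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
    s = s2

greedy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in (0, 1)}
print(greedy)
```

With a 0.9 discount, the optimal policy in this toy problem is to switch out of state 0 (reward 2) and then stay in state 1 (reward 1 per step), and the learned greedy policy converges to that.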
ISBN: 9781489974914
Standard No.: 10.1007/978-1-4899-7491-4 (doi)
Subjects--Topical Terms:
Operations research.
LC Class. No.: HD30.23
Dewey Class. No.: 658.40301
LDR 03790nam a22004215i 4500
001 965190
003 DE-He213
005 20200919003258.0
007 cr nn 008mamaa
008 201211s2015 xxu| s |||| 0|eng d
020 __ $a 9781489974914 $9 978-1-4899-7491-4
024 7_ $a 10.1007/978-1-4899-7491-4 $2 doi
035 __ $a 978-1-4899-7491-4
050 _4 $a HD30.23
072 _7 $a KJT $2 bicssc
072 _7 $a BUS049000 $2 bisacsh
072 _7 $a KJT $2 thema
072 _7 $a KJMD $2 thema
082 04 $a 658.40301 $2 23
100 1_ $a Gosavi, Abhijit. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1063100
245 10 $a Simulation-Based Optimization $h [electronic resource] : $b Parametric Optimization Techniques and Reinforcement Learning / $c by Abhijit Gosavi.
250 __ $a 2nd ed. 2015.
264 _1 $a New York, NY : $b Springer US : $b Imprint: Springer, $c 2015.
300 __ $a XXVI, 508 p. 42 illus. $b online resource.
336 __ $a text $b txt $2 rdacontent
337 __ $a computer $b c $2 rdamedia
338 __ $a online resource $b cr $2 rdacarrier
347 __ $a text file $b PDF $2 rda
490 1_ $a Operations Research/Computer Science Interfaces Series, $x 1387-666X ; $v 55
505 0_ $a Background -- Simulation basics -- Simulation optimization: an overview -- Response surfaces and neural nets -- Parametric optimization -- Dynamic programming -- Reinforcement learning -- Stochastic search for controls -- Convergence: background material -- Convergence: parametric optimization -- Convergence: control optimization -- Case studies.
520 __ $a Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of static and dynamic simulation-based optimization. Covered in detail are model-free optimization techniques – especially designed for those discrete-event, stochastic systems which can be simulated but whose analytical models are difficult to find in closed mathematical forms. Key features of this revised and improved Second Edition include: · Extensive coverage, via step-by-step recipes, of powerful new algorithms for static simulation optimization, including simultaneous perturbation, backtracking adaptive search, and nested partitions, in addition to traditional methods, such as response surfaces, Nelder-Mead search, and meta-heuristics (simulated annealing, tabu search, and genetic algorithms) · Detailed coverage of the Bellman equation framework for Markov Decision Processes (MDPs), along with dynamic programming (value and policy iteration) for discounted, average, and total reward performance metrics · An in-depth consideration of dynamic simulation optimization via temporal differences and Reinforcement Learning: Q-Learning, SARSA, and R-SMART algorithms, and policy search, via API, Q-P-Learning, actor-critics, and learning automata · A special examination of neural-network-based function approximation for Reinforcement Learning, semi-Markov decision processes (SMDPs), finite-horizon problems, two time scales, case studies for industrial tasks, computer codes (placed online), and convergence proofs, via Banach fixed point theory and Ordinary Differential Equations Themed around three areas in separate sets of chapters – Static Simulation Optimization, Reinforcement Learning, and Convergence Analysis – this book is written for researchers and students in the fields of engineering (industrial, systems, electrical, and computer), operations research, computer science, and applied mathematics.
650 _0 $a Operations research. $3 573517
650 _0 $a Decision making. $3 528319
650 _0 $a Management science. $3 719678
650 _0 $a Computer simulation. $3 560190
650 14 $a Operations Research/Decision Theory. $3 669176
650 24 $a Operations Research, Management Science. $3 785065
650 24 $a Simulation and Modeling. $3 669249
710 2_ $a SpringerLink (Online service) $3 593884
773 0_ $t Springer Nature eBook
776 08 $i Printed edition: $z 9781489974907
776 08 $i Printed edition: $z 9781489974921
776 08 $i Printed edition: $z 9781489977311
830 _0 $a Operations Research/Computer Science Interfaces Series, $x 1387-666X ; $v 59 $3 1255663
856 40 $u https://doi.org/10.1007/978-1-4899-7491-4
912 __ $a ZDB-2-SBE
912 __ $a ZDB-2-SXBM
950 __ $a Business and Economics (SpringerNature-11643)
950 __ $a Business and Management (R0) (SpringerNature-43719)