Analysis and Solution of Markov Decision Problems with a Continuous, Stochastic State Component.
Record Type:
Language materials, manuscript : Monograph/item
Title/Author:
Analysis and Solution of Markov Decision Problems with a Continuous, Stochastic State Component.
Author:
Sukumar, Shruthi.
Description:
1 online resource (51 pages)
Notes:
Source: Masters Abstracts International, Volume: 57-01.
Contained By:
Masters Abstracts International 57-01(E).
Subject:
Electrical engineering.
Online resource:
click for full text (PQDT)
ISBN:
9780355300581
Sukumar, Shruthi.
Analysis and Solution of Markov Decision Problems with a Continuous, Stochastic State Component.
- 1 online resource (51 pages)
Source: Masters Abstracts International, Volume: 57-01.
Thesis (M.S.)--University of Colorado at Boulder, 2017.
Includes bibliographical references
Markov Decision Processes (MDPs) are discrete-time random processes that provide a framework for modeling sequential decision problems in stochastic environments. However, the use of MDPs to model real-world decision problems is restricted, since such problems often have continuous variables as part of their state space. Common approaches to extending MDPs to these problems include discretization, which suffers from inefficiency and inaccuracy. Here, we solve MDPs with both continuous and discrete state variables by assuming the reward to be piecewise linear. However, we allow the continuous variable to have an infinite, continuous transition function. We then use our approach to solve an MDP modeling human behaviour in a specific task called delayed gratification. Simulation results are presented to analyze the model predictions, which are fit post hoc to synthetic as well as human data, to justify both the approach to solving the MDP and its use in modeling behaviour.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Mode of access: World Wide Web
ISBN: 9780355300581
Subjects--Topical Terms: Electrical engineering.
Index Terms--Genre/Form: Electronic books.
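
The abstract above describes solving MDPs whose state mixes a discrete component with a continuous, stochastic component, and contrasts the common discretization baseline with an approach that assumes a piecewise linear reward. As a rough, illustrative Python sketch only (not taken from the thesis), the following shows finite-horizon value iteration on such a hybrid state space after the continuous variable has been discretized onto a grid; every name, size, and the piecewise linear reward shape here is a hypothetical placeholder.

    import numpy as np

    # Illustrative sketch of the discretization baseline the abstract critiques,
    # not the thesis's own method. The joint state pairs a discrete component
    # with a continuous component mapped onto a fixed grid.
    n_discrete = 4        # values of the discrete state component (hypothetical)
    n_grid = 50           # grid points for the discretized continuous component
    n_actions = 2
    horizon = 20
    n_states = n_discrete * n_grid

    rng = np.random.default_rng(0)

    # P[a, s, :] is a (random, placeholder) distribution over next joint states.
    P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))

    # A piecewise linear reward over the continuous grid, independent of the
    # discrete component and the action (joint index = d * n_grid + g).
    grid = np.linspace(0.0, 1.0, n_grid)
    reward_1d = np.piecewise(grid, [grid < 0.5, grid >= 0.5],
                             [lambda x: 2.0 * x, lambda x: 2.0 - 2.0 * x])
    R = np.tile(reward_1d, n_discrete)

    # Backward induction: V_t(s) = max_a [ R(s) + sum_s' P(s'|s,a) V_{t+1}(s') ].
    V = np.zeros(n_states)
    for _ in range(horizon):
        Q = R[None, :] + P @ V      # shape (n_actions, n_states)
        V = Q.max(axis=0)

    print("value of joint state 0:", V[0])

The state space here grows with the discretization resolution, which is the kind of inefficiency the abstract points to; the abstract's approach instead works with the piecewise linear reward directly while allowing a continuous transition function for the continuous component.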
Analysis and Solution of Markov Decision Problems with a Continuous, Stochastic State Component.
LDR
:02182ntm a2200325Ki 4500
001
919588
005
20181129115238.5
006
m o u
007
cr mn||||a|a||
008
190606s2017 xx obm 000 0 eng d
020
$a
9780355300581
035
$a
(MiAaPQ)AAI10604299
035
$a
(MiAaPQ)colorado:15006
035
$a
AAI10604299
040
$a
MiAaPQ
$b
eng
$c
MiAaPQ
$d
NTU
100
1
$a
Sukumar, Shruthi.
$3
1194200
245
1 0
$a
Analysis and Solution of Markov Decision Problems with a Continuous, Stochastic State Component.
264
0
$c
2017
300
$a
1 online resource (51 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Masters Abstracts International, Volume: 57-01.
500
$a
Adviser: Jason R. Marden.
502
$a
Thesis (M.S.)--University of Colorado at Boulder, 2017.
504
$a
Includes bibliographical references
520
$a
Markov Decision Processes (MDPs) are discrete-time random processes that provide a framework for modeling sequential decision problems in stochastic environments. However, the use of MDPs to model real-world decision problems is restricted, since such problems often have continuous variables as part of their state space. Common approaches to extending MDPs to these problems include discretization, which suffers from inefficiency and inaccuracy. Here, we solve MDPs with both continuous and discrete state variables by assuming the reward to be piecewise linear. However, we allow the continuous variable to have an infinite, continuous transition function. We then use our approach to solve an MDP modeling human behaviour in a specific task called delayed gratification. Simulation results are presented to analyze the model predictions, which are fit post hoc to synthetic as well as human data, to justify both the approach to solving the MDP and its use in modeling behaviour.
533
$a
Electronic reproduction.
$b
Ann Arbor, Mich. :
$c
ProQuest,
$d
2018
538
$a
Mode of access: World Wide Web
650
4
$a
Electrical engineering.
$3
596380
655
7
$a
Electronic books.
$2
local
$3
554714
690
$a
0544
710
2
$a
ProQuest Information and Learning Co.
$3
1178819
710
2
$a
University of Colorado at Boulder.
$b
Electrical Engineering.
$3
1148661
773
0
$t
Masters Abstracts International
$g
57-01(E).
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10604299
$z
click for full text (PQDT)