Learning Neural Representations that Support Efficient Reinforcement Learning.
Record type: Bibliographic - language material, manuscript : Monograph/item
Title/Author: Learning Neural Representations that Support Efficient Reinforcement Learning.
Author: Stachenfeld, Kimberly.
Physical description: 1 online resource (155 pages)
Notes: Source: Dissertation Abstracts International, Volume: 79-10(E), Section: B.
Contained by: Dissertation Abstracts International, 79-10B(E).
Subject: Neurosciences.
Electronic resource: click for full text (PQDT)
ISBN: 9780438050419
LDR
:03730ntm a2200349Ki 4500
001
918589
005
20181026115419.5
006
m o u
007
cr mn||||a|a||
008
190606s2018 xx obm 000 0 eng d
020
$a
9780438050419
035
$a
(MiAaPQ)AAI10824319
035
$a
(MiAaPQ)princeton:12624
035
$a
AAI10824319
040
$a
MiAaPQ
$b
eng
$c
MiAaPQ
$d
NTU
100
1
$a
Stachenfeld, Kimberly.
$3
1192947
245
1 0
$a
Learning Neural Representations that Support Efficient Reinforcement Learning.
264
0
$c
2018
300
$a
1 online resource (155 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertation Abstracts International, Volume: 79-10(E), Section: B.
500
$a
Adviser: Matthew M. Botvinick.
502
$a
Thesis (Ph.D.)--Princeton University, 2018.
504
$a
Includes bibliographical references
520
$a
Reinforcement learning (RL) has been transformative for neuroscience by providing a normative anchor for interpreting neural and behavioral data. End-to-end RL methods have scored impressive victories with minimal hand-engineering and few compromises in autonomy and generality. The cost of this minimalism in practice is that model-free RL methods are slow to learn and generalize poorly. Humans and animals exhibit substantially greater flexibility, rapidly generalizing learned information to new environments by learning invariants and features of the environment that support fast learning and rapid transfer. An important question for both neuroscience and machine learning is what kind of "representational objectives" encourage humans and other animals to encode structure about the world. This can be formalized as "representation learning," in which the animal or agent learns to form representations carrying information potentially relevant to the downstream RL process. We review different representational objectives that have received attention in neuroscience and in machine learning. The focus of this review is first to highlight conditions under which these seemingly unrelated objectives are mathematically equivalent. We use this to motivate a breakdown of the properties in which learned representations meaningfully differ, which can inform contrasting hypotheses for neuroscience. We then use this perspective to motivate our model of the hippocampus. A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity, and policy dependence in place cells suggests that the representation is not purely spatial. We approach the problem of understanding hippocampal representations from a reinforcement learning perspective, focusing on what kind of spatial representation is most useful for maximizing future reward. We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. We go on to argue that entorhinal grid cells encode a low-dimensional basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning. (An illustrative sketch of the predictive representation follows the record below.)
533
$a
Electronic reproduction.
$b
Ann Arbor, Mich. :
$c
ProQuest,
$d
2018
538
$a
Mode of access: World Wide Web
650
4
$a
Neurosciences.
$3
593561
650
4
$a
Quantitative psychology.
$3
1182802
650
4
$a
Cognitive psychology.
$3
556029
655
7
$a
Electronic books.
$2
local
$3
554714
690
$a
0317
690
$a
0632
690
$a
0633
710
2
$a
ProQuest Information and Learning Co.
$3
1178819
710
2
$a
Princeton University.
$b
Neuroscience.
$3
1186586
773
0
$t
Dissertation Abstracts International
$g
79-10B(E).
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10824319
$z
click for full text (PQDT)
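The 520 abstract above describes a predictive representation of state and a low-dimensional grid-cell basis for it. In the published literature this kind of model is commonly formalized as the successor representation (SR); the sketch below is a minimal illustration under that assumption, not code from the dissertation itself. It computes the SR for a uniform random walk on a small open grid and extracts a truncated eigenbasis; the grid size, discount factor, rank-16 truncation, and the helper name random_walk_transitions are all illustrative choices.

```python
# Illustrative sketch (not code from the dissertation): the "predictive
# representation" described in the abstract is commonly formalized as the
# successor representation (SR),
#   M = sum_t gamma^t T^t = (I - gamma T)^(-1),
# where T is the state-transition matrix under the agent's policy.
# Grid-cell-like codes are then modeled as a low-dimensional eigenbasis of M.
import numpy as np


def random_walk_transitions(n: int) -> np.ndarray:
    """Transition matrix for a uniform random walk on an n x n open grid."""
    T = np.zeros((n * n, n * n))
    for r in range(n):
        for c in range(n):
            nbrs = [(r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < n and 0 <= c + dc < n]
            for rr, cc in nbrs:
                T[r * n + c, rr * n + cc] = 1.0 / len(nbrs)
    return T


n, gamma = 10, 0.95  # arbitrary illustrative parameters
T = random_walk_transitions(n)

# SR: M[s, s'] is the expected discounted future occupancy of s' from s.
# Each row of M acts as a predictive "place field" for its state.
M = np.linalg.inv(np.eye(n * n) - gamma * T)

# Low-dimensional basis: leading eigenvectors of M (equivalently of T).
# On open fields these are spatially periodic, qualitatively resembling
# grid cells; truncating to the top components suppresses noise and
# exposes structure at multiple spatial scales.
evals, evecs = np.linalg.eig(M)
basis = evecs[:, np.argsort(-evals.real)[:16]].real

# Rank-16 reconstruction of the SR from the truncated basis
# (projection onto the basis's column space via the pseudoinverse).
M_hat = basis @ np.linalg.pinv(basis) @ M
err = np.linalg.norm(M - M_hat) / np.linalg.norm(M)
print(f"SR shape: {M.shape}, rank-16 relative error: {err:.3f}")
```

Reshaping a row of M to the n x n grid gives a predictive place field for that state; reshaping a column of basis gives one of the periodic, grid-like patterns. For the dissertation's actual models and parameters, see the full text at the PQDT link above.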