University of California, Berkeley.
Learning and Inferring Representations of Data in Neural Networks.
Record type: Bibliographic - language material, manuscript : Monograph/item
Title/Author: Learning and Inferring Representations of Data in Neural Networks. / Livezey, Jesse A.
Extent: 1 online resource (91 pages)
Notes: Source: Dissertation Abstracts International, Volume: 78-11(E), Section: B.
Subjects: Biophysics. - Statistics. - Neurosciences.
Electronic resource: click for full text (PQDT)
ISBN: 9780355034042
Livezey, Jesse A. Learning and Inferring Representations of Data in Neural Networks. - 1 online resource (91 pages)
Source: Dissertation Abstracts International, Volume: 78-11(E), Section: B.
Thesis (Ph.D.)--University of California, Berkeley, 2017.
Includes bibliographical references
Finding useful representations of data in order to facilitate scientific knowledge generation is a ubiquitous concept across disciplines. Until the development of machine learning and statistical methods with hidden or latent representations, useful representations of data were generated "by hand" through scientific modeling or simple measurement observations. Scientific models often make explicit the underlying structure of a system which generates the data we observe and measure. To test a model, inferences must be made about the free parameters and the distributions of latent or unmeasured variables in the model conditioned on the data collected. At this time, many scientific disciplines such as astronomy, particle physics, wildlife conservation, and neuroscience have been moving towards collecting datasets that are large and complex enough so that no human will ever look at and analyze all measurements by hand. Datasets of this scale present an interesting scientific opportunity: to be able to derive insight into the structure of natural systems by creating models which can adapt themselves to the latent structure of large amounts of data, often called data-driven hypothesis testing. The three topics of this work fall under this umbrella, but are largely independent research directions. First, we show how deep learning can be used to infer representations of neural data which can be used to find the limits of information content in sparsely sampled neural activity and applied to improving the performance of brain-computer interfaces. Second, we derive a circuit model for a network of neurons which implements approximate inference in a probabilistic model given the biological constraint of neuron-local computations. Finally, we provide a theoretical and empirical analysis of a family of methods for learning linear representations which have low coherence (cosine similarity) and show that linear methods have limited applicability as compared to nonlinear, recurrent models which solve the same problem. Together, these results provide insight into how scientists and the brain can learn useful representations of data in deep and single-layer networks.
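The abstract's third topic concerns linear representations with low coherence, i.e. a small maximal pairwise cosine similarity between learned dictionary elements. As a point of reference only, a minimal sketch of that quantity for a matrix whose columns are the dictionary elements (the function name and NumPy implementation are illustrative, not code from the dissertation):

```python
import numpy as np

def coherence(D):
    """Maximum absolute cosine similarity between distinct columns of D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-normalize columns
    G = Dn.T @ Dn                                      # Gram matrix of pairwise cosines
    np.fill_diagonal(G, 0.0)                           # ignore self-similarity
    return np.abs(G).max()

# An orthonormal basis has coherence 0; nearly parallel columns push it toward 1.
print(coherence(np.eye(3)))                            # 0.0
print(coherence(np.array([[1.0, 1.0],
                          [0.0, 1e-8]])))              # 1.0 (columns almost parallel)
```

A low-coherence dictionary spreads its elements apart on the unit sphere, which is what makes the linear methods analyzed in the dissertation comparable across problem sizes.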
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Mode of access: World Wide Web.
ISBN: 9780355034042
Subjects--Topical Terms: Biophysics.
Index Terms--Genre/Form: Electronic books.
LDR 03372ntm a2200337K 4500
001 914366
005 20180703084808.5
006 m o u
007 cr mn||||a|a||
008 190606s2017 xx obm 000 0 eng d
020    $a 9780355034042
035    $a (MiAaPQ)AAI10281928
035    $a (MiAaPQ)berkeley:16970
035    $a AAI10281928
040    $a MiAaPQ $b eng $c MiAaPQ
100 1  $a Livezey, Jesse A. $3 1187598
245 10 $a Learning and Inferring Representations of Data in Neural Networks.
264  0 $c 2017
300    $a 1 online resource (91 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertation Abstracts International, Volume: 78-11(E), Section: B.
500    $a Adviser: Michael R. DeWeese.
502    $a Thesis (Ph.D.)--University of California, Berkeley, 2017.
504    $a Includes bibliographical references
520    $a Finding useful representations of data in order to facilitate scientific knowledge generation is a ubiquitous concept across disciplines. Until the development of machine learning and statistical methods with hidden or latent representations, useful representations of data were generated "by hand" through scientific modeling or simple measurement observations. Scientific models often make explicit the underlying structure of a system which generates the data we observe and measure. To test a model, inferences must be made about the free parameters and the distributions of latent or unmeasured variables in the model conditioned on the data collected. At this time, many scientific disciplines such as astronomy, particle physics, wildlife conservation, and neuroscience have been moving towards collecting datasets that are large and complex enough so that no human will ever look at and analyze all measurements by hand. Datasets of this scale present an interesting scientific opportunity: to be able to derive insight into the structure of natural systems by creating models which can adapt themselves to the latent structure of large amounts of data, often called data-driven hypothesis testing. The three topics of this work fall under this umbrella, but are largely independent research directions. First, we show how deep learning can be used to infer representations of neural data which can be used to find the limits of information content in sparsely sampled neural activity and applied to improving the performance of brain-computer interfaces. Second, we derive a circuit model for a network of neurons which implements approximate inference in a probabilistic model given the biological constraint of neuron-local computations. Finally, we provide a theoretical and empirical analysis of a family of methods for learning linear representations which have low coherence (cosine similarity) and show that linear methods have limited applicability as compared to nonlinear, recurrent models which solve the same problem. Together, these results provide insight into how scientists and the brain can learn useful representations of data in deep and single-layer networks.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538    $a Mode of access: World Wide Web
650  4 $a Biophysics. $3 581576
650  4 $a Statistics. $3 556824
650  4 $a Neurosciences. $3 593561
655  7 $a Electronic books. $2 local $3 554714
690    $a 0786
690    $a 0463
690    $a 0317
710 2  $a ProQuest Information and Learning Co. $3 1178819
710 2  $a University of California, Berkeley. $b Physics. $3 1148718
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10281928 $z click for full text (PQDT)