"Monkey Upload" - Using Neural Data for Developing Better Neural Networks.
Record Type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
"Monkey Upload" - Using Neural Data for Developing Better Neural Networks./
Author:
Jain, Utkarsh.
Description:
1 online resource (84 pages)
Notes:
Source: Masters Abstracts International, Volume: 85-12.
Contained By:
Masters Abstracts International, 85-12.
Subject:
Neurosciences.
Electronic Resource:
click for full text (PQDT)
ISBN:
9798383058275
"Monkey Upload" - Using Neural Data for Developing Better Neural Networks.
Jain, Utkarsh.
"Monkey Upload" - Using Neural Data for Developing Better Neural Networks.
- 1 online resource (84 pages)
Source: Masters Abstracts International, Volume: 85-12.
Thesis (M.S.)--University of California, San Diego, 2024.
Includes bibliographical references
The neural co-training hypothesis (Sinz et al., 2019) proposes that training a network on a downstream task alongside predicting neural responses can transfer useful inductive biases from primate vision into neural networks, thus enhancing their robustness and ability to generalize to out-of-distribution (OOD) data. Prior research (Federer et al., 2020; Li et al., 2019; Pirlot et al., 2022; Safarani et al., 2021) has shown the benefits of regularizing neural networks with primate brain data, particularly from the V1 cortex. However, similar gains are observed when V1 data is replaced with noise distributions of similar statistical properties. This is likely because the V1 cortex encodes simpler Gabor-like filters and is highly sensitive to perturbations, resulting in inherently noisy representations. Only one previous study (Dapello et al., 2022) has examined the impact of using IT representations, which are known to be more stable and encode well-defined object identity solutions, on a network's adversarial robustness. In this research, we investigate the effects of aligning a network's representations with macaque brain data from the V1, V4, and IT cortices simultaneously while training the network on an object categorization task. We find significant improvements in the model's robustness across 19 types of corruptions, even surpassing the gains from single-stage neural alignment. Additionally, past studies often use sophisticated similarity indices, like centered kernel alignment (Kornblith et al., 2019) and representational similarity matrices (Kriegeskorte et al., 2008), for alignment. We explore the use of contrastive loss functions for neural alignment and report further improvements in the network's corruption robustness. Overall, our results highlight the utility of neural data in developing better neural networks.
Electronic reproduction.
Ann Arbor, Mich. :
ProQuest,
2024
Mode of access: World Wide Web
ISBN: 9798383058275
Subjects--Topical Terms: Neurosciences.
Subjects--Index Terms: Cognitive Science
Index Terms--Genre/Form: Electronic books.
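The abstract above mentions centered kernel alignment (Kornblith et al., 2019) as one of the similarity indices used for aligning network representations with neural data. As a minimal illustrative sketch (with NumPy; the function name and array shapes are my own assumptions, not taken from the thesis), linear CKA between a layer's activations and recorded neural responses can be computed like this:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment (Kornblith et al., 2019).

    X: (n_samples, d1) model activations; Y: (n_samples, d2) neural responses.
    Returns a similarity in [0, 1]; 1 means identical representational geometry
    up to an orthogonal transform and isotropic scaling.
    """
    # Center each feature dimension across samples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den
```

Because centering removes shifts and the ratio cancels isotropic scaling, `linear_cka(X, 2 * X + 3)` equals 1; in a co-training setup such a score (or a contrastive loss, as the thesis explores) could serve as the alignment term added to the task loss.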
"Monkey Upload" - Using Neural Data for Developing Better Neural Networks.
LDR
:03183ntm a22003857 4500
001
1146358
005
20240812064417.5
006
m o d
007
cr bn ---uuuuu
008
250605s2024 xx obm 000 0 eng d
020
$a
9798383058275
035
$a
(MiAaPQ)AAI31301896
035
$a
AAI31301896
040
$a
MiAaPQ
$b
eng
$c
MiAaPQ
$d
NTU
100
1
$a
Jain, Utkarsh.
$e
editor.
$3
1356475
245
1 0
$a
"Monkey Upload" - Using Neural Data for Developing Better Neural Networks.
264
0
$c
2024
300
$a
1 online resource (84 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Masters Abstracts International, Volume: 85-12.
500
$a
Advisor: Cottrell, Garrison W.
502
$a
Thesis (M.S.)--University of California, San Diego, 2024.
504
$a
Includes bibliographical references
520
$a
The neural co-training hypothesis (Sinz et al., 2019) proposes that training a network on a downstream task alongside predicting neural responses can transfer useful inductive biases from primate vision into neural networks, thus enhancing their robustness and ability to generalize to out-of-distribution (OOD) data. Prior research (Federer et al., 2020; Li et al., 2019; Pirlot et al., 2022; Safarani et al., 2021) has shown the benefits of regularizing neural networks with primate brain data, particularly from the V1 cortex. However, similar gains are observed when V1 data is replaced with noise distributions of similar statistical properties. This is likely because the V1 cortex encodes simpler Gabor-like filters and is highly sensitive to perturbations, resulting in inherently noisy representations. Only one previous study (Dapello et al., 2022) has examined the impact of using IT representations, which are known to be more stable and encode well-defined object identity solutions, on a network's adversarial robustness. In this research, we investigate the effects of aligning a network's representations with macaque brain data from the V1, V4, and IT cortices simultaneously while training the network on an object categorization task. We find significant improvements in the model's robustness across 19 types of corruptions, even surpassing the gains from single-stage neural alignment. Additionally, past studies often use sophisticated similarity indices, like centered kernel alignment (Kornblith et al., 2019) and representational similarity matrices (Kriegeskorte et al., 2008), for alignment. We explore the use of contrastive loss functions for neural alignment and report further improvements in the network's corruption robustness. Overall, our results highlight the utility of neural data in developing better neural networks.
533
$a
Electronic reproduction.
$b
Ann Arbor, Mich. :
$c
ProQuest,
$d
2024
538
$a
Mode of access: World Wide Web
650
4
$a
Neurosciences.
$3
593561
650
4
$a
Computer science.
$3
573171
653
$a
Cognitive Science
653
$a
Computer vision task
653
$a
Deep learning
653
$a
Robustness
655
7
$a
Electronic books.
$2
local
$3
554714
690
$a
0984
690
$a
0800
690
$a
0317
710
2
$a
University of California, San Diego.
$b
Computer Science and Engineering.
$3
1189479
710
2
$a
ProQuest Information and Learning Co.
$3
1178819
773
0
$t
Masters Abstracts International
$g
85-12.
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31301896
$z
click for full text (PQDT)