Attribution Robustness of Neural Networks.
Record type: Bibliographic - language material, manuscript : Monograph/item
Title/Author: Attribution Robustness of Neural Networks. /
Author: Gamage, Sunanda.
Description: 1 online resource (222 pages)
Notes: Source: Dissertations Abstracts International, Volume: 85-11, Section: B.
Contained by: Dissertations Abstracts International, 85-11B.
Subjects: Computer engineering. - Electrical engineering.
Electronic resource: click for full text (PQDT)
ISBN: 9798382236599
MARC record:
LDR 03949ntm a22003857 4500
001 1148397
005 20240924101915.5
006 m o d
007 cr bn ---uuuuu
008 250605s2024 xx obm 000 0 eng d
020 $a 9798382236599
035 $a (MiAaPQ)AAI31272771
035 $a (MiAaPQ)oaiirlibuwocaetd12827
035 $a AAI31272771
040 $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1 $a Gamage, Sunanda. $3 1474350
245 10 $a Attribution Robustness of Neural Networks.
264 0 $c 2024
300 $a 1 online resource (222 pages)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
500 $a Source: Dissertations Abstracts International, Volume: 85-11, Section: B.
500 $a Advisor: Samarabandu, Jagath.
502 $a Thesis (Ph.D.)--The University of Western Ontario (Canada), 2024.
504 $a Includes bibliographical references.
520 $a While deep neural networks have demonstrated excellent learning capabilities, explainability of model predictions remains a challenge due to their black-box nature. Attribution, or feature significance, methods are tools for explaining model predictions, facilitating model debugging, human-machine collaborative decision making, and establishing trust and compliance in critical applications. Recent work has shown that attributions of neural networks can be distorted by imperceptible adversarial input perturbations, which makes attributions unreliable as an explainability method. This thesis addresses the research problem of attribution robustness of neural networks and introduces novel techniques that enable robust training at scale. Firstly, a novel generic framework of loss functions for robust neural net training is introduced, addressing the restrictive nature of existing frameworks. Secondly, the bottleneck of the high computational cost of existing robust objectives is addressed by deriving a new, simple, and efficient robust training objective termed "cross entropy of attacks". It is 2 to 10 times faster than existing regularization-based robust objectives for training neural nets on image data while achieving higher attribution robustness (3.5% to 6.2% higher top-k intersection). Thirdly, this thesis presents a comprehensive analysis of three key challenges in attribution-robust neural net training: the high computational cost, the trade-off between robustness and accuracy, and the difficulty of hyperparameter tuning. Empirical evidence and guidelines are provided to help researchers navigate these challenges. Techniques to improve robust training efficiency are proposed, including hybrid standard-and-robust training, a fast one-step attack, and optimized computation of integrated gradients, together yielding 2x to 6x speed gains. Finally, two properties of attribution-robust neural networks are investigated. It is shown that attribution-robust neural nets are also robust against image corruptions, achieving accuracy gains of 3.58% to 11.94% over standard models. Empirical results suggest that robust models do not exhibit resilience against spurious correlations. This thesis also presents work on applying deep learning classifiers in multiple application domains: an empirical benchmark of deep learning in intrusion detection, an LSTM-based pipeline for detecting structural damage in physical structures, and a self-supervised learning pipeline that classifies industrial time series in a label-efficient manner.
533 $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2024
538 $a Mode of access: World Wide Web
650 4 $a Computer engineering. $3 569006
650 4 $a Electrical engineering. $3 596380
653 $a Attribution robustness
653 $a Attribution attacks
653 $a Feature significance
653 $a Self-supervised learning
655 7 $a Electronic books. $2 local $3 554714
690 $a 0544
690 $a 0464
710 2 $a ProQuest Information and Learning Co. $3 1178819
710 2 $a The University of Western Ontario (Canada). $3 1184598
773 0 $t Dissertations Abstracts International $g 85-11B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31272771 $z click for full text (PQDT)
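
The 520 abstract above evaluates attribution robustness with top-k intersection and relies on integrated-gradients attributions; both are standard published quantities, so a minimal PyTorch sketch of them follows for readers of this record. The `model`, `steps=32`, and `k` below are illustrative assumptions, and the last function is only a hypothetical reading of the thesis's "cross entropy of attacks" objective, which the abstract names but does not define. This is not the thesis code.

```python
# Minimal sketch (not the thesis implementation) of quantities named in the
# abstract: integrated-gradients attributions and the top-k intersection
# score comparing attributions of clean vs. adversarially perturbed inputs.
import torch
import torch.nn.functional as F

def integrated_gradients(model, x, target, baseline=None, steps=32):
    """Riemann-sum approximation of integrated gradients for a batch x."""
    if baseline is None:
        baseline = torch.zeros_like(x)            # common all-zero baseline
    grads = torch.zeros_like(x)
    for alpha in torch.linspace(1.0 / steps, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        score = model(point)[:, target].sum()     # scalar target-class logit
        grads += torch.autograd.grad(score, point)[0]
    return (x - baseline) * grads / steps         # IG_i ~ (x_i - x'_i) * mean grad

def topk_intersection(attr_clean, attr_adv, k):
    """Fraction of the k largest-|attribution| features shared by both maps."""
    top_clean = set(attr_clean.abs().flatten().topk(k).indices.tolist())
    top_adv = set(attr_adv.abs().flatten().topk(k).indices.tolist())
    return len(top_clean & top_adv) / k

def cross_entropy_of_attack(model, x_adv, labels):
    # Hypothetical reading only: the abstract names a "cross entropy of
    # attacks" objective but gives no formula; here it is assumed to be
    # plain cross-entropy evaluated on attribution-attacked inputs x_adv.
    return F.cross_entropy(model(x_adv), labels)
```

On this reading, training would minimize `cross_entropy_of_attack` (optionally interleaved with standard cross-entropy, as in the hybrid schedule the abstract mentions), and attribution robustness would be checked by comparing `integrated_gradients` maps of a clean input and its perturbed counterpart via `topk_intersection`.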