Learning to Attack, Protect, and Enhance Deep Networks /
Record Type:
Bibliographic - Language material, printed : Monograph/item
Title/Author:
Learning to Attack, Protect, and Enhance Deep Networks / Zikui Cai.
Author:
Cai, Zikui,
Extent:
1 electronic resource (276 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 86-01, Section: B.
Contained By:
Dissertations Abstracts International 86-01B.
Subject:
Computer science.
Electronic Resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31327668
ISBN:
9798383420683
Artificial intelligence (AI) systems have demonstrated remarkable capabilities, yet concerns about their security and safe deployment persist. With the rapid adoption of AI across critical domains, ensuring the robustness and reliability of these models is imperative. This research addresses this challenge by exposing vulnerabilities in AI systems and enhancing their trustworthiness. By systematically uncovering flaws, it aims to raise awareness of the precautions necessary for utilizing AI in high-stakes scenarios. The methodology involves identifying vulnerabilities, quantifying worst-case performance via attacks, and generalizing insights to practical deployment settings. Additionally, it investigates techniques to strengthen model trustworthiness in real-world scenarios, contributing to rigorous AI safety research that promotes responsible and beneficial system development. Specifically, this research reveals vulnerabilities in neural networks by developing efficient black-box attacks on various deep learning models across different tasks. Additionally, it focuses on improving AI trustworthiness by detecting adversarial examples using language models and enhancing user privacy through innovative facial de-identification methods.

For highly effective black-box attacks, ensemble-based and context-aware approaches were developed. These methods optimize over ensemble model weight spaces to craft adversarial examples with extreme efficiency, significantly outperforming existing input space attacks. Multimodal testing demonstrated that these attacks could fool systems on diverse tasks, highlighting the need to evaluate deployment robustness against such methods. Additionally, by weaponizing context to manipulate statistical relationships that models rely on, context-aware attacks were shown to profoundly mislead systems, revealing reasoning vulnerabilities.

To protect user privacy, an algorithm was developed for seamlessly de-identifying facial images while retaining utility for downstream tasks. This approach, grounded in differential privacy and ensemble learning, maximizes obfuscation and non-invertibility to prevent re-identification. By disentangling identity attributes from utility attributes like expressions, the method significantly enhances de-identification rates while preserving utility.

To enhance the robustness and efficiency of computational imaging pipelines, including Fourier phase retrieval and coded diffraction imaging, I developed a framework that learns reference signals or illumination patterns using a small number of training images. This framework employs an unrolled network as a solver. Once learned, the reference signals or illumination patterns serve as priors, significantly improving the efficiency of signal reconstruction.

Overall, this research contributes to a more secure and reliable deployment of AI systems, ensuring their safe and beneficial use across critical domains.
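The ensemble weight-space idea described in the abstract, searching over the mixing weights of a surrogate ensemble rather than directly over the input, can be illustrated with a toy sketch. This is an illustrative reconstruction from the abstract only, not the dissertation's actual algorithm: the linear surrogate models, the random-search weight update, and the margin-based victim query are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate ensemble: K linear binary classifiers f_k(x) = w_k @ x.
K, D = 4, 16
surrogates = [rng.normal(size=D) for _ in range(K)]
# Unknown victim model, loosely correlated with the surrogates.
victim_w = sum(surrogates) / K + 0.1 * rng.normal(size=D)

def victim_margin(x):
    # Black-box query: signed margin of the victim on input x.
    return float(victim_w @ x)

x0 = rng.normal(size=D)
x0 *= np.sign(victim_margin(x0))   # ensure the clean margin is positive
eps = 0.5                          # L2 perturbation budget

def perturb(alpha):
    # Each weight vector alpha over the ensemble induces one
    # perturbation direction; the search happens in alpha-space.
    g = sum(a * w for a, w in zip(alpha, surrogates))
    return x0 - eps * g / (np.linalg.norm(g) + 1e-12)

best_alpha = np.ones(K) / K
best_margin = victim_margin(perturb(best_alpha))
for _ in range(200):               # simple random search on the simplex
    cand = np.clip(best_alpha + 0.2 * rng.normal(size=K), 0, None)
    cand /= cand.sum() + 1e-12
    m = victim_margin(perturb(cand))
    if m < best_margin:            # lower margin = closer to a label flip
        best_alpha, best_margin = cand, m

print("clean margin:", victim_margin(x0))
print("attacked margin:", best_margin)
```

The point of the sketch is the query pattern: the attacker never differentiates through the victim, only re-weights surrogate directions based on scalar feedback.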
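The de-identification idea above, disentangle identity attributes from utility attributes and obfuscate only the identity part with calibrated noise, can be sketched in a toy form. Everything here is an assumption for illustration: the fixed split of the embedding into identity and utility coordinates, and the use of the standard Laplace mechanism as the differential-privacy ingredient, stand in for the dissertation's actual disentanglement and obfuscation pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def deidentify(features, id_dims, epsilon=1.0, sensitivity=1.0):
    """Toy de-identification: add Laplace noise (the classic epsilon-DP
    mechanism for bounded-sensitivity values) to the identity coordinates,
    leaving utility coordinates (e.g. expression) untouched."""
    out = features.copy()
    scale = sensitivity / epsilon        # Laplace scale for epsilon-DP
    out[id_dims] += rng.laplace(0.0, scale, size=len(id_dims))
    return out

face = rng.uniform(-1, 1, size=8)        # pretend face embedding
id_dims = [0, 1, 2, 3]                   # assumed identity subspace
utility_dims = [4, 5, 6, 7]              # assumed utility subspace

anon = deidentify(face, id_dims, epsilon=0.5)
print("utility preserved:", np.allclose(anon[utility_dims], face[utility_dims]))
```

Smaller epsilon means larger noise, i.e. stronger obfuscation of identity at no cost to the untouched utility attributes, which mirrors the trade-off the abstract describes.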
LDR
:04398nam a22004213i 4500
001
1157811
005
20250603111419.5
006
m o d
007
cr|nu||||||||
008
250804s2024 miu||||||m |||||||eng d
020
$a
9798383420683
035
$a
(MiAaPQD)AAI31327668
035
$a
AAI31327668
040
$a
MiAaPQD
$b
eng
$c
MiAaPQD
$e
rda
100
1
$a
Cai, Zikui,
$e
author.
$3
1484086
245
1 0
$a
Learning to Attack, Protect, and Enhance Deep Networks /
$c
Zikui Cai.
264
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2024
300
$a
1 electronic resource (276 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertations Abstracts International, Volume: 86-01, Section: B.
500
$a
Advisors: Asif, M. Salman; Committee members: Roy-Chowdhury, Amit K.; Karydis, Konstantinos.
502
$b
Ph.D.
$c
University of California, Riverside
$d
2024.
520
$a
Artificial intelligence (AI) systems have demonstrated remarkable capabilities, yet concerns about their security and safe deployment persist. With the rapid adoption of AI across critical domains, ensuring the robustness and reliability of these models is imperative. This research addresses this challenge by exposing vulnerabilities in AI systems and enhancing their trustworthiness. By systematically uncovering flaws, it aims to raise awareness of the precautions necessary for utilizing AI in high-stakes scenarios. The methodology involves identifying vulnerabilities, quantifying worst-case performance via attacks, and generalizing insights to practical deployment settings. Additionally, it investigates techniques to strengthen model trustworthiness in real-world scenarios, contributing to rigorous AI safety research that promotes responsible and beneficial system development. Specifically, this research reveals vulnerabilities in neural networks by developing efficient black-box attacks on various deep learning models across different tasks. Additionally, it focuses on improving AI trustworthiness by detecting adversarial examples using language models and enhancing user privacy through innovative facial de-identification methods. For highly effective black-box attacks, ensemble-based and context-aware approaches were developed. These methods optimize over ensemble model weight spaces to craft adversarial examples with extreme efficiency, significantly outperforming existing input space attacks. Multimodal testing demonstrated that these attacks could fool systems on diverse tasks, highlighting the need to evaluate deployment robustness against such methods.
Additionally, by weaponizing context to manipulate statistical relationships that models rely on, context-aware attacks were shown to profoundly mislead systems, revealing reasoning vulnerabilities. To protect user privacy, an algorithm was developed for seamlessly de-identifying facial images while retaining utility for downstream tasks. This approach, grounded in differential privacy and ensemble learning, maximizes obfuscation and non-invertibility to prevent re-identification. By disentangling identity attributes from utility attributes like expressions, the method significantly enhances de-identification rates while preserving utility. To enhance the robustness and efficiency of computational imaging pipelines, including Fourier phase retrieval and coded diffraction imaging, I developed a framework that learns reference signals or illumination patterns using a small number of training images. This framework employs an unrolled network as a solver. Once learned, the reference signals or illumination patterns serve as priors, significantly improving the efficiency of signal reconstruction. Overall, this research contributes to a more secure and reliable deployment of AI systems, ensuring their safe and beneficial use across critical domains.
546
$a
English
590
$a
School code: 0032
650
4
$a
Computer science.
$3
573171
650
4
$a
Electrical engineering.
$3
596380
653
$a
Adversarial machine learning
653
$a
Deep learning models
653
$a
Signal reconstruction
653
$a
Language models
690
$a
0544
690
$a
0984
690
$a
0800
710
2
$a
University of California, Riverside.
$b
Electrical Engineering.
$3
845380
720
1
$a
Asif, M. Salman
$e
degree supervisor.
773
0
$t
Dissertations Abstracts International
$g
86-01B.
790
$a
0032
791
$a
Ph.D.
792
$a
2024
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31327668