Structured Sparse Optimization.
Record type:
Bibliographic - Language material, manuscript : Monograph/item
Title/Author:
Structured Sparse Optimization./
Author:
Dai, Yutong.
Description:
1 online resource (248 pages)
Notes:
Source: Dissertations Abstracts International, Volume: 85-07, Section: B.
Contained By:
Dissertations Abstracts International, 85-07B.
Subject:
Applied mathematics.
Electronic resources:
click for full text (PQDT)
ISBN:
9798381377385
LDR
04758ntm a22004097 4500
001
1148758
005
20240930100138.5
006
m o d
007
cr bn ---uuuuu
008
250605s2024 xx obm 000 0 eng d
020
$a
9798381377385
035
$a
(MiAaPQ)AAI30810722
035
$a
AAI30810722
040
$a
MiAaPQ
$b
eng
$c
MiAaPQ
$d
NTU
100
1
$a
Dai, Yutong.
$3
1474809
245
1 0
$a
Structured Sparse Optimization.
264
0
$c
2024
300
$a
1 online resource (248 pages)
336
$a
text
$b
txt
$2
rdacontent
337
$a
computer
$b
c
$2
rdamedia
338
$a
online resource
$b
cr
$2
rdacarrier
500
$a
Source: Dissertations Abstracts International, Volume: 85-07, Section: B.
500
$a
Advisor: Robinson, Daniel P.
502
$a
Thesis (Ph.D.)--Lehigh University, 2024.
504
$a
Includes bibliographical references
520
$a
In the age of high-dimensional data-driven science, sparse optimization techniques play a vital role. Sparse optimization aims to discover solutions with compact representations in low-dimensional spaces, i.e., solutions with relatively few nonzero entries. These techniques have succeeded in areas like signal processing, statistics, and model compression. As data becomes increasingly complex, the challenges mount. Classical sparse optimization techniques use l1 regularization to obtain sparse solution estimates, which may be inadequate if a more complicated solution structure is required. Hence, the need arises for structured sparsity, which follows distinct patterns, to produce more interpretable and meaningful solutions.

The development of structured sparse optimization algorithms faces many challenges, such as efficiency, scalability, noise management, and support estimation (correctly identifying the zero/nonzero structure of an exact solution). This dissertation addresses these challenges by analyzing new support identification and subspace optimization techniques.

The dissertation is structured into two parts. The first part focuses on developing deterministic algorithms. In particular, Chapter 2 focuses on non-overlapping group-sparse problems. It utilizes exact proximal operator evaluation to achieve support identification. By further integrating Newton's method into the subspace optimization procedure, i.e., optimizing only the groups of variables that are predicted to be in the support, the method achieves a fast local convergence rate. Numerical results validate that the proposed method achieves state-of-the-art performance on regularized regression problems. Chapter 3 exploits a challenging overlapping group-sparse structure and establishes the support identification property of iterates computed as particular inexact estimates of the proximal operator. By properly managing the errors incurred from the inexact proximal operator evaluation and performing special projections, our analysis provides an upper bound on the number of iterations before the support is identified, which is new to the literature.

The second part of the dissertation considers the stochastic setting. Chapter 4 presents a stochastic algorithm for non-overlapping group-sparse problems. It enjoys a consistent support identification property with high probability (i.e., with high probability, the correct support is identified for all sufficiently large iterations), avoiding reliance on exact gradient evaluation or the explicit storage of historical gradient information. Chapter 5 extends this success to overlapping group-sparse structured problems. Borrowing the tools developed in Chapters 3 and 4, we propose a method that achieves consistent support identification with high probability in the presence of noise from both inexact proximal operator evaluation and stochastic gradient approximation. Lastly, Chapter 6 applies the idea of subspace optimization to the training of deep neural networks. Armed with a carefully designed subspace projection mechanism, we develop a method that consistently recovers highly group-sparse neural networks while maintaining high task-relevant performance. This is beneficial for deploying deep neural networks for inference on devices with limited computing resources, since the number of parameters and the computation cost are greatly reduced.
533
$a
Electronic reproduction.
$b
Ann Arbor, Mich. :
$c
ProQuest,
$d
2024
538
$a
Mode of access: World Wide Web
650
4
$a
Applied mathematics.
$3
1069907
650
4
$a
Computer science.
$3
573171
650
4
$a
Industrial engineering.
$3
679492
653
$a
Acceleration
653
$a
Sparse optimization
653
$a
Proximal operator
653
$a
Sparsity
653
$a
Support identification
655
7
$a
Electronic books.
$2
local
$3
554714
690
$a
0364
690
$a
0796
690
$a
0984
690
$a
0546
710
2
$a
ProQuest Information and Learning Co.
$3
1178819
710
2
$a
Lehigh University.
$b
Industrial Engineering.
$3
1182234
773
0
$t
Dissertations Abstracts International
$g
85-07B.
856
4 0
$u
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30810722
$z
click for full text (PQDT)
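
The abstract above (field 520) centers on exact evaluation of the proximal operator for non-overlapping group-sparse regularizers and on the support (the zero/nonzero group structure) that this evaluation exposes. The sketch below is a generic Python illustration of that operator, block soft-thresholding; it is not code from the dissertation, and the function name, weighting scheme, and example data are hypothetical.

import numpy as np

# Minimal sketch (illustrative only): proximal operator of the non-overlapping
# group regularizer  lam * sum_g w_g * ||x_g||_2  (block soft-thresholding).
# Evaluating it exactly zeroes out whole groups, which is how a predicted
# support can be read off.
def prox_group_l2(v, groups, lam, weights=None):
    # v       : 1-D array at which the prox is evaluated
    # groups  : list of index arrays forming a partition of range(len(v))
    # lam     : regularization parameter (> 0)
    # weights : optional per-group weights (defaults to 1.0 for every group)
    x = np.zeros_like(v, dtype=float)
    support = []                          # groups predicted to be nonzero
    for g_id, idx in enumerate(groups):
        w = 1.0 if weights is None else weights[g_id]
        norm = np.linalg.norm(v[idx])
        shrink = max(0.0, 1.0 - lam * w / norm) if norm > 0 else 0.0
        if shrink > 0.0:
            x[idx] = shrink * v[idx]      # group survives thresholding
            support.append(g_id)          # candidate for a subspace step
    return x, support

# Usage: one proximal-gradient step  x+ = prox_{alpha*lam}(x - alpha * grad)
rng = np.random.default_rng(0)
x, grad = rng.normal(size=6), rng.normal(size=6)
alpha, lam = 0.1, 0.5
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
x_new, support = prox_group_l2(x - alpha * grad, groups, alpha * lam)
print(x_new, support)

Groups whose block norm falls at or below the threshold are set exactly to zero, which is what allows a method of this kind to predict the support and restrict a subsequent subspace step (e.g., Newton's method) to the groups that remain nonzero.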