Architecture and Mapping Co-Exploration and Optimization for DNN Accelerators.
Record type: Bibliographic - language material, manuscript : Monograph/item
Title/Author: Architecture and Mapping Co-Exploration and Optimization for DNN Accelerators.
Author: Trewin, Benjamin.
Description: 1 online resource (44 pages)
Notes: Source: Masters Abstracts International, Volume: 85-12.
Contained by: Masters Abstracts International, 85-12.
Subject: Computer engineering.
Electronic resource: click for full text (PQDT)
ISBN: 9798383058459
LDR    02922ntm a22003977 4500
001    1150169
005    20241022111606.5
006    m o d
007    cr bn ---uuuuu
008    250605s2024 xx obm 000 0 eng d
020    $a 9798383058459
035    $a (MiAaPQ)AAI31145875
035    $a AAI31145875
040    $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Trewin, Benjamin. $3 1476604
245 10 $a Architecture and Mapping Co-Exploration and Optimization for DNN Accelerators.
264  0 $c 2024
300    $a 1 online resource (44 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Masters Abstracts International, Volume: 85-12.
500    $a Includes supplementary digital materials.
500    $a Advisor: Anagnostopoulos, Iraklis.
502    $a Thesis (M.S.)--Southern Illinois University at Carbondale, 2024.
504    $a Includes bibliographical references
520    $a It is extremely difficult to optimize a deep neural network (DNN) accelerator's performance on various networks in terms of energy and/or latency because of the sheer size of the search space. Not only do DNN accelerators have a huge search space of different hardware architecture topologies and characteristics, which may perform better or worse on certain DNNs, but also DNN layers can be mapped to hardware in a huge array of different configurations. Further, an optimal mapping for one DNN architecture is not consistently the same on a different architecture. These two factors depend on one another. Thus there is a need for co-optimization to take place so hardware characteristics and mapping can be optimized simultaneously, to find not only an optimal mapping but also the best architecture for a DNN as well. This work presents Blink, a design space exploration (DSE) tool, which co-optimizes hardware attributes and mapping configurations. This tool enables users to find optimal hardware architectures through the use of a genetic algorithm and further finds optimal mappings for each hardware configuration using a pruned random selection method. Architecture, layers, and mappings are each sent to Timeloop, a DNN accelerator simulator, to obtain accelerator statistics, which are sent back to the genetic algorithm for next population selection. Through this method, novel DNN accelerator solutions can be identified without tackling the computationally massive task of simulating exhaustively.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2024
538    $a Mode of access: World Wide Web
650  4 $a Computer engineering. $3 569006
650  4 $a Electrical engineering. $3 596380
653    $a Co-optimization
653    $a Deep neural network
653    $a Design space exploration
653    $a Optimization
655  7 $a Electronic books. $2 local $3 554714
690    $a 0464
690    $a 0800
690    $a 0544
710 2  $a ProQuest Information and Learning Co. $3 1178819
710 2  $a Southern Illinois University at Carbondale. $b Electrical and Computer Engineering. $3 1192686
773 0  $t Masters Abstracts International $g 85-12.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31145875 $z click for full text (PQDT)
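The abstract (MARC 520 field above) describes a co-optimization loop: a genetic algorithm evolves hardware configurations, and each candidate's fitness comes from a pruned random search over mappings, scored by the Timeloop simulator. The following is a minimal illustrative sketch of that loop only, not the thesis's actual code: the parameter names (`pe_count`, `l1_kb`, `l2_kb`) and the `cost` function are stand-ins for a real Timeloop architecture template and simulation.

```python
import random

# Hypothetical hardware design space; the real tool explores Timeloop
# architecture descriptions -- these knobs are illustrative only.
SPACE = {"pe_count": [64, 128, 256], "l1_kb": [16, 32, 64], "l2_kb": [128, 256, 512]}

def random_arch():
    """Draw one hardware configuration from the design space."""
    return {k: random.choice(v) for k, v in SPACE.items()}

def cost(arch, mapping):
    # Stand-in for a Timeloop simulation: returns an energy/latency-like score
    # for one (architecture, mapping) pair. Lower is better.
    return (mapping * 1000) / (arch["pe_count"] * arch["l1_kb"]) + arch["l2_kb"] * 0.01

def best_mapping_cost(arch, samples=20):
    # Pruned random mapping search: sample candidate mappings for this
    # architecture and keep the cheapest one as the architecture's fitness.
    return min(cost(arch, random.randint(1, 100)) for _ in range(samples))

def evolve(pop_size=8, generations=10):
    """Genetic algorithm over architectures; fitness = best sampled mapping."""
    population = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=best_mapping_cost)
        parents = scored[: pop_size // 2]            # selection: keep best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)         # crossover: mix two parents
            child = {k: random.choice([a[k], b[k]]) for k in SPACE}
            if random.random() < 0.2:                # mutation: re-roll one knob
                k = random.choice(list(SPACE))
                child[k] = random.choice(SPACE[k])
            children.append(child)
        population = parents + children
    return min(population, key=best_mapping_cost)

print(evolve())
```

In the actual tool, `best_mapping_cost` would dispatch each architecture/layer/mapping triple to Timeloop and parse the returned statistics; the sketch only preserves the control flow the abstract describes.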