Krishna, Tushar. Data orchestration in deep learning accelerators /
Record type: Bibliographic - language material, printed : Monograph/item
Title/Author: Data orchestration in deep learning accelerators / Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar.
Author: Krishna, Tushar
Other authors: Samajdar, Ananda
Physical description: 1 online resource (166 p.)
Subject: Data flow computing.
Electronic resource: https://portal.igpublish.com/iglibrary/search/MCPB0006576.html
ISBN: 9781681738697
Data orchestration in deep learning accelerators / Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar. - 1 online resource (166 p.) - (Synthesis lectures on computer architecture ; 52) - (Synthesis lectures on computer architecture ; #48.)
Includes bibliographical references (pages 131-143).
Access restricted to authorized users and institutions.
This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of hyperparameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with data orchestration challenges with compressed and sparse DNNs and future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
Mode of access: World Wide Web.
ISBN: 9781681738697
Subjects--Topical Terms: Data flow computing. (897923)
Index Terms--Genre/Form: Electronic books. (554714)
LC Class. No.: Q342
Dewey Class. No.: 006.3
LDR  02261nam a2200301 i 4500
001  1041733
006  m eo d
007  cr cn |||m|||a
008  211215t20202020cau ob 000 0 eng d
020    $a 9781681738697
020    $a 9781681738703
020    $a 9781681738710
035    $a MCPB0006576
040    $a iG Publishing $b eng $c iG Publishing $e rda
050 00 $a Q342
082 00 $a 006.3
100 1  $a Krishna, Tushar, $e author. $3 1341652
245 10 $a Data orchestration in deep learning accelerators / $c Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar.
264  1 $a San Rafael, California : $b Morgan & Claypool Publishers, $c 2020.
264  4 $c ©2020
300    $a 1 online resource (166 p.)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
490 1  $a Synthesis lectures on computer architecture ; $v 52
504    $a Includes bibliographical references (pages 131-143).
506    $a Access restricted to authorized users and institutions.
520 3  $a This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of hyperparameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with data orchestration challenges with compressed and sparse DNNs and future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
538    $a Mode of access: World Wide Web.
650  0 $a Data flow computing. $3 897923
650  0 $a Machine learning. $3 561253
650  0 $a Neural networks (Computer science) $3 528588
655  4 $a Electronic books. $2 local $3 554714
700 1  $a Samajdar, Ananda, $e author. $3 1341656
700 1  $a Pellauer, Michael, $e author. $3 1341655
700 1  $a Parashar, Angshuman, $e author. $3 1341654
700 1  $a Kwon, Hyoukjun, $e author. $3 1341653
830  0 $a Synthesis lectures on computer architecture ; $v #48. $3 1253131
856 40 $u https://portal.igpublish.com/iglibrary/search/MCPB0006576.html