Krishna, Tushar,
Data orchestration in deep learning accelerators /
Record Type:
Language materials, printed : Monograph/item
Title/Author:
Data orchestration in deep learning accelerators / Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar.
Author:
Krishna, Tushar,
Other authors:
Kwon, Hyoukjun,
Parashar, Angshuman,
Pellauer, Michael,
Samajdar, Ananda,
Description:
1 online resource (166 p.)
Subject:
Neural networks (Computer science)
Machine learning.
Data flow computing.
Online resource:
https://portal.igpublish.com/iglibrary/search/MCPB0006576.html
ISBN:
9781681738697
Krishna, Tushar,
Data orchestration in deep learning accelerators / Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar. - 1 online resource (166 p.) - (Synthesis lectures on computer architecture ; 52) - (Synthesis lectures on computer architecture ; #48)
Includes bibliographical references (pages 131-143).
Access restricted to authorized users and institutions.
This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of parameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with the data orchestration challenges posed by compressed and sparse DNNs and with future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
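The data-movement argument in the abstract can be made concrete with a back-of-the-envelope model. The sketch below is an illustration, not code from the book; the buffer model, tile sizes, and function names are assumptions. It counts external-DRAM reads for a matrix multiply C = A x B under a naive schedule versus a tiled schedule that stages a block of B in an on-chip buffer and reuses it across all rows of A:

```python
# Hypothetical reuse model (not from the book): DRAM reads for
# C[M][N] = A[M][K] @ B[K][N] under two schedules.

def dram_reads_untiled(M, K, N):
    # Naive schedule with no on-chip reuse: every multiply-accumulate
    # fetches one element of A and one element of B from DRAM.
    return 2 * M * K * N

def dram_reads_tiled(M, K, N, Tk, Tn):
    # Tiled schedule: each Tk x Tn block of B is staged on chip once
    # and reused across all M rows of A before the next block loads.
    assert K % Tk == 0 and N % Tn == 0, "tiles must divide the matrix"
    b_reads = K * N                              # each B element fetched once
    num_tiles = (K // Tk) * (N // Tn)
    a_reads = num_tiles * (M * Tk)               # A strip refetched per B tile
    return b_reads + a_reads

print(dram_reads_untiled(64, 64, 64))            # 524288
print(dram_reads_tiled(64, 64, 64, 16, 16))      # 20480
```

With M = K = N = 64 and 16 x 16 tiles, the tiled schedule cuts DRAM reads from 524,288 to 20,480, roughly a 25x reduction. Formalizing and exploring reuse trade-offs of this kind is what the dataflow and design-space-exploration techniques surveyed in the book address.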
Mode of access: World Wide Web.
ISBN: 9781681738697
Subjects--Topical Terms: Neural networks (Computer science)
Index Terms--Genre/Form: Electronic books.
LC Class. No.: Q342
Dewey Class. No.: 006.3
Data orchestration in deep learning accelerators /
LDR    02261nam a2200301 i 4500
001    1041733
006    m eo d
007    cr cn |||m|||a
008    211215t20202020cau ob 000 0 eng d
020    $a 9781681738697
020    $a 9781681738703
020    $a 9781681738710
035    $a MCPB0006576
040    $a iG Publishing $b eng $c iG Publishing $e rda
050 00 $a Q342
082 00 $a 006.3
100 1  $a Krishna, Tushar, $e author. $3 1341652
245 10 $a Data orchestration in deep learning accelerators / $c Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar.
264  1 $a San Rafael, California : $b Morgan & Claypool Publishers, $c 2020.
264  4 $c ©2020
300    $a 1 online resource (166 p.)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
490 1  $a Synthesis lectures on computer architecture ; $v 52
504    $a Includes bibliographical references (pages 131-143).
506    $a Access restricted to authorized users and institutions.
520 3  $a This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of parameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with the data orchestration challenges posed by compressed and sparse DNNs and with future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
538    $a Mode of access: World Wide Web.
650  0 $a Neural networks (Computer science) $3 528588
650  0 $a Machine learning. $3 561253
650  0 $a Data flow computing. $3 897923
655  4 $a Electronic books. $2 local $3 554714
700 1  $a Kwon, Hyoukjun, $e author. $3 1341653
700 1  $a Parashar, Angshuman, $e author. $3 1341654
700 1  $a Pellauer, Michael, $e author. $3 1341655
700 1  $a Samajdar, Ananda, $e author. $3 1341656
830  0 $a Synthesis lectures on computer architecture ; $v #48. $3 1253131
856 40 $u https://portal.igpublish.com/iglibrary/search/MCPB0006576.html