Caffeinated FPGAs : = FPGA Framework for Training and Inference of Convolutional Neural Networks With Reduced Precision Floating-Point Arithmetic.
ProQuest Information and Learning Co.
Record type:
Language material, manuscript : Monograph/item
Title/Author:
Caffeinated FPGAs :
Other title:
FPGA Framework for Training and Inference of Convolutional Neural Networks With Reduced Precision Floating-Point Arithmetic.
Author:
DiCecco, Roberto.
Description:
1 online resource (87 pages)
Notes:
Source: Masters Abstracts International, Volume: 58-01.
Contained By:
Masters Abstracts International, 58-01(E).
Subject:
Computer engineering.
Electronic resource:
click for full text (PQDT)
ISBN:
9780438185005
Caffeinated FPGAs : = FPGA Framework for Training and Inference of Convolutional Neural Networks With Reduced Precision Floating-Point Arithmetic.
DiCecco, Roberto.
Caffeinated FPGAs : FPGA Framework for Training and Inference of Convolutional Neural Networks With Reduced Precision Floating-Point Arithmetic. - 1 online resource (87 pages)
Source: Masters Abstracts International, Volume: 58-01.
Thesis (M.A.S.)--University of Toronto (Canada), 2018.
Includes bibliographical references
This thesis presents a framework for performing training and inference of Convolutional Neural Networks (CNNs) with reduced precision floating-point arithmetic. This work aims to provide a means for FPGA and machine learning researchers to use the customizability of FPGAs to explore the precision requirements of training CNNs with an open-source framework. This is accomplished through the creation of a High-Level Synthesis library with a Custom Precision Floating-Point data type that is configurable in both exponent and mantissa widths, with several standard operators and rounding modes supported. With this library a FPGA CNN Training Engine (FCTE) has been created along with a FPGA CNN framework FPGA Caffe, which is built on Caffe. FCTE has a peak performance of approximately 350 GFLOPs, and has been used to show that a mantissa width of 5 and exponent width of 6 is sufficient for training several models targeting the MNIST and CIFAR-10 datasets.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2018.
Mode of access: World Wide Web.
ISBN: 9780438185005
Subjects--Topical Terms: Computer engineering.
Index Terms--Genre/Form: Electronic books.
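To make the data type described in the abstract above more concrete, the following is a minimal, self-contained C++ sketch of a reduced-precision floating-point quantizer whose exponent and mantissa widths are template parameters. It is an illustration only: the function name `quantize`, its interface, and the single rounding behavior are assumptions made for this sketch, not the thesis's actual High-Level Synthesis library, FPGA Caffe code, or hardware operators.

```cpp
// Illustrative sketch of a configurable reduced-precision float
// quantizer (NOT the thesis's HLS library). Assumptions: only finite
// normal values; small values flush to zero; large values saturate;
// no subnormals, NaN, or Inf; rounding is round-to-nearest-even via
// std::nearbyint under the default rounding mode. Rounding that
// carries into the next binade is ignored in this sketch.
#include <cmath>
#include <cstdio>

template <int EXP_W, int MANT_W>
float quantize(float x) {
    if (x == 0.0f) return 0.0f;

    // IEEE-style biased exponent range for an EXP_W-bit exponent
    // (all-ones reserved for Inf/NaN, all-zeros for subnormals).
    const int bias    = (1 << (EXP_W - 1)) - 1;
    const int exp_min = 1 - bias;
    const int exp_max = bias;

    int e;
    float m = std::frexp(std::fabs(x), &e);  // |x| = m * 2^e, m in [0.5, 1)
    e -= 1;                                   // rewrite as m' * 2^e, m' in [1, 2)
    m *= 2.0f;

    if (e < exp_min) return 0.0f;             // flush-to-zero for tiny values
    if (e > exp_max)                          // saturate to the largest finite value
        return std::copysign(
            std::ldexp(2.0f - std::ldexp(1.0f, -MANT_W), exp_max), x);

    // Keep MANT_W fractional significand bits, rounding to nearest.
    float scaled  = std::ldexp(m, MANT_W);
    float rounded = std::nearbyint(scaled);
    float y       = std::ldexp(rounded, e - MANT_W);
    return std::copysign(y, x);
}

int main() {
    // Example: the abstract reports that an exponent width of 6 and a
    // mantissa width of 5 sufficed for the MNIST and CIFAR-10 models.
    float v = 0.123456f;
    std::printf("fp32: %.8f  ->  e6m5: %.8f\n", v, quantize<6, 5>(v));
    return 0;
}
```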
LDR    02237ntm a2200337Ki 4500
001    916894
005    20180928111502.5
006    m o u
007    cr mn||||a|a||
008    190606s2018 xx obm 000 0 eng d
020    $a 9780438185005
035    $a (MiAaPQ)AAI10791098
035    $a (MiAaPQ)toronto:17464
035    $a AAI10791098
040    $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a DiCecco, Roberto. $3 1190758
245 10 $a Caffeinated FPGAs : $b FPGA Framework for Training and Inference of Convolutional Neural Networks With Reduced Precision Floating-Point Arithmetic.
264  0 $c 2018
300    $a 1 online resource (87 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Masters Abstracts International, Volume: 58-01.
500    $a Adviser: Paul Chow.
502    $a Thesis (M.A.S.)--University of Toronto (Canada), 2018.
504    $a Includes bibliographical references
520    $a This thesis presents a framework for performing training and inference of Convolutional Neural Networks (CNNs) with reduced precision floating-point arithmetic. This work aims to provide a means for FPGA and machine learning researchers to use the customizability of FPGAs to explore the precision requirements of training CNNs with an open-source framework. This is accomplished through the creation of a High-Level Synthesis library with a Custom Precision Floating-Point data type that is configurable in both exponent and mantissa widths, with several standard operators and rounding modes supported. With this library a FPGA CNN Training Engine (FCTE) has been created along with a FPGA CNN framework FPGA Caffe, which is built on Caffe. FCTE has a peak performance of approximately 350 GFLOPs, and has been used to show that a mantissa width of 5 and exponent width of 6 is sufficient for training several models targeting the MNIST and CIFAR-10 datasets.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538    $a Mode of access: World Wide Web
650  4 $a Computer engineering. $3 569006
650  4 $a Artificial intelligence. $3 559380
655  7 $a Electronic books. $2 local $3 554714
690    $a 0464
690    $a 0800
710 2  $a ProQuest Information and Learning Co. $3 1178819
710 2  $a University of Toronto (Canada). $b Electrical and Computer Engineering. $3 1148628
773 0  $t Masters Abstracts International $g 58-01(E).
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10791098 $z click for full text (PQDT)