Methods for GPU Acceleration of Big Data Applications.
Record type: Bibliographic--language material, manuscript : Monograph/item
Title/Author: Methods for GPU Acceleration of Big Data Applications.
Author: Mokhtari, Reza.
Description: 1 online resource (105 pages)
Notes: Source: Dissertation Abstracts International, Volume: 79-04(E), Section: B.
Contained by: Dissertation Abstracts International, 79-04B(E).
Subject: Computer engineering.
Electronic resource: click for full text (PQDT)
ISBN: 9780355452181
LDR 03326ntm a2200361Ki 4500
001 908922
005 20180419104821.5
006 m o u
007 cr mn||||a|a||
008 190606s2017 xx obm 000 0 eng d
020 __ $a 9780355452181
035 __ $a (MiAaPQ)AAI10250317
035 __ $a (MiAaPQ)toronto:15145
035 __ $a AAI10250317
040 __ $a MiAaPQ $b eng $c MiAaPQ
099 __ $a TUL $f hyy $c available through World Wide Web
100 1_ $a Mokhtari, Reza. $3 1179327
245 10 $a Methods for GPU Acceleration of Big Data Applications.
264 _0 $c 2017
300 __ $a 1 online resource (105 pages)
336 __ $a text $b txt $2 rdacontent
337 __ $a computer $b c $2 rdamedia
338 __ $a online resource $b cr $2 rdacarrier
500 __ $a Source: Dissertation Abstracts International, Volume: 79-04(E), Section: B.
500 __ $a Adviser: Michael Stumm.
502 __ $a Thesis (Ph.D.) $c University of Toronto (Canada) $d 2017.
504 __ $a Includes bibliographical references
520 __ $a Big Data applications are trivially parallelizable because they typically consist of simple and straightforward operations performed on a large number of independent input records. GPUs appear to be particularly well suited for this class of applications given their high degree of parallelism and high memory bandwidth. However, a number of issues severely complicate matters when trying to exploit GPUs to accelerate these applications. First, Big Data is often too large to fit in the GPU's separate, limited-size memory. Second, data transfers to and from GPUs are expensive because the bus that connects CPUs and GPUs has limited bandwidth and high latency; in practice, this often results in data-starved GPU cores. Third, GPU memory bandwidth is high only if data is laid out in memory such that GPU threads accessing memory at the same time access adjacent memory; unfortunately, this is not how Big Data is laid out in practice.
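The layout issue the abstract raises can be made concrete with a small sketch. This is an illustration, not code from the dissertation: it computes the byte addresses that the threads of a warp would touch when each thread i reads one field of record i, under two hypothetical layouts. In a struct-of-arrays (SoA) layout the simultaneous accesses fall on adjacent words (coalescable); in an array-of-structs (AoS) layout they are strided by the record size, which is the pattern the abstract says real Big Data tends to have.

```python
# Illustrative sketch (not from the dissertation): why data layout matters
# for GPU memory coalescing. Record shape and sizes below are hypothetical.

RECORD_FIELDS = 4   # fields per record (assumed)
FIELD_SIZE = 4      # bytes per field (assumed)
WARP_SIZE = 32      # threads issuing loads simultaneously

def aos_address(i, field):
    """Byte offset of record i's field in an array-of-structs layout."""
    return i * RECORD_FIELDS * FIELD_SIZE + field * FIELD_SIZE

def soa_address(i, field, n_records):
    """Byte offset of record i's field in a struct-of-arrays layout."""
    return field * n_records * FIELD_SIZE + i * FIELD_SIZE

n = 1024
aos = [aos_address(i, 0) for i in range(WARP_SIZE)]
soa = [soa_address(i, 0, n) for i in range(WARP_SIZE)]

# Consecutive threads are 16 bytes apart under AoS but 4 bytes apart
# (adjacent words) under SoA, so the SoA warp touches fewer memory segments.
print(aos[1] - aos[0])  # 16
print(soa[1] - soa[0])  # 4
```

The stride difference is exactly why a warp's loads coalesce into few wide transactions under SoA but fan out across many segments under AoS.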
520 __ $a This dissertation presents three solutions that help mitigate the above issues and enable GPU acceleration of Big Data applications, namely: BigKernel, a system that automates and optimizes CPU-GPU communication and GPU memory accesses; S-L1, a caching subsystem implemented in software; and a hash table designed for GPUs. Our key contributions include: (i) the first automatic CPU-GPU data management system that improves on the performance of the state-of-the-art double-buffering scheme (a scheme that overlaps communication with computation to improve GPU performance), (ii) a GPU level 1 cache implemented entirely in software that outperforms the hardware L1 when used by Big Data applications, and (iii) a GPU-based hash table (for storing the key-value pairs popular in Big Data applications) that can grow beyond the available GPU memory yet retain reasonable performance. These solutions allow many existing Big Data applications to be ported to GPUs in a straightforward way and achieve performance gains of between 1.04X and 7.2X over the fastest CPU-based multi-threaded implementations.
533 __ $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2018
538 __ $a Mode of access: World Wide Web
650 _4 $a Computer engineering. $3 569006
650 _4 $a Computer science. $3 573171
655 _7 $a Electronic books. $2 local $3 554714
690 __ $a 0464
690 __ $a 0984
710 2_ $a ProQuest Information and Learning Co. $3 1178819
710 2_ $a University of Toronto (Canada). $b Electrical and Computer Engineering. $3 1148628
773 0_ $t Dissertation Abstracts International $g 79-04B(E).
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10250317 $z click for full text (PQDT)